content update October

This commit is contained in:
2021-10-31 15:18:54 +01:00
parent 48b63baa25
commit deef076a4f
13 changed files with 156 additions and 16 deletions

View File

@ -0,0 +1,132 @@
---
title: "Building With SVG 🖼"
date: 2021-08-28T11:53:54+02:00
draft: false
toc: true
tags:
- svg
- xml
- python
- code
---
SVG is generally my image format of choice, having used it for illustrations,
chip diagrams, device specifications, and visual outputs generated by code.
SVG is plain text-based XML, structured with some top-level
object/properties followed by standardized objects that draw lines and shapes.
On a few occasions I have scripted the generation of an SVG illustration
where parameters are extracted from a database and then visualized.
These scripts are generally quite simple since you just define some
pre-formatted shapes and place them inside the drawing region. Besides this,
I think it is useful to highlight some of the automated tools and libraries
that provide similar functionality.
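As a minimal sketch of what such a script looks like (the shapes and sizes
here are made up purely for illustration), you emit the XML header and then
place one pre-formatted `rect` per data point:

```python
def svg_bars(values, bar_width=20, height=100):
    """Render a toy bar chart: one <rect> per value, placed left to right."""
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{bar_width * len(values)}" height="{height}">'
    ]
    for i, value in enumerate(values):
        # place each bar inside the drawing region, baseline at the bottom
        parts.append(
            f'<rect x="{i * bar_width}" y="{height - value}" '
            f'width="{bar_width - 2}" height="{value}" fill="white"/>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

with open("example_bars.svg", "w") as file:
    file.write(svg_bars([30, 80, 55]))
```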
## KGT: Kate's Grammar Tool
KGT is a pretty neat starting point for experimenting with this kind of
functionality. It is relatively self-contained and produces compact SVG
objects from simple statements.
### Build Instructions
Building `libfsm` and `kgt` from source was not too much of a hassle, although
the build / dependency documentation could be better. This was built in my
WSL-Ubuntu environment.
``` bash
apt install clang pmake
git clone --recursive "https://$libfsm_REPO/libfsm"
pushd libfsm; CC=clang PREFIX=$HOME pmake -r install; popd
git clone --recursive "https://$KGT_REPO/kgt"
pushd kgt/src; CC=clang PREFIX=$HOME pmake -r install; popd
```
The main issue I noticed
is that the generated SVG uses `path {rounded}` in its style definition, which
the SVG rasterizer from `librsvg2 2.40.20` complained about. Getting the latest
build however is quite involved, requiring the latest cairo and poppler
libraries as well. Ideally generating PNGs or rasterizing won't be needed.
### Example
To show a typical use case of making an illustration with the KGT tool,
below I generate the SVG for one of the examples included in its repository.
``` bash
KGT_DEF="<personal-part> ::= <first-name> | <initial> \".\" "
echo "$KGT_DEF" | kgt -l bnf -e svg | awk -vf1="$(<style.svg)" -f replace_style.awk > example_kgt.svg
```
The style is automatically introduced in the XML header section and is mostly
plain black. This has legibility issues on dark themes, so a short `awk`
script is used to replace the style with one that we define for this theme.
``` awk
# print everything outside the <style> block, then substitute our own style
BEGIN{style_flag=0}
/<style>/{style_flag=1}
{if(style_flag == 0) print $0}
/<\/style>/{style_flag=0;print f1}
```
For completeness, the style definition is included below, but this could be
added directly to KGT as a feature in future releases.
``` xml
<style>
rect, line, path { stroke-width: 1.5px; stroke: white; fill: transparent; }
rect, line, path { stroke-linecap: square; stroke-linejoin: round; }
path { fill: transparent; }
text { fill: white; font-family:'Trebuchet MS'; }
text.literal { }
line.ellipsis { stroke-dasharray: 1 3.5; }
tspan.hex { font-family: monospace; font-size: 90%; }
path.arrow { fill: white; }
</style>
```
The final result is shown below.
![example_kgt.svg](/images/example_kgt.svg)
## Tabatkins Railroad Diagrams
On the topic of railroad diagrams, there is also a repository from
[tabatkins](https://github.com/tabatkins/railroad-diagrams), a Python
code-base for generating SVG diagrams similar to KGT but without having
to deal with building or running binaries. I prefer monochrome diagrams with
plain formatting, so again we override the default style.
``` python
style = ( ''
+'\tsvg.railroad-diagram {\n\t\tbackground-color:none;\n\t}\n'
+'\tsvg.railroad-diagram path {\n\t\tstroke-width:1.5;\n\t\tstroke:white;\n\t\tfill:rgba(0,0,0,0);\n\t}\n'
+'\tsvg.railroad-diagram text {\n\t\tfont:bold 14px monospace;\n\t\tfill: white;\n\t\ttext-anchor:middle;\n\t}\n'
+'\tsvg.railroad-diagram text.label{\n\t\ttext-anchor:start;\n\t}\n'
+'\tsvg.railroad-diagram text.comment{\n\t\tfont:italic 12px monospace;\n\t}\n'
+'\tsvg.railroad-diagram rect{\n\t\tstroke-width:1.5;\n\t\tstroke:white;\n\t\tfill:none;\n\t}\n'
+'\tsvg.railroad-diagram rect.group-box {\n\t\tstroke: gray;\n\t\tstroke-dasharray: 10 5;\n\t\tfill: none;\n\t}\n'
)
```
Styling is best done on a case-by-case basis with various color-schemes, such
as using white text/lines for dark themes. Since this is all handled in Python,
the overall interface is easy to adapt. Including some kind of command-line
utility here could be quite good, but it depends on the final flow for figure
generation.
Using the style definition shown above, generating a similar example as before
would look like this:
``` python
import railroad
with open("./test.svg","w+") as file:
obj = railroad.Diagram("foo", railroad.Choice(0, "bar", "baz"), css=style)
obj.writeSvg(file.write)
```
The final result is shown below.
![example_trd.svg](/images/example_trd.svg)
Note that this figure is quite a bit more compact but adding additional labels
or customizations outside the scope of the library will probably require
quite a bit of manual work. This could be a fun side project though.

View File

@ -0,0 +1,178 @@
---
title: "Calibre Physical Verification Hacks 🐛🐛"
date: 2021-09-14T11:30:11+02:00
draft: false
toc: true
tags:
- calibre
- config
- verification
---
This page details a variety of 'modifications' to the standard Calibre
verification flow that I have used in the past to modify either the checks
performed or the tools in the physical verification flow. None of these are
particularly clean, since they depart from what is usually an approved rule
deck / verification flow. Designs do need to pass the verification process in
a meaningful way at the end of the day, so your mileage may vary.
## Extended Device Checks
It is generally good practice to be able to check internal design
conventions when it comes to layout. Making a custom set of rules that does
exactly this is highly advised and yields better quality designs. For example,
it could be required that varactor or mosfet primitives never have
overlapping shapes with other devices of the same type. The rule below
checks for exactly this and reports it as an "NVA0.VAR_OVLP" violation.
```tvf
NVA0.VAR_OVLP { @ Varactors / Tiles should not overlap
VARi AND > 1
}
```
There are other rules that are required or suggested by the DRM but that
simply don't have a good DRC rule, for example requiring tear-shaped
geometries on the RDL layer near flip-chip balls. An approximate rule check
that catches the more obvious issues is worth including.
```tvf
NVA0.RDL.TEAR { @ Shape of RDL near pad: tear shape required
X0 = EXT RDL <1 ABUT <125 INTERSECTING ONLY REGION
X1 = EXT RDL <1 ABUT <180 INTERSECTING ONLY REGION
X2 = INT RDL <1 ABUT <180 INTERSECTING ONLY REGION
X3 = EXPAND EDGE (X1 NOT TOUCH INSIDE EDGE X0) BY 1 EXTEND BY 50
X4 = EXPAND EDGE (X2 NOT TOUCH INSIDE EDGE X0) BY 1 EXTEND BY 50
(X3 AND X0) OR (X4 AND X0)
}
```
The above rule finds regions with acute angles (internal and external)
near regions with obtuse angles where the latter is generally the rounded
RDL landing pad for the ball.
## Layer / Device Aliasing
Layer aliasing or remapping is another way to add indirection to the DRC rule
deck, allowing you to run both your own checks and device recognition
without interfering with the standard flow.
```tvf
LAYER MAP 107 DATATYPE 0 746
```
In the above scenario we allocated an additional layer in the Cadence design
environment to designate inductor recognition beside the standard inductors.
This was required since the standard inductors also implied metallization-free
regions, which is not always acceptable. By adding this layer and mapping it
to the same inductor recognition data type during LVS, these inductors are
still recognized but do not trigger the associated metallization rules during
DRC.
## Adding New Device Primitives
Another useful bit of know-how is the process behind device recognition when
you run the Calibre LVS process. The code snippets below walk through
defining a new device for LVS recognition that is bound together with a spice
definition to produce the extracted netlist. This should allow you to define
custom layers and custom devices on those layers while still getting
LVS clean at the end of the day. This example defines a custom resistor.
```tvf
LAYER RESLYR 450
LAYER MAP 215 DATATYPE 21 450 // layer to form memresistor
XTERM = RESLYR AND M4
XCDTR = RESLYR NOT M4
CONNECT metal4 XTERM
DEVICE XDEVICE XCDTR XTERM(PORT1) XTERM(PORT2) netlist model xdevice
```
The section of code above contains LVS rule statements that first define a
named layer `RESLYR` and then map a data type onto that layer. The data type
should correspond to whatever new layer you used to define the device in the
layout editor. We then define the terminals of this device wherever this layer
overlaps and connects with metal 4; otherwise it forms the resistive section.
Finally you specify a device in terms of the relevant layers and how they map
to the actual model.
Notice the device maps to a netlist model called `xdevice` with named ports.
This model is defined below. Note that we haven't extracted any parameters,
but this could be done in the rule deck definition. Also note that here
we specify the mapping of this `xdevice` to a cell in the design library.
```lisp
(xdevice
(DEVICE_LIB DEVICE_CELL DEVICE_VIEW)
(
(PORT1 PIN1)
(PORT2 PIN2)
)
(
(nil multi 1)
(nil m 1)
)
)
```
Finally, a spice definition must also be included in order to run the netlist
comparison, assuming the cell in the design library correctly netlists with an
auCDL view. The spice definition below lets both the layout and schematic
perform a black-boxed comparison of this new resistor.
```spice
.SUBCKT xdevice PORT1 PORT2
.ENDS
```
## Extending Connectivity Layers
On some occasions, certain extra layers are defined in the DRC deck but not
in the LVS deck; for example, optional metallization layers for your process.
Adding connectivity is rather straightforward. The main challenge is to
choose the correct data type mappings so as to avoid conflicts with the
original rule statements.
```tvf
LAYER PM1i 5001
LAYER MAP 5 DATATYPE 1 5001
LAYER Cu_PPIi 7410
LAYER MAP 74 DATATYPE 10 7410
LAYER UBM 170
LAYER MAP 170 DATATYPE 0 170
LAYER PM2i 5002
LAYER MAP 5 DATATYPE 2 5002
VIA8 = COPY CB2
metal9 = COPY Cu_PPIi
VIA9 = COPY PM2i
metal10 = COPY UBM
```
Once these layers are defined, we can go ahead and specify the order of
connectivity. Notice that we can't directly operate on / manipulate layer
definitions, so simply running a `COPY` statement resolves this. Below we
also see that adding port and text label connectivity for the relevant
layers is needed for your pins to connect.
``` tvf
CONNECT metal9 metal8 BY VIA8
CONNECT metal10 metal9 BY VIA9
TEXT LAYER 140 ATTACH 140 metal9
PORT LAYER TEXT 140
TEXT LAYER 141 ATTACH 141 metal10
PORT LAYER TEXT 141
TEXT LAYER 125 ATTACH 125 metal10
PORT LAYER TEXT 125
```
## Hot-fixing LVS Comparison
Finally, the rule statements below are global deck adjustments. They are
for the most part self-explanatory, except for `CULL`, which removes
empty spice sub-circuits that are identified by a hierarchical LVS run
but do not actually contain active devices (i.e. a dummy digital filler cell).
```tvf
LVS SPICE CULL PRIMITIVE SUBCIRCUITS YES
VIRTUAL CONNECT NAME "POWER"
TEXT "NET_NAME" LOCX LOCY DATATYPE
LAYOUT RENAME TEXT "/DATA\\[(.*)\\]/DATA<-1>/M-"
```

View File

@ -0,0 +1,11 @@
---
title: "Configure Nginx 🧩"
date: 2021-10-31T15:08:33+01:00
draft: false
toc: false
images:
tags:
- untagged
---
This is a test

View File

@ -0,0 +1,54 @@
---
title: "Domain Setup ☄💻"
date: 2021-09-19T17:14:03+02:00
draft: false
---
## DNS Records
The main part of setting up a domain is configuring your
[DNS Records](https://en.wikipedia.org/wiki/List_of_DNS_record_types). These
basically dictate how your physical machine address is mapped to your
human-readable service names. I mainly use this domain for web services
together with self-hosted email, so I have outlined below the relevant
records that these services require.
| Name | Description |
| ----------------------------------------------- | ----------------------- |
| **A** Address record | Physical IPv4 address associated with this domain |
| **CNAME** Canonical name record | Alias name for an A record name. This is generally for subdomains (i.e. other.domain.xyz as an alias for domain.xyz, both served by the same machine) |
| **CAA** Certification Authority Authorization | Constrains the acceptable CAs for a host/domain |
| **DS** Delegation signer | The record used to identify the DNSSEC signing key of a delegated zone |
| **MX** Mail exchange record | Maps a domain name to a list of message transfer agents for that domain |
| **TXT** Text record | Carries machine-readable data, such as specified by RFC 1464, opportunistic encryption, Sender Policy Framework, DKIM, DMARC, DNS-SD, etc. |
The essential records for web services are the A and CNAME records, which
enable correct name lookup from outside your private network. Nowadays SSL
should be part of any setup, so the certification authority you use should be
set in the CAA record. Most likely this will be `letsencrypt.org`, which
provides SSL certificate signing free of charge, securing your traffic to
some extent. In combination, there should be a DS record that presents the
hash of your zone's DNSSEC signing key and allows you to set up DNSSEC on
your domain.
The other records are required for secure email transfer. First you need the
equivalent of a name record: the MX record, which should point to another A
record and may or may not be the same machine / physical address as the one
hosting your web services. Signing your email, similar to SSL encryption,
should be an essential part of your setup. An SMTP setup with postfix
can do so by using [openDKIM](http://www.opendkim.org/). This will similarly
require you to provide your public signing key as a TXT record.
```bash
"v=DKIM1;k=rsa;p=${key_part1}"
"${key_part2}"
```
The TXT record will look something like the above statement. There are
unfortunately some inconveniences when using RSA with high entropy, which
yields a long public key. You need to break this key up into multiple
strings, which the `opendkim` tool may or may not do by default, as there is
a maximum character length for each TXT entry element. As long as no
semi-colons are inserted, this should just work as expected.
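As a rough sketch of that splitting step (the helper name is my own; 255
characters is the limit for a single character-string within a TXT record),
breaking the value into quoted chunks could look like:

```python
def chunk_txt_value(value, limit=255):
    """Split a long TXT record value into quoted strings of at most
    `limit` characters each, joined as a single record entry."""
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
    return " ".join(f'"{chunk}"' for chunk in chunks)

# e.g. chunk_txt_value("v=DKIM1;k=rsa;p=" + public_key_base64)
```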

View File

@ -0,0 +1,19 @@
---
title: "Hugo Integration 🧭"
date: 2021-10-30T15:42:22+02:00
draft: true
toc: false
images:
tags:
- untagged
---
## This is Work in Progress
The hope here is that we can call a predefined go procedure that parses
some section of markdown source code and instantiates the corresponding svg file
under our static folder that is then referenced.
``` go
{{/* a comment */}}
```

View File

@ -0,0 +1,122 @@
---
title: "Unified Modelling Language 🐬"
date: 2021-10-30T15:42:47+02:00
draft: false
toc: false
images:
tags:
- svg
- uml
- code
---
## Mermaid CLI
[Mermaid](https://mermaid-js.github.io/mermaid) is a JS based diagram and
charting tool which aspires to generate diagrams in a markdown fashion. The
main advantage here is that Mermaid is well integrated into quite a few
editing and content management
[packages](https://mermaid-js.github.io/mermaid/#/./integrations).
There is a command-line node package that installs in both Linux and WSL
environments. You do need Node version 10+, so installing under
[Windows](https://docs.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-wsl)
takes a few extra steps in order to get the latest version.
```bash
npm install @mermaid-js/mermaid-cli
```
Additionally, this package sandboxes an instance of chromium, which doesn't
operate correctly under WSL version 1. Upgrading to WSL version 2 will allow
you to run the following example using
`mmdc -i input.mmd -o output.svg -c config.json`.
```text
graph LR
S[fa:fa-dot] --> A
A{foo}
A --> B(bar)
A --> C(baz)
style S fill:none, stroke:none
```
This example generates the diagram shown below.
![example_mermaid.svg](/images/example_mermaid.svg)
There are four base themes: dark, default, forest, neutral. Additional
[customization](https://mermaid-js.github.io/mermaid/#/theming) is possible.
The `config.json` shown below sets similar styling as
[before]({{< relref "building-svg.md" >}} "building svg") with the other
command-line tools.
```json
{
"theme": "neutral",
"themeVariables": {
"fontFamily":"monospace",
"fontSize":"14px",
"classText" : "white",
"nodeBorder" : "white",
"nodeTextColor" : "white",
"lineColor" : "white",
"arrowheadColor" : "white",
"mainBkg" : "none"
}
}
```
## UML diagrams
Mermaid is quite a bit more versatile and is geared towards making structured
diagrams of classes and inter-related structures. For example, the UML diagram
below presents the overall composition of
[pyviewer]({{< relref "pyside.md" >}} "pyside"), which is a simple
image-browsing utility for compressed archives.
![example_pyviewer.svg](/images/example_pyviewer.svg)
This does quite well at illustrating how classes are composed and which
methods are available at various scopes. It also helps with organizing and
structuring a code-base when there is a means to reason about it visually.
The source code for this diagram is shown below for reference.
```text
classDiagram
class ApplicationWindow{
int[2] layout
int max_count
navigate(keyPress)
update()
}
class PyViewer{
signal image_changed
load_file_map(str path)
load_archive(int index)
set_max_count(int max_count)
}
class ArchiveLoader{
generate_map(str path)
extract_current_index()
check_media()
}
class ArchiveManager{
int max_count
Qbyte[] images
load_archive(str archive_path)
}
class TagManager{
int index
dict media_map
dict tag_filter
list tag_history
update_filter(str tag, bool state)
undo_last_filter()
adjust_index(int change)
tag_at(int index)
set_index(str tag_name)
}
ArchiveLoader <|-- TagManager
ArchiveLoader <|-- ArchiveManager
PyViewer <-- ArchiveLoader
PyViewer <-- ApplicationWindow
```

View File

@ -0,0 +1,23 @@
---
title: "Mile Stones 📚"
date: 2021-08-28T16:13:52+02:00
draft: false
tags:
- content
- plan
---
This is a list of topics that I may include at some point in time:
1. SSL and NGINX setup guide
2. Postfix setup guide
3. Danbooru setup guide
4. pyside notes from pyviewer project
5. other fun stuff
I also want to share some of the IC design work building up a technical profile:
1. Chip gallery
2. Academic topics
3. ADC stuff
4. Time domain processing
5. Skill and Cadence utilities
6. Design flow and scripts

View File

@ -0,0 +1,83 @@
---
title: "Binding QML with Python: PyViewer 👾"
date: 2021-08-29T12:53:19+02:00
draft: false
toc: true
tags:
- python
- qml
- gui
- code
---
[PyViewer](https://git.leene.dev/lieuwe/pyviewer) is an example project which
implements a simple image browser / viewer in a scrollable grid array. The
main objective here was using QML to define a graphical layout and bind it to
a python code-base. Note that this code base is compatible with both Pyside2
and Pyside6; while Pyside6 is preferred, it is not readily available on all
platforms. Running with Pyside6 instead only requires omitting the qml
library version requirements.
Please take a look at the git repository for exact implementation details. A
brief summary of this interaction is presented below.
## Emitting QML Calls
Creating a `QObject` and adding `PySide2.QtCore.Slot` decorators to its methods
will allow a python object to be added to the qml context as a referenceable
object. For example here we add "viewer" to the qml context which is a
"PyViewer" python object.
```Python
pyviewer = PyViewer()
engine.rootContext().setContextProperty("viewer", pyviewer)
```
This way we can call the object's python procedure "update_tag_filter" from
within the QML script as follows:
```QML
viewer.update_tag_filter(false);
```
Further, using the `PySide2.QtCore.Property` decorator allows us to expose
state in our python object and manipulate it as if it were a qml object.
```QML
viewer.path.split("::")
```
## Emitting Python Calls
Once this context is working we can create a `PySide2.QtCore.Signal` object to
call QML methods from within the python context. A python procedure could then
"emit" this signal and thereby prompt any connected qml methods.
```python
self.path_changed.emit()
```
In the qml context we can connect signals from the python "viewer" object
to a qml function call, "swipe.update_paths" for example.
```qml
viewer.path_changed.connect(swipe.update_paths)
```
## Downside
Debugging and designing QML in this environment is limited since the pyside
python library does not support all available QML/QT6 functionality. In most
cases you are looking at C++ Qt documentation for how the pyside data-types
and methods are supposed to behave without good hinting.
Also the variety in data types that can be passed from one context to the other
is constrained although in this case I was able to manage with strings and byte
objects.
## Other Notes: TODO
```python
ImageCms.profileToProfile(img, 'USWebCoatedSWOP.icc',
'sRGB Color Space Profile.icm', renderingIntent=0, outputMode='RGB')
```

View File

@ -0,0 +1,100 @@
---
title: "Python Urllib ⬇📜"
date: 2021-10-26T20:02:07+02:00
draft: false
toc: true
tags:
- python
- scraping
- code
---
I had to pull some metadata from a media database, and since this tends to
be my go-to setup when I use urllib with python, I thought I would make a
quick note regarding cookies and making POST/GET requests accordingly.
## Setting up an HTTP session
The urllib python library allows you to set global session parameters by
calling the `build_opener` and `install_opener` methods accordingly. If you
make HTTP requests with empty headers or little to no session data, any
script will tend to be blocked where robots are not welcome. While setting
these parameters mitigates such an issue, it is advised to be a responsible
end-user.
```python
mycookies = http.cookiejar.MozillaCookieJar()
mycookies.load("cookies.txt")
opener = urllib.request.build_opener(
urllib.request.HTTPCookieProcessor(mycookies)
)
opener.addheaders = [
(
"User-agent",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36"
+ "(KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
),
(
"Accept",
"text/html,application/xhtml+xml,application/xml;q=0.9,"
+ "image/avif,image/webp,image/apng,*/*;q=0.8,"
+ "application/signed-exchange;v=b3;q=0.9",
),
]
urllib.request.install_opener(opener)
```
The above code snippet sets a user agent and what kind of data the session
is willing to accept. This is generic and simply taken from one of my own
browser sessions. Additionally I load in `cookies.txt` which are the session
cookies that I exported to a file for a given domain from my browser.
## HTTP POST request
Web based APIs will have various methods for interacting but POST requests with
JSON type input/output and occasionally XML but given python's native support
for JSON this is generally the way to do things.
``` python
url = f"{host_name}/api.php"
data = json.dumps(post_data).encode()
req = urllib.request.Request(url, data=data)
meta = urllib.request.urlopen(req)
return json.loads(meta.read())
```
The above code snippet prepares a `req` object for particular `host_name` and
`post_data` which is a dictionary that is encoded to a JSON string. Calling
urlopen on this request will perform a POST request accordingly where if
all works as expected should return a JSON string that is mapped to a python
collection.
In the scenario where the data is returned as an XML string / document, there
is an `xmltodict` python library that will return a python collection. The
downside here is that XML tends to have quite a deep hierarchy, which is
difficult to appreciate unless we get into large XML data structures that can
be queried. For reference, the XML parsing will look something like this:
```python
xmltodict.parse(meta.read())
```
## HTTP GET request with BeautifulSoup
Performing GET requests is usually much simpler, since you just need
to determine the appropriate url. Here I include an example where the
`BeautifulSoup` python library is used to parse the HTTP response and
search through any links within the response that match a regular expression.
```python
query_url = f"{host_name}/?f_search={tag_name}"
resp_data = urllib.request.urlopen(query_url)
resp_soup = BeautifulSoup(resp_data)
return [ link["href"]
for link in resp_soup.find_all("a", href=True)
if re.match( f"{host_name}/g/([0-9a-z]+)/([0-9a-z]+)", link["href"] )
]
```
This is probably the most common use case for the `BeautifulSoup` library,
and it is far more effective than sifting through raw HTML data.

View File

@ -0,0 +1,83 @@
---
title: "Setting Up a New Site 🌃"
date: 2021-08-24T10:24:27+02:00
draft: false
toc: true
tags:
- website
- config
- hugo
- git
---
Previously I tried using Grav with the intention of serving a simple website,
as it is quite easy to set up and the interface seemed quite nice. However,
the editing environment didn't feel good, and after googling around a bit,
hugo seemed a lot more appealing. It renders from markdown with some html/css
config files and can serve content statically or dynamically without
superfluous features.
So far it looks like I will stick with hugo, and in any case a markdown
source is highly portable.
## Building Hugo
Hugo is actually provided by the ubuntu and centos repositories, but building
from source is equally trivial. I went ahead and built hugo from the main
repository using `go version go1.15.14 linux/amd64` and placed the binary in
`/usr/local/bin`.
```bash
git clone https://github.com/gohugoio/hugo.git
cd hugo
go install
```
I started off with the hermit theme and initialized a repository for this
site and the theme to track changes separately. I will probably adjust the
colour and type-setting to some extent, then eventually adjust the actual
layouts and templates as we go.
## Git filter
Currently I have two branches set up: `master`, which is deployed statically
on `leene.dev`, and `dev`, which is just for local development as I try out
different things. I set up a clean-smudge git filter to manage deployment on
a site basis:
``` toml
[filter "hostmgmt"]
smudge = sed 's@\\$HOSTNAME\\$@http://localhost@'
clean = sed 's@http://localhost@\\$HOSTNAME\\$@'
```
Note that if we make a change to just the filter, we can re-apply it by
resetting our index and checking out HEAD again.
``` bash
rm .git/index
git checkout HEAD -- "$(git rev-parse --show-toplevel)"
```
Looking closer at the hugo documentation, however, it would be better to
prepare similar development and production configurations. We'll see if these
can evaluate system environment variables. Alternatively, you can also
specify the server parameters directly.
``` bash
hugo server --bind=0.0.0.0 --baseURL=http://zathura --port=1313
```
## Planned features and content
I usually document most of my system administration work, so that will be a
large part of the content here, but I am planning to include some technical
and non-technical topics as I see fit.
First I want to setup a clean flow of generating and serving svg content for
well formatted illustrations. Ideally the source code is contained in the
markdown and evaluated by hugo calling some external processing components but
we will see how that works. I will make some milestones as part of the
repository.
Secondly, I want to try a few style changes for the hermit template. Most of
it is pretty good, but there are a few things I'd rather customise, such as
the main page and the footer.

View File

@ -0,0 +1,69 @@
---
title: "Spice Monkey 💻🐒"
date: 2021-10-29T18:54:32+02:00
draft: false
toc: false
images:
tags:
- spice
- code
- verification
---
## Port Order Reshuffling
The shell functions below re-order the ports of a `subckt` definition in a
CDL netlist: bus bits such as `DATA[3]` or `DATA<3>` are sorted by name and
then by descending index, and mixed bracket delimiters are flagged as an
error.
```bash
# Sort a subckt port list: bus bits NAME[3] / NAME<3> are grouped by name
# (ascending) and ordered by index (descending); the first two tokens
# (keyword and cell name) pass through untouched.
function getSortedOrder() {
    local SOURCE=""
    local SORTED=""
    read -a SOURCE <<< "$1"
    SORTED="${SOURCE[@]}"
    if [ -z "${SORTED//*\[*}" ] ; then  # square-bracket delimiters
        SORTED=($(echo "${SOURCE[@]:2}" | tr " " "\n" | sed -r "s/\[([0-9]+)\]/ \1 /g" \
            | sort -k 1,1 -k2,2nr | sed -r "s/ ([0-9]+) /\[\1\]/g" ))
    else                                # angle-bracket delimiters
        SORTED=($(echo "${SOURCE[@]:2}" | tr " " "\n" | sed -r "s/<([0-9]+)>/ \1 /g" \
            | sort -k 1,1 -k2,2nr | sed -r "s/ ([0-9]+) /<\1>/g" ))
    fi
    echo "${SOURCE[@]:0:2} ${SORTED[@]}"
}

# Extract the port order of TARGET from CDL_FILE, sort it, and emit a new
# "${TARGET}.cdl" with the re-ordered header; warn on mixed delimiters.
function updatePortOrder() {
    local TARGET="$1"
    local CDL_FILE="$2"
    local PORTORDER="$(awk -v target="subckt ${TARGET} " -f "catch.awk" "$CDL_FILE")"
    local PORTREF=$(getSortedOrder "$PORTORDER")
    local SWPDELIMITER=""
    echo $TARGET
    if [ -z "${PORTREF//*\[*}" ] ; then SWPDELIMITER="TRUE" ; fi
    awk -v target="subckt ${TARGET} " -v release="$PORTREF" -v swpdelim="$SWPDELIMITER" \
        -f "release.awk" "$CDL_FILE" > "${TARGET}.cdl"
    [ ! -z "$(grep -m 1 "\[" "${TARGET}.cdl")" ] && [ ! -z "$(grep -m 1 "<" "${TARGET}.cdl")" ] \
        && echo "Error $CDL_FILE uses mixed delimiters"
}
```
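For reference, the sorting performed by `getSortedOrder` can be sketched in
Python (a hypothetical helper, not part of the actual flow): the first two
tokens pass through while the remaining ports sort by name ascending and bus
index descending.

```python
import re

def sorted_port_order(tokens):
    """Mimic getSortedOrder: keep the first two tokens, sort the rest by
    name (ascending) then bus index (descending)."""
    head, ports = tokens[:2], tokens[2:]

    def key(port):
        match = re.match(r"(.*)[\[<](\d+)[\]>]$", port)
        if match:  # bus bit such as DATA[3] or DATA<3>
            return (match.group(1), -int(match.group(2)))
        return (port, 0)  # scalar port: sort by name only

    return head + sorted(ports, key=key)
```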
```awk
# catch.awk: print the full port list of the matching "subckt" header,
# including any "+" continuation lines, then stop.
BEGIN{ hold = ""; IGNORECASE = 1 }
NF {
    if( $1 == "+" && hold != "")
        { for(i=2;i<=NF;i++) hold=hold " " $i }   # fold continuation lines
    else if( hold != "") { print hold; hold=""; exit }  # header complete
};
$0 ~ target { hold = $0 };   # latch the line that starts the subckt
```
```awk
# release.awk: replace the matching subckt header with the re-sorted port
# list in "release", optionally rewriting <> delimiters as [].
BEGIN{output="";hold="";IGNORECASE=1};
NF{if($1!="+")hold=""}          # any non-continuation line ends the header
$0~target{
    hold=$0
    n=split(release,ports," ")
    for(i=n;i>0;i--){
        if(swpdelim!=""){       # normalize delimiters when requested
            gsub("<","[",ports[i])
            gsub(">","]",ports[i])}
        output=ports[i]" "output}
    print output}
NF{if(hold=="")print $0}        # pass everything else through untouched
```