WIP: Major content update

This commit is contained in:
2022-06-13 17:39:51 +02:00
parent 3ca7604967
commit 01d0a2c796
352 changed files with 84802 additions and 58 deletions


@ -0,0 +1,161 @@
---
title: "Image Vectorization 🖇🍮"
date: 2021-12-08T19:26:46+01:00
draft: false
toc: true
tags:
- svg
- python
- code
- image
---
Automated vectorization and upscaling is useful in many scenarios, particularly
for drawn art that can be segmented and is generally structured using strokes
and gradients. Here I will outline a methodology based on structural analysis
and direct regression methods, then evaluate error metrics for fine-tuning by
comparing the output with non-vectorized up-scalers.
## Observations
Regarding image segmentation, I suspect the most common approach is directly
clustering the entire image using colour and position. In most scenarios this
feature-space will be separable and this is a well understood problem
statement. The result still poses some issues: first, cluster enclosure is
difficult to resolve (convexity is not guaranteed); second, gradient
components weaken the separability of the data. In addition we may need to
sub-sample the image since clustering is computationally expensive given the
mega-samples of image data.
## Outline
0. Coarse Analysis
    - Cluster based on colour-space and |∇×F| |∇·F| normalized components
    - Present image colour and segmentation complexity
    - Partition image in delta space using histogram method
1. Pre-processing and edge thresholding
    - Compute the YCbCr equivalent representation
    - Map the input colour space based on k-means / SVM for maximum cluster separation
    - Compute edges in images and fit spline segments with grouping
2. Fine image segmentation
    - Use edges to initialize segments and segment regions based on colour deltas
    - Complete segmentation by filling image
    - Regression to fit colour for each segment
3. Image restructuring and grouping
    - Simplify structures by creating region hierarchy
    - SVG 2.0 supports mesh-gradients / Bezier surface composition
    - Detect regular patterns with auto-correlation / parameter comparison
4. Error evaluation and recalibration
    - Use upscaled reference to evaluate error
    - Identify segments that need to be restructured with more detail
## Action Items
- Colour surface fitting
- Given a set of samples, fit an appropriate colour gradient
- Image normalization and pre-processing
- Edge composition and image segmentation
- Define a model for segments of the image
- Define a model for closed and open edges
- Evaluate segment coverage of the image
- Heuristics for image hierarchy and optimizations
## Pre-processing Engine
Currently histogram binning has proven to be very effective for generating
initial clusters since it scales well with image size. This allows us to
quickly gauge the complexity of an image in terms of content separability.
These regions will be used to partition the image as edges are extracted
before more accurate mapping is performed.
The main challenge here is colour gradients that wash out what should be
obvious centroids. Analysing the samples in batches can prevent this to some
extent but a more robust approach is needed. We could use a localized grouping
technique, but accuracy may not be that important for this pre-processing step.
Another technique is to first cluster the image using the histogram of
derivative components, followed by sub-classing a histogram for each gradient
cluster.
This idea for histogram-binning is surprisingly efficient for artificial
images where the colour palette is rich in features. A binary search for
parameterizing the local maxima detection will very quickly segment a wide
variety of images into 10 - 30 classifications.
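To sketch the idea (a hypothetical single-channel helper, not the full implementation): bin the channel values, take the local maxima of the histogram as class centres, and label each pixel by its nearest centre.

``` python
import numpy as np

def histogram_classes(channel: np.ndarray, bins: int = 64):
    """Cluster one image channel on the local maxima of its histogram."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    # non-empty bins that dominate their neighbours become class centres
    peaks = np.array([
        i for i in range(bins)
        if hist[i] > 0
        and (i == 0 or hist[i] >= hist[i - 1])
        and (i == bins - 1 or hist[i] > hist[i + 1])
    ])
    centres = (edges[peaks] + edges[peaks + 1]) / 2
    # label every pixel with the index of its nearest centre
    labels = np.argmin(np.abs(channel[..., None] - centres), axis=-1)
    return labels, centres
```

The bin count and minimum peak height are exactly the parameters the binary search mentioned above would tune.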
## Edge extraction
At some point we will want to construct a model representing regions and shapes.
The principal component here is identifying the edges segmenting the image. Edge
detection is relatively straightforward as we need only look for extrema in the
derivative components. In most scenarios this is actually quite noisy and
it is not obvious how we should threshold for what is and what is not an edge.
Here we use the histogram-based clustering result for edge detection:
region-to-region transitions are discretized and no adaptive threshold is
required.
There will unavoidably be noisy regions where we see this boundary being spread
out, or possibly just a select few pixels appearing in some sub-section due to
the clustering process. This can mostly be removed with a median filter if
necessary.
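A minimal sketch of such a clean-up pass over an integer label map (assuming a 2-D array of class labels):

``` python
import numpy as np

def median3x3(labels: np.ndarray) -> np.ndarray:
    """Replace each label by the median of its 3x3 neighbourhood."""
    padded = np.pad(labels, 1, mode="edge")
    h, w = labels.shape
    # stack the nine shifted views and take the per-pixel median
    stacked = np.stack([
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ])
    return np.median(stacked, axis=0).astype(labels.dtype)
```

Isolated mislabelled pixels are voted out by their neighbourhood while larger region boundaries survive.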
If the initial segmentation is generated based on k-means in the colour space,
two families of edges will be detected along segments: hard and soft edges.
Hard edges will correspond to the intended edges seen in the image whereas
soft edges will arise due to the clustering technique. We can classify these
two families by looking at the norm of the derivative component along such an
edge. There will be more than one way to assess correctness here, but the
significance is that soft edges present boundary conditions during colour
mapping while hard edges do not. Otherwise visual artefacts will arise at the
interface of two segments that was originally a smooth transition.
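A sketch of this classification (hypothetical helper; assumes a precomputed gradient-magnitude image and the pixel coordinates of one detected edge):

``` python
import numpy as np

def classify_edge(grad_mag: np.ndarray, edge_pixels, threshold: float) -> str:
    """Label an edge 'hard' or 'soft' from the mean gradient norm along it."""
    rows, cols = zip(*edge_pixels)
    mean_norm = float(grad_mag[rows, cols].mean())
    # hard edges carry a strong derivative; soft edges are clustering artefacts
    return "hard" if mean_norm > threshold else "soft"
```

The threshold itself could come from the coarse histogram analysis rather than being fixed globally.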
## Structural Variability
While we are targeting atypical images for vectorization, it is obvious that
'sharpness' in the final result depends on a subjective style that is difficult
to assess in terms of simple regression or interpolation. This problem however
is central to upscaling algorithms, so the methodology here will be that an
external upscaling tool guides the vectorization process. For example,
vectorizing a pixel-art image yields better results with 'nearest-neighbour'
methods as opposed to Lanczos resampling.
## Regression over SVG colour surfaces
The SVG standard supports three methods for specifying colour profiles or
gradients: flat, linear, and radial. While there are more advanced mechanisms
through embedding or meshing multiple components, the aforementioned three
readily allow us to fit a first order colour contour through linear
regression. This will be our first objective for parameterizing
the colour for segments in our image. Another thing to note is that the gradient
can be transformed after being parameterized. This means that a circular
gradient can be transformed to realize a family of elliptical gradients.
Obviously this will not accommodate all colour contours that we find in images,
but in such scenarios we may adopt piece-wise approximations or more accurate
masking of each component using the alpha channel. At some point we should
also be able to resolve mixtures and decompose the contour from a non-linear
or higher-order surface into multiple simpler contours. Again note that
advanced colour profiles are not well supported, so composition through these
basic elements will yield the best compatibility.
Using linear regression here with a second order polynomial kernel is a very
efficient method for directly quantifying the focal point of the colour
gradient if there is one.
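For instance, restricting the quadratic term to a radially symmetric paraboloid, an ordinary least-squares fit directly yields the focal point; the helper below is a sketch with hypothetical names, not the final fitting code.

``` python
import numpy as np

def fit_focal_point(xs, ys, values):
    """Fit v ≈ a(x² + y²) + bx + cy + d and return the extremum location."""
    A = np.stack([xs ** 2 + ys ** 2, xs, ys, np.ones_like(xs)], axis=1)
    a, b, c, d = np.linalg.lstsq(A, values, rcond=None)[0]
    # the gradient of the fitted surface vanishes at the focal point
    return -b / (2 * a), -c / (2 * a)
```

For a radial gradient the recovered focal point can then seed the SVG `fx`/`fy` parameters.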
## Contour estimation
My initial attempt to estimate the contour given a set of points was based on
using the convex-hull and recursively enclosing the outline in more detail
by interpolating in between the current outline and finding the closest point
orthogonal to the outline. This yields a fast approximation of the
enclosing outline without many requirements on the set of points other than
having a dense outline. The drawback was that it is difficult to detect
incorrect interpolation and the outline is only resolved with pixel-level
precision. If we pre-process the collection of points such that they
represent detected edges at sub-pixel resolution, the latter drawback can be
addressed. Correctness or hypothesis testing could yield a more robust result
at the cost of increased complexity.
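For reference, the convex hull that seeds this refinement can be computed with Andrew's monotone chain in a few lines (plain 2-D point tuples, pure python):

``` python
def convex_hull(points):
    """Return the convex hull of 2-D points in counter-clockwise order."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    hull = []
    for seq in (pts, reversed(pts)):  # lower chain, then upper chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])
    return hull
```

Each refinement pass then inserts the nearest off-hull point orthogonal to the current outline segments.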


@ -87,7 +87,7 @@ added directly to KGT as a feature in future releases.
The final result is shown below.
{{< figure src="/images/posts/example_kgt.svg" title="example_kgt.svg" >}}
## Tabatkins Railroad Diagrams
@ -118,15 +118,29 @@ would look like this:
``` python
import railroad
with open("./posts/test.svg","w+") as file:
obj = railroad.Diagram("foo", railroad.Choice(0, "bar", "baz"), css=style)
obj.writeSvg(file.write)
```
The final result is shown below.
{{< figure src="/images/posts/example_trd.svg" title="example_trd.svg" >}}
Note that this figure is quite a bit more compact, but adding labels or
customizations outside the scope of the library will probably require quite a
bit of manual work. This could be a fun side project though.
## Using Hugo Short Codes
``` go
{< python-svg dest="/images/posts/test.svg" title="This is a python-svg example." >}
railroad.Diagram("foo", railroad.Choice(0, "bar", "baz"), css=style)
{< /python-svg >}
```
{{< python-svg dest="/images/posts/test.svg" title="This is a python-svg example." >}}
railroad.Diagram("foo", railroad.Choice(0, "bar", "baz"), css=style)
{{< /python-svg >}}


@ -1,9 +1,8 @@
---
title: "Setting up a NGINX configuration 🧩"
date: 2021-10-31T15:08:33+01:00
draft: true
toc: false
images:
tags:
- website
- config


@ -42,7 +42,7 @@ graph LR
This example generates the diagram show below.
{{< figure src="/images/posts/example_mermaid.svg" title="example_mermaid.svg" >}}
There are four base themes: dark, default, forest, neutral. Additional
[customization](https://mermaid-js.github.io/mermaid/#/theming) is possible.
@ -73,7 +73,7 @@ diagrams of classes and inter-related structures. For example the UML diagram be
[pyviewer]({{< relref "pyside.md" >}} "pyside") which is a simple image
browsing utility for compressed archives.
{{< figure src="/images/posts/example_pyviewer.svg" title="example_pyviewer.svg" >}}
This does quite well at illustrating how classes are composed and which methods
are available at various scopes. It also helps organizing and structuring a
@ -153,7 +153,7 @@ function main() {
esac
# echo "IN:${ARGS[1]} OUT:${ARGS[3]}"
mmdc ${ARGS[@]} &> /dev/null
mogrify -trim "${ARGS[3]}"
feh --reload 2 "${ARGS[3]}" &
sleep 0.1
inotifywait -qm --event modify --format '%w' "${ARGS[1]}" | \


@ -0,0 +1,38 @@
---
title: "My 2018 Setup"
date: 2021-08-12T10:24:27+02:00
draft: false
toc: true
tags:
- website
- about
---
I mainly use RHEL flavours of linux, having both CentOS and Fedora machines.
Most hosted services run on CentOS 8 at the moment although they are
approaching end-of-life. Overall the package repository for CentOS 7/8 is just
right: I rarely need to compile anything from source and packages are very
stable. I will eventually migrate to Fedora completely, which is where I
operate my development environment.
This is a list of my most used self-hosted services:
- Gitea: Git server with web interface for repository mirrors and personal repos
- Plex: multi-media hosting service for streaming movies and tv-shows
- NextCloud: Cloud storage for synchronizing and sharing files
- Cockpit: Web-based administration portal for managing linux boxes
- RoundCube: Web based email client
- Postfix/Dovecot: Email stack providing SMTP for my domain
- NGINX: HTTP server serving as proxy for internal web services
- Danbooru: Ruby-on-rails based image hosting and tagging service
There are several others that I have tried, but these really have been the
things I relied on the most in the past 5 years or so. The only thing missing
from this list is possibly the equivalent of a centralized LDAP service, but I
simply haven't had to manage more than a handful of users.
Currently I develop quite a few python utilities for scraping, labelling, and
managing media in an automated fashion. In part I am preparing data for one of
my long term projects, which is related to image classification based on
structural decomposition rather than textural features. The main idea here is
to analyse and extract structure in an image before performing in-depth analysis
such that said analysis is most specific to its context.


@ -64,18 +64,62 @@ to a qml function call "swipe.update_paths" for example.
viewer.path_changed.connect(swipe.update_paths)
```
## Example: passing images as binary data
For reference the code below outlines a simple example that loads an image from
a zip archive and makes the binary data available for QML to source. This
avoids the need for explicit file handles when generating or deflating images
that are needed for the QML front-end.
```python
from zipfile import ZipFile

class Archive(ZipFile):
    """Simple archive handler for loading data."""

    @property
    def binarydata(self) -> bytes:
        """Load file from archive by name."""
        with self.open(self.source_file, "r") as file:
            return file.read()
```
The example class above simply inherits from the zipfile standard library where
we read an image and store it as part of the `PyViewer` class shown below. This
class inherits from `QObject` such that the property is exposed to the qml
interface. In this case the `imageloader` is an `Archive` handler as shown
above.
```python
class PyViewer(QObject):
    """QObject for binding user interface to python backend."""

    @Property(QByteArray)
    def image(self) -> QByteArray:
        """Return an image at index."""
        return QByteArray(self.imageloader.binarydata).toBase64()
```
This setup allows a relatively clean call to the `viewer.image` property within
the QML context as shown below. Other data types such as `int`, `string`,
`float`, and booleans can be passed as expected without requiring the
QByteArray container.
```qml
Image {
    anchors.fill: parent
    fillMode: Image.PreserveAspectFit
    mipmap: true
    source: "data:image;base64," + viewer.image
}
```
## Downside
Debugging and designing QML in this environment is limited since the pyside
python library does not support all available QML/QT6 functionality. In most
cases you are looking at C++ Qt documentation for how the pyside data-types
and methods are supposed to behave without good hinting. Having developed
native C++/QML projects previously helps a lot. The main advantage here is
that QML source code / frameworks can be reused.
Also the variety in data types that can be passed from one context to the other
is constrained although in this case I was able to manage with strings and byte
objects.
## Other Notes:
```python
ImageCms.profileToProfile(img, 'USWebCoatedSWOP.icc',


@ -0,0 +1,21 @@
---
title: "Super Resolution 🧙‍♂️"
date: 2021-09-19T13:30:00+02:00
draft: true
toc: true
math: true
tags:
- upscaling
- image-processing
- anime
- python
---
WIP: this is an ongoing effort for super-resolving images given learned context
and Super-Resolution Using a Generative Adversarial Network (SRGAN). [^1]
$$ y_t = \beta_0 + \beta_1 x_t + \epsilon_t $$
now inline math \\( x + y \\) =]
[^1]: And that's the footnote.


@ -0,0 +1,75 @@
---
title: "Latex to Markdown"
date: 2022-04-28T13:42:40+02:00
draft: false
tags:
- markdown
- latex
- code
- python
- hugo
---
Recently I started porting some of my latex articles to markdown as they would
make a fine contribution to this website in a simpler format. Making a simple
parser in python isn't that bad and I could have used [Pandoc](https://pandoc.org/index.html),
but I wanted a particular format for rendering a hugo markdown page. So I
prepared several regex-based functions in python to dereference and construct
a hugo-compatible markdown file.
``` python3
class LatexFile:
    def __init__(self, src_file: Path):
        sys_path = path.abspath(src_file)
        src_dir = path.dirname(sys_path)
        src_file = path.basename(sys_path)
        self.tex_src = self.flatten_input("\\input{" + src_file + "}", src_dir)
        self.filter_tex(sys_path.replace(".tex", ".bbl"))

    def filter_tex(self, bbl_file: Path) -> None:
        """Default TEX filtering procedure."""
        self.strip_tex()
        self.preprocess()
        self.replace_references(bbl_file)
        self.replace_figures()
        self.replace_tables()
        self.replace_equations()
        self.replace_sections()
        self.postprocess()
```
The general process for converting a Latex document is outlined above. The
principle here is to create a flat text source which we then incrementally
format such that Latex components are translated correctly.
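Each `replace_*` step boils down to a regex substitution over the flattened source. As a sketch, a stand-alone version of the section handling (the real `replace_sections` is a method and handles more cases) could look like:

``` python3
import re

def replace_sections(tex_src: str) -> str:
    """Rewrite latex sectioning commands as hugo markdown headings."""
    tex_src = re.sub(r"\\section\*?\{([^}]*)\}", r"## \1", tex_src)
    tex_src = re.sub(r"\\subsection\*?\{([^}]*)\}", r"### \1", tex_src)
    return tex_src
```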
## Latex Components
In order to structure the python code I created several named-tuples for
self-contained Latex contexts such as figures, tables, equations, etc. Then,
by adding a `markdown` property, we can replace these sections with
hugo-friendly syntax using short-codes where appropriate.
``` python3
class Figure(NamedTuple):
    """Structured Figure Item."""

    span: Tuple[int, int]
    index: int
    files: List[str]
    caption: str
    label: str

    @property
    def markdown(self) -> str:
        """Markdown string for this figure."""
        fig_str = ""
        for file in self.files[:-1]:
            fig_str += "{{" + f'< figure src="{file}" width="500" >' + "}}\n"
        fig_str += (
            "{{"
            + f'< figure src="{self.files[-1] if self.files else ""}" title="Figure {self.index}: {self.caption}" width="500" >'
            + "}}\n"
        )
        return fig_str
```


@ -0,0 +1,271 @@
---
title: "Synthesizing Sinusoids"
date: 2022-05-17T13:17:04+02:00
draft: false
toc: true
math: true
tags:
- signal-processing
- delta-sigma-modulation
- digital-circuits
- python
---
Here I will go over a hardware-efficient digital technique for synthesizing a
high-fidelity sinusoidal tone for self-test and electrical characterization
purposes. This will be an application of several state-of-the-art hardware
techniques to minimize hardware complexity while readily
generating a precise tone with well over 100 dB of dynamic range. Furthermore
the resulting output bit-stream is delta-sigma modulated, enabling the use of a
low-complexity 4 bit digital-to-analogue converter that employs
dynamic-element-matching.
## Synthesizer
``` goat
+-----------------------+ 16b +--------------+ 16b +-------------+ 4b
| 32 bit Recursive | / | 1:16 | / | 3rd Order | /
| Discrete-Time +---+-->| Rotated CIC +---+-->| Delta-Sigma +---+--> Output Bitstream
| Sinusoidal Oscillator | / | Interpolator | / | Modulator | /
+-----------------------+ +--------------+ +-------------+
```
The overall system composition is illustrated above and consists of three
modules. The first module is a recursive digital oscillator that operates at
higher precision but a lower clock rate to generate the target test-tone. The
succeeding modules encode this high-resolution digital signal into a low
resolution digital bit-stream where the quantization noise is shaped towards
the high-frequency band and can then be filtered out in the analogue domain.
## Digital Oscillation
There are numerous all-digital methods for synthesizing a sinusoidal signal
precisely. The most challenging aspect here is the trigonometric functions that
are difficult to compute given limited hardware resources. A common approach to
avoid this is to use a look-up table representing `cos(x)` for mapping phase to
amplitude, but this generally requires a significant amount of memory.
Alternatively a recursive feedback mechanism can be used that will oscillate
with a known frequency and amplitude given a set of parameters. The latter
approach has negligible memory requirements but instead requires full-precision
multiplication. However, considering that we are required to perform delta-sigma
encoding at the output, this feedback mechanism can run at a reduced clock-rate,
allowing the multiplication to be performed in a pipelined fashion which is
considerably more affordable.
``` goat
- .-. .-.
.->| Σ +------*----->| Σ +---> Digital Sinusoid
| '-' | '-'
| ^ v ^
| | .-----. |
| | | z⁻¹ | '----- Offset
| | . '--+--'
| | /| |
| '+ |<---*
| K \| |
| ' v
| .-----.
'----------+ z⁻¹ |
'-----'
```
The biquad feedback configuration shown above is one of several oscillating
structures presented in [^1], with an equivalent python model presented below.
The idea here is to perform full-precision synthesis at 64 or 32 bit with a
pipelined multiplier such that this loop runs at 1/M times the modulator clock
speed, where M is the oversampling ratio chosen to optimize the multiplier
pipeline. In this case M=16 and we will be using 32 bit frequency precision.
``` python3
import numpy as np

class Resonator:
    def __init__(self, frequency: float = 0.1, amplitude: float = 0.5):
        K = 2 * np.cos(2 * np.pi * frequency)
        self.A = 0.0
        self.B = amplitude * np.sqrt(2.0 - K)
        self.K = K

    def update(self) -> float:
        self.A, self.B = (self.A * self.K - self.B, self.A)
        return self.A
```
The coefficient K determines the frequency of oscillation as a ratio relative to
the operating clock speed: using \\( K = 2 \cos( 2 \pi \cdot freq )\\), the
oscillation occurs at \\( freq \cdot fclk \\). The initial condition of the two
registers determines the oscillation amplitude. Setting the first register
to zero and the second to \\( A \sqrt{2 - K} \\) will yield an amplitude of
\\(A\\) around zero. We can then offset this signal to specify the level around
which the tone oscillates.
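As a quick numerical check of this amplitude relation, restating the recurrence directly (example parameters: freq = 0.01, amplitude = 0.5):

``` python3
import numpy as np

freq, amplitude = 0.01, 0.5
K = 2 * np.cos(2 * np.pi * freq)
A, B = 0.0, amplitude * np.sqrt(2.0 - K)  # initial conditions from the text
samples = []
for _ in range(1000):
    A, B = A * K - B, A  # biquad oscillator update
    samples.append(A)
# the oscillation peaks match the requested amplitude
assert abs(max(samples) - amplitude) < 1e-3
assert abs(min(samples) + amplitude) < 1e-3
```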
## Band-Select Interpolation
The main drawback of synthesizing the sinusoid at a fractional clock rate
is that we must take care of the aliased components when we increase the
data-rate. Fortunately there is a family of filters, known as cascaded
integrator-comb (CIC) filters[^2], that are extremely efficient at up-sampling
a signal while rejecting the aliasing components. These filters consist of
several simple accumulators and differentiators that can be configured to
reject the aliasing components.
$$ H(z) = \left( \frac{ 1 - z^{-M} }{ 1 - z^{-1} } \right)^N $$
The transfer function of such a filter is formulated above. This shows that a
CIC structure of order \\( N \\) operating at an oversampling ratio
\\( M \\) will distribute M zeros uniformly around the unit circle. This
completely removes any DC components that end up at the aliasing tones at
multiples of \\( fclk / M\\). However we know a priori that we will introduce
aliasing components at integer multiples of the input tone when up-sampling:
\\( freq \cdot fclk / M \\). Making a slight modification to this structure
as discussed in [^3] allows us to further optimize a second-order CIC filter to
specifically reject these components instead.
$$ H(z) = \frac{ 1 - K \cdot z^{-M} + z^{-2M} }{ 1 - K_M z^{-1} + z^{-2} } $$
Notice that the coefficient K from the resonator structure is reused here and
we introduce a new scaling coefficient \\(K_M = 2 \cos(2 \pi \cdot freq / M )\\),
which we will approximate by Taylor expansion to avoid the multiplication
requirement as this factor does not require high precision. Again a python
implementation is shown below for reference.
``` python3
import numpy as np

class Interpolator:
    def __init__(self, frequency: float = 0.1, osr: int = 32):
        K = 2 * np.cos(2 * np.pi * frequency)
        KM = 2 * np.cos(2 * np.pi * frequency / osr)
        self.fir_coef = np.array([1, -K, 1])  # FIR coefficients
        self.iir_coef = np.array([-KM, 1])    # IIR coefficients
        self.comb_integrator = np.zeros((2,), dtype=float)
        self.comb_decimator = np.zeros((3,), dtype=float)
        self.osr = osr
        self.count = 0

    def update(self, new_val: float) -> float:
        self.comb_integrator = np.append(
            np.dot(self.fir_coef, self.comb_decimator)
            + np.dot(-self.iir_coef, self.comb_integrator),
            self.comb_integrator[:-1],
        )
        if self.count == 0:
            self.comb_decimator = np.append(new_val, self.comb_decimator[1:])
        self.count = (self.count + 1) % self.osr
        return self.comb_integrator[0]
```
Combining the two feedback mechanisms we can construct a second-order CIC based
digital resonator with an interpolated output that fully rejects aliasing
components. This configuration is shown below. Now let us use a Taylor
approximation to resolve the coefficient KM such that it reduces to a
two-component addition. The first two non-zero terms give
\\( \cos(x) \approx 1 - x^2 / 2 \\). Hence we can approximate
\\( K_M \approx 2 - 2^{\lfloor 2 \log_2( 2 \pi \cdot freq / M ) \rfloor} \\),
where the power of two is realized with a binary shift.
``` goat
Fractional Clock Rate <+ +> Full Clock Rate
- .-. .-. .-.
.->| Σ +------*---------->| Σ +----->| Σ +-------*-------> Interpolated Sinusoid
| '-' | '-' '-' |
| ^ v - ^ ^ - ^ ^ . v
| | .-----. / | / | /| .--+--.
| | | z⁻ᴹ | | | | '+ |<-+ z⁻¹ |
| | . '--+--' . | | | KM \| '--+--'
| | /| | |\ | | | ' |
| '+ |<---*--->| +-' | | v
| K \| | |/ K | | .--+--.
| ' v ' | '---------+ z⁻¹ |
| .-----. | '-----'
'----------+ z⁻ᴹ +---------'
'-----'
```
The resulting configuration only requires one multiplication, computed at a
fractional clock-rate. Note that a hardware implementation will in practice
stagger the computation in time for each of the processing stages.
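As a quick sanity check that the shift-based approximation of \\(K_M\\) is adequate (example values freq = 0.01, M = 16):

``` python3
import math

freq, M = 0.01, 16
x = 2 * math.pi * freq / M
exact = 2 * math.cos(x)
# cos(x) ≈ 1 - x²/2, so KM ≈ 2 - x²; x² is rounded down to a power of two
approx = 2 - 2 ** math.floor(2 * math.log2(x))
assert abs(exact - approx) < 1e-6
```

The residual error only detunes the alias-rejection notches slightly, which is acceptable since this factor does not require high precision.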
## Sigma-Delta Modulation
The purpose of digital sigma-delta modulation is primarily to reduce the
hardware requirements for signal-processing in the analogue-domain. Digitizing
a high resolution 16 bit signal is exceedingly expensive once we consider
component variation requirements if we want to preserve the fidelity of our
signal. The main idea here is the reduce the resolution of the output bitstream
while modulating the quantization noise such that accuracy is preserved in the
lower frequencies while noise due to the truncation of the digital bits is only
present at higher frequencies. This allows us to use a low resolution
digital-to-analogue converter that employs mismatch cancellation techniques
at low cost to further remove the impact of component imperfection from
corrupting the precision in-band.
A popular approach here is the use of multistage noise-shaping (MASH) modulator
topologies. Here we will employ a special maximum-sequence-length configuration
from [^4] which avoids any unwanted periodicity commonly found in the output
of conventional modulators when processing certain static signals. A python
realization of this modulation process is shown below for the case of a first
order modulator.
``` python3
class Modulator:
    def __init__(self, resolution: int = 16, coupling: int = 0) -> None:
        self.acc = 0
        self.coupling = coupling
        self.resolution = resolution

    def update(self, new_val: int) -> int:
        last_val = self.acc & 1
        pre_calc = self.acc + new_val + (self.coupling if last_val else 0)
        self.acc = pre_calc % (2 ** self.resolution)
        return last_val
```
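As a sanity check of the underlying principle, the simplified first-order carry-out variant below (not the LSB-coupled configuration above) shows that the bitstream mean tracks the input word, i.e. truncation preserves the in-band value:

``` python3
# first-order error-feedback truncation: carry bitstream mean equals x / 2^N
resolution, x = 16, 12345
acc, ones, n = 0, 0, 1 << 20
for _ in range(n):
    acc += x
    if acc >= (1 << resolution):  # carry out is the output bit
        acc -= 1 << resolution
        ones += 1
assert abs(ones / n - x / (1 << resolution)) < 1e-3
```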
The third-order configuration of the modulator is shown below. Here the Nx[n]
components represent the coupling factor α and simply use the Cx[n-1] bitstream
from the last cycle. This factor is a small integer chosen such that
\\(2^N - \alpha\\) is a prime number given a fixed modulator resolution N.
``` goat
. C1[n] .-.
D[n] |\ .-------------------------------->| Σ +--> Q[n]
--->+ + '-'
| \ ^ ^
\ | . C2[n] .-------. / |
N1[n]| | S1[n] |\ .---->+ 1-z⁻¹ +-----' |
--->+ +--*--->+ + '-------' |
| | | | \ |
/ | | \ | . C3[n] .-----+----.
| / | N2[n]| | S2[n] |\ .---->+ (1-z⁻¹)² |
.->+ / | --->+ +--*--->+ + '----------'
| |/ | | | | | \
| ' | / | | \ |
| .-----. | | / | N3[n]| | S3[n]
'-+ z⁻¹ +' .->+ / | --->+ +-.
'-----' | |/ | | | |
| ' | / | |
| .-----. | | / |
'-+ z⁻¹ +' .->+ / |
'-----' | |/ |
| ' |
| .-----. |
'-+ z⁻¹ +'
'-----'
```
The output Q[n] will represent a multi-bit quantization result that increases in
bit-depth as the modulator order increases, since the derivative components of
Cx[n] grow in dynamic range for higher order derivatives. This has the rather
unfortunate side-effect that the signal dynamic range is only a fraction of the
total output dynamic range, in this case 1/8. Fortunately these components are
exclusively high-frequency, so including a 3-tap Bartlett-window FIR at the
output alleviates this problem by amplifying the signal-band and rejecting
the quantization-noise. In that scenario the signal dynamic range uses a little
under half the full dynamic range of the signal seen at the output.
## References:
[^1]: C. S. Turner, ''Recursive discrete-time sinusoidal oscillators,'' IEEE Signal Process. Mag, vol. 20, no. 3, pp. 103-111, May 2003. [Online]: http://dx.doi.org/10.1109/MSP.2003.1203213.
[^2]: E. Hogenauer, ''An economical class of digital filters for decimation and interpolation,'' IEEE Trans. Acoust., Speech, Signal Process., vol. 29, no. 2, pp. 155-162, April 1981. [Online]: http://dx.doi.org/10.1109/TASSP.1981.1163535.
[^3]: L. Lo Presti, ''Efficient modified-sinc filters for sigma-delta A/D converters,'' IEEE Trans. Circuits Syst. II, vol. 47, no. 11, pp. 1204-1213, Nov. 2000. [Online]: http://dx.doi.org/10.1109/82.885128.
[^4]: K. Hosseini and M. P. Kennedy, ''Maximum Sequence Length MASH Digital Delta-Sigma Modulators,'' IEEE Trans. Circuits Syst. I, vol. 54, no. 12, pp. 2628-2638, Dec. 2007. [Online]: http://dx.doi.org/10.1109/TCSI.2007.905653.