Productive day

This commit is contained in:
Marius Drechsler 2024-08-06 18:29:05 +02:00
parent b372d043e6
commit ce74294d05
8 changed files with 6088 additions and 7 deletions

@@ -1,3 +1,17 @@
#import "@preview/glossarium:0.4.1": *
= Boundary Adaptive Clustering with Helper Data
Instead of generating helper data to improve the quantization process itself, as in #gls("smhdt"), we can also derive helper data before enrollment that optimizes the input values ahead of the quantization step, minimizing the risk of bit and symbol errors during the reconstruction phase.
Since this #gls("hda") modifies the input values before quantization takes place, we will treat the input values as zero-mean Gaussian distributed and will not use a CDF to transform them into the tilde domain.
== Optimizing a 1-bit sign-based quantization
Before we take a look at the higher-order quantization cases, we will start with a very basic method of quantization: a quantizer that returns a symbol with a width of $1$ bit, using the sign of the input value to determine the resulting bit symbol.
#figure(
include("./../graphics/quantizers/bach/sign-based-overlay.typ"),
caption: [Sign-based $1$-bit quantizer]
)
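To make this concrete, the following sketch shows such a sign-based $1$-bit quantizer in Python. The helper-data offset `h` is a hypothetical parameter, used here only to illustrate how input values could be shifted away from the decision boundary at zero before quantization; it is not the method developed in this chapter.

```python
import numpy as np

def sign_quantize(x: np.ndarray, h: float = 0.0) -> np.ndarray:
    # 1-bit sign-based quantizer: 1 for non-negative inputs, 0 otherwise.
    # h is a hypothetical helper-data offset that shifts the inputs away
    # from the decision boundary at zero.
    return (x + h >= 0).astype(np.uint8)

# zero-mean Gaussian inputs, as assumed for this HDA
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=8)
print(sign_quantize(x))
```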

@@ -366,6 +366,8 @@ Because we want to analyze the performance of the S-Metric method over different
We have measurements of $50$ FPGA boards available with $1600$ and $1696$ ring oscillators each. To obtain the values to be processed, we subtract them in pairs, yielding $800$ and $848$ ring oscillator frequency differences _df_.\
Since the frequencies _f_ are normally distributed, the differences _df_ can be assumed to be zero-mean Gaussian distributed.
To apply the values _df_ to our implementation of the S-Metric method, we first transform them into the tilde domain using the Gaussian CDF, resulting in uniformly distributed values $tilde(italic("df"))$.
Our resulting dataset consists of #glspl("ber") for quantization symbol widths of up to $6 "bits"$, evaluated with helper data generated from up to $100 "metrics"$.
We chose not to perform simulations for bit widths above $6 "bits"$, as we will see later that these configurations already reach a bit error rate of approximately $10%$.
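As a sketch of this preprocessing, assuming unit-variance standardization and synthetic frequencies in place of the real measurements, the pairwise differencing and the transform into the tilde domain could look like this:

```python
import numpy as np
from scipy.stats import norm

# synthetic ring oscillator frequencies for one board (stand-in for real data)
f = np.random.default_rng(2).normal(100e6, 1e5, size=1600)

# subtract in pairs: 1600 frequencies -> 800 zero-mean Gaussian differences df
df = f[0::2] - f[1::2]

# tilde domain: the Gaussian CDF maps the standardized differences
# to uniformly distributed values in (0, 1)
df_tilde = norm.cdf(df / df.std())
```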
=== Discussion
@@ -401,7 +403,7 @@ From $m >= 6$ onwards, $(x_"1" (m)) / (x_"100" (m))$ approaches $~1$, which mean
// caption: [Yoink]
//)
-=== Impact of temperature
+=== Impact of temperature<sect:impact_of_temperature>
We will now take a look at the impact on the error rates of changing the temperature both during the enrollment and the reconstruction phase.
@@ -424,11 +426,42 @@ We can observe this property well in detail in @fig:global_diffs.
caption: [#glspl("ber") for different enrollment and reconstruction temperatures. The lower number in the operating configuration is assigned to the enrollment phase, the upper one to the reconstruction phase. The correlation between the #gls("ber") and the temperature is clearly visible here]
)<fig:global_diffs>]
-Here, we compared the asymptotic performance of @smhdt for different temperatures both during enrollment and reconstruction. First we can observe that the optimum temperature for the operation of @smhdt in both phases for the dataset @dataset is $35°C$.
-Furthermore, the @ber is almost directly correlated with the absolute temperature difference, especially at higher temperature differences, showing that the further apart the temperatures of the two phases are, the higher the @ber.
+Here, we compared the asymptotic performance of @smhdt for different temperatures during both enrollment and reconstruction. First, we can observe that the optimum temperature for operating @smhdt in both phases on the dataset @dataset is $35°C$ instead of the expected $25°C$.
+Furthermore, the @ber appears to be almost directly correlated with the absolute temperature difference, especially at larger differences: the further apart the temperatures of the two phases are, the higher the @ber.
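This correlation can be checked directly against the per-configuration results added in this commit (`sorted_configurations_with_diff_header.csv`, with columns `config`, `BER`, and `diff` for the absolute temperature difference); a quick sketch:

```python
import pandas as pd

# per-configuration BER and absolute temperature difference (from this commit)
df = pd.read_csv("sorted_configurations_with_diff_header.csv")

print(df["BER"].corr(df["diff"]))                     # Pearson correlation
print(df["BER"].corr(df["diff"], method="spearman"))  # rank correlation
```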
=== Gray coding
In @sect:smhd_improvements, we discussed how a Gray-coded labelling for the quantizer could improve the bit error rates of the S-Metric method.
//#inline-note[Here: also evaluate over temperature, or can the properties simply be carried over from the previous section? (They translate directly)]
Because we only change the labelling of the quantization bins and do not make any changes to #gls("smhdt") itself, we can assume that the effects of temperature on the quantization process carry over directly to the Gray-coded case.
Therefore, we will not repeat this analysis here.
@fig:smhd_gray_coding shows the comparison of applying #gls("smhdt") at room temperature with both naive and Gray-coded labels.
There, we can already observe the improvement from using Gray-coded labelling, but the impact of this change of labels becomes fully apparent in @tab:gray_coded_impact.
As we can see, the improvement rises rapidly to a peak at a bit width of $M=3$ and then falls again slightly.
This effect can be explained by the exponential rise of the #gls("ber") for higher bit widths $M$.
For $M>3$, the rise of the #gls("ber") outweighs the possible improvement gained by applying a Gray-coded labelling.
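For reference, a Gray-coded label can be derived from a naive binary label with one shift and XOR, so that the labels of adjacent quantization bins differ in exactly one bit:

```python
def gray_encode(b: int) -> int:
    # standard binary-reflected Gray code: adjacent values differ in one bit
    return b ^ (b >> 1)

# labels for M = 3 bits
print([format(gray_encode(b), "03b") for b in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```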
#figure(
table(
columns: (7),
align: center + horizon,
inset: 7pt,
[*M*],[1],[2],[3],[4], [5], [6],
[*Improvement*], [$0%$], [$24.75%$], [$47.45%$], [$46.97%$], [$45.91%$], [$37.73%$]
),
caption: [Improvement from using Gray-coded instead of naive labelling, per bit width]
)<tab:gray_coded_impact>
#figure(
image("./../graphics/plots/gray_coding/3dplot.svg"),
caption: [Comparison between #glspl("ber") using naive labelling and Gray-coded labelling]
)<fig:smhd_gray_coding>
Using our dataset, we can estimate the average improvement from Gray-coded labelling to be around $33%$.
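The improvement figures above are understood here as the relative #gls("ber") reduction of Gray-coded over naive labelling; this interpretation is an assumption, as the exact metric is not restated in this section:

```python
def improvement(ber_naive: float, ber_gray: float) -> float:
    # relative BER reduction of Gray-coded over naive labelling, in percent
    # (assumed definition of the "Improvement" column)
    return (ber_naive - ber_gray) / ber_naive * 100
```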
=== Usage of an @ecdf
- An eCDF can improve the uniform distribution of the quantized symbols, since no standard deviation has to be estimated; in return, it is more complex to compute (see the sketch below)
- Comparison with two histograms for the uniform distribution of the symbols?
- Evaluate the BER; it is probably worse
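A minimal sketch of such an eCDF (rank) transform, for comparison with the parametric Gaussian CDF used so far; the half-rank offset is one common convention:

```python
import numpy as np

def ecdf_transform(x: np.ndarray) -> np.ndarray:
    # empirical CDF via rank transform: no standard deviation needs to be
    # estimated, but ranking costs O(n log n) instead of a closed-form CDF
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / len(x)
```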

File diff suppressed because it is too large

New image file (274 KiB)

@@ -18,7 +18,7 @@ df = pd.DataFrame(data)
df['BER'] = df['BER'].apply(lambda x: '{:.10f}'.format(x))
# Save to CSV
file_path = "sorted_configurations_with_diff.csv"
file_path = "sorted_configurations_with_diff_header.csv"
df.to_csv(file_path, index=False)
file_path

@@ -32,7 +32,7 @@
y-format: formatter,
y-tick-step: 0.5,
axis-style: "scientific-auto",
-size: (16,8),
+size: (16,6),
plot.add(errorrate, axes: ("x", "y"), style: (stroke: (paint: red))),
plot.add-hline(1)
)
@@ -41,7 +41,7 @@
y2-label: "Temperature difference",
y2-tick-step: 10,
axis-style: "scientific-auto",
-size: (16,8),
+size: (16,6),
plot.add(diff, axes: ("x1","y2")),
)
})

@@ -0,0 +1,37 @@
config,BER,diff
35 35,0.0000290000,0
25 25,0.0000360000,0
35 45,0.0000480000,10
5 5,0.0000480000,0
45 45,0.0000540000,0
25 35,0.0000610000,10
55 55,0.0000740000,0
45 55,0.0000790000,10
25 15,0.0000790000,10
45 35,0.0000880000,10
55 45,0.0000920000,10
15 15,0.0000930000,0
35 25,0.0000980000,10
35 15,0.0001070000,20
5 15,0.0001190000,10
15 5,0.0001300000,10
25 5,0.0001530000,20
15 25,0.0001570000,10
15 35,0.0001950000,20
55 35,0.0002340000,20
45 25,0.0002390000,20
5 25,0.0002520000,20
35 55,0.0003260000,20
25 45,0.0003540000,20
35 5,0.0007120000,30
45 15,0.0011030000,30
55 25,0.0011090000,30
5 35,0.0012320000,30
15 45,0.0013870000,30
25 55,0.0014930000,30
45 5,0.0032300000,40
55 15,0.0038480000,40
5 45,0.0046050000,40
15 55,0.0050040000,40
55 5,0.0088830000,50
5 55,0.0105450000,50

@@ -0,0 +1,21 @@
#import "@preview/cetz:0.2.2": canvas, plot
#let line_style = (stroke: (paint: black, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#canvas({
plot.plot(size: (8,4),
x-tick-step: none,
x-ticks: ((0, [0]), (100, [0])),
y-label: $cal(Q)(1, x)$,
x-label: $x$,
y-tick-step: 1,
axis-style: "left",
x-min: -3,
x-max: 3,
y-min: 0,
y-max: 1,{
plot.add(((-3,0), (0,0), (0,1), (3,1)), style: line_style)
})
})

BIN main.pdf (binary file not shown)