Finished with improvements, continued on experimental results
This commit is contained in:
parent
f494a2cf61
commit
b372d043e6
11 changed files with 265 additions and 59 deletions
@ -1,4 +1,5 @@
 #import "@preview/drafting:0.2.0": *
+#import "@preview/glossarium:0.4.1": *
 
 = S-Metric Helper Data Method <chap:smhd>
 
@ -38,12 +39,12 @@ But since we generated helper data during enrollment as depicted in @fig:tmhd_en
 )]
 
 Publications @tmhd1 and @tmhd2 find all the relevant bounds for the enrollment and reconstruction phases under the assumption that the PUF readout (our input value $x$) is zero-mean Gaussian distributed.
-//Because the parameters for symbol width and number of metrics always stays the same, it is easier to calculate #margin-note[obdA annehmen hier] the bounds for 8 equi-probable areas with a standard deviation of $sigma = 1$ first and then multiplying them with the estimated standard deviation of the PUF readout.
+//Because the parameters for symbol width and number of metrics always stay the same, it is easier to calculate #margin-note[assume w.l.o.g. here] the bounds for 8 equi-probable areas with a standard deviation of $sigma = 1$ first and then multiply them with the estimated standard deviation of the PUF readout.
 Because the parameters for symbol width and number of metrics always stay the same, we can -- without loss of generality -- assume the standard deviation to be $sigma = 1$ and calculate the bounds for 8 equi-probable areas for this distribution.
 This is done by finding two bounds $a$ and $b$ such that
 $ integral_a^b f_X(x) dif x = 1/8 $
 This operation yields 9 bounds defining these areas: $-infinity$, $-T_1$, $-a$, $-T_2$, $0$, $T_2$, $a$, $T_1$ and $+infinity$.
-During the enrollment phase, we will use $plus.minus a$ as our quantizing bounds, returning $0$ if the #margin-note[Rück-\ sprache?] absolute value is smaller than $a$ and $1$ otherwise.
+During the enrollment phase, we will use $plus.minus a$ as our quantizing bounds, returning $0$ if the absolute value is smaller than $a$ and $1$ otherwise. //#margin-note[Needs discussion?]
 The corresponding metric is chosen based on the following conditions:
 
 $ M = cases(
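The bound computation described in this hunk can be sketched numerically with the Python standard library; the names `bounds`, `a` and `enroll_quantize` are ours, chosen for illustration, not identifiers from the repository:

```python
from statistics import NormalDist

# Bounds of 8 equi-probable areas of a standard normal distribution
# (sigma = 1, without loss of generality): the inverse CDF at k/8.
norm = NormalDist(mu=0.0, sigma=1.0)
bounds = [norm.inv_cdf(k / 8) for k in range(1, 8)]  # -T1, -a, -T2, 0, T2, a, T1

a = bounds[5]  # ~0.6745, the enrollment quantizer bound

def enroll_quantize(x: float) -> int:
    """Two-Metric enrollment: 0 if |x| < a, 1 otherwise."""
    return 0 if abs(x) < a else 1
```

Scaling these bounds by an estimated standard deviation then adapts them to a real PUF readout.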
@ -52,7 +53,7 @@ $ M = cases(
 )space.en. $
 
 @fig:tmhd_enroll shows the curve of a quantizer $cal(Q)$ that would be used during the Two-Metric enrollment phase.
-At this point we will still assume that our input value $x$ is zero-mean Gaussian distributed. #margin-note[Als Annahme nach vorne verschieben]
+At this point we will still assume that our input value $x$ is zero-mean Gaussian distributed. //#margin-note[Move to the front as an assumption]
 #scale(x: 90%, y: 90%)[
 #grid(
 columns: (1fr, 1fr),
@ -80,7 +81,7 @@ The advantage of this method comes from moving the point of uncertainty away fro
 
 
 
-=== S-Metric Helper Data Method
+=== #gls("smhdt", long: true)
 
 Going on, the Two-Metric Helper Data Method can be generalized as shown in @smhd.
 This generalization allows for higher-order bit quantization and the use of more than two metrics.
@ -253,8 +254,8 @@ For application, we calculate $phi$ based on $S$ and $M$ using @eq:offset. The r
 )<alg:find_offsets>
 
 ==== Offset properties<par:offset_props>
-#inline-note[Diese section ist hier etwas fehl am Platz, ich weiß nur nicht genau wohin damit. Außerdem ist sie ein bisschen durcheinander geschrieben]
+//#inline-note[This section is somewhat out of place here, I just do not know exactly where to put it. It is also written in a somewhat disorderly way]
-Lets look deeper into the properties of the offset value $phi$.\
+Before we go on and experimentally test this realization of the S-Metric method, let's look deeper into the properties of the metric offset value $phi$.\
 Comparing @fig:smhd_2_2_reconstruction, @fig:smhd_3_2_reconstruction and their respective values of @eq:offset, we can observe that the offset $phi$ gets smaller the more metrics we use.
 
 #figure(
@ -268,10 +269,10 @@ Comparing @fig:smhd_2_2_reconstruction, @fig:smhd_3_2_reconstruction and their r
 ),
 caption: [Offset values for 2-bit configurations]
 )<tab:offsets>
-As previously stated, we will need to move the enrollment quantizer $s/2$ times to the left and $s/2$ times to the right.
+As previously stated, we will need to define $S$ quantizers, moving the enrollment quantizer $S/2$ times to the left and $S/2$ times to the right.
-For example, setting parameter $s$ to $4$ means we will need to move the enrollment quantizer $lr(s/2 mid(|))_(s=4) = 2$ times to the left and right.
+For example, setting parameter $S$ to $4$ means we will need to move the enrollment quantizer $lr(S/2 mid(|))_(S=4) = 2$ times to the left and right.
-As we can see in @fig:4_2_offsets, $phi$ for the indices $i = plus.minus 2$ are identical to the offsets of a 2-bit 2-metric configuration.
+As we can see in @fig:4_2_offsets, the values of $phi$ for the maximum metric indices $i = plus.minus 2$ are identical to the offsets of a 2-bit 2-metric configuration.
-In fact, this property carries on for higher even numbers of metrics.
+In fact, this property carries on for higher even numbers of metrics, as shown in @fig:6_2_offsets.
 
 #grid(
 columns: (1fr, 1fr),
@ -301,44 +302,46 @@ In fact, this property carries on for higher even numbers of metrics.
 ]
 )
 
-At $s=6$ metrics, the biggest offset we encounter is $phi = 1/16$ at $i = plus.minus 3$.\
+At $S=6$ metrics, the biggest metric offset we encounter is $phi = 1/16$ at $i = plus.minus 3$.\
-In conclusion, the maximum offset for a 2-bit configuration $phi$ is $1/16$ and we will introduce smaller offsets in between if we use a higher even number of metrics. More formally, we can define the maximum offset for an even number of metrics as follows:
-$ phi_("max,even") = frac(frac(s,2), 2^n dot s dot 2) = frac(1, 2^n dot 4) $<eq:max_offset_even>
-Here, we multiply @eq:offset with the maximum offsetting index $i_"max" = s/2$.
+This biggest (or maximum) offset is of particular interest to us, as it tells us how far we deviate from the original quantizer used during enrollment.
+The maximum offset for a 2-bit configuration is $phi = 1/16$, and we will introduce smaller offsets in between if we use a higher even number of metrics.
+More formally, we can define the maximum metric offset for an even number of metrics as follows:
+$ phi_("max,even") = frac(frac(S,2), 2^M dot S dot 2) = frac(1, 2^M dot 4) $<eq:max_offset_even>
+Here, we multiply @eq:offset by the maximum metric index $i_"max" = S/2$.
 
 Now, if we want to find the maximum offset for an odd number of metrics, we need to modify @eq:max_offset_even, more specifically its numerator.
-We know, that we need to keep the original quantizer for a odd number of metrics.
-Besides that, the method stays the same.
 For that reason, we will decrease the parameter $S$ by $1$; that way, we still perform a division without remainder:
 
 $
-phi_"max,odd" &= frac(frac(s-1, 2), 2^n dot s dot 2)\
+phi_"max,odd" &= frac(frac(S-1, 2), 2^M dot S dot 2)\
-&= lr(frac(s-1, 2^n dot s dot 4)mid(|))_(n=2, s=3) = 1/24
+&= lr(frac(S-1, 2^M dot S dot 4)mid(|))_(M=2, S=3) = 1/24
 $
 
-It is important to note, that $phi_"max,odd"$, unlike $phi_"max,even"$, is dependent on the parameter $s$ as we can see in @tb:odd_offsets.
+It is important to note that $phi_"max,odd"$, unlike $phi_"max,even"$, depends on the parameter $S$, as we can see in @tb:odd_offsets.
 
 #figure(
 table(
 columns: (5),
 align: center + horizon,
 inset: 7pt,
-[*s*],[3],[5],[7],[9],
+[*S*],[3],[5],[7],[9],
 [$bold(phi_"max,odd")$],[$1/24$],[$1/20$],[$3/56$],[$1/18$]
 ),
 caption: [2-bit maximum offsets, odd]
 )<tb:odd_offsets>
 
-The higher $m$ is chosen, the closer we approximate $phi_"max,even"$ as shown in @eq:offset_limes.
+The higher $S$ is chosen, the closer we approximate $phi_"max,even"$, as shown in @eq:offset_limes.
 This means, while also keeping the original quantizer during the reconstruction phase, the maximum offset for an odd number of metrics will always be smaller than for an even number.
-//We will be able to observe this property later on in
 
 $
-lim_(s arrow.r infinity) phi_"max,odd" &= frac(s-1, 2^n dot s dot 4) #<eq:offset_limes>\
+lim_(S arrow.r infinity) phi_"max,odd" &= frac(S-1, 2^M dot S dot 4) #<eq:offset_limes>\
-&= frac(1, 2^n dot 4) = phi_"max,even"
+&= frac(1, 2^M dot 4) = phi_"max,even"
 $
 
+Because $phi_"max,odd"$ only approximates $phi_"max,even"$ as $S arrow.r infinity$, we can assume that configurations with an even number of metrics will always perform marginally better than configurations with an odd number of metrics, because the bigger maximum offset allows for better reconstruction capabilities. //#margin-note[Very unhappy with the phrasing here]
 
 == Improvements<sect:smhd_improvements>
 
 The S-Metric Helper Data Method proposed by @smhd can be improved by using gray coded labels for the quantized symbols instead of naive ones.
 
@ -348,10 +351,12 @@ The by @smhd proposed S-Metric Helper Data Method can be improved by using gray
 include("../graphics/quantizers/two-bit-enroll-gray.typ"),
 caption: [Gray Coded 2-bit quantizer]
 )<fig:2-bit-gray>]]
-@fig:2-bit-gray shows a 2-bit quantizer with gray coded labelling.
+@fig:2-bit-gray shows a 2-bit quantizer with gray-coded labelling.
 In this example, we have an advantage at $tilde(x) approx 0.5$, because a quantization error only returns one wrong bit instead of two.
 
-== Helper data volume
+Furthermore, the transformation into the Tilde-Domain could also be performed using the @ecdf to achieve a more precise uniform distribution, because we do not have to estimate a standard deviation of the input values.
+//#inline-note[Maybe add a graphic here for visualization?]
 
 == Experiments
 
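The advantage of gray-coded labels at adjacent interval boundaries can be illustrated with a small sketch (our own illustration, not code from the repository):

```python
# 2-bit labels for the 4 quantization intervals: naive binary vs. gray coded
def gray(n: int) -> int:
    return n ^ (n >> 1)

naive = [format(i, '02b') for i in range(4)]        # ['00', '01', '10', '11']
coded = [format(gray(i), '02b') for i in range(4)]  # ['00', '01', '11', '10']

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

# With gray coding every adjacent-interval transition flips exactly one bit;
# with naive labels the middle transition ('01' -> '10') flips two.
assert all(hamming(a, b) == 1 for a, b in zip(coded, coded[1:]))
assert hamming(naive[1], naive[2]) == 2
```

A quantization error between neighbouring intervals therefore costs at most one bit under gray coding.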
@ -369,7 +374,7 @@ For this analysis, enrollment and reconstruction were both performed at room tem
 #figure(
 image("../graphics/25_25_all_error_rates.svg", width: 95%),
-caption: [Bit error rates for same temperature execution]
+caption: [Bit error rates for same-temperature execution. Here, we can already observe the asymptotic loss of improvement in #glspl("ber") for higher metric numbers]
 )<fig:global_errorrates>
 
 We can observe two key properties of the S-Metric method in @fig:global_errorrates.
 
@ -380,7 +385,7 @@ At a symbol width of $m >= 6$ bits, no further improvement through the S-Metric
 #figure(
 include("../graphics/plots/errorrates_changerate.typ"),
-caption: [Asymptotic performance of S-Metric]
+caption: [Asymptotic performance of @smhdt]
 )<fig:errorrates_changerate>
 
 This tendency can also be observed in @fig:errorrates_changerate.
 
@ -405,34 +410,25 @@ Since we wont always be able to recreate lab-like conditions during the reconstr
 #figure(
 include("../graphics/plots/temperature/25_5_re.typ"),
-caption: [Reconstruction at different temperatures]
+caption: [#glspl("ber") for reconstruction at different temperatures. Generally, the further we move away from the enrollment temperature, the worse the #gls("ber") gets.]
 )<fig:smhd_tmp_reconstruction>
 
 @fig:smhd_tmp_reconstruction shows the results of this experiment conducted with a 2-bit configuration.\
 As we can see, the further we move away from the temperature of enrollment, the higher the bit error rate turns out to be.\
-Going more into detail, we can look at the exact bit error rates in @tab:smhd_tmp_differences.
+We can observe this property well in @fig:global_diffs.
 
+#scale(x: 90%, y: 90%)[
 #figure(
-table(
-columns: (4),
-align: center + horizon,
-inset: 7pt,
-[*Temperature*], [*No helper data*],[*Two-Metric*],[*s=100 Metric*],
-[-20°C], [$3.9 dot 10^(-2)$], [$2.1 dot 10^(-3)$], [$1.5 dot 10^(-4)$],
-[-10°C], [$3.7 dot 10^(-2)$], [$1.4 dot 10^(-3)$], [$0.8 dot 10^(-4)$],
-[$plus.minus 0$°C], [$2 dot 10^(-2)$], [$0.008 dot 10^(-3)$], [$0.035 dot 10^(-4)$],
-[+10°C], [$3.7 dot 10^(-2)$], [$1.3 dot 10^(-3)$], [$0.6 dot 10^(-4)$],
-[+20°C], [$4.4 dot 10^(-2)$], [$3.1 dot 10^(-3)$], [$3.5 dot 10^(-4)$],
-[+30°C], [$5.2 dot 10^(-2)$], [$7 dot 10^(-3)$], [$15 dot 10^(-4)$],
-),
-caption: [BERs 2-bit configuration at 25°C enrollment]
-)<tab:smhd_tmp_differences>
+include("../graphics/plots/temperature/global_diffs/global_diffs.typ"),
+caption: [#glspl("ber") for different enrollment and reconstruction temperatures. The lower number in the operating configuration is assigned to the enrollment phase, the upper one to the reconstruction phase. The correlation between the #gls("ber") and the temperature difference is clearly visible here]
+)<fig:global_diffs>]
 
-Comparing the absolute temperature difference pairs of @tab:smhd_tmp_differences, we can generally conclude that a higher temperature during reconstruction has a higher impact on the bit error rate than a lower one.
+Here, we compared the asymptotic performance of @smhdt for different temperatures both during enrollment and reconstruction. First, we can observe that the optimum temperature for the operation of @smhdt in both phases for the dataset @dataset is $35°C$.
+Furthermore, the @ber is almost directly correlated with the absolute temperature difference, especially at higher temperature differences, showing that the further apart the temperatures of the two phases are, the higher the @ber.
 
 === Gray coding
 
 In @sect:smhd_improvements, we discussed how gray-coded labelling for the quantizer could improve the bit error rates of the S-Metric method.
 
-#inline-note[Hier: auch Auswertung über die Temperatur, oder kann man die eigenschaften einfach übernehmen aus der vorherigen Section? (Sie translaten einfach)]
+//#inline-note[Here: also evaluate over temperature, or can the properties simply be carried over from the previous section? (They translate directly)]
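As a rough cross-check of this correlation, the measured rows from sorted_configurations_with_diff.csv can be aggregated by absolute temperature difference; this is our own aggregation sketch over a subset of the rows, not code from the repository:

```python
# (|delta T|, BER) pairs taken from sorted_configurations_with_diff.csv
rows = [
    (0, 0.000029), (0, 0.000036), (0, 0.000048), (0, 0.000054),
    (10, 0.000048), (10, 0.000061), (10, 0.000079), (10, 0.000088),
    (20, 0.000107), (20, 0.000153), (20, 0.000234), (20, 0.000326),
    (30, 0.000712), (30, 0.001103), (30, 0.001232), (30, 0.001493),
    (40, 0.003230), (40, 0.003848), (40, 0.004605), (40, 0.005004),
    (50, 0.008883), (50, 0.010545),
]

# Mean BER per absolute temperature difference between the two phases
grouped = {}
for diff, ber in rows:
    grouped.setdefault(diff, []).append(ber)
mean_ber = {d: sum(v) / len(v) for d, v in sorted(grouped.items())}

# The mean BER grows monotonically with the temperature difference
diffs = sorted(mean_ber)
assert all(mean_ber[a] < mean_ber[b] for a, b in zip(diffs, diffs[1:]))
```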
@ -0,0 +1,46 @@
import pandas as pd

# Find the best configuration based on the error rate at parameter value 99
def find_best_configuration(file_paths):
    dfs = []
    for file_path in file_paths:
        df = pd.read_csv(file_path, header=None, names=['Parameter', 'Bit Error Rate'])
        # Extract configuration from the file name, e.g. "errorrates_5_15_processed.csv" -> "5_15"
        config = file_path.split('_')[1:3]
        df['config'] = f"{config[0]}_{config[1]}"
        # Filter by parameter value 99
        df = df[df['Parameter'] == 99]
        dfs.append(df)

    # Concatenate all dataframes
    combined_df = pd.concat(dfs, ignore_index=True)

    # Sort the configurations by mean bit error rate, best (lowest) first
    sorted_configs = combined_df.groupby('config')['Bit Error Rate'].mean().sort_values().reset_index()
    return sorted_configs

# Example usage
tmps = ["5", "15", "25", "35", "45", "55"]

paths = []
for tmp1 in tmps:
    for tmp2 in tmps:
        paths.append("./errorrates_" + tmp1 + "_" + tmp2 + "_processed.csv")

sorted_configs = find_best_configuration(paths)
print("Best configuration:", sorted_configs.iloc[0]['config'])
@ -0,0 +1,44 @@
import pandas as pd

# Find configurations based on parameter value 99 and sort them by mean bit error rate
def find_sorted_configurations(file_paths):
    dfs = []
    for file_path in file_paths:
        df = pd.read_csv(file_path, header=None, names=['Parameter', 'Bit Error Rate'])
        # Extract configuration from the file name
        config = file_path.split('_')[1:3]
        df['config'] = f"{config[0]}_{config[1]}"
        # Filter by parameter value 99
        df = df[df['Parameter'] == 99]
        dfs.append(df)

    # Concatenate all dataframes
    combined_df = pd.concat(dfs, ignore_index=True)

    # Group by configuration and calculate the mean bit error rate
    sorted_configs = combined_df.groupby('config')['Bit Error Rate'].mean().sort_values().reset_index()

    # Add a column for the absolute difference of the two configuration parameters
    sorted_configs['Abs Difference'] = sorted_configs['config'].apply(
        lambda x: abs(int(x.split('_')[0]) - int(x.split('_')[1])))

    return sorted_configs

# Example usage
tmps = ["5", "15", "25", "35", "45", "55"]
paths = []
for tmp1 in tmps:
    for tmp2 in tmps:
        paths.append("./errorrates_" + tmp1 + "_" + tmp2 + "_processed.csv")

sorted_configurations = find_sorted_configurations(paths)
print("Sorted configurations:\n", sorted_configurations)
@ -43,6 +43,10 @@
 devShell = pkgs.mkShell {
 buildInputs = with pkgs; [
 typst
+python312
+python312Packages.pandas
+python312Packages.glob2
+python312Packages.matplotlib
 ];
 };
 
@ -5,4 +5,7 @@
 
 #print-glossary((
 (key: "hda", short: "HDA", plural: "HDAs", long: "helper data algorithm", longplural: "helper data algorithms"),
+(key: "ecdf", short: "eCDF", plural: "eCDFs", long: "empirical Cumulative Distribution Function", longplural: "empirical Cumulative Distribution Functions"),
+(key: "ber", short: "BER", plural: "BERs", long: "bit error rate", longplural: "bit error rates"),
+(key: "smhdt", short: "SMHD", plural: "SMHDs", long: "S-Metric Helper Data Method")
 ))
25 graphics/plots/temperature/global_diffs/createcsv.py Normal file
@ -0,0 +1,25 @@
import pandas as pd

# Create the DataFrame from the given data
data = {
    'config': ['35 35', '25 25', '35 45', '5 5', '45 45', '25 35', '55 55', '45 55', '25 15', '45 35',
               '55 45', '15 15', '35 25', '35 15', '5 15', '15 5', '25 5', '15 25', '15 35', '55 35',
               '45 25', '5 25', '35 55', '25 45', '35 5', '45 15', '55 25', '5 35', '15 45', '25 55',
               '45 5', '55 15', '5 45', '15 55', '55 5', '5 55'],
    'BER': [0.000029, 0.000036, 0.000048, 0.000048, 0.000054, 0.000061, 0.000074, 0.000079, 0.000079, 0.000088,
            0.000092, 0.000093, 0.000098, 0.000107, 0.000119, 0.000130, 0.000153, 0.000157, 0.000195, 0.000234,
            0.000239, 0.000252, 0.000326, 0.000354, 0.000712, 0.001103, 0.001109, 0.001232, 0.001387, 0.001493,
            0.003230, 0.003848, 0.004605, 0.005004, 0.008883, 0.010545],
    'diff': [0, 0, 10, 0, 0, 10, 0, 10, 10, 10, 10, 0, 10, 20, 10, 10, 20, 10, 20, 20, 20, 20, 20, 20, 30, 30, 30, 30, 30, 30, 40, 40, 40, 40, 50, 50]
}

df = pd.DataFrame(data)

# Format the BER column as fixed-point strings
df['BER'] = df['BER'].apply(lambda x: '{:.10f}'.format(x))

# Save to CSV
file_path = "sorted_configurations_with_diff.csv"
df.to_csv(file_path, index=False)
47 graphics/plots/temperature/global_diffs/global_diffs.typ Normal file
@ -0,0 +1,47 @@
#import "@preview/cetz:0.2.2"

#let data = csv("./sorted_configurations_with_diff.csv")

#let errorrate = data.enumerate().map(
  row => (row.at(0), calc.log(float(row.at(1).at(1))))
)
#let diff = data.enumerate().map(
  row => (row.at(0), float(row.at(1).at(2)))
)

#let conf = data.enumerate().map(
  row => (row.at(0), row.at(1).at(0))
)

#let formatter(v) = [$10^#v$]

#cetz.canvas({
  import cetz.draw: *
  import cetz.plot

  set-style(
    axes: (bottom: (tick: (label: (angle: 90deg, offset: 0.5))))
  )

  plot.plot(
    y-label: $"Bit error rate"$,
    x-label: "Operating configuration",
    x-tick-step: none,
    x-ticks: conf,
    y-format: formatter,
    y-tick-step: 0.5,
    axis-style: "scientific-auto",
    size: (16, 8),
    plot.add(errorrate, axes: ("x", "y"), style: (stroke: (paint: red))),
    plot.add-hline(1)
  )

  plot.plot(
    y2-label: "Temperature difference",
    y2-tick-step: 10,
    axis-style: "scientific-auto",
    size: (16, 8),
    plot.add(diff, axes: ("x1", "y2")),
  )
})
@ -0,0 +1,36 @@
35 35,0.0000290000,0
25 25,0.0000360000,0
35 45,0.0000480000,10
5 5,0.0000480000,0
45 45,0.0000540000,0
25 35,0.0000610000,10
55 55,0.0000740000,0
45 55,0.0000790000,10
25 15,0.0000790000,10
45 35,0.0000880000,10
55 45,0.0000920000,10
15 15,0.0000930000,0
35 25,0.0000980000,10
35 15,0.0001070000,20
5 15,0.0001190000,10
15 5,0.0001300000,10
25 5,0.0001530000,20
15 25,0.0001570000,10
15 35,0.0001950000,20
55 35,0.0002340000,20
45 25,0.0002390000,20
5 25,0.0002520000,20
35 55,0.0003260000,20
25 45,0.0003540000,20
35 5,0.0007120000,30
45 15,0.0011030000,30
55 25,0.0011090000,30
5 35,0.0012320000,30
15 45,0.0013870000,30
25 55,0.0014930000,30
45 5,0.0032300000,40
55 15,0.0038480000,40
5 45,0.0046050000,40
15 55,0.0050040000,40
55 5,0.0088830000,50
5 55,0.0105450000,50
BIN main.pdf Binary file not shown.
13 main.typ
@ -1,7 +1,7 @@
 #import "@preview/cetz:0.2.2"
 #import "@preview/fletcher:0.5.1"
 #import "@preview/gentle-clues:0.9.0"
-#import "@preview/glossarium:0.4.1": make-glossary
+#import "@preview/glossarium:0.4.1": *
 #import "@preview/lovelace:0.3.0"
 #import "@preview/tablex:0.0.8"
 #import "@preview/unify:0.6.0"
@ -9,6 +9,7 @@
 #import "@preview/equate:0.2.0": equate
 #import "@preview/drafting:0.2.0": *
 
+
 #show: equate.with(breakable: true, sub-numbering: true)
 #set math.equation(numbering: "(1.1)")
 
@ -22,8 +23,6 @@
 
 #set document(title: "Towards Efficient Helper Data Algorithms for Multi-Bit PUF Quantization", author: "Marius Drechsler")
 
-#set-page-properties()
 
 
 #show: doc => conf(
 title: "Towards Efficient Helper Data Algorithms for Multi-Bit PUF Quantization",
@ -36,7 +35,13 @@
 submitted: "22.07.2024",
 doc
 )
+#set page(footer: locate(
+  loc => if calc.even(loc.page()) {
+    align(right, counter(page).display("1"));
+  } else {
+    align(left, counter(page).display("1"));
+  }
+))
 #include "content/introduction.typ"
 #include "content/SMHD.typ"
 #include "content/BACH.typ"
@ -111,13 +111,13 @@
 footer: none,
 )
 
-set page(footer: locate(
+//set page(footer: locate(
-loc => if calc.even(loc.page()) {
+//loc => if calc.even(loc.page()) {
-align(right, counter(page).display("1"));
+//  align(right, counter(page).display("1"));
-} else {
+//} else {
-align(left, counter(page).display("1"));
+//  align(left, counter(page).display("1"));
-}
+//}
-))
+//))
 
 doc
 }