Fixed page numbering, thanks Janis <3, started working on the remaining issues

This commit is contained in:
Marius Drechsler 2024-08-26 18:06:07 +02:00
parent 2be84b715f
commit b8a0ee46f5
20 changed files with 519 additions and 169 deletions

View file

@@ -55,3 +55,56 @@
organization={IEEE}
}
@article{PUFChartRef,
title={Variable-length bit mapping and error-correcting codes for higher-order alphabet {PUF}s---extended version},
author={Immler, Vincent and Hiller, Matthias and Liu, Qinzhi and Lenz, Andreas and Wachter-Zeh, Antonia},
journal={Journal of Hardware and Systems Security},
volume={3},
pages={78--93},
year={2019},
publisher={Springer}
}
@article{PUFIntro2,
title={Physically Unclonable Functions: Constructions, Properties and Applications (Fysisch onkloonbare functies: constructies, eigenschappen en toepassingen)},
author={Maes, Roel},
year={2012}
}
@inproceedings{fuzzycommitmentpaper,
title={Achieving secure fuzzy commitment scheme for optical {PUF}s},
author={Ignatenko, Tanya and Willems, Frans},
booktitle={2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing},
pages={1185--1188},
year={2009},
organization={IEEE}
}
@article{ruchti2021decoder,
title={When the Decoder Has to Look Twice: Glitching a PUF Error Correction},
author={Ruchti, Jonas and Gruber, Michael and Pehl, Michael},
journal={Cryptology ePrint Archive},
year={2021}
}
@article{delvaux2014helper,
title={Helper data algorithms for PUF-based key generation: Overview and analysis},
author={Delvaux, Jeroen and Gu, Dawu and Schellekens, Dries and Verbauwhede, Ingrid},
journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
volume={34},
number={6},
pages={889--902},
year={2014},
publisher={IEEE}
}
@inproceedings{maes2009soft,
title={A soft decision helper data algorithm for SRAM PUFs},
author={Maes, Roel and Tuyls, Pim and Verbauwhede, Ingrid},
booktitle={2009 IEEE International Symposium on Information Theory},
pages={2101--2105},
year={2009},
organization={IEEE}
}

charts/PUF.typ Normal file
View file

@@ -0,0 +1,34 @@
#import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#import fletcher.shapes: diamond
#diagram(
node-stroke: 1pt,
edge-stroke: 1pt,
//node-inset: 2pt,
node((0,0), [PUF], corner-radius: 2pt, name: <PUF>),
edge(<PUF>, <init_quant>, "->", $nu$),
node((1,0), [Initial quantization], name: <init_quant>, width: 10em),
edge(<init_quant>, <encod>, "->", $k$),
node((2,0), [Encoding], name: <encod>, width: 8em),
node((1,1), [Helper data\ generation], name: <quant_hd>, width: 10em),
edge(<init_quant>, <quant_hd>, "->"),
node((2.25, -0.5), [Enrollment], name: <enrollment_node>, stroke: none),
node(enclose: (<init_quant>, <encod>, <enrollment_node>), stroke: (dash: "dashed"), inset: 10pt),
node((0, 2), [PUF], corner-radius: 2pt, name: <PUF2>),
node((1, 2), [Repeated quantization], name: <quant2>),
node((2, 2), [Error correction], name: <ecc>),
node((3, 1), [$kappa = kappa^*$?],name: <result>),
node((2, 1), [Error correction helper data], name: <ecc_hd>, width: 8em),
node((2.25, 2.5), [Reconstruction], stroke: none, name: <reconstruction_node>),
node(enclose: (<quant2>, <ecc>, <reconstruction_node>), stroke: (dash: "dashed"), inset: 10pt),
edge(<quant_hd>, <quant2>, "->", $h$),
edge(<PUF2>, <quant2>, "->", $nu^*$),
edge(<quant2>, <ecc>, "->", $k^*$),
edge(<ecc_hd>, <ecc>, "->"),
edge(<encod>, "r,d", "->", $kappa$, label-pos: 0.3),
edge(<ecc>, "r,u", "->", $kappa^*$, label-pos: 0.4),
edge(<encod>, <ecc_hd>, "->")
)

View file

@@ -4,9 +4,11 @@
Instead of generating helper-data to improve the quantization process itself, like in #gls("smhdt"), or using some kind of error correcting code after the quantization process, we can also try to find helper-data before performing the quantization, optimizing the input values to minimize the risk of bit and symbol errors during the reconstruction phase.
Since this #gls("hda") modifies the input values before the quantization takes place, we will consider the input values as zero-mean Gaussian distributed and not use a CDF to transform these values into the tilde domain.
== Optimizing a 1-bit sign-based quantization<sect:1-bit-opt>
== Optimizing single-bit sign-based quantization<sect:1-bit-opt>
Before we take a look at the higher-order quantization cases, we will start with a very basic method of quantization: a quantizer that returns a symbol with a width of $1$ bit and uses the sign of the input value to determine the resulting bit symbol.
@@ -17,7 +19,7 @@ Before we take a look at the higher order quantization cases, we will start with
If we overlay the PDF of a zero-mean Gaussian distributed variable $X$ with a sign-based quantizer function as shown in @fig:1-bit_normal, we can see that the expected value of the Gaussian distribution overlaps with the decision threshold of the sign-based quantizer.
Considering that the margin of error of the value $x$ is comparable with the one shown in @fig:tmhd_example_enroll, we can conclude that values of $X$ residing near $0$ must be considered less reliable than values further away from $0$.
This means that the quantizer used here is very unreliable without generated helper-data.
This means that the quantizer used here is very unreliable as is.
Now, to increase the reliability of this quantizer, we can try to move our input values further away from the value $x = 0$.
To do so, we can define a new input value $z$ as a linear combination of two realizations of $X$, $x_1$ and $x_2$ with a set of weights $h_1$ and $h_2$:
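That is, $z = h_1 x_1 + h_2 x_2$. As a minimal sketch of the idea -- assuming, for illustration only, that the helper data is restricted to sign weights $h_i = plus.minus 1$ rather than the method's final form -- enrollment could pick the signs that push $z$ as far away from the threshold as possible:
```python
# Hedged sketch: restrict the weights to signs h_i in {-1, +1} (an
# illustrative assumption, not necessarily the final construction) and
# pick, during enrollment, the pair that maximizes |z| = |h1*x1 + h2*x2|.
import numpy as np

def enroll_pair(x1: float, x2: float) -> tuple[int, int]:
    """Return the sign weights (helper data) maximizing |z|."""
    candidates = [(1, 1), (1, -1)]   # (-1,-1) and (-1,1) only flip the bit
    return max(candidates, key=lambda h: abs(h[0] * x1 + h[1] * x2))

def quantize(x1: float, x2: float, h: tuple[int, int]) -> int:
    """Sign-based 1-bit quantizer applied to the weighted combination z."""
    z = h[0] * x1 + h[1] * x2
    return 1 if z >= 0 else 0

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(2)      # zero-mean Gaussian readout values
h = enroll_pair(x1, x2)
print(h, quantize(x1, x2, h))        # z now lies farther from x = 0
```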

View file

@@ -21,14 +21,16 @@ Contrary to @tmhd1, @tmhd2 and @smhd, which display relevant areas as equi-proba
It has to be mentioned that instead of transforming all values of the PUF readout into the Tilde-Domain, we could also use an inverse CDF to transform the bounds of our evenly spaced areas into the real domain with (normal) distributed values, which can be assessed as remarkably less computationally complex.#margin-note[This comes later]
*/
=== Two-Metric Helper Data Method <sect:tmhd>
The most simple form of a metric-based @hda is the Two-Metric Helper Data Method, since the quantization only yields symbols of 1-bit width and uses the least amount of metrics possible if we want to use more than one metric.
The simplest form of a metric-based @hda is the Two-Metric Helper Data Method.
Its quantization only yields symbols of 1-bit width and it only uses a single bit of helper data to store the choice of metric.
@fig:tmhd_example_enroll and @fig:tmhd_example_reconstruct illustrate an example enrollment and reconstruction process.
We would consider the marked point the value of the initial measurement and the marked range our margin of error.
Consider the marked point the value of the initial measurement and the marked range our margin of error.
If we now were to use the original quantizer shown in @fig:tmhd_example_enroll during both the enrollment and the reconstruction phases, we would risk a bit error, because the margin of error overlaps with the lower quantization bound $-a$, which we can call a point of uncertainty.
But since we generated helper data during enrollment as depicted in @fig:tmhd_enroll, we can make use of a different quantizer $cal(R)(1, 2, x)$ whose boundaries do not overlap with the error margin.
To alleviate this, we generated helper data during enrollment as depicted in @fig:tmhd_enroll, which allows us to use a different quantizer $cal(R)(1, 2, x)$ whose boundaries do not overlap with the error margin.
#scale(x: 90%, y: 90%)[
#grid(
#figure(
grid(
columns: (1fr, 1fr),
[#figure(
include("../graphics/quantizers/two-metric/example_enroll.typ"),
@@ -36,15 +38,16 @@ But since we generated helper data during enrollment as depicted in @fig:tmhd_en
[#figure(
include("../graphics/quantizers/two-metric/example_reconstruct.typ"),
caption: [Example reconstruction]) <fig:tmhd_example_reconstruct>]
)]
),
caption: [Example enrollment and reconstruction of @tmhdt. The window function describes the quantizer used to define the resulting bit. The red dot shows a possible @puf readout measurement with its blue marked strip as margin of error.])]
Publications @tmhd1 and @tmhd2 find all the relevant bounds for the enrollment and reconstruction phases under the assumption that the PUF readout (our input value $x$) is zero-mean Gaussian distributed.
//Because the parameters for symbol width and number of metrics always stays the same, it is easier to calculate #m//argin-note[obdA annehmen hier] the bounds for 8 equi-probable areas with a standard deviation of $sigma = 1$ first and then multiplying them with the estimated standard deviation of the PUF readout.
Because the parameters for symbol width and number of metrics always stays the same, we can -- without loss of generality -- assume the standard deviation as $sigma = 1$ and calculate the bounds for 8 equi-probable areas for this distribution.
Because the parameters for symbol width and number of metrics always stay the same, we can -- without loss of generality -- assume a standard deviation of $sigma = 1$ and calculate the bounds for 8 equi-probable areas for this distribution.
This is done by finding two bounds $a$ and $b$ such that
$ integral_a^b f_X(x) dif x = 1/8 $
This operation yields 9 bounds defining these areas: $-infinity$, $-T_1$, $-a$, $-T_2$, $0$, $T_2$, $a$, $T_1$ and $+infinity$.
During the enrollment phase, we will use $plus.minus a$ as our quantizing bounds, returning $0$ if the //#margin-note[Rück-\ sprache?] absolute value is smaller than $a$ and $1$ otherwise.
During the enrollment phase, we will use $plus.minus a$ as our quantizing bounds, returning $0$ if the absolute value of $x$ is smaller than $a$ and $1$ otherwise.
The corresponding metric is chosen based on the following conditions:
$ M = cases(
@@ -53,7 +56,6 @@ $ M = cases(
)space.en. $
@fig:tmhd_enroll shows the curve of a quantizer $cal(Q)$ that would be used during the Two-Metric enrollment phase.
At this point we will still assume that our input value $x$ is zero-mean Gaussian distributed. //#margin-note[Move this assumption to the front]
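As a hedged numerical sketch of these bounds and the enrollment step (standard normal case; since the exact case distinction for $M$ is not repeated here, the sketch picks the metric whose reconstruction bounds lie farthest from $x$, which matches the stated goal of the method but is only our reading):
```python
# Sketch: octile bounds for sigma = 1 and a Two-Metric enrollment step.
from scipy.stats import norm

# ppf is the inverse CDF; 8 equi-probable areas => bounds at the octiles.
T2, a, T1 = norm.ppf(5 / 8), norm.ppf(6 / 8), norm.ppf(7 / 8)

def enroll(x: float) -> tuple[int, int]:
    """Return (bit, metric): bit = 0 if |x| < a, else 1."""
    bit = 0 if abs(x) < a else 1
    # Reconstruction quantizers: the enrollment bounds -a/+a shifted by one
    # octile to each side (hypothetical layout consistent with the figures).
    bounds = {1: (-T1, T2), 2: (-T2, T1)}
    # Our reading of the metric choice: keep the bounds away from x.
    metric = max(bounds, key=lambda m: min(abs(x - b) for b in bounds[m]))
    return bit, metric

print(enroll(-0.7))   # bit 1; the metric moves the critical bound away
```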
#scale(x: 90%, y: 90%)[
#grid(
columns: (1fr, 1fr),
@@ -67,7 +69,7 @@
]
As previously described, each of these metrics corresponds to a different quantizer.
Now, we can use the generated helper data in the reconstruction phase and define a reconstructed bit based on the chosen metric as follows:
In the reconstruction phase, we can use the generated helper data and define a reconstructed bit based on the chosen metric as follows:
$ #grid(
columns: (1fr, 1fr),
@@ -77,7 +79,7 @@ $ #grid(
) $
@fig:tmhd_reconstruct illustrates the basic idea behind the Two-Metric method. Using the helper data, we will move the bounds of the original quantizer (@fig:tmhd_example_enroll) one octile to each side, yielding two new quantizers.
The advantage of this method comes from moving the point of uncertainty away from our readout position.
The advantage of this method comes from moving the point of uncertainty away from our enrollment-time readout.
@@ -113,7 +115,8 @@ The generalization consists of two components:
== Realization<sect:smhd_implementation>
We will now propose a specific realization of the S-Metric Helper Data Method. \
//As shown in @sect:dist_independency, we can use a CDF to transform our random distributed variable $X$ into an $tilde(X)$ in the tilde domain.
Instead of using the @puf readout directly for @smhdt, we can use a @cdf to transform these values into the tilde domain.
The only requirement we would need to meet here is that the @cdf of the probability distribution used is known.
This allows us to use equi-distant bounds for the quantizer instead of equi-probable ones.
From now on we will use the following syntax for quantizers that use the S-Metric Helper Data Method:
@@ -142,7 +145,7 @@ Right now, this quantizer wouldn't help us generate any helper data.
To achieve that, we will need to divide a symbol step -- one that returns the corresponding quantized symbol -- into multiple sub-steps.
Using $S$, we can define the step size $Delta_S$ as the division of $Delta$ by $S$:
$ Delta_S = frac(Delta, S) = frac(frac(1, 2^M), S) = frac(1, 2^M dot S) $<eq:delta_s>
$ Delta_S = frac(Delta, S) = frac(1, 2^M dot S) $<eq:delta_s>
/*After this definition #margin-note[Rewrite this paragraph], we need to make an adjustment to our previously defined quantizer function, because we cannot simply return the quantized value based on a quantizer with step size $Delta_s$.
That would just increase the amounts of bits we will extract out of one measurement.
@@ -178,22 +181,23 @@ In that sense, increasing the number of metrics will increase the number of sub-
We can now perform the enrollment of a full PUF readout.
Each measurement will be quantized with our quantizer $cal(E)$, returning a tuple consisting of the quantized symbol and helper data.
$ K_i = cal(E)(s, m, tilde(x_i)) = (k, h)_i space.en. $ <eq:smhd_quant>
$ kappa_i = cal(E)(s, m, tilde(x_i)) = (k, h)_i space.en. $ <eq:smhd_quant>
Performing the operation of @eq:smhd_quant for our whole set of measurements will yield a vector of tuples $bold(K)$.
Performing the operation of @eq:smhd_quant for our whole set of measurements will yield a vector of tuples $bold(kappa)$.
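A minimal sketch of this enrollment, under our reading that the helper data $h$ is the index of the $Delta_S$-wide sub-step inside the symbol bin:
```python
# Sketch of E(S, M, x~): symbol = index of the 2^M equi-distant bin in the
# tilde domain, helper data = index of the sub-step of width
# Delta_S = 1/(2^M * S) inside that bin (our reading of the construction).
def enroll(S: int, M: int, x_tilde: float) -> tuple[int, int]:
    n_bins = 2 ** M
    k = min(int(x_tilde * n_bins), n_bins - 1)            # quantized symbol
    sub = min(int(x_tilde * n_bins * S), n_bins * S - 1)  # sub-step index
    return k, sub % S                                     # (k, h)

readout = [0.03, 0.48, 0.52, 0.97]             # hypothetical tilde values
kappa = [enroll(2, 2, x) for x in readout]     # vector of (k, h) tuples
print(kappa)                                   # [(0, 0), (1, 1), (2, 0), (3, 1)]
```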
=== Reconstruction
We already demonstrated the basic principle of the reconstruction phase in @sect:tmhd, which showed the advantage of using more than one quantizer during reconstruction.
We will denote by $tilde(x^*)$ our repeated measurement of $tilde(x)$, which is subject to a certain error.
To perform reconstruction with $tilde(x^*)$, we will first need to find all $S$ quantizers for which we generated the helper data in the previous step.
To perform reconstruction with $tilde(x^*)$, we will first need to find all $S$ quantizers for which we generated the helper data in the previous step and then choose the one corresponding to the saved metric.
We have to distinguish the two cases that $S$ is either even or odd:\
If $S$ is even, we need to define $S$ quantizers offset by some distance $phi$.
We can define the ideal position for the quantizer bounds based on its corresponding metric as centered around the center of the related metric.
If $S$ is even, we need to define $S$ quantizers offset by multiples of $phi$.
We can define the ideal position for the quantizer bounds based on its corresponding metric as centered around the center of the metric.
We can find these new bounds graphically as depicted in @fig:smhd_find_bound_graph. We first determine the x-values of the centers of a metric (here M1, as shown with the arrows). We can then place the quantizer steps with step size $Delta$ (@eq:delta) evenly spaced around these points.
If the resulting quantizer bound is less than $0$ or greater than $1$, we will either add or subtract $1$ from its value so it stays in the defined range of the tilde domain.
With these new points for the vertical steps of $cal(Q)$, we can draw the new quantizer for the first metric in @fig:smhd_found_bound_graph.
@@ -236,7 +240,7 @@ Analytically, the offset we are applying to $cal(E)(2, 2, tilde(x))$ can be defi
$ Phi = lr(frac(1, 2^M dot S dot 2)mid(|))_(M=2, S=2) = 1 / 16 space.en. $<eq:offset>
$Phi$ is the constant that we will multiply with a certain metric index $i$ to obtain the metric offset $phi$, which is used to define each of the $S$ different quantizers for reconstruction.
$Phi$ is the constant that we will multiply by a metric index $i in \{-S/2, ..., S/2\}$ to obtain the metric offset $phi$, which is used to define each of the $S$ different quantizers for reconstruction.
//This is also shown in @fig:smhd_2_2_reconstruction, as our quantizer curve is moved $1/16$ to the left and the right.
In @fig:smhd_2_2_reconstruction, the two metric indices $i = plus.minus 1$ will be multiplied with $Phi$, yielding two quantizers, one moved $1/16$ to the left and one moved $1/16$ to the right.
@@ -245,7 +249,7 @@ If an odd number of metrics is given, the offset can still be calculated using @e
To find all metric offsets for values of $S > 3$, we can use @alg:find_offsets.
For application, we calculate $phi$ based on $S$ and $M$ using @eq:offset. The resulting list of offsets is correctly ordered and can be mapped to the corresponding metrics in ascending order.// as we will show in @fig:4_2_offsets and @fig:6_2_offsets.
We can calculate $phi$ based on $S$ and $M$ using @eq:offset. The resulting list of offsets is correctly ordered and can be mapped to the corresponding metrics in ascending order.// as we will show in @fig:4_2_offsets and @fig:6_2_offsets.
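One plausible reading of @alg:find_offsets (shown below) in code; the printed offsets match the $2$-, $3$- and $4$-metric values discussed in this section:
```python
# Sketch: Phi = 1/(2^M * S * 2); offsets are i * Phi over the metric indices,
# skipping i = 0 for even S and including it for odd S (our reading).
from fractions import Fraction

def find_offsets(S: int, M: int) -> list[Fraction]:
    phi = Fraction(1, 2 ** M * S * 2)
    if S % 2 == 0:
        idx = [i for i in range(-S // 2, S // 2 + 1) if i != 0]
    else:
        idx = list(range(-(S - 1) // 2, (S - 1) // 2 + 1))
    return [i * phi for i in idx]          # already in ascending order

print([str(o) for o in find_offsets(2, 2)])  # ['-1/16', '1/16']
print([str(o) for o in find_offsets(4, 2)])  # ['-1/16', '-1/32', '1/32', '1/16']
print([str(o) for o in find_offsets(3, 2)])  # ['-1/24', '0', '1/24']
```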
#figure(
kind: "algorithm",
@@ -255,7 +259,7 @@ For application, we calculate $phi$ based on $S$ and $M$ using @eq:offset. The r
==== Offset properties<par:offset_props>
//#inline-note[Diese section ist hier etwas fehl am Platz, ich weiß nur nicht genau wohin damit. Außerdem ist sie ein bisschen durcheinander geschrieben]
Before we go on and experimentally test this realization of the S-Metric method, let's look deeper into the properties of the metric offset value $phi$.\
Before we go on and experimentally test this realization of the S-Metric method, let's look deeper into the properties of the metric offset value $phi$.
Comparing @fig:smhd_2_2_reconstruction, @fig:smhd_3_2_reconstruction and their respective values of @eq:offset, we can observe that the offset $Phi$ gets smaller the more metrics we use.
#figure(
@@ -270,7 +274,7 @@ Comparing @fig:smhd_2_2_reconstruction, @fig:smhd_3_2_reconstruction and their r
caption: [Offset values for 2-bit configurations]
)<tab:offsets>
As previously stated, we will need to define $S$ quantizers, moving the enrollment quantizer $S/2$ times to the left and $S/2$ times to the right.
For example, setting parameter $S$ to $4$ means we will need to move the enrollment quantizer $lr(S/2 mid(|))_(S=4) = 2$ times to the left and right.
For example, setting the parameter $S$ to $4$ means we will need to move the enrollment quantizer $2$ times to the left and right.
As we can see in @fig:4_2_offsets, the offsets $phi$ for the maximum metric indices $i = plus.minus 2$ are identical to those of a 2-bit 2-metric configuration.
In fact, this property carries on for higher even numbers of metrics, as shown in @fig:6_2_offsets.
@@ -304,12 +308,12 @@ In fact, this property carries on for higher even numbers of metrics, as shown i
At $S=6$ metrics, the biggest metric offset we encounter is $phi = 1/16$ at $i = plus.minus 3$.\
This biggest (or maximum) offset is of particular interest to us, as it tells us how far we deviate from the original quantizer used during enrollment.
The maximum offset for a 2-bit configuration $phi$ is $1/16$ and we will introduce smaller offsets in between if we use a higher even number of metrics.
The maximum offset for a 2-bit configuration $phi$ is $1/16$ and we only introduce smaller offsets in between if we use a higher even number of metrics.
More formally, we can define the maximum metric offset for an even number of metrics as follows:
$ phi_("max,even") = frac(frac(S,2), 2^M dot S dot 2) = frac(1, 2^M dot 4) $<eq:max_offset_even>
Here, we multiply @eq:offset by the maximum metric index $i_"max" = S/2$.
Here, we multiply $phi$ from @eq:offset by the maximum metric index $i_"max" = S/2$.
Now, if we want to find the maximum offset for an odd number of metrics, we need to modify @eq:max_offset_even, more specifically its numerator.
For that reason, we will decrease the parameter $S$ by $1$ in the numerator; that way, we will still perform a division without remainder:
@@ -344,7 +348,7 @@ Because $phi_"max,odd"$ only approximates $phi_"max,even"$ if $S arrow.r infinit
== Improvements<sect:smhd_improvements>
The by @smhd proposed S-Metric Helper Data Method can be improved by using gray coded labels for the quantized symbols instead of naive ones.
The S-Metric Helper Data Method proposed by Fischer in @smhd can be improved by using Gray-coded labels for the quantized symbols instead of naive labelling.
#align(center)[
#scale(x: 80%, y: 80%)[
#figure(
@@ -352,7 +356,7 @@ The by @smhd proposed S-Metric Helper Data Method can be improved by using gray
caption: [Gray Coded 2-bit quantizer]
)<fig:2-bit-gray>]]
@fig:2-bit-gray shows a 2-bit quantizer with Gray-coded labelling.
In this example, we have an advantage at $tilde(x) = ~ 0.5$, because a quantization error only returns one wrong bit instead of two.
In this example, we have an advantage at $tilde(x) approx 0.5$, because a quantization error only returns one wrong bit instead of two.
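A short sketch of the binary-reflected Gray construction we assume here for the labels:
```python
# Sketch: binary-reflected Gray labels for the 2^M bins, so that adjacent
# bins differ in exactly one bit and a one-bin error flips one bit only.
def gray_labels(M: int) -> list[str]:
    return [format(i ^ (i >> 1), f"0{M}b") for i in range(2 ** M)]

print(gray_labels(2))   # ['00', '01', '11', '10']
```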
Furthermore, the transformation into the tilde domain could also be performed using the @ecdf to achieve a more precise uniform distribution, because we do not have to estimate a standard deviation of the input values.
@@ -360,30 +364,30 @@
== Experiments<sect:smhd_experiments>
We tested the implementation of @sect:smhd_implementation with the temperature dataset of @dataset.
We tested the implementation of @sect:smhd_implementation with the dataset of @dataset.
The dataset contains counts of positive edges of a toggle flip-flop at a set evaluation time $D$. Based on the count and the evaluation time, the frequency of a ring oscillator can be calculated using: $f = 2 dot frac(k, D)$.
Because we want to analyze the performance of the S-Metric method over different temperatures, both during enrollment and reconstruction, we are limited to the second part of the experimental measurements of @dataset.
Because we want to analyze the performance of the S-Metric method over different temperatures, both during enrollment and reconstruction, we are limited to the experimental measurements of @dataset that varied the temperature during FPGA operation.
We will have measurements of $50$ FPGA boards available with $1600$ or $1696$ ring oscillators each. To obtain the values to be processed, we subtract them in pairs, yielding $800$ or $848$ ring oscillator frequency differences _df_.\
Since the frequencies _f_ are normally distributed, the difference _df_ can be assumed to be zero-mean Gaussian distributed.
To apply the values _df_ to our implementation of the S-Metric method, we will first transform them into the Tilde-Domain using an inverse CDF, resulting in uniform distributed values $tilde(italic("df"))$.
Because we can assume that the frequencies _f_ are i.i.d., the difference _df_ can also be assumed to be i.i.d.
To apply the values _df_ to our implementation of the S-Metric method, we will first transform them into the tilde domain using a CDF, resulting in uniformly distributed values $tilde(x)$.
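A sketch of this preprocessing chain (file name and evaluation time are hypothetical placeholders):
```python
# Sketch: counts -> ring-oscillator frequencies -> pairwise differences
# -> tilde domain. The file name and D are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

D = 1e-3                                             # evaluation time (assumed)
counts = np.loadtxt("board_counts.csv", delimiter=",")
f = 2 * counts / D                                   # f = 2k/D
df = f[0::2] - f[1::2]                               # pairwise differences
x_tilde = norm.cdf(df, loc=0, scale=df.std())        # CDF into the tilde domain
```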
Our resulting dataset consists of #glspl("ber") for quantization symbol widths of up to $6 "bits"$ evaluated with generated helper-data from up to $100 "metrics"$.
We chose not to perform simulations for bit widths higher than $6 "bits"$, as we will see later that we have already reached a bit error rate of approx. $10%$ for these configurations.
//We chose not to perform simulations for bit widths higher than $6 "bits"$, as we will see later that we have already reached a bit error rate of approx. $10%$ for these configurations.
=== Results & Discussion
The bit error rate of different S-Metric configurations for naive labelling can be seen in @fig:global_errorrates.
For this analysis, enrollment and reconstruction were both performed at room temperature and the quantizer was naively labelled.
For this analysis, enrollment and reconstruction were both performed at room temperature. //and the quantizer was naively labelled.
#figure(
image("../graphics/25_25_all_error_rates.svg", width: 95%),
caption: [Bit error rates for same temperature execution. Here we can already observe the asymptotic loss of improvement in #glspl("ber") for higher metric numbers]
caption: [Bit error rates for same-temperature execution. Here we can already observe the asymptotic behaviour of the #glspl("ber") for higher metric numbers. The error rate axis is scaled logarithmically.]
)<fig:global_errorrates>
We can observe two key properties of the S-Metric method in @fig:global_errorrates.
The error rate in this plot is scaled logarithmically.\
The exponential growth of the error rate of classic 1-metric configurations can be observed through the linear increase of the error rates.
Also, as we expanded on in @par:offset_props, using more metrics will, at some point, not further improve the bit error rate of the key.
At a symbol width of $m >= 6$ bits, no further improvement through the S-Metric method can be observed.
//The exponential growth of the error rate of classic 1-metric configurations can be observed through the increase of the error rates.
The exponential growth of the @ber can be observed if we set $S=1$ and increase $M$ up to $6$.
Also, as we expanded on in @par:offset_props, at some point using more metrics will no longer improve the bit error rate of the key.
At a symbol width of $M >= 6$ bits, no further improvement through the S-Metric method can be observed.
#figure(
include("../graphics/plots/errorrates_changerate.typ"),
@@ -392,12 +396,12 @@ At a symbol width of $m >= 6$ bits, no further improvement through the S-Metric
This tendency can also be shown through @fig:errorrates_changerate.
Here, we calculated the quotient of the bit error rate using one metric and 100 metrics.
From $m >= 6$ onwards, $(x_"1" (m)) / (x_"100" (m))$ approaches $~1$, which means, no real improvement is possible anymore through the S-Metric method.
From $M >= 6$ onwards, $(op("BER")(1, 2^M)) / (op("BER")(100, 2^M))$ approaches $1$, which means that no real improvement is possible anymore through the S-Metric method.
==== Helper Data Volume Impact
==== Helper data volume impact
The amount of helper data bits required by @smhdt is defined as a function of the amount of metrics as $log_2(S)$.
The overall extracted-bits to helper-data-bits ratio can be defined here as $cal(r) = lr(frac(n dot M, log_2(S))mid(|))_(n=800) = frac(800 dot M, log_2(S))$
The amount of helper data bits required by @smhdt is defined as a function of the number of metrics as $log_2(S)$.
The overall extracted-bits to helper-data-bits ratio can be defined here as $cal(r) = frac(M, log_2(S))$.
#figure(
table(
@@ -405,7 +409,7 @@ The overall extracted-bits to helper-data-bits ratio can be defined here as $cal
inset: 7pt,
align: center + horizon,
[$bold(M)$], [$1$], [$2$], [$3$], [$4$], [$5$], [$6$],
[*Errorrate*], [$0.012$], [$0.9 dot 10^(-4)$], [$0.002$], [$0.025$], [$0.857$], [$0.148$],
[*@ber*], [$0.012$], [$0.9 dot 10^(-4)$], [$0.002$], [$0.025$], [$0.857$], [$0.148$],
),
caption: [S-Metric performance with same bit-to-metric ratios]
)<fig:smhd_ratio_performance>
@@ -428,8 +432,7 @@ Since we won't always be able to recreate lab-like conditions during the reconstr
)<fig:smhd_tmp_reconstruction>
@fig:smhd_tmp_reconstruction shows the results of this experiment conducted with a 2-bit configuration.\
As we can see, the further we move away from the temperature of enrollment, the higher the bit error rates turns out to be.\
As we can see, the further we move away from the temperature of enrollment, the higher the #glspl("ber").
We can observe this property in detail in @fig:global_diffs.
#scale(x: 90%, y: 90%)[
@@ -439,14 +442,13 @@ We can observe this property well in detail in @fig:global_diffs.
)<fig:global_diffs>]
Here, we compared the asymptotic performance of @smhdt for different temperatures both during enrollment and reconstruction. First we can observe that the optimum temperature for the operation of @smhdt in both phases for the dataset @dataset is $35°C$ instead of the expected $25°C$.
Furthermore, the @ber seems to be almost directly correlated with the absolute temperature difference, especially at higher temperature differences, showing that the further apart the temperatures of the two phases are, the higher the @ber.
Furthermore, the @ber seems to be almost directly determined by the absolute temperature difference, especially at higher temperature differences, showing that the further apart the temperatures of the two phases are, the higher the @ber.
==== Gray coding
In @sect:smhd_improvements, we discussed how a Gray-coded labelling for the quantizer could improve the bit error rates of the S-Metric method.
Because we only change the labelling of the quantizing bins and do not make any changes to #gls("smhdt") itself, we can assume that the effects of temperature on the quantization process carry over directly to the Gray-coded case.
Therefore, we will not perform this analysis again here.
@fig:smhd_gray_coding shows the comparison of applying #gls("smhdt") at room temperature for both naive and Gray-coded labels.
There we can already observe the improvement from Gray-coded labelling, but the impact of this change of labels becomes most apparent in @tab:gray_coded_impact.
@@ -456,11 +458,11 @@ For $M>3$ the rise of the #gls("ber") predominates the possible improvement by a
#figure(
table(
columns: (7),
columns: (6),
align: center + horizon,
inset: 7pt,
[*M*],[1],[2],[3],[4], [5], [6],
[*Improvement*], [$0%$], [$24.75%$], [$47.45%$], [$46.97%$], [$45.91%$], [$37.73%$]
[1],[2],[3],[4], [5], [6],
[$0%$], [$24.75%$], [$47.45%$], [$46.97%$], [$45.91%$], [$37.73%$]
),
caption: [Improvement of using Gray-coded instead of naive labelling, per bit width]
)<tab:gray_coded_impact>
@@ -470,4 +472,4 @@ For $M>3$ the rise of the #gls("ber") predominates the possible improvement by a
caption: [Comparison between #glspl("ber") using naive labelling and Gray-coded labelling]
)<fig:smhd_gray_coding>
Using our dataset, we can estimate the average improvement for using gray-coded labelling to be at around $33%$.
Using the dataset, we can estimate the average improvement from using Gray-coded labelling to be around $33%$.

View file

@@ -2,38 +2,50 @@
#import "@preview/bob-draw:0.1.0": *
= Introduction
In the field of cryptography, @puf devices are a popular tool for key generation and storage.
In the field of cryptography, @puf devices are a popular tool for key generation and storage @PUFIntro @PUFIntro2.
In general, a @puf describes a kind of circuit that, due to minimal deviations in the manufacturing process, exhibits slightly different behaviour during operation.
Since the behaviour of one @puf device is only reproducible on that same device and not on another device of the same type and manufacturing process, it can be used for secure key generation and/or storage.\
To improve the reliability of the keys generated and stored using the @puf, various #glspl("hda") have been introduced.
The general operation of a @puf with a @hda can be divided into two separate stages: _enrollment_ and _reconstruction_.
During enrollment, a @puf readout $v$ is generated upon which helper data $h$ is generated.
At reconstruction, a slightly different @puf readout $v^*$ is generated.
Using the helper data $h$ the new @puf readout $v^*$ can be improved to be less deviated from $v$ as before.
This process of helper-data generation is generally known as _Fuzzy Commitment_.
The general operation of a @puf with a @hda can be divided into two separate stages, _enrollment_ and _reconstruction_, as shown in @fig:puf_operation @PUFChartRef.
Previous works already introduced different #glspl("hda") with various strategies.
#figure(
include("../charts/PUF.typ"),
caption: [@puf model description using enrollment and reconstruction.]
)<fig:puf_operation>
The enrollment stage will usually be performed in near-ideal, lab-like conditions, i.e. at room temperature ($25°C$).
During this phase, a first @puf readout $nu$ with corresponding helper data $h$ is generated.
Going on, reconstruction can now be performed under varying conditions, for example at a higher temperature.
Here, a slightly different @puf readout $nu^*$ is generated.
Using the helper data $h$, the new @puf readout $nu^*$ can be corrected to deviate less from $nu$.
One possible implementation of this principle is called _Fuzzy Commitment_ @fuzzycommitmentpaper @ruchti2021decoder.
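As a rough, textbook-style sketch of the fuzzy-commitment idea (using a simple repetition code; the cited works use more elaborate constructions):
```python
# Sketch: helper data h = readout XOR codeword; a noisy readout nu* then
# recovers the key as decode(nu* XOR h). Repetition code for illustration.
import numpy as np

def encode(key):                       # 3x repetition code
    return np.repeat(key, 3)

def decode(code):                      # majority vote per key bit
    return (code.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 8)
nu = rng.integers(0, 2, 24)            # enrollment readout
h = nu ^ encode(key)                   # helper data, may be stored publicly
nu_star = nu.copy()
nu_star[5] ^= 1                        # one bit error during reconstruction
print((decode(nu_star ^ h) == key).all())   # True: the error is corrected
```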
Previous works already introduced different #glspl("hda") with various strategies @delvaux2014helper @maes2009soft.
The simplest form of helper-data one could generate is reliability information for every @puf bit.
Here, the @hda marks unreliable @puf bits that are then either discarded during reconstruction or rather corrected using a repetition code after the quantization process.
Here, the @hda marks unreliable @puf bits that are then either discarded during reconstruction or rather corrected using an error correction code after the quantization process.
Going on, publications @tmhd1, @tmhd2 and @smhd already introduced a metric-based @hda.
These #glspl("hda") generate helper data during enrollment to define multiple quantizers for the reconstruction phase to minimize the risk of bit errors.
Going on, publications @tmhd1 and @tmhd2 introduced a metric-based @hda known as @tmhdt.
The main goal of such a @hda is to improve the reliability of the @puf during the quantization step of the enrollment phase.
To achieve that, helper data is generated to define multiple quantizers for the reconstruction phase to minimize the risk of bit errors.
A generalization outline extending @tmhdt to higher-order bit quantization has already been proposed by Fischer in @smhd.
As a newly proposed @hda, we will propose a method to shape the input values of a @puf to better fit inside the bounds of a multi-bit quantizer.
We will explore the question which of these two #glspl("hda") provides the better performance for higher order bit cases using the least amount of helper-data bits possible.
In the course of this work, we will first take a closer look at @smhdt as proposed by Fischer @smhd and provide a concrete realization for this method.
We will also propose a method to shape the input values of a @puf to better fit within the bounds of a multi-bit quantizer, which we call @bach.
We will investigate which of these two #glspl("hda") provides the better performance for higher-order bit cases with the least amount of helper-data bits.
== Notation
To ensure a consistent notation of functions and ideas, we will now introduce some conventions and definitions.
Random distributed variables will be notated with a capital letter, i.e. $X$, its realization will be the corresponding lower case letter, $x$.
Vectors will be written in bold text: $bold(k)$ represents a vector of quantized symbols.
Matrices are denoted with a bold capital letter: $bold(M)$
Randomly distributed variables will be denoted by a capital letter, e.g. $X$.
Their realizations will be denoted by the corresponding lower-case letter, $x$.
Values of $x$ subject to some kind of error are marked with a $*$ in the exponent, e.g. $x^*$.
Vectors will be written in bold text: e.g., $bold(k)$ represents a vector of quantized symbols.
Matrices are denoted with a bold capital letter: $bold(M)$.
A quantized symbol will be called $k$; it can take any binary symbol value, e.g. $0$, $01$ or $110$.
A quantizer will be defined as a function $cal(Q)(x, bold(a))$ that returns a quantized symbol $k$.
We also define the following special quantizers for metric based HDAs:
We also define the following special quantizers for metric based #glspl("hda"):
A quantizer used during the enrollment phase is defined by a calligraphic $cal(E)$.
For the reconstruction phase, a quantizer will be defined by a calligraphic $cal(R)$.
@example-quantizer shows the curve of a 2-bit quantizer that receives $tilde(x)$ as input. In the case that the value of $tilde(x)$ equals one of the four bounds, the quantized value is chosen randomly from the adjacent bins.
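A minimal sketch of such a quantizer with random tie-breaking (bounds assumed equi-distant as in the figure):
```python
# Sketch of a 2-bit tilde-domain quantizer with bounds assumed at
# 0.25/0.5/0.75; a value exactly on a bound picks a neighbouring bin randomly.
import bisect, random

BOUNDS = [0.25, 0.5, 0.75]

def Q(x_tilde: float) -> int:
    if x_tilde in BOUNDS:                        # exactly on a bound
        i = BOUNDS.index(x_tilde)
        return random.choice([i, i + 1])         # random neighbouring bin
    return bisect.bisect_right(BOUNDS, x_tilde)  # bin index 0..3

print(Q(0.3), Q(0.5))   # 1, then randomly 1 or 2
```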
@@ -49,25 +61,28 @@ $ cal(Q)(S,M) , $<eq-1>
where $S$ determines the number of metrics and $M$ the bit width of the symbols.
The corresponding metric is defined through the lower case $s$, the bit symbol through the lower case $m$.
=== Tilde-Domain<tilde-domain>
=== Tilde Domain<tilde-domain>
As also described in @smhd, we will use a CDF to transform the real PUF values into the Tilde-Domain
The tilde domain describes the range of numbers between $0$ and $1$, which is defined by the image of a @cdf.
As also described in @smhd, we will use a @cdf to transform the real PUF values into the tilde domain.
This transformation can be performed as $tilde(x) = xi(x)$. The key property of this transformation is the resulting uniform distribution of $tilde(x)$.
Considering a normal distribution, the CDF is defined as
$ xi(frac(x - mu, sigma)) = frac(1, 2)[1 + \e\rf(frac(x - mu, sigma sqrt(2)))] $
$ xi(frac(x - mu, sigma)) = frac(1, 2)[1 + op("erf")(frac(x - mu, sigma sqrt(2)))] $
==== #gls("ecdf", display: "Empirical cumulative distribution function (eCDF)")
The @ecdf is constructed through sorting the empirical measurements of a distribution @dekking2005modern. Although less accurate, this method allows a more simple and less computationally complex way to transform real valued measurements into the Tilde-Domain. We will mainly use the eCDF in @chap:smhd because of the difficulty of finding an analytical description for the CDF of a Gaussian-Mixture.\
To apply it, we will sort the vector of realizations $bold(z)$ of a random distributed variable $Z$ in ascending order.
We will not always be able to find an analytical description of a probability distribution and its corresponding @cdf.
Alternatively, an @ecdf can be constructed through sorting the empirical measurements of a distribution @dekking2005modern.
Although less accurate, this method allows a simpler and less computationally complex way to transform real-valued measurements into the tilde domain.
We will mainly use the @ecdf in @chap:smhd because of the difficulty of finding an analytical description for the @cdf of a weighted linear combination of random variables.
The function for an @ecdf can be defined as
$
xi_#gls("ecdf") (x) = frac("number of elements in " bold(z)", that" <= x, n) in [0, 1],
xi_#gls("ecdf") (x) = frac("number of elements in " bold(z)", s.t" <= x, n) in [0, 1],
$<eq:ecdf_def>
where $n$ defines the number of elements in the vector $bold(z)$.
If the vector $bold(z)$ were to contain the elements $[1, 3, 4, 5, 7, 9, 10]$ and $x = 5$, @eq:ecdf_def would evaluate to $xi_#gls("ecdf") (5) = frac(4, 7)$.\
The application of @eq:ecdf_def on $X$ will transform its values into the empirical tilde-domain.
The application of @eq:ecdf_def on $X$ will transform its values into the empirical tilde domain.
We can also define an inverse @ecdf:
@@ -77,3 +92,4 @@ $<eq:ecdf_inverse>
The result of @eq:ecdf_inverse is the index $i$ of the element $z_i$ from the vector of realizations $bold(z)$.
To apply the @ecdf to our numerical results later, we will sort the vector of realizations $bold(z)$ of a random distributed variable $Z$ in ascending order.
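A small sketch of both transformations on the worked example above (the inverse is one plausible reading of @eq:ecdf_inverse):
```python
# Sketch: eCDF via counting (np.searchsorted) on the sorted realization
# vector, and one plausible reading of the inverse that returns an index.
import numpy as np

z = np.array([1, 3, 4, 5, 7, 9, 10])      # sorted realizations of Z

def ecdf(x: float) -> float:
    """Fraction of elements of z that are <= x."""
    return np.searchsorted(z, x, side="right") / len(z)

def ecdf_inv(u: float) -> int:
    """Index i of the element z_i reached at level u."""
    return max(int(np.ceil(u * len(z))) - 1, 0)

print(ecdf(5))              # 0.571... = 4/7, the worked example above
print(z[ecdf_inv(4 / 7)])   # 5
```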

View file

@@ -5,4 +5,4 @@
6,0.22165875000000004
7,0.17890425000000001
8,0.13274003759398498
9,0.09169562499999998
9,0.09169562499999998


View file

@@ -0,0 +1,17 @@
addends,bit,errorrate
10,5,0.2965336
10,5,0.213798947368421
10,5,0.37411818181818185
10,5,0.312252030075188
10,5,0.23020799999999994
10,5,0.23144159999999997
10,5,0.18691639097744356
10,5,0.14719559999999998
10,6,0.1480327485380117
10,6,0.33373909774436084
10,6,0.14506700000000003
10,6,0.256454375
10,6,0.20765733333333333
10,6,0.2271983709273184
10,6,0.19503741666666666
10,6,0.1496060606060606

View file

@@ -5,8 +5,11 @@
#print-glossary((
(key: "hda", short: "HDA", plural: "HDAs", long: "helper data algorithm", longplural: "helper data algorithms"),
(key: "cdf", short: "CDF", plural: "CDFs", long: "cumulative distribution function", longplural: "cumulative distribution functions"),
(key: "ecdf", short: "eCDF", plural: "eCDFs", long: "empirical Cumulative Distribution Function", longplural: "empirical Cumulative Distribution Functions"),
(key: "ber", short: "BER", plural: "BERs", long: "bit error rate", longplural: "bit error rates"),
(key: "smhdt", short: "SMHD", plural: "SMHDs", long: "S-Metric Helper Data Method"),
(key: "puf", short: "PUF", plural: "PUFs", long: "physical unclonale function", longplural: "physical unclonale functions")
(key: "smhdt", short: "SMHD", plural: "SMHDs", long: "S-Metric Helper Data method"),
(key: "puf", short: "PUF", plural: "PUFs", long: "physical unclonable function", longplural: "physical unclonale functions"),
(key: "tmhdt", short: "TMHD", plural: "TMHDs", long: "Two Metric Helper Data method"),
(key: "bach", short: "BACH", long: "Boundary Adaptive Clustering with Helper data")
))

View file

@@ -5,8 +5,8 @@
plot.plot(size: (10,4),
x-tick-step: none,
x-ticks: ((1, [1]), (2, [2]), (3, [3]), (4, [4]), (5, [5]), (6, [6])),
y-label: $(x_"1" (m)) / (x_"100" (m))$,
x-label: $m$,
y-label: $frac(op("BER")(1, 2^M),op("BER")(100, 2^M))$,
x-label: $M$,
y-tick-step: 500,
axis-style: "left",
x-min: 0,

View file

@@ -25,15 +25,15 @@
)
plot.plot(
y-label: $"Bit error rate"$,
x-label: "Operating configuration",
y-label: "Bit error rate",
x-label: "Enrollment, reconstruction temperature",
x-tick-step: none,
x-ticks: conf,
y-format: formatter,
y-tick-step: 0.5,
axis-style: "scientific-auto",
size: (16,6),
plot.add(errorrate, axes: ("x", "y"), style: (stroke: (paint: red))),
plot.add(errorrate, axes: ("x", "y"), style: (stroke: (paint: red)), label: $op("BER")(100, 2^2)$),
plot.add-hline(1)
)

View file

@@ -3,6 +3,8 @@
#let line_style = (stroke: (paint: black, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#canvas({
import draw: *
set-style(axes: (shared-zero: false))
plot.plot(size: (8,6),
x-tick-step: 0.25,
y-label: $cal(E)(2, 2, tilde(x))$,

View file

@@ -3,6 +3,8 @@
#let line_style = (stroke: (paint: black, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#canvas({
import draw: *
set-style(axes: (shared-zero: false))
plot.plot(size: (8,6),
x-tick-step: 0.25,
y-label: $cal(E)(3, 2, tilde(x))$,

View file

@@ -1,8 +1,10 @@
#import "@preview/cetz:0.2.2": canvas, plot
#import "@preview/cetz:0.2.2": canvas, plot, draw, palette
#let line_style = (stroke: (paint: black, thickness: 2pt))
#let dashed = (stroke: (dash: "dashed"))
#canvas({
import draw: *
set-style(axes: (shared-zero: false))
plot.plot(size: (8,6),
x-tick-step: 0.25,
y-label: $cal(Q)(2, 1, tilde(x))$,
@@ -10,11 +12,12 @@
y-tick-step: none,
y-ticks: ((0.25, [00]), (0.5, [01]), (0.75, [10]), (1, [11])),
axis-style: "left",
x-min: 0,
//x-min: 0,
x-max: 1,
y-min: 0,
y-max: 1,{
plot.add(((0,0.25), (0.25,0.25), (0.5,0.5), (0.75,0.75), (1, 1)), line: "vh", style: line_style)
//plot.add(((0,0), (0,0)), style: (stroke: none))
plot.add-hline(0.25, 0.5, 0.75, 1, style: dashed)
plot.add-vline(0.25, 0.5, 0.75, 1, style: dashed)
})

BIN
main.pdf

Binary file not shown.

View file

@@ -20,8 +20,7 @@
school: school
)
pagebreak()
pagebreak()
pagebreak(to: "odd")
title_page(
title: title,
@@ -34,37 +33,16 @@
submitted: submitted
)
pagebreak()
//set math.equation(numbering: "(1)")
set page(
paper: "a4",
margin: (
top: 3cm,
bottom: 3cm,
x: 2cm,
),
header: [],
footer: [],
//numbering: "1"
)
set par(justify: true)
set align(left)
set text(
font: "Times New Roman",
size: 12pt,
)
set heading(numbering: "1.")
show heading: it => locate(loc => {
let levels = counter(heading).at(loc)
set text(font: "TUM Neue Helvetica")
if it.level == 1 [
#if levels.at(0) != 1 {
pagebreak(to: "odd")
}
#set text(size: 24pt)
#pagebreak()
#if levels.at(0) != 0 {
numbering("1", levels.at(0))
}
@@ -90,16 +68,6 @@
]
})
set page(numbering: none)
contents_page()
set page(numbering: none)
pagebreak()
set page(
paper: "a4",
margin: (
@@ -108,16 +76,30 @@
x: 2cm,
),
header: [],
footer: none,
footer: []
)
//set page(footer: locate(
//loc => if calc.even(loc.page()) {
// align(right, counter(page).display("1"));
//} else {
// align(left, counter(page).display("1"));
//}
//))
contents_page()
pagebreak(to: "odd")
set par(justify: true)
set align(left)
set text(
font: "Times New Roman",
size: 12pt
)
set page(
header: [],
footer: locate(loc =>
if calc.rem(loc.page(), 2) == 0 {
align(left, text(font: "TUM Neue Helvetica", size: 10pt, counter(page).display("1")));
} else {
align(right, text(font: "TUM Neue Helvetica", size: 10pt, counter(page).display("1")));
}
)
)
doc
}

template/conf.typ.back Normal file
View file

@@ -0,0 +1,124 @@
#import "cover.typ": cover_page
#import "title.typ": title_page
#import "contents.typ": contents_page
#let conf(
title: "",
author: "",
chair: "",
school: "",
degree: "",
examiner: "",
supervisor: "",
submitted: "",
doc
) = {
cover_page(
title: title,
author: author,
chair: chair,
school: school
)
pagebreak()
pagebreak()
title_page(
title: title,
author: author,
chair: chair,
school: school,
degree: degree,
examiner: examiner,
supervisor: supervisor,
submitted: submitted
)
pagebreak()
//set math.equation(numbering: "(1)")
set page(
paper: "a4",
margin: (
top: 3cm,
bottom: 3cm,
x: 2cm,
),
header: [],
footer: [],
//numbering: "1"
)
set par(justify: true)
set align(left)
set text(
font: "Times New Roman",
size: 12pt,
)
set heading(numbering: "1.")
show heading: it => locate(loc => {
let levels = counter(heading).at(loc)
set text(font: "TUM Neue Helvetica")
if it.level == 1 [
#set text(size: 24pt)
#pagebreak()
#if levels.at(0) != 0 {
numbering("1", levels.at(0))
}
#it.body
#v(1em, weak: true)
] else if it.level == 2 [
#set text(size: 16pt)
#v(1em)
#numbering("1.1", levels.at(0), levels.at(1))
#it.body
#v(1em, weak: true)
] else if it.level == 3 [
#set text(size: 16pt)
#v(1em, weak: true)
#numbering("1.1.1", levels.at(0), levels.at(1), levels.at(2))
#it.body
#v(1em, weak: true)
] else [
#set text(size: 12pt)
#v(1em, weak: true)
#it.body
#v(1em, weak: true)
]
})
set page(numbering: none)
contents_page()
set page(numbering: none)
pagebreak()
set page(
paper: "a4",
margin: (
top: 3cm,
bottom: 3cm,
x: 2cm,
),
header: [],
footer: none,
)
//set page(footer: locate(
//loc => if calc.even(loc.page()) {
// align(right, counter(page).display("1"));
//} else {
// align(left, counter(page).display("1"));
//}
//))
doc
}

View file

@@ -6,11 +6,7 @@
chair: "",
school: ""
) = {
set text(
font: "TUM Neue Helvetica"
)
set page(
page(
paper: "a4",
margin: (
top: 3cm,
@@ -24,23 +20,24 @@
text(
fill: tum_blue,
size: 8pt,
font: "TUM Neue Helvetica",
[#chair \ #school \ Technical University of Munich]
),
align(bottom + right, image("resources/TUM_Logo_blau.svg", height: 50%))
)
],
footer: []
)
)[
#v(1cm)
v(1cm)
#align(top + left)[#text(font: "TUM Neue Helvetica", size: 24pt, [*#title*])]
set align(top + left)
text(size: 24pt, [*#title*])
#v(3cm)
v(3cm)
#text(font: "TUM Neue Helvetica", fill: tum_blue, size: 17pt, [*#author*])
text(fill: tum_blue, size: 17pt, [*#author*])
#align(bottom + right)[#image("resources/TUM_Tower.png", width: 60%)]
]
set align(bottom + right)
image("resources/TUM_Tower.png", width: 60%)
pagebreak()
}

template/cover.typ.back Normal file
View file

@ -0,0 +1,46 @@
#import "colour.typ": *
#let cover_page(
title: "",
author: "",
chair: "",
school: ""
) = {
set text(
font: "TUM Neue Helvetica"
)
set page(
paper: "a4",
margin: (
top: 3cm,
bottom: 1cm,
x: 1cm,
),
header: [
#grid(
columns: (1fr, 1fr),
rows: (auto),
text(
fill: tum_blue,
size: 8pt,
[#chair \ #school \ Technical University of Munich]
),
align(bottom + right, image("resources/TUM_Logo_blau.svg", height: 50%))
)
],
footer: []
)
v(1cm)
set align(top + left)
text(size: 24pt, [*#title*])
v(3cm)
text(fill: tum_blue, size: 17pt, [*#author*])
set align(bottom + right)
image("resources/TUM_Tower.png", width: 60%)
}

View file

@@ -10,12 +10,7 @@
supervisor: "",
submitted: ""
) = {
set text(
font: "TUM Neue Helvetica",
size: 10pt
)
set page(
page(
paper: "a4",
margin: (
top: 5cm,
@@ -29,36 +24,44 @@
text(
fill: tum_blue,
size: 8pt,
font: "TUM Neue Helvetica",
[#chair \ #school \ Technical University of Munich]
),
align(bottom + right, image("resources/TUM_Logo_blau.svg", height: 30%))
)
],
footer: []
)[
#set text(
font: "TUM Neue Helvetica",
size: 10pt
)
v(1cm)
#v(1cm)
set align(top + left)
text(size: 24pt, [*#title*])
#set align(top + left)
#text(size: 24pt, [*#title*])
v(3cm)
#v(3cm)
text(fill: tum_blue, size: 17pt, [*#author*])
#text(fill: tum_blue, size: 17pt, [*#author*])
v(3cm)
#v(3cm)
[Thesis for the attainment of the academic degree]
v(1em)
[*#degree*]
v(1em)
[at the #school of the Technical University of Munich.]
Thesis for the attainment of the academic degree
#v(1em)
*#degree*
#v(1em)
at the #school of the Technical University of Munich.
v(3cm)
#v(3cm)
[*Examiner:*\ #examiner]
v(0em)
[*Supervisor:*\ #supervisor]
v(0em)
[*Submitted:*\ Munich, #submitted]
*Examiner:*\ #examiner
#v(0em)
*Supervisor:*\ #supervisor
#v(0em)
*Submitted:*\ Munich, #submitted
]
pagebreak()
}

template/title.typ.back Normal file
View file

@@ -0,0 +1,64 @@
#import "colour.typ": *
#let title_page(
title: "",
author: "",
chair: "",
school: "",
degree: "",
examiner: "",
supervisor: "",
submitted: ""
) = {
set text(
font: "TUM Neue Helvetica",
size: 10pt
)
set page(
paper: "a4",
margin: (
top: 5cm,
bottom: 3cm,
x: 2cm,
),
header: [
#grid(
columns: (1fr, 1fr),
rows: (auto),
text(
fill: tum_blue,
size: 8pt,
[#chair \ #school \ Technical University of Munich]
),
align(bottom + right, image("resources/TUM_Logo_blau.svg", height: 30%))
)
],
footer: []
)
v(1cm)
set align(top + left)
text(size: 24pt, [*#title*])
v(3cm)
text(fill: tum_blue, size: 17pt, [*#author*])
v(3cm)
[Thesis for the attainment of the academic degree]
v(1em)
[*#degree*]
v(1em)
[at the #school of the Technical University of Munich.]
v(3cm)
[*Examiner:*\ #examiner]
v(0em)
[*Supervisor:*\ #supervisor]
v(0em)
[*Submitted:*\ Munich, #submitted]
}