Final changes
parent ba3d9a45ec
commit 42a05f2d3a
5 changed files with 7 additions and 8 deletions
@@ -295,7 +295,7 @@ We can lower the computational complexity of this approach by using the assumpti
The end result of $bold(cal(o))$ can be calculated once for a specific device series and saved in the ROM of each device.
During enrollment, only the vector $bold(h)_"opt"$ has to be calculated.
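To illustrate this split, a minimal Python sketch (the concrete construction of $bold(cal(o))$ and $bold(h)_"opt"$ follows from the derivation above; the exhaustive sign search shown here is only an assumed realization, not the exact one):

```python
import itertools
import numpy as np

# Assumed realization: o is a precomputed table of target values, valid for
# a whole device series, that can be stored in ROM. Enrollment then only
# searches the helper-data weights h_opt, with the first weight fixed to 1.
def enroll(x: np.ndarray, o: np.ndarray) -> np.ndarray:
    best_h, best_dist = None, np.inf
    for signs in itertools.product((1.0, -1.0), repeat=len(x) - 1):
        h = np.array((1.0,) + signs)        # h_1 is fixed to 1
        dist = np.min(np.abs(h @ x - o))    # distance to nearest target in o
        if dist < best_dist:
            best_h, best_dist = h, dist
    return best_h
```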
-=== Impact of helper-data volume and amount of addends
+=== Helper-data size and number of addends
The amount of helper data is directly linked to the symbol bit width $M$ and the number of addends $N$ used in the linear combination.
Because we can set the first helper data bit $h_1$ of a linear combination to $1$ and omit the random choice, the resulting ratio of extracted bits to helper data bits $cal(r)$ can be defined as $cal(r) = frac(M, N-1)$, which is similar to the equation we used in the @smhdt analysis.
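To get a feel for this trade-off, a small sketch (plain Python, with example values for $M$ and $N$):

```python
# Ratio of extracted bits to stored helper-data bits: r = M / (N - 1).
# Each linear combination of N addends yields one M-bit symbol, and fixing
# h_1 = 1 means only N - 1 helper-data bits have to be stored per symbol.
def extraction_ratio(M: int, N: int) -> float:
    assert N >= 2, "a linear combination needs at least two addends"
    return M / (N - 1)

for M in (1, 2, 3):
    for N in (2, 4, 8):
        print(f"M = {M}, N = {N}: r = {extraction_ratio(M, N):.3f}")
```

For example, $M = 2$ and $N = 5$ give $cal(r) = 0.5$, i.e. two helper-data bits per extracted bit.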
@@ -391,10 +391,10 @@ We can also compare the performance of @bach using the center point approximatio
caption: [#glspl("ber") for higher-order bit quantization without helper data]
)<tab:no_hd>
-Unfortunately, the comparison of #glspl("ber") of @tab:no_hd[Tables] and @tab:BACH_performance[] shows that our current realization of @bach does either ties the @ber in @tab:no_hd or is worse.
+Unfortunately, the comparison of #glspl("ber") of @tab:no_hd[Tables] and @tab:BACH_performance[] shows that our current realization of @bach either ties the @ber in @tab:no_hd or is worse.
Let's find out why this happens.
-==== Justification of the original idea
+==== Discussion
If we take a step back and look at the performance of the optimized single-bit sign-based quantization process of @sect:1-bit-opt, we can compare the following #glspl("ber"):
@@ -11,7 +11,7 @@ The general operation of a @puf with a @hda can be divided into two separate sta
#figure(
include("../charts/PUF.typ"),
-caption: [@puf model description using enrollment and reconstruction.]
+caption: [@puf model description using enrollment and reconstruction @PUFChartRef]
)<fig:puf_operation>
The enrollment stage will usually be performed in near-ideal, lab-like conditions, i.e. at room temperature ($25°C$).
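To make the two stages concrete, a minimal sketch with a toy helper-data scheme (reliability masking; the function names and the scheme itself are illustrative stand-ins for a real @hda):

```python
import numpy as np

# Enrollment: quantize once under lab-like conditions, store public helper data.
def enroll(x: np.ndarray, threshold: float = 0.5):
    helper = np.abs(x) > threshold          # keep only reliable PUF cells
    key = (x[helper] > 0).astype(int)       # sign-based quantization
    return key, helper

# Reconstruction: reuse the stored helper data on a noisy re-measurement.
def reconstruct(x_noisy: np.ndarray, helper: np.ndarray) -> np.ndarray:
    return (x_noisy[helper] > 0).astype(int)

rng = np.random.default_rng(1)
x = rng.normal(size=64)                     # enrollment measurement, e.g. at 25°C
key, helper = enroll(x)
key_again = reconstruct(x + rng.normal(scale=0.1, size=64), helper)
print("empirical BER:", (key != key_again).mean())
```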
@@ -31,8 +31,7 @@ To achieve that, helper data is generated to define multiple quantizers for the
A generalization outline to extend @tmhdt to higher-order bit quantization has already been proposed by Fischer in @smhd.
In the course of this work, we will first take a closer look at @smhdt as proposed by Fischer @smhd and provide a concrete realization of this method.
-We will also propose a method to shape the input values of a @puf to better fit within the bounds of a multi-bit quantizer which we call @bach.
-We will investigate the question which of these two #glspl("hda") provides the better performance for higher order bit cases with the least amount of helper data bits.
+We will also propose the idea of a method to shape the input values of a @puf to better fit within the bounds of a multi-bit quantizer, which we call @bach, and discuss how such a @hda can be successfully implemented in the future.
== Notation
@@ -10,7 +10,7 @@ Here we aimed to utilize the idea of moving our initial @puf measurement values
Although this method showed promising results for sign-based quantization, yielding an improvement of $approx 96%$ in our testing, finding a good approach to generalize this concept turned out to be difficult.
The first issue was the lack of an analytical description of the probability distribution resulting from the linear combinations.
We accounted for that by using an algorithm that alternates between defining the quantizing bounds using an @ecdf and optimizing the weights for the linear combinations based on the found bounds.
-The loose definition of @eq:optimization to find an ideal linear combination which maximizes the distance to its nearest quantization bound did not result in a stable probability distribution over various iterations.
+The initial loose definition to find ideal linear combinations which maximize the distance to their nearest quantization bounds did not result in a stable probability distribution over various iterations.
Thus, we proposed a different approach that approximates the linear combination to the centers between the quantizing bounds.
This method resulted in a stable probability distribution, but did not provide any meaningful improvements to the @ber in comparison to not using any helper data at all.
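For reference, a compact sketch of that alternation (illustrative Python; the enumerated sign search stands in for the weight optimization of @eq:optimization and its center-point variant):

```python
import itertools
import numpy as np

# Alternate between (1) deriving equiprobable quantization bounds from the
# ECDF of the current linear-combination values and (2) re-choosing each
# combination's sign weights so that it lands closest to an interval center.
def alternate(X: np.ndarray, M: int, iterations: int = 10):
    H = np.ones_like(X)                         # start with all weights +1
    for _ in range(iterations):
        z = (H * X).sum(axis=1)                 # current linear combinations
        edges = np.quantile(z, np.linspace(0, 1, 2**M + 1))
        centers = (edges[:-1] + edges[1:]) / 2  # targets between the bounds
        for i, x in enumerate(X):
            candidates = [np.array((1.0,) + s) for s in
                          itertools.product((1.0, -1.0), repeat=X.shape[1] - 1)]
            H[i] = min(candidates, key=lambda h: np.min(np.abs(h @ x - centers)))
    return H, edges
```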