Is this the end?

Marius Drechsler 2024-08-28 20:50:39 +02:00
parent b8a0ee46f5
commit 90c6d75db4
24 changed files with 8191 additions and 4584 deletions


@@ -1,12 +1,26 @@
= Conclusion and Outlook
Building upon the findings of this work, several topics might be investigated further in the future.
During the course of this work, we took a closer look at a previously introduced @hda, @smhdt, and provided a concrete realization of it.
Our experiments showed that, after a certain point, using more metrics $S$ will not improve the @ber any further, as it behaves asymptotically for $S arrow infinity$.
Furthermore, we concluded that for larger choices of the symbol width $M$, @smhdt is not able to improve the @ber, as the initial error is too high.
An interesting addition to our analysis was the Gray-coded labelling of the quantizer, which improved the @ber by $approx 30%$.
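To illustrate the labelling, the following is a minimal sketch assuming a binary-reflected Gray code and a placeholder symbol width; it is not necessarily the exact label assignment used for the quantizer in this work. The benefit is that a measurement drifting into a neighbouring bin then causes only a single bit error instead of up to $M$ bit errors.

```python
def gray_code(i: int) -> int:
    """Binary-reflected Gray code of index i."""
    return i ^ (i >> 1)

# Label the 2**M quantizer bins so that neighbouring bins differ in one bit;
# M = 2 is a placeholder symbol width used only for illustration.
M = 2
labels = [format(gray_code(i), f"0{M}b") for i in range(2**M)]
print(labels)  # ['00', '01', '11', '10'] -> a single-bin error flips only one bit
```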
Generally, the performance of both helper-data algorithms might be tested on larger datasets.
We then introduced the idea of a new @hda, which we called Boundary Adaptive Clustering with Helper data, @bach.
Here, we aimed to reduce the @ber by moving the initial @puf measurement values away from the quantizer bounds using weighted linear combinations of the input values.
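As an illustration of this objective for the sign-based case (a single quantizer bound at zero), the following is a minimal sketch that brute-forces the $plus.minus 1$ weight vector whose combination lies farthest from the bound; the example values, the exhaustive search, and the absence of any constraint on the weights are assumptions made for illustration, not the exact @bach procedure.

```python
import itertools
import numpy as np

def farthest_from_zero(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Brute-force the +/-1 weight vector whose weighted sum of the
    measurements x lies farthest from the sign-quantizer bound at 0."""
    best_w, best_dist = None, -np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=len(x)):
        w = np.asarray(signs)
        dist = abs(w @ x)          # distance of the combination to the bound
        if dist > best_dist:
            best_w, best_dist = w, dist
    return best_w, best_dist

# hypothetical PUF measurements for one group of combined values
x = np.array([0.12, -0.41, 0.05, 0.33])
w, d = farthest_from_zero(x)       # d == 0.91, the largest attainable |w @ x|
```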
Although this method showed promising results for sign-based quantization, yielding an improvement of $approx 96%$ in our testing, finding a good approach to generalize this concept turned out to be difficult.
The first issue was the lack of an analytical description of the probability distribution resulting from the linear combinations.
We accounted for this by using an algorithm that alternates between defining the quantizer bounds using an @ecdf and optimizing the weights of the linear combinations based on the bounds found.
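A minimal sketch of such an alternating loop is given below; the equiprobable placement of the bounds, the greedy coordinate-wise weight flips, and all function names are assumptions made for illustration and do not claim to reproduce the exact algorithm of this work.

```python
import numpy as np

def ecdf_bounds(values: np.ndarray, n_bins: int) -> np.ndarray:
    """Equiprobable quantizer bounds taken from the empirical CDF of the
    current linear-combination values."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.quantile(values, qs)

def dist_to_nearest_bound(v: float, bounds: np.ndarray) -> float:
    return float(np.min(np.abs(v - bounds)))

def alternate(X: np.ndarray, n_bins: int, iters: int = 10):
    """Alternate between (1) deriving bounds from the eCDF of the current
    combinations and (2) greedily flipping individual +/-1 weights whenever
    this moves a combination farther from its nearest bound."""
    n, s = X.shape
    W = np.ones((n, s))                          # start with all-ones weights
    bounds = ecdf_bounds(X.sum(axis=1), n_bins)
    for _ in range(iters):
        for i in range(n):                       # step 2: per-sample weight flips
            for j in range(s):
                flipped = W[i].copy()
                flipped[j] *= -1.0
                if dist_to_nearest_bound(flipped @ X[i], bounds) > \
                   dist_to_nearest_bound(W[i] @ X[i], bounds):
                    W[i] = flipped
        bounds = ecdf_bounds((W * X).sum(axis=1), n_bins)  # step 1: new bounds
    return W, bounds

# hypothetical input: 1000 groups of 4 measurement values each
W, b = alternate(np.random.default_rng(0).normal(size=(1000, 4)), n_bins=4)
```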
The loose definition of @eq:optimization, which seeks an ideal linear combination that maximizes the distance to its nearest quantization bound, did not result in a stable probability distribution over the iterations.
Thus, we proposed a different approach that approximates the linear combinations to the centers between the quantizer bounds.
This method resulted in a stable probability distribution, but did not provide any meaningful improvement of the @ber compared to not using any helper data at all.
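Loosely written in the notation of this work, and assuming $plus.minus 1$ weights over $S$ values with bounds $b_k$ and optimal points $cal(o)_k$, the two strategies can be contrasted as follows; the exact formulation of @eq:optimization may differ:

$ max_(bold(w)) min_k abs(bold(w)^top bold(x) - b_k) quad "versus" quad min_(bold(w)) min_k abs(bold(w)^top bold(x) - cal(o)_k), quad w_i in {plus.minus 1} $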
Concerning the @bach method, a more efficient way to find the optimal linear combination might exist.
Future investigations of the @bach idea might find a way to make the bound distance maximization strategy converge.
Since the vector of bounds $bold(b)$ is updated in every iteration of @bach, a limit on the deviation of each bound from its previous position might be set, as sketched below.
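The following is a minimal sketch of one possible realization of such a limit, assuming a simple per-bound clamping rule; the parameter `max_step` and the rule itself are assumptions and were not evaluated in this work.

```python
import numpy as np

def limited_bound_update(b_prev: np.ndarray, b_new: np.ndarray, max_step: float) -> np.ndarray:
    """Move each bound at most max_step away from its previous position,
    damping oscillations of the bound vector between iterations."""
    return b_prev + np.clip(b_new - b_prev, -max_step, max_step)
```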
Furthermore, a recursive approach to reach higher-order bit quantization inputs might also result in a converging distribution.
During the iterative process of the center point approximation in @bach, a way might be found to increase the distance between all optimal points $bold(cal(o))$ to achieve a better separation of the linear-combination results in every quantizer bin.
If a converging realization of @bach is found, using fractional weights instead of $plus.minus 1$ could provide more flexibility for the outcome of the linear combinations.
Ultimately, future work can build on these results to provide a complete key storage system that uses @bach or @smhdt to improve the quantization process.
But in the end, the real quantizers were the friends we made along the way.