Fuzzy extractor

Fuzzy extractors convert biometric data into random strings, which makes it possible to apply cryptographic techniques for biometric security. They are used to encrypt and authenticate user records, with biometric inputs as a key. Historically, the first biometric system of this kind was designed by Juels and Wattenberg and was called “Fuzzy commitment”, where the cryptographic key is decommitted using biometric data. “Fuzzy”, in that context, means that a value close to the original one can extract the committed value. Later, Juels and Sudan came up with fuzzy vault schemes, which are order invariant for the fuzzy commitment scheme but use a Reed–Solomon code. The codeword is evaluated by a polynomial and the secret message is inserted as the coefficients of the polynomial; the polynomial is evaluated at different values of a set of features of the biometric data. So fuzzy commitment and fuzzy vault were precursors to fuzzy extractors. A fuzzy extractor is a biometric tool that authenticates a user using their own biometric template as a key. It extracts a uniform random string <math> R </math> from its input <math> w </math> with tolerance for noise. If the input changes to <math> w' </math> but is still close to <math> w </math>, the string <math> R </math> can still be reconstructed. When <math> R </math> is first constructed, the procedure also outputs a helper string <math> P </math> which can be made public without compromising the security of <math> R </math> (used as an encryption and authentication key); <math> P </math> is stored so that <math> R </math> can later be recovered. Robust variants remain secure even when the adversary modifies <math> P </math> (key agreement between a user and a server based only on a biometric input). This article is based on the papers “Fuzzy Extractors: A Brief Survey of Results from 2004 to 2006” and “Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data” by Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin and Adam Smith.


Motivation

Fuzzy extractors address the problem of how to generate strong keys from biometrics and other noisy data. Applying cryptographic paradigms to biometric data means (1) making few assumptions about the biometric data (these data come from a variety of sources, and to keep an adversary from exploiting that, it is best to assume the input is unpredictable), and (2) applying standard cryptographic techniques to the input (for that, a fuzzy extractor converts biometric data into a secret, uniformly random, and reliably reproducible string). According to the paper “Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data” by Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin and Adam Smith, these techniques also have broader applications wherever noisy inputs are used, such as human memory, images used as passwords, and keys from quantum channels. Based on the work of Cynthia Dwork (ICALP 2006), fuzzy extractors also have application in proving the impossibility of strong notions of privacy for statistical databases.

Basic definitions

Predictability

Predictability indicates the probability that an adversary can guess a secret key. Mathematically, the predictability of a random variable <math> A </math> is <math>\max_{a} P[A = a]</math>. For example, given a pair of random variables <math> A </math> and <math> B </math>, if the adversary learns the value <math> b </math> of <math> B </math>, the predictability of <math> A </math> becomes <math>\max_{a} P[A = a | B = b]</math>. Overall, the adversary can predict <math> A </math> with probability <math> E_{b \leftarrow B}[\max_{a} P[A = a | B = b]]</math>. The average is taken over <math> B </math> because it is not under the adversary's control, but since knowing <math> b </math> helps the adversary, the worst case is taken over <math> A </math>.

Min-entropy

Min-entropy indicates worst-case entropy. Mathematically, it is defined as <math>H_\infty(A) = -\log(\max_{a} P[A = a])</math>. A random variable with min-entropy at least <math> m </math> is called an <math> m </math>-source.

Statistical distance

Statistical distance is a measure of distinguishability. Mathematically, the statistical distance between two probability distributions <math> A </math> and <math> B </math> is <math> SD[A,B] = \frac{1}{2}\sum_{v} | P[A = v] - P[B = v] |</math>. In any system, if <math> A </math> is replaced by <math> B </math>, the system behaves like the original with probability at least <math> 1 - SD[A,B] </math>.
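As a toy illustration of these two quantities (not from the cited papers; the distributions below are hypothetical), the following Python sketch computes the min-entropy of a small biased source and its statistical distance from the uniform distribution.

<pre>
# Min-entropy and statistical distance for small discrete distributions,
# each given as a {value: probability} dictionary.
import math

def min_entropy(dist):
    """H_inf(A) = -log2(max_a P[A = a])."""
    return -math.log2(max(dist.values()))

def statistical_distance(dist_a, dist_b):
    """SD[A,B] = 1/2 * sum_v |P[A = v] - P[B = v]|."""
    support = set(dist_a) | set(dist_b)
    return 0.5 * sum(abs(dist_a.get(v, 0.0) - dist_b.get(v, 0.0)) for v in support)

# A biased source: the adversary's best guess ("00") succeeds with probability 1/2,
# so this is a 1-source (min-entropy of 1 bit) even though it has three outcomes.
A = {"00": 0.5, "01": 0.25, "10": 0.25}
U = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}  # uniform on two bits
print(min_entropy(A))              # 1.0
print(statistical_distance(A, U))  # 0.25
</pre>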

Definition 1 (strong extractor)

Let <math> M </math> be the set from which inputs are drawn. A randomized function Ext: <math>M \rightarrow \{0,1\}^l</math> with randomness of length <math> r </math> is an <math>(m,l,\epsilon)</math>-strong extractor if for all <math> m </math>-sources <math> W </math> on <math> M </math>, <math> (Ext(W;I), I) \approx_\epsilon (U_l, U_r), </math> where <math> I = U_r </math> is independent of <math> W </math>. The output of the extractor is a key generated from <math> w \leftarrow W</math> with the seed <math> i \leftarrow I </math>; it behaves independently of other parts of the system with probability <math>1 - \epsilon </math>. Strong extractors can extract at most <math> l = m - 2 \log \frac {1} {\epsilon} + O(1) </math> bits from an arbitrary <math>m</math>-source.
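A standard way to realize a strong extractor is to apply a 2-universal hash family with a public random seed; the leftover hash lemma guarantees near-uniform output. The Python sketch below is only an illustration with toy parameters (the prime, output length, and function names are assumptions, not a construction from the cited papers).

<pre>
# 2-universal hash extractor: Ext(w; (a, b)) = ((a*w + b) mod p) truncated to l bits.
# The seed (a, b) plays the role of I and can be published.
import secrets

P = (1 << 127) - 1          # a Mersenne prime larger than the input domain
OUTPUT_BITS = 64            # l, the length of the extracted string R

def gen_seed():
    """Sample the public seed i <- I = U_r."""
    return (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))

def ext(w: int, seed) -> int:
    """Ext(w; i): extract an l-bit value from the integer-encoded input w."""
    a, b = seed
    return ((a * w + b) % P) & ((1 << OUTPUT_BITS) - 1)

seed = gen_seed()            # public
R = ext(123456789, seed)     # key derived from the (integer-encoded) input
</pre>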

Secure sketch

A secure sketch makes it possible to reconstruct a noisy input: if the input is <math> w </math> and the sketch is <math> s </math>, then given <math> s </math> and a value <math> w' </math> close to <math> w </math>, it is possible to recover <math> w </math>. The sketch <math> s </math>, however, reveals little information about <math> w </math>, so it is secure. Let <math> \mathbb{M} </math> be a metric space with distance function dis; a secure sketch recovers a string <math> w \in \mathbb{M} </math> from any close string <math> w' \in \mathbb{M} </math> without disclosing <math> w </math>.

Definition 2 (secure sketch)

An <math> (m,\tilde{m},t) </math> secure sketch is a pair of efficient randomized procedures (SS – Sketch, Rec – Recover) such that: (1) The sketching procedure SS on input <math> w \in \mathbb{M} </math> returns a string <math> s \in \{0,1\}^* </math>. The recovery procedure Rec takes an element <math> w' \in \mathbb{M}</math> and <math>s \in \{0,1\}^* </math>. (2) Correctness: If <math> dis(w,w') \leq t </math> then <math> Rec(w',SS(w)) = w </math>. (3) Security: For any <math> m </math>-source over <math> M </math>, the min-entropy of <math> W </math> given <math> s </math> is high: for any <math> (W,E) </math>, if <math>\tilde{H}_{\infty}(W|E) \geq m </math>, then <math>\tilde{H}_{\infty}(W|SS(W),E) \geq \tilde{m} </math>.

Fuzzy extractor

Fuzzy extractors do not recover the original input but generate a string <math> R </math> (which is close to uniform) from <math> w </math> and allow its subsequent reproduction (using the helper string <math> P </math>) given any <math> w' </math> close to <math> w </math>. Strong extractors are a special case of fuzzy extractors when <math> t = 0 </math> and <math> P = I </math>.

Definition 3 (fuzzy extractor)

An <math> (m,l,t,\epsilon) </math> fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate and Rep – Reproduce) such that: (1) Gen, given <math> w \in \mathbb{M} </math>, outputs an extracted string <math> R \in \{0,1\}^l </math> and a helper string <math> P \in \{0,1\}^* </math>. (2) Correctness: If <math> dis(w,w') \leq t</math> and <math>(R,P) \leftarrow Gen(w) </math>, then <math> Rep(w',P) = R </math>. (3) Security: For all <math> m </math>-sources <math> W </math> over <math> M </math>, the string <math> R </math> is nearly uniform even given <math> P </math>: if <math>\tilde{H}_{\infty}(W|E) \geq m</math>, then <math>(R,P,E) \approx_\epsilon ( U_{l}, P, E) </math>.

Fuzzy extractors thus output almost uniformly random bits, which is a prerequisite for using them as secret keys in cryptographic applications. Since the output bits are slightly non-uniform, there is a risk of decreased security; but the distance from uniform is at most <math> \epsilon </math>, and as long as that distance is sufficiently small, security remains adequate.

Secure sketches and fuzzy extractors

Secure sketches can be used to construct fuzzy extractors: apply SS to <math> w </math> to obtain <math> s </math>, and a strong extractor Ext with randomness <math> x </math> to <math> w </math> to get <math> R </math>. The pair <math> (s,x) </math> is stored as the helper string <math> P </math>. To reproduce <math> R </math> from <math> w' </math> and <math> P = (s,x) </math>, <math> Rec(w',s) </math> recovers <math> w </math> and <math> Ext(w;x) </math> reproduces <math> R </math>. The following lemma formalizes this.

Lemma 1 (fuzzy extractors from sketches)

Assume (SS,Rec) is an <math> (M,m,\tilde{m},t) </math> secure sketch and let Ext be an average-case <math> (n,\tilde{m},l,\epsilon) </math> strong extractor. Then the following (Gen, Rep) is an <math> (M,m,l,t,\epsilon) </math> fuzzy extractor: (1) Gen <math> (w,r,x) </math>: set <math> P = (SS(w;r),x), R = Ext(w;x), </math> and output <math> (R,P) </math>. (2) Rep <math> (w',(s,x)) </math>: recover <math> w = Rec(w',s) </math> and output <math> R = Ext(w;x) </math>.

Proof: From the definition of a secure sketch (Definition 2), <math>\tilde{H}_\infty(W | SS(W)) \geq \tilde{m} </math>; and since Ext is an average-case <math> (n,\tilde{m},l,\epsilon)</math>-strong extractor, <math> SD (( Ext(W;X),SS(W),X),(U_l,SS(W),X)) = SD((R,P),(U_l,P)) \leq \epsilon. </math>
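To make the Lemma 1 construction concrete, the following Python sketch wires together an arbitrary secure sketch (ss, rec) and strong extractor ext; these three callables are assumed to be supplied by the caller (for instance, the universal-hash extractor sketched above and the code-offset sketch shown below), so the snippet is illustrative rather than a specific scheme from the papers.

<pre>
# Lemma 1, transcribed: build a fuzzy extractor (gen, rep) from any secure
# sketch (ss, rec) and any strong extractor ext.
import secrets

def make_fuzzy_extractor(ss, rec, ext, seed_bytes=16):
    def gen(w):
        """Enrollment: Gen(w) -> (R, P) with P = (SS(w), x) and R = Ext(w; x)."""
        s = ss(w)                              # public sketch of the enrollment reading
        x = secrets.token_bytes(seed_bytes)    # public extractor seed
        return ext(w, x), (s, x)               # (R, P)

    def rep(w_prime, p):
        """Reproduction: Rep(w', (s, x)) = Ext(Rec(w', s); x)."""
        s, x = p
        w = rec(w_prime, s)                    # undo the noise using the sketch
        return ext(w, x)                       # re-derive the same key R

    return gen, rep
</pre>

At enrollment a server would store only the helper string P; at authentication it runs rep on the fresh reading and uses (or compares) the re-derived key R.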

Corollary 1

If (SS,Rec) is an <math> (M,m,\tilde{m},t) </math>-secure sketch and Ext is an <math> (n,\tilde{m}-\log\frac{1}{\delta},l,\epsilon) </math>-strong extractor, then the above construction (Gen,Rep) is an <math> (M,m,l,t,\epsilon + \delta) </math> fuzzy extractor.

The reference paper “Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data” by Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin and Adam Smith (2008) includes many generic combinatorial bounds on secure sketches and fuzzy extractors.

Basic constructions

Due to their error-tolerant properties, secure sketches can be treated, analyzed, and constructed like a <math>(n,k,d)_{\mathcal{F}}</math> general error-correcting code, or <math>[n,k,d]_{\mathcal{F}}</math> for linear codes, where <math>n</math> is the length of codewords, <math>k</math> is the length of the message to be coded, <math>d</math> is the distance between codewords, and <math>\mathcal{F}</math> is the alphabet. If <math>\mathcal{F}^n</math> is the universe of possible words, then it may be possible to find an error-correcting code <math>C \in \mathcal{F}^n</math> that has a unique codeword <math>c \in C</math> for every <math>w \in \mathcal{F}^n</math> with a Hamming distance of <math>dis_{Ham}(c,w) \leq (d-1)/2</math>. The first step in constructing a secure sketch is determining the type of errors that will likely occur and then choosing a distance measure.

Figure (not shown): red is the code-offset construction, blue is the syndrome construction, green represents edit distance and other complex constructions.

Hamming distance constructions

When there is no chance of data being deleted, and only of it being corrupted, the best measurement to use for error correction is Hamming distance. There are two common constructions for correcting Hamming errors, depending on whether the code is linear or not. Both constructions start with an error-correcting code that has a distance of <math>2t+1</math>, where <math>{t}</math> is the number of tolerated errors.

Code-offset construction

When using a <math>(n,k,2t+1)_{\mathcal{F}}</math> general code, assign a uniformly random codeword <math>c \in C</math> to each <math>w</math>, then let <math>SS(w)=s=w-c</math>, which is the shift needed to change <math>c</math> into <math>w</math>. To fix errors in <math>w'</math>, subtract <math>s</math> from <math>w'</math>, then correct the errors in the resulting incorrect codeword to get <math>c</math>, and finally add <math>s</math> to <math>c</math> to get <math>w</math>. This means <math>Rec(w',s)=s+dec(w'-s)=w</math>. This construction can achieve the best possible tradeoff between error tolerance and entropy loss when <math>\mathcal{F} \geq n</math> and a Reed–Solomon code is used, resulting in an entropy loss of <math>2t\log(\mathcal{F})</math>; the only way to improve upon this is to find a code better than Reed–Solomon.
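Below is a minimal sketch of the code-offset idea over GF(2), using a toy 3-repetition code as the error-correcting code (so one flipped bit per 3-bit block is tolerated); over GF(2), addition and subtraction are both XOR. A real construction would use a much stronger code such as BCH or Reed–Solomon; the parameters here are illustrative only.

<pre>
# Code-offset secure sketch over GF(2) with a toy repetition code.
import secrets

BLOCK = 3  # repetition factor of the toy code (corrects 1 error per block)

def encode(msg_bits):
    """Map each message bit to BLOCK identical codeword bits."""
    return [b for b in msg_bits for _ in range(BLOCK)]

def decode(code_bits):
    """Majority-vote each block back to one message bit."""
    return [int(sum(code_bits[i:i + BLOCK]) > BLOCK // 2)
            for i in range(0, len(code_bits), BLOCK)]

def ss(w):
    """SS(w) = s = w - c (XOR) for a uniformly random codeword c."""
    msg = [secrets.randbelow(2) for _ in range(len(w) // BLOCK)]
    return [wi ^ ci for wi, ci in zip(w, encode(msg))]

def rec(w_prime, s):
    """Rec(w', s) = s + dec(w' - s): shift, error-correct, shift back."""
    shifted = [wi ^ si for wi, si in zip(w_prime, s)]   # noisy codeword
    c = encode(decode(shifted))                         # corrected codeword
    return [ci ^ si for ci, si in zip(c, s)]            # recovered w

w = [1, 1, 0, 0, 1, 0]               # enrollment reading (6 bits)
s = ss(w)                            # public sketch
w_noisy = list(w); w_noisy[4] ^= 1   # later reading with one flipped bit
assert rec(w_noisy, s) == w
</pre>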

Syndrome construction

When using a <math>[n,k,2t+1]_{\mathcal{F}}</math> linear code, let <math>SS(w)=s</math> be the syndrome of <math>w</math>. To correct <math>w'</math>, find a vector <math>e</math> such that <math>syn(e)=syn(w')-s</math>; then <math>w=w'-e</math>.
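As an illustrative instance (toy parameters, not from the papers), the sketch below uses the [7,4,3] binary Hamming code, whose parity-check columns are the binary expansions of 1 through 7, so the syndrome of a single-bit error directly names the flipped position.

<pre>
# Syndrome secure sketch with the [7,4,3] binary Hamming code (t = 1).
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]  # 3x7 parity-check matrix

def syn(v):
    """Syndrome of a length-7 bit vector over GF(2)."""
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

def ss(w):
    """SS(w) = syn(w); only 3 bits are published, so entropy loss is at most 3 bits."""
    return syn(w)

def rec(w_prime, s):
    """Find e with syn(e) = syn(w') - s, then return w = w' - e (XOR over GF(2))."""
    delta = [a ^ b for a, b in zip(syn(w_prime), s)]
    pos = delta[0] + 2 * delta[1] + 4 * delta[2]   # 0 means no error
    w = list(w_prime)
    if pos:
        w[pos - 1] ^= 1
    return w

w = [1, 0, 1, 1, 0, 0, 1]
s = ss(w)
w_noisy = list(w); w_noisy[2] ^= 1   # one corrupted position
assert rec(w_noisy, s) == w
</pre>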

Set difference constructions

When working with a very large alphabet or very long strings, resulting in a very large universe <math>\mathcal{U}</math>, it may be more efficient to treat <math>w</math> and <math>w'</math> as sets and look at their set difference to correct errors. To work with a large set <math>w</math> it is useful to look at its characteristic vector <math>x_w</math>, a binary vector of length <math>n</math> that has the value 1 at position <math>a \in \mathcal{U}</math> when <math>a \in w</math>, and 0 when <math>a \notin w</math>. The best way to decrease the size of a secure sketch when <math>n</math> is large is to make <math>k</math> large, since the size is determined by <math>n-k</math>. A good code to base this construction on is a <math>[n,n-t\alpha,2t+1]_{2}</math> BCH code, where <math>n=2^{\alpha}-1</math> and <math>t \ll n</math>, so <math>k \leq n-\log{n \choose t}</math>; it is also useful that BCH codes can be decoded in sublinear time.

Pin sketch construction

Let <math>SS(w)=s=syn(x_w)</math>. To correct <math>w'</math>, first find <math>SS(w')=s'=syn(x_{w'})</math>, then find a set <math>v</math> where <math>syn(x_v)=s'-s</math>, and finally compute the symmetric difference to get <math>Rec(w',s)=w' \triangle v=w</math>. While this is not the only construction that uses set difference, it is the easiest one to use.
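A toy pin-sketch instance over the universe {1, ..., 7} is sketched below, again using the [7,4,3] Hamming-code syndrome of the characteristic vector, so only one differing element (t = 1) can be corrected; a real PinSketch would use BCH codes over a much larger universe, and these parameters are assumptions for illustration.

<pre>
# PinSketch over a toy universe U = {1, ..., 7} with the Hamming-code syndrome.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]  # parity-check matrix

def char_vec(w):
    """Characteristic vector x_w of a subset w of {1, ..., 7}."""
    return [1 if (j + 1) in w else 0 for j in range(7)]

def syn(x):
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def ss(w):
    """SS(w) = syn(x_w): 3 bits are stored, regardless of |w|."""
    return syn(char_vec(w))

def rec(w_prime, s):
    """Find the set v with syn(x_v) = s' - s, then return the symmetric difference."""
    delta = [a ^ b for a, b in zip(syn(char_vec(w_prime)), s)]
    pos = delta[0] + 2 * delta[1] + 4 * delta[2]    # element of U, 0 means v is empty
    v = {pos} if pos else set()
    return w_prime ^ v                              # set symmetric difference

w = {2, 3, 7}
s = ss(w)
assert rec({2, 3, 5, 7}, s) == w    # a spurious element is removed
assert rec({2, 3}, s) == w          # a missing element is restored
</pre>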

Edit distance constructions

When data can be corrupted or deleted, the best measurement to use is edit distance. To make a construction based on edit distance, it is easiest to start with a construction for set difference or Hamming distance as an intermediate correction step and then build the edit distance construction around that.

Other distance measure constructions

There are many other types of errors and distances that can be measured, which can be used to model other situations. Most of these other possible constructions are, like the edit distance constructions, built upon simpler constructions.

Improving error-tolerance via relaxed notions of correctness

It is possible to show that the error-tolerance of a secure sketch can be improved by applying a relaxed notion of correctness, requiring only that errors be correctable with high probability. This shows that it is possible to exceed the Plotkin bound, which limits correction to <math>n/4</math> errors, and to approach Shannon's bound, allowing for nearly <math>n/2</math> corrections. To achieve this better error correction, a less restrictive error distribution model must be used.

Random errors

For this most restrictive model, use a binary symmetric channel <math>BSC_{p}</math> to create a <math>w'</math> that has probability <math>p</math> at each position of the received bit being wrong. This model shows that entropy loss is limited to <math>nH(p)-o(n)</math>, where <math>H</math> is the binary entropy function, and that if the min-entropy is <math>m \geq n(H(\frac{1}{2} - \gamma)) + \varepsilon</math>, then <math>n(\frac{1}{2} - \gamma)</math> errors can be tolerated, for some constant <math>\gamma > 0</math>.

Input-dependent errors

In this model errors do not have a known distribution and can be chosen by an adversary; the only constraints are that <math>dis(w,w') \leq t</math> and that the corrupted word depends only on the input <math>w</math> and not on the secure sketch. It can be shown for this error model that there will never be more than <math>t</math> errors, and since this model can account for all complex noise processes, Shannon's bound can be reached; to do this, a random permutation is prepended to the secure sketch, which reduces entropy loss.

Computationally bounded errors

This model differs from the input-dependent model in that errors depend on both the input <math>w</math> and the secure sketch, and the adversary is limited to polynomial-time algorithms for introducing errors. Since algorithms that run in better than polynomial time are not currently feasible in the real world, a positive result in this error model would guarantee that any errors that occur in practice can be fixed. This is the least restrictive model; the only known way to approach Shannon's bound is to use list-decodable codes, although this may not always be useful in practice, since returning a list instead of a single codeword may not always be acceptable.

Privacy guarantees

In general, a secure system attempts to leak as little information as possible to an adversary. In the case of biometrics, if information about the biometric reading is leaked, the adversary may be able to learn personal information about a user. For example, an adversary might notice a certain pattern in the helper strings that implies the ethnicity of the user. We can consider this additional information a function <math>f(W)</math>. If an adversary were to learn a helper string, it must be ensured that they cannot infer any data about the person from whom the biometric reading was taken.

Correlation between helper string and biometric input

Ideally the helper string <math>P</math> would reveal no information about the biometric input <math>w</math>. This is only possible when every subsequent biometric reading <math>w'</math> is identical to the original <math>w</math>. In this case there is actually no need for the helper string, so it is easy to generate a string that is in no way correlated to <math>w</math>.

Since it is desirable to accept biometric input <math>w'</math> similar to <math>w</math>, the helper string <math>P</math> must be somehow correlated with the input. The more different <math>w</math> and <math>w'</math> are allowed to be, the more correlation there will be between <math>P</math> and <math>w</math>; and the more correlated they are, the more information <math>P</math> reveals about <math>w</math>. We can consider this information to be a function <math>f(W)</math>. The best possible solution is to make sure the adversary cannot learn anything useful from the helper string.

Gen(W) as a probabilistic map

A probabilistic map <math> Y() </math> hides the results of functions with a small amount of leakage <math>\epsilon</math>. The leakage is the difference in probability two adversaries have of guessing some function <math>f(W)</math>, when one knows the probabilistic map and one does not. Formally:

<math>|\Pr[A_1(Y(W)) = f(W)] - \Pr[A_2() = f(W)]| \leq \epsilon </math>

If the function <math>Gen(W)</math> is such a probabilistic map, then even if an adversary knows both the helper string <math>P</math> and the secret string <math> R </math>, they are only negligibly more likely to figure something out about the subject than if they knew nothing. The string <math>R</math> is supposed to be kept secret; so even if it is leaked (which should be very unlikely), the adversary can still figure out nothing useful about the subject, as long as <math> \epsilon </math> is small. We can consider <math> f(W) </math> to be any correlation between the biometric input and some physical characteristic of the person. Setting <math> Y = Gen(W) = (R, P) </math> in the above equation changes it to:

<math>|\Pr[A_1(R, P) = f(W)] - \Pr[A_2() = f(W)]| \leq \epsilon </math>

This means that if one adversary <math>A_1</math> has <math> (R,P) </math> and a second adversary <math>A_2</math> knows nothing, their best guesses at <math> f(W) </math> are only <math> \epsilon </math> apart.

Uniform fuzzy extractors

Uniform fuzzy extractors are a special case of fuzzy extractors in which the output <math>(R,P)</math> of <math>Gen(W)</math> is negligibly different from strings picked from the uniform distribution, i.e. <math>(R, P) \approx_\epsilon (U_\ell, U_{|P|}) </math>.

Uniform secure sketches

Since secure sketches imply fuzzy extractors, constructing a uniform secure sketch allows for the easy construction of a uniform fuzzy extractor. In a uniform secure sketch the sketch procedure <math>SS(w)</math> is a randomness extractor <math>Ext(w;i)</math>, where <math>w</math> is the biometric input and <math>i</math> is the random seed. Since randomness extractors output a string that appears to be drawn from a uniform distribution, they hide all the information about their input.

Applications

Extractor sketches can be used to construct <math>(m, t, \epsilon)</math>-fuzzy perfectly one-way hash functions. When used as a hash function, the input <math>w</math> is the object to be hashed. The pair <math>P, R</math> that <math>Gen(w)</math> outputs is the hash value. To verify that a <math>w'</math> is within distance <math> t </math> of the original <math> w </math>, one verifies that <math>Rep(w', P) = R</math>. <math>(m, t, \epsilon)</math>-fuzzy perfectly one-way hash functions are special hash functions in that they accept any input with at most <math>t</math> errors, compared to traditional hash functions, which only accept an input that matches the original exactly. Traditional cryptographic hash functions attempt to guarantee that it is computationally infeasible to find two different inputs that hash to the same value. Fuzzy perfectly one-way hash functions make an analogous claim: they make it computationally infeasible to find two inputs that are more than <math>t</math> apart and hash to the same value.
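As a usage sketch (assuming the hypothetical gen/rep pair returned by make_fuzzy_extractor above), hashing an object and later verifying a noisy copy of it could look like this:

<pre>
# Fuzzy hash / verify built on an existing fuzzy extractor (gen, rep).
def fuzzy_hash(gen, w):
    R, P = gen(w)
    return P, R                       # (public helper string, hash value)

def fuzzy_verify(rep, w_prime, P, R):
    return rep(w_prime, P) == R       # accepts any w' within distance t of w
</pre>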

Protection against active attacks

An active attack is one in which the adversary can modify the helper string <math>P</math>. If the adversary is able to change <math>P</math> to another string that is also acceptable to the reproduce function <math>Rep(W, P)</math>, it can cause <math>Rep(W, P)</math> to output an incorrect secret string <math>\tilde{R}</math>. Robust fuzzy extractors solve this problem by allowing the reproduce function to fail if a modified helper string is provided as input.

Robust fuzzy extractors

One method of constructing robust fuzzy extractors is to use hash functions. This construction requires two hash functions <math>H_1</math> and <math>H_2</math>. The <math>Gen(W)</math> function produces the helper string <math>P</math> by appending the hash of both the reading <math>w</math> and the sketch <math>s</math> to the output of a secure sketch <math>s = SS(w)</math>. It generates the secret string <math>R</math> by applying the second hash function to <math>w</math> and <math>s</math>. Formally: <math> Gen(w): s = SS(w), return: P = (s, H_1(w,s)), R = H_2(w,s) </math>

The reproduce function <math>Rep(W, P)</math> also makes use of the hash functions <math>H_1</math> and <math>H_2</math>. In addition to verifying that the biometric input is similar enough to the one recovered using the <math>Rec(W,S)</math> function, it also verifies that the hash in the second part of <math>P</math> was actually derived from <math>w</math> and <math>s</math>. If both of those conditions are met, it returns <math>R</math>, which is itself the second hash function applied to <math>w</math> and <math>s</math>. Formally:

<math>Rep(w',\tilde{P}): </math> Get <math>\tilde{s} </math> and <math> \tilde{h} </math> from <math>\tilde{P}; \tilde{w} = Rec(w',\tilde{s}).</math> If <math>\Delta(\tilde{w}, w') \leq t </math> and <math>\tilde{h} = H_1(\tilde{w},\tilde{s})</math> then <math> return: H_2(\tilde{w}, \tilde{s}) </math> else <math> return: fail</math>
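A minimal Python sketch of this robust construction is below; SHA-256 with domain-separation prefixes stands in for H1 and H2, and ss, rec, dist are assumed to be a secure sketch pair and a distance function supplied by the caller (for example, the code-offset sketch above), so the specific choices are illustrative only.

<pre>
# Hash-based robust fuzzy extractor (sketch).
import hashlib

def _h(tag: bytes, w: bytes, s: bytes) -> bytes:
    """Domain-separated hash standing in for H1 (tag=b"1") and H2 (tag=b"2")."""
    return hashlib.sha256(tag + w + s).digest()

def gen(w: bytes, ss):
    """Gen(w): P = (s, H1(w, s)), R = H2(w, s)."""
    s = ss(w)
    return (s, _h(b"1", w, s)), _h(b"2", w, s)     # (P, R)

def rep(w_prime: bytes, p, rec, dist, t: int):
    """Rep(w', P~): recover w~, check the distance and the H1 tag, else fail."""
    s, h = p
    w = rec(w_prime, s)
    if dist(w, w_prime) <= t and h == _h(b"1", w, s):
        return _h(b"2", w, s)
    return None                                    # fail: P was tampered with
</pre>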

If <math>P</math> has been tampered with, it will be obvious because <math>Rep</math> will output fail with very high probability. To cause the algorithm to accept a different <math>P</math>, an adversary would have to find a <math>\tilde{w}</math> such that <math>H_1(w, s) = H_1(\tilde{w},\tilde{s}) </math>. Since hash functions are believed to be one-way functions, it is computationally infeasible to find such a <math>\tilde{w}</math>. Seeing <math>P</math> would provide the adversary with no useful information: since, again, hash functions are one-way functions, it is computationally infeasible for the adversary to reverse the hash function and figure out <math>w</math>. Part of <math>P</math> is the secure sketch, but by definition the sketch reveals negligible information about its input. Similarly, seeing <math>R</math> (even though the adversary should never see it) would provide no useful information, as the adversary would not be able to reverse the hash function and learn the biometric input.

Source

http://wikipedia.org/
