# K-independent hashing


In computer science, a family of hash functions is said to be $k$-independent or $k$-universal if selecting a function at random from the family guarantees that the hash codes of any designated $k$ keys are independent random variables (see precise mathematical definitions below). Such families allow good average case performance in randomized algorithms or data structures, even if the input data is chosen by an adversary. The trade-offs between the degree of independence and the efficiency of evaluating the hash function are well studied, and many $k$-independent families have been proposed.

## Background

The goal of hashing is usually to map keys from some large domain (universe) $U$ into a smaller range, such as $m$ bins (labelled $[m] = \{0, \dots, m-1\}$). In the analysis of randomized algorithms and data structures, it is often desirable for the hash codes of various keys to "behave randomly". For instance, if the hash code of each key were an independent random choice in $[m]$, the number of keys per bin could be analyzed using the Chernoff bound. A deterministic hash function cannot offer any such guarantee in an adversarial setting, as the adversary may choose the keys to be precisely the preimage of a bin. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.

The solution to these problems is to pick a function randomly from a large family of hash functions. The randomness in choosing the hash function can be used to guarantee some desired random behavior of the hash codes of any keys of interest. The first definition along these lines was universal hashing, which guarantees a low collision probability for any two designated keys. The concept of $k$-independent hashing, introduced by Wegman and Carter in 1981, strengthens the guarantees of random behavior to families of $k$ designated keys, and adds a guarantee on the uniform distribution of hash codes.

## Definitions

The strictest definition, introduced by Wegman and Carter, is that of a $(\mu, k)$-independent family, which must satisfy: for all distinct $(x_1, \dots, x_k) \in U^k$ and all $(y_1, \dots, y_k) \in [m]^k$, $~~\Pr_{h \in H} \left[ h(x_1)=y_1 \land \cdots \land h(x_k)=y_k \right] \le \mu / m^k$
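As a concrete illustration, the exact case $\mu = 1$ can be checked exhaustively for a toy family: the sketch below (with the arbitrary choice $p = 5$) verifies that degree-1 polynomials modulo a prime $p$ form a 2-independent family mapping $[p]$ into $[p]$.

```python
from itertools import product

# Toy family h_{a,b}(x) = (a*x + b) mod p, over all a, b in [p],
# hashing the universe [p] into [p] bins. p = 5 is an arbitrary choice.
p = 5
family = [(a, b) for a in range(p) for b in range(p)]

def h(a, b, x):
    return (a * x + b) % p

# 2-independence with mu = 1: for every pair of distinct keys x1 != x2 and
# every pair of target codes (y1, y2), exactly one member of the family maps
# x1 -> y1 and x2 -> y2, so the probability is exactly 1/p^2 = 1/|family|.
for x1, x2 in product(range(p), repeat=2):
    if x1 == x2:
        continue
    for y1, y2 in product(range(p), repeat=2):
        hits = sum(1 for a, b in family
                   if h(a, b, x1) == y1 and h(a, b, x2) == y2)
        assert hits == 1  # probability exactly 1/p^2

print("2-independence verified for p =", p)
```

The uniqueness of the hit count follows from the fact that two points with distinct arguments determine a line modulo a prime.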

Observe that, even if $\mu$ is close to 1, the $h(x_i)$ are no longer independent random variables, which is often a problem in the analysis of randomized algorithms. A common alternative for dealing with such rounding issues is to prove that the hash family is close in statistical distance to a $k$-independent family, which allows black-box use of the independence properties.

## Techniques

### Polynomials with random coefficients

The original technique for constructing $k$-independent hash functions, given by Carter and Wegman, was to select a large prime number $p$, choose $k$ random numbers modulo $p$, and use these numbers as the coefficients of a polynomial of degree $k-1$ whose values modulo $p$ are used as the value of the hash function. All polynomials of the given degree modulo $p$ are equally likely, and any polynomial is uniquely determined by any $k$-tuple of argument-value pairs with distinct arguments, from which it follows that any $k$-tuple of distinct arguments is equally likely to be mapped to any $k$-tuple of hash values.

### Tabulation hashing

Tabulation hashing partitions a key into blocks of bits, uses each block as an index into a table of random values, and combines the table entries with the bitwise exclusive-or operation; simple tabulation hashing of this form is 3-independent. Variations of tabulation hashing can achieve higher degrees of independence by performing table lookups based on overlapping combinations of bits from the input key, or by applying simple tabulation hashing iteratively.
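The polynomial-with-random-coefficients construction can be sketched as follows; the Mersenne prime $2^{61}-1$ and the final reduction modulo the number of bins $m$ are common implementation choices assumed here, and (as noted under Definitions) the mod-$m$ step makes the family only approximately $k$-independent.

```python
import random

def make_k_independent_hash(k, p, m, rng=random):
    """Sample a hash function from the degree-(k-1) polynomial family.

    p: a prime assumed larger than any key in the universe.
    m: number of bins; the final reduction mod m introduces a small
       rounding bias, so the family is only approximately k-independent.
    """
    coeffs = [rng.randrange(p) for _ in range(k)]

    def h(x):
        # Evaluate the random polynomial at x via Horner's rule, mod p.
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc % m

    return h

# Usage: a 5-independent (up to the mod-m bias) function mapping keys
# into 1024 bins, using the Mersenne prime 2^61 - 1.
h = make_k_independent_hash(k=5, p=(1 << 61) - 1, m=1024)
print(h(42), h(12345))
```

Horner's rule keeps the evaluation to $k$ multiplications and additions per key, which is why the degree of independence trades off directly against evaluation cost.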

## Independence needed by different hashing methods

The notion of $k$-independence can be used to differentiate between different hashing methods, according to the level of independence required to guarantee constant expected time per operation.

For instance, hash chaining takes constant expected time even with a 2-independent hash function, because the expected time to perform a search for a given key is bounded by the expected number of collisions that key is involved in. By linearity of expectation, this expected number equals the sum, over all other keys in the hash table, of the probability that the given key and the other key collide. Because the terms of this sum only involve probabilistic events involving two keys, 2-independence is sufficient to ensure that this sum has the same value that it would for a truly random hash function.
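This pairwise argument can be made concrete with a small exhaustive computation over the 2-independent family $h_{a,b}(x) = (ax + b) \bmod p$; the prime $p = 7$, the query key, and the key set below are arbitrary choices for illustration.

```python
from fractions import Fraction

# With the 2-independent family h_{a,b}(x) = (a*x + b) mod p, the expected
# number of collisions between a query key q and a set S of other keys is
# exactly |S|/p, the same as for a truly random function, because
# 2-independence fixes every pairwise collision probability at 1/p.
p = 7
family = [(a, b) for a in range(p) for b in range(p)]

def h(a, b, x):
    return (a * x + b) % p

q = 0                # query key (arbitrary choice)
S = [1, 2, 3, 4]     # other keys in the table (arbitrary choice)

total_hits = sum(sum(1 for x in S if h(a, b, x) == h(a, b, q))
                 for a, b in family)
expected = Fraction(total_hits, len(family))
assert expected == Fraction(len(S), p)  # matches |S|/p exactly
print("expected collisions:", expected)
```

By linearity of expectation the sum decomposes into pairwise terms, which is exactly why nothing beyond 2-independence is needed here.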

On the other hand, linear probing, a simpler form of open addressing where the step size is always one, requires 5-independence: constant expected time per operation is guaranteed with any 5-independent hash function, while there exist 4-independent hash functions for which it takes logarithmic time per operation.