PLONK is a SNARK (Succinct Non-interactive Argument of Knowledge) that can verify a computation done by an arithmetic circuit: given a circuit $C$ that on input $x$ outputs $C(x) = y$, a PLONK proof (generated by the prover) will convince another party (the verifier) that the prover knows $x$ such that $C(x)$ evaluates to $y$. Plookup is a new SNARK component that allows some circuits to be encoded more efficiently by implementing a lookup table for a specified function. This removes the circuit complexity needed to compute the function, so the complexity of the SNARK depends only on the size of the function's domain.

With SNARKs, we're usually interested in the case where computing such an input $x$ from $C$ and $y$ is infeasible. Hash functions are a great example of where Plookup could simplify circuits, shortening prover and verifier time as well as proof sizes. Most hash functions considered secure were designed to be fast in hardware; some were also designed to perform well in software; either way, the prevailing paradigm is to work over binary strings. When encoded as ~256-bit field elements, this leads to either inefficient encodings (very large field elements representing very small strings) or a huge number of gates (defining XOR, for example, using only addition and multiplication mod $p$ is possible, but it is one of the most complex functions to express as an arithmetic circuit).

Plookup could also allow recursive SNARKs without cycles by allowing arithmetic over different fields. Most of the elliptic curves used in SNARKs are defined over a field $\mathbb{Z}_{p}$ and each point is a pair of field elements (representing the $x$ and $y$ coordinates). We use these to create SNARKs for arithmetic circuits where the addition and multiplication gates compute the operations $\mod q$, where $q$ and $p$ are different primes. So the difficulty in recursively verifying SNARKs using elliptic curves (without cycles) largely comes from the difficulty of expressing addition and multiplication mod $p$ using only addition and multiplication mod $q$. (Try it for yourself! How would you define addition mod 5 using only addition and multiplication mod 7? No "if" statements allowed.)

Reviewing PLONK

Note: PLONK and Plookup use sequences in different ways. PLONK uses sequences of length $n$ to describe at most $n$ gates and the values of their input/output wires. Plookup uses sequences of length $n+d$ to describe $n$ queries to a lookup table with $d$ rows. Both of them use a (different) sequence of length $n$ as part of their grand product arguments.

PLONK breaks down circuit verification into a check that the input-output behavior of each gate is correct, which is a fairly simple constraint system. Each gate has a label $i \in \{ 1, 2, \ldots, n \}$ and corresponds to the constraint equation

$$ Q^L_i a_i + Q^R_i b_i + Q^M_i a_i b_i + Q^C_i = Q^O_i c_i . $$


  • $a_i$ is the left input wire, $b_i$ is the right input wire, $c_i$ is the output wire
  • if gate i is an addition gate then $Q^L_i = Q^R_i = Q^O_i = 1$, and $Q^M_i = Q^C_i = 0$
  • if gate i is a multiplication gate then $Q^M_i = Q^O_i = 1$, and $Q^L_i = Q^R_i = Q^C_i = 0$
  • if gate i is a constant gate (always outputting $C_i \in \mathbb{F}$) then $Q^C_i = C_i$, $Q^O_i = 1$, and $Q^L_i = Q^R_i = Q^M_i = 0$

so that the sequences $Q^L, Q^R, Q^M, Q^C, Q^O$ are "selector" sequences, selecting the gate type by making the irrelevant constraint terms zero.
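As a toy illustration, the selector mechanism can be sketched in Python over a small stand-in field (all values and names here, like `gate_ok`, are hypothetical, not from any PLONK implementation):

```python
# Toy check of PLONK's per-gate constraint over a small prime field.
p = 101  # a small prime standing in for the SNARK's scalar field

def gate_ok(qL, qR, qM, qC, qO, a, b, c):
    """Check qL*a + qR*b + qM*a*b + qC == qO*c (mod p)."""
    return (qL * a + qR * b + qM * a * b + qC) % p == (qO * c) % p

# Addition gate: selectors (1, 1, 0, 0, 1), so the constraint is a + b = c.
assert gate_ok(1, 1, 0, 0, 1, a=3, b=4, c=7)

# Multiplication gate: selectors (0, 0, 1, 0, 1), so the constraint is a*b = c.
assert gate_ok(0, 0, 1, 0, 1, a=3, b=4, c=12)

# Constant gate outputting 9: selectors (0, 0, 0, 9, 1), so the constraint is 9 = c.
assert gate_ok(0, 0, 0, 9, 1, a=0, b=0, c=9)
```

Note how the zeroed selectors make the irrelevant terms of the one shared equation disappear for each gate type.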

For each sequence, we can define an interpolating polynomial that passes through the elements of the sequence, i.e. for the sequence $(a_1, a_2, \ldots, a_n)$ we could define a polynomial $A(x)$ so that $A(x_1) = a_1$, $A(x_2) = a_2$, all the way through $A(x_n) = a_n$.
For efficiency, we choose a primitive nth root of unity, which we call ω, and then set $x_1 = \omega$, $x_2 = \omega^2$, and so on... From now on, when we talk about a polynomial interpolating a sequence, this is the construction we're referring to (but the math (mostly) checks out if it's easier to think of $x_1$ being 1, $x_2$ being 2, etc.).
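Here is a minimal sketch of that interpolation, assuming the toy parameters $p = 17$, $n = 4$, and $\omega = 4$ (a primitive 4th root of unity mod 17, since $4^4 \equiv 1$ but $4^2 = 16 \neq 1$). Real SNARKs use much larger fields and FFT-based interpolation rather than the naive Lagrange formula below:

```python
# Naive Lagrange interpolation over a toy field at powers of a root of unity.
p, n, omega = 17, 4, 4
xs = [pow(omega, i, p) for i in range(1, n + 1)]  # ω, ω², ω³, ω⁴

def interpolate(ys):
    """Return a function A with A(xs[i]) = ys[i], via Lagrange basis polynomials."""
    def A(x):
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if i != j:
                    # multiply by (x - xj) / (xi - xj) mod p
                    term = term * (x - xj) * pow(xi - xj, -1, p) % p
            total = (total + term) % p
        return total
    return A

a_seq = [5, 9, 2, 11]
A = interpolate(a_seq)
assert [A(x) for x in xs] == a_seq  # A passes through the sequence
```

(`pow(v, -1, p)` computes a modular inverse; Python 3.8+.)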

Once we have these polynomials, we have a way to check (with overwhelming probability) that the constraints hold. We have polynomials interpolating (or representing) each selector sequence and each wire sequence, and if they were defined for a valid circuit computation, then the constraint above takes the polynomial form

$$ Q^L(\omega^i)A(\omega^i) + Q^R(\omega^i) B(\omega^i) +Q^M(\omega^i) A(\omega^i) B(\omega^i) + Q^C(\omega^i) = Q^O(\omega^i) C(\omega^i) . $$

This only needs to hold on $\{ \omega, \omega^2, \ldots , \omega^n \}$. Subtracting $Q^O(\omega^i) C(\omega^i)$ from both sides, this is

$$ Q^L(\omega^i)A(\omega^i) + Q^R(\omega^i) B(\omega^i) +Q^M(\omega^i) A(\omega^i) B(\omega^i) + Q^C(\omega^i) - Q^O(\omega^i) C(\omega^i)= 0 $$

so we check that the left-hand side, as a polynomial, is divisible by $x^n - 1$: the nonzero polynomial of minimal degree that vanishes (evaluates to zero) on $\{ \omega, \omega^2, \ldots , \omega^n \}$.
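Because each interpolated polynomial agrees with its sequence on $\omega, \ldots, \omega^n$, divisibility by the vanishing polynomial is equivalent to the constraint holding index by index. A toy sketch with hypothetical values (a real SNARK checks the divisibility succinctly via a quotient polynomial, not point by point):

```python
# Check the combined gate constraint vanishes on the whole evaluation domain,
# which is equivalent to divisibility by x^n - 1. Toy, hypothetical values:
# gates alternate addition, multiplication, addition, multiplication.
p, n = 17, 4
qL = [1, 0, 1, 0]; qR = [1, 0, 1, 0]; qM = [0, 1, 0, 1]
qC = [0, 0, 0, 0]; qO = [1, 1, 1, 1]
a = [2, 2, 5, 3]    # left inputs
b = [3, 3, 1, 4]    # right inputs
c = [5, 6, 6, 12]   # outputs: 2+3, 2*3, 5+1, 3*4

for i in range(n):
    lhs = (qL[i]*a[i] + qR[i]*b[i] + qM[i]*a[i]*b[i] + qC[i] - qO[i]*c[i]) % p
    assert lhs == 0  # the interpolated constraint polynomial vanishes at ω^(i+1)
```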

So now we know that the output of each gate matches the expected behavior with the given inputs. But at this stage the gates are totally disconnected from each other. To ensure the gates are connected correctly, we create a copy constraint in the form of a permutation in the following way:

  1. First, wires that are supposed to take the same value (usually when the output wire of one gate is also the input wire of another gate) are grouped together into disjoint blocks. For example, if an output wire with index $x_{c1}$ feeds into other gates as input wires with indices $x_{a2}, x_{a3}, x_{b3}$, then the corresponding block is $\{ x_{c1}, x_{a2}, x_{a3}, x_{b3} \}$. (To be abundantly clear on notation here, in this section $x_{ai}$ means "the index of the left wire of gate $i$" whereas $a_i$ or $w_{ai}$ refers to "the value that that wire takes".)

  2. Then we create a permutation $\sigma$ that cycles each block, so that $\sigma (x_{c1}) = x_{a2}$, $\sigma (x_{a2}) = x_{a3}$, $\sigma (x_{a3}) = x_{b3}$, and $\sigma (x_{b3}) = x_{c1}$.

Then to check that the gates are wired together correctly, we only need to check that $(a_1, \ldots, a_n, b_1, \ldots, b_n, c_1, \ldots, c_n) = (w_{\sigma(x_{a1})}, \ldots, w_{\sigma(x_{an})}, w_{\sigma(x_{b1})}, \ldots, w_{\sigma(x_{bn})}, w_{\sigma(x_{c1})}, \ldots, w_{\sigma(x_{cn})})$. The key to the permutation check is a "grand product" that can be checked as it's built up iteratively, but that ultimately encodes the sequence elements and their order in such a way that a sequence and $\sigma$ applied to that sequence are represented by the same product.

PLONK's Permutation Argument

For two sequences of length n, $(f_1, f_2, \ldots, f_n)$ and $(g_1, g_2, \ldots, g_n)$, the prover wants to convince the verifier that $g_i = f_{\sigma(i)}$ for $1 \leq i \leq n$ for a permutation $\sigma$ that is known to both parties. This is done in a roundabout way partly to make it more amenable to succinct verification, but also to allow zero knowledge. We'll describe the protocol here in terms of sequences, but the protocol will actually use polynomials $\hat{f}$ and $\hat{g}$ that interpolate the two sequences.

It proceeds as follows:

  1. The verifier sends random $\beta, \gamma \in \mathbb{F}$ to the prover.
  2. The prover constructs two new sequences $(\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_n)$ and $(\mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_n)$ where

$$ \mathbf{f}_i = f_i + \beta \cdot i + \gamma $$

$$ \mathbf{g}_i = g_i + \beta \cdot \sigma(i) + \gamma $$

  3. The prover then sends a [polynomial interpolating a] sequence defined recursively as

$$ s_0 = 1 $$

$$ s_i = s_{i-1} \cdot \frac{\mathbf{f_i}}{\mathbf{g_i}} $$

This is the "grand product" in our grand product argument, where the name comes from the fact that this recursion is equivalently defined as

$$ s_i = \frac{\prod_{1 \leq j \leq i} \ \mathbf{f_j} }{\prod_{1 \leq j \leq i} \ \mathbf{g_j} } $$

  4. The verifier checks that the following equations hold on the set $\{ \omega, \omega^2, \ldots, \omega^n \}$ (the set we used to define the interpolation of the sequences)

$$ \hat{s}(\omega^n) = s_n = 1 $$

$$ \hat{s}(\omega^i) \cdot \mathbf{g}_i = \hat{s}(\omega^{i-1}) \cdot \mathbf{f}_i $$

where $\hat{s}$ is the polynomial that interpolates the sequence $s$ defined above. If these equations hold, then the verifier is convinced that $g_i = f_{\sigma(i)}$ for $1 \leq i \leq n$.

An informal argument as to why the verifier is convinced works backwards from n to 1. Looking at the grand product defining $s_i$, the term $s_n$ should be 1 if $g_i = f_{\sigma(i)}$ for all i because the numerator and denominator are a product of exactly the same terms. Assume $g_i = f_{\sigma(i)}$ for a particular i. Then

$$ \mathbf{g_i} = g_i + \beta \cdot \sigma(i) + \gamma $$

$$ = f_{\sigma(i)} + \beta \cdot \sigma(i) + \gamma = \mathbf{f}_{\sigma(i)} $$

so each term in the denominator also appears in the numerator.

This alone isn't particularly convincing (a dishonest prover could say $s_n =1$ regardless of whether $(f_1, f_2, \ldots, f_n)$ and $(g_1, g_2, \ldots, g_n)$ are related by the permutation), which is why the verifier also checks that
$ \hat{s}(\omega^i) \cdot \mathbf{g}_i = \hat{s}(\omega^{i-1}) \cdot \mathbf{f}_i $ for $i = n$, and for $ i = n-1, \ n-2, \ \ldots, \ 1$.
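The whole argument can be run numerically. Below is a toy sketch in Python over a small field, with hypothetical values: $\beta$ and $\gamma$ are fixed for reproducibility (in the protocol they are random verifier challenges), and the permutation is written 0-indexed as code usually is:

```python
# Toy run of the grand product permutation argument over a small field.
p = 101
f = [7, 3, 9, 5]
sigma = [2, 0, 3, 1]                  # a permutation of the indices 0..3 (0-indexed)
g = [f[sigma[i]] for i in range(4)]   # g_i = f_{σ(i)}
beta, gamma = 13, 29                  # stand-ins for random verifier challenges

s = 1                                 # s_0 = 1
for i in range(4):
    num = (f[i] + beta * (i + 1) + gamma) % p          # f_i + β·i + γ (1-indexed i)
    den = (g[i] + beta * (sigma[i] + 1) + gamma) % p   # g_i + β·σ(i) + γ
    s = s * num * pow(den, -1, p) % p                  # s_i = s_{i-1} · f_i / g_i

assert s == 1  # numerator and denominator multiply the same terms, just reordered
```

Replacing any $g_i$ with a value that breaks $g_i = f_{\sigma(i)}$ makes the final product land on 1 only with negligible probability over $\beta, \gamma$.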

PLONK vs Plookup

Both PLONK and Plookup rely on a "grand product" argument to verify that two sequences are permutations of each other. In PLONK, this grand product is the backbone of the permutation check that verifies the wires are connected correctly: it checks that polynomials $f(x)$ and $g(x)$ interpolate sequences $(f_1, f_2, \ldots, f_n)$ and $(g_1, g_2, \ldots, g_n)$ where some (known) permutation $\sigma$ re-arranges $(f_1, f_2, \ldots, f_n)$ into $(g_1, g_2, \ldots, g_n)$. In Plookup, it checks that a sequence $(s_1, s_2, \ldots, s_{n+d})$ is the "gluing together" of two sequences $(f_1, f_2, \ldots, f_n)$ and $(t_1, t_2, \ldots, t_d)$ which has then been re-arranged to be "in order" (again by looking at the polynomials which interpolate them), with the important additional condition that every value $f_i$ takes appears somewhere in the sequence $(t_1, t_2, \ldots, t_d)$. So we can check each function evaluation to be looked up by compressing a tuple $(\mathrm{input}_i, \mathrm{output}_i)$ into a single field element $f_i$ and then checking that the lookup values $(f_1, f_2, \ldots, f_n)$ actually appear in our table $(t_1, t_2, \ldots, t_d)$ containing all the valid compressed input-output pairs.

Randomized differences

Sequences and subsequences are different from sets: an element can appear more than once in a sequence but not in a set, and the order in which elements are listed matters. Still, the notion of subsequence that we will use more closely resembles the notion of a subset. We say that a sequence $A = (a_1, a_2, \ldots a_m)$ is a subsequence of $B = (b_1, b_2, \ldots b_n)$ if

  • every element that appears in $A$ also appears in $B$ (and not more often in $A$ than in $B$), and
  • the elements of $A$ appear in the same order as they do in $B$.

So for example (1, 2, 2, 3) is a subsequence of (1, 1, 2, 2, 3, 3, 4, 4, 2) and (1, 2, 3, 2) is as well. However, (3, 3, 3) and (1, 2, 4, 3) would not be subsequences. As a visual, (b,c,f,i,k) is a subsequence of (a,b,c,d,e,f,g,h,i,j,k) because they can be lined up like so...

. b c . . f . . i . k ...

a b c d e f g h i j k ...

Plookup introduces the notion of "randomized differences" as a necessary condition for two sequences to be the same. (As with many definitions SNARKs are based on, a randomly chosen instantiation of the converse is true with high probability.) The difference set of a sequence $(f_1, f_2, \ldots, f_n)$ is the set {$ f_2 -f_1, \ f_3 -f_2, \ \ldots, \ f_n - f_{n-1} $}. (This is using our usual definition of a set, so order and multiplicity are ignored.) For example, the sequence (1, 4, 8) has a difference set of {3, 4}. The idea would be that we could recognize that (1, 4, 8) is a subsequence of (1, 1, 4, 8, 8, 8) because its difference set is {0, 3, 4}, which is the same up to a zero representing repeated elements. However, it isn't hard to come up with another sequence with the same difference set, like (1, 1, 5, 5, 8, 8). For this reason, Plookup instead uses a "randomized difference set" consisting of elements of the form $f_i + \beta \cdot f_{i+1}$ for a random $\beta \in \mathbb{F}$.
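A quick numeric sketch of why the randomization helps, using the example sequences above ($\beta$ is fixed here for reproducibility; in the protocol it is a random verifier challenge):

```python
# Two different sequences share a plain difference set, but randomized
# differences f_i + β·f_{i+1} tell them apart (with high prob. over β).
p = 101

def diff_set(seq):
    """Plain difference set: order and multiplicity are ignored."""
    return {(b - a) % p for a, b in zip(seq, seq[1:])}

def randomized_diffs(seq, beta):
    """Multiset of f_i + β·f_{i+1}, kept sorted so multiplicity matters."""
    return sorted((a + beta * b) % p for a, b in zip(seq, seq[1:]))

s1 = [1, 1, 4, 8, 8, 8]
s2 = [1, 1, 5, 5, 8, 8]
assert diff_set(s1) == diff_set(s2) == {0, 3, 4}   # plain differences collide

beta = 10
assert randomized_diffs(s1, beta) != randomized_diffs(s2, beta)
```

The randomized differences are linear polynomials in $\beta$, so by Schwartz-Zippel two different multisets of them almost never collide at a random $\beta$.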

The Protocol

The protocol as defined is fairly simple to outline. First, it reduces a problem about functions to a problem about sequences. If we have a function $f$ that takes two arguments, then we can think of our lookup table as consisting of the triples $(x, y, z)$ such that $f(x,y) = z$. The verifier then sends a random $\alpha \in \mathbb{F}$ to the prover, and they both compute a new set consisting only of field elements as {$ \alpha x + \alpha^2 y + \alpha^3 z \ : \ f(x,y) = z $}.
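As a toy sketch, here is that compression step for a hypothetical 2-bit XOR table, with a fixed stand-in for the random challenge $\alpha$ (a real instantiation uses a large field and a genuinely random $\alpha$):

```python
# Compress a two-argument lookup table into single field elements.
p = 10007
alpha = 123  # stand-in for the verifier's random challenge

def compress(x, y, z):
    return (alpha * x + alpha**2 * y + alpha**3 * z) % p

# Lookup table for 2-bit XOR, as triples (x, y, x^y), then compressed.
table = {compress(x, y, x ^ y) for x in range(4) for y in range(4)}

# A prover claiming 2 XOR 3 = 1 produces a compressed query found in the table;
# a wrong output misses the table (with high probability over α).
assert compress(2, 3, 2 ^ 3) in table
assert compress(2, 3, 0) not in table
```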

Now that we've compiled the lookup table to a lookup set, we can work with sequences of field elements. The verifier sends random $\beta, \gamma \in \mathbb{F}$ to the prover, who computes a polynomial Z defined by the recurrence

$$ Z(g) = 1 $$

$$ Z(g^{i+1}) = Z(g^i) \cdot \frac{(1+\beta)(\gamma +f_i)(\gamma(1+\beta) +t_i + \beta t_{i+1})}{(\gamma (1+\beta)+s_i +\beta s_{i+1})(\gamma(1+\beta)+s_{n+i} + \beta s_{n+i+1})} $$

$$ Z(g^{n+1}) = 1 $$

and then the verifier checks that the polynomial is indeed of that form, as they did in PLONK's permutation check.

The correctness comes from an application of the Schwartz-Zippel lemma, which informally says that when two polynomials (univariate or multivariate) are distinct and you choose a random point (in $\mathbb{F}$) to evaluate them at, it's likely (think probability $1 - 1/2^{256}$ here) that they will differ at that point.

So let's revisit the protocol. Really, the prover is asked to compute a recursive sequence; the polynomial $Z$ is then computed as an interpolation of that sequence.

$$ z_1 = 1 $$

$$ z_{i+1} = z_i \cdot \frac{(1+\beta)(\gamma +f_i)(\gamma(1+\beta) +t_i + \beta t_{i+1})}{(\gamma (1+\beta)+s_i +\beta s_{i+1})(\gamma(1+\beta)+s_{n+i} + \beta s_{n+i+1})} $$

Which gives a closed form

$$ z_{i} = \frac{\prod_{1 \leq j<i}(1+\beta)(\gamma +f_j)(\gamma(1+\beta) +t_j + \beta t_{j+1})}{\prod_{1 \leq j < i}(\gamma (1+\beta)+s_j +\beta s_{j+1})(\gamma(1+\beta)+s_{n+j} + \beta s_{n+j+1})} $$

So $z_{n+1}$ is the quotient $\dfrac{F(\beta, \gamma)}{G(\beta, \gamma)}$, which is 1 (with overwhelming probability over the choice of $\beta$ and $\gamma$) if and only if all entries $f_i$ appear in the lookup table $(t_1, t_2, \ldots , t_d)$.
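Here is a toy run of the recurrence in Python with hypothetical values: $n = 3$ lookups into a table of $n + 1 = 4$ entries (the recurrence's indexing assumes this shape), and $\beta, \gamma$ fixed in place of random challenges:

```python
# Toy run of plookup's grand product: every f value appears in the table t,
# and s is (f, t) re-arranged to be "ordered by t".
p = 10007
beta, gamma = 13, 29   # stand-ins for random verifier challenges
n = 3
f = [2, 5, 2]          # looked-up values; each must appear in the table
t = [1, 2, 5, 9]       # the table
s = sorted(f + t)      # "ordered by t" — plain sorting works since t is sorted

z = 1                  # z_1 = 1
for i in range(n):     # 0-indexed stand-in for i = 1, ..., n
    num = (1 + beta) * (gamma + f[i]) \
        * (gamma * (1 + beta) + t[i] + beta * t[i + 1]) % p
    den = (gamma * (1 + beta) + s[i] + beta * s[i + 1]) \
        * (gamma * (1 + beta) + s[n + i] + beta * s[n + i + 1]) % p
    z = z * num * pow(den, -1, p) % p

assert z == 1  # z_{n+1} = F(β, γ) / G(β, γ) = 1: every f_i is in the table
```

Swapping one entry of $f$ for a value outside the table makes the final quotient equal 1 only with negligible probability.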

Polynomials in several variables

The notation $\mathbb{F}[x]$ denotes the set of polynomials in one variable $x$ (which you may remember from high school), where the $\mathbb{F}$ indicates that the arithmetic on the coefficients is done in the field $\mathbb{F}$. We're hoping you intuitively know what is and isn't a polynomial with one variable, but in order to extend the definition to polynomials in multiple variables, we lay out the following (less intuitive) rules that define what a polynomial is.

  • Every element of $\mathbb{F}$ is a polynomial (called a constant polynomial)
  • The "variable" (more formally called an indeterminate) $x$ is a polynomial
  • The product of two polynomials is a polynomial
    • so $x^n$ is a polynomial for any positive integer n because $x^n = x \cdot x^{n-1}$, $x$ is a polynomial, and $x^{n-1}$ is a polynomial because $x^{n-1} = x \cdot x^{n-2}$...
    • then we also have that $\alpha x^n$ is a polynomial for any $\alpha \in \mathbb{F}$
  • The sum of two polynomials is a polynomial
    • from here we get all of the polynomials with more than one term, i.e. everything of the form $a_n x^n + \ldots + a_1 x + a_0$
    • this does not allow an infinite series, though!
  • Nothing else is a polynomial in $\mathbb{F}[x]$
    • No quotients, no exponential functions, no non-integer exponents, etc.

That's probably not the definition of a polynomial that you learned in school, but hopefully it's clear that it is equivalent.
To extend the definition to multiple variables (the polynomials in $\mathbb{F}[x,y,z]$ should be sufficient for our blog post), we add new rules

  • The "variables" $x$, $y$, and $z$ are all polynomials
  • The multiplication of $x$, $y$, and $z$ is commutative. That is, $xy = yx$, $yz = zy$, and $xz = zx$
    • This allows us to simplify expressions like $(x+2y)(x+3y) = x^2 + 2yx + 3xy + 6y^2$ to $(x+2y)(x+3y) = x^2 + 5xy + 6y^2$.
    • This may seem obvious, but remember that some "multiplication" operations, like matrix multiplication, are not commutative.

These sets of polynomials have a very important property: unique factorization. Similar to how each integer can be factored uniquely (up to factors of $\pm1$) into a product of primes, each polynomial can be factored uniquely (up to an element of $\mathbb{F}$) into irreducible polynomials. This gives us a sufficient condition for two polynomials to be different: if they have a different factorization, they're different polynomials. This next theorem is the key to making plookup work.

Plookup's main theorem

For sequences $(f_i)_{i \in [n]}$, $(t_i)_{i \in [d]}$, and $(s_i)_{i \in [n+d]}$

  • {$ f_i : i \in [n] $} $\subseteq$ {$ t_i : i \in [d] $}
  • $s = (f, t)$, re-arranged to be "ordered by $t$"

if and only if $F \equiv G$ where

$$ F(\beta, \gamma) := \prod_{i \in [n]} \Big( (1 + \beta) (\gamma + f_i) \Big) \cdot \prod_{i \in [d-1]} \Big( \gamma (1 + \beta) + t_i + \beta t_{i+1} \Big) $$

$$ G(\beta, \gamma) := \prod_{i \in [n+d-1]} (\gamma (1 + \beta) + s_i + \beta s_{i+1}) $$

To see this, break the terms of the product defining $G$ into cases where $s_i = s_{i+1}$ (so the term can be written as $(1+\beta)(\gamma+s_i)$)
and cases where $s_i \neq s_{i+1}$ (where, since $s$ is ordered by $t$, the term matches one of the $t$ terms in $F$). For simplicity, we can assume that $( t_i )_{i \in [d]}$ has no repeated values.
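The theorem can also be checked numerically on a toy example (hypothetical values over a small field): with the correct ordered merge $s$, $F$ and $G$ agree at every point, while a corrupted $s$ makes them differ almost everywhere:

```python
# Numeric check of plookup's main theorem on a toy example.
p = 101
n, d = 3, 4
f = [2, 5, 2]
t = [1, 2, 5, 9]
s_good = [1, 2, 2, 2, 5, 5, 9]   # (f, t) correctly "ordered by t"
s_bad  = [1, 2, 2, 3, 5, 5, 9]   # 3 never appears in f or t

def F(beta, gamma):
    out = 1
    for i in range(n):
        out = out * (1 + beta) * (gamma + f[i]) % p
    for i in range(d - 1):
        out = out * (gamma * (1 + beta) + t[i] + beta * t[i + 1]) % p
    return out

def G(s, beta, gamma):
    out = 1
    for i in range(n + d - 1):
        out = out * (gamma * (1 + beta) + s[i] + beta * s[i + 1]) % p
    return out

# F ≡ G as polynomials, so they agree at every point of the field...
assert all(F(b, g) == G(s_good, b, g) for b in range(p) for g in range(p))
# ...while the corrupted s has a different factorization, so G differs somewhere.
assert any(F(b, g) != G(s_bad, b, g) for b in range(p) for g in range(p))
```

Exhausting all $(\beta, \gamma)$ is only feasible because the field is tiny; the protocol instead samples one random point and invokes Schwartz-Zippel.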

Tying it together

In the plookup protocol, we saw each of the factors of $F(\beta, \gamma)$ as part of the numerator in a sequence of fractions, and each of the factors of $G(\beta, \gamma)$ as part of the denominator.

The key idea behind PLONK's permutation check and Plookup is that if we encode every element (along with some information about where it falls in the sequence) uniquely as an irreducible polynomial, then we can check that two sequences contain the same elements, even when the elements aren't listed in the same order, by multiplying together all of the irreducible polynomials encoding the elements. With notation, this means that if there is a polynomial $\mathbf{f}_i(x,y)$ that encodes each sequence element $f_i$ and a polynomial $\mathbf{g}_i(x,y)$ that encodes each sequence element $g_i$, then

$$ \prod_{1 \leq i \leq n} \mathbf{f_i}(x,y) = \prod_{1 \leq i \leq n} \mathbf{g_i}(x,y) $$

if and only if the sequences $(f_1, f_2, \ldots, f_n)$ and $(g_1, g_2, \ldots, g_n)$ have the same elements, possibly in a different order. In PLONK, these irreducible factors also encode information about the specific permutation; in Plookup, they encode information about how sequence elements relate to neighboring elements. From there, the product argument is the same. So I hope this blog post imparts (beyond just an understanding of how PLONK and Plookup work) an appreciation for how the "tools of the trade" can be used in SNARKs in different ways.

Written by Nat Bunner, zero-knowledge cryptography researcher & protocol developer at Metastate.

Image credits: American pika (Ochotona princeps) from Wikimedia.

Metastate has ceased its activities on the 31st of January 2021. The team members have joined a new company and are working on a new project. If you're interested in zero-knowledge cryptography and cutting-edge cryptographic protocols engineering positions in Rust, check out the open positions at Heliax.

If you have any feedback or questions on this article, please do not hesitate to contact us: