Introduction

This is the first of two blog posts attempting to demystify the Fʀᴀᴄᴛᴀʟ SNARK and provide a more accessible overview of how it works. To start, let's talk about what it can and can't do. Fʀᴀᴄᴛᴀʟ "only" proves membership for the formal language R1CS. However, this isn't really a limitation: the language is NP-complete, meaning a large class of problems we care about is reducible to the R1CS problem. In particular, it captures arithmetic circuit satisfiability. So if you've been reading about other SNARKs like PLONK and wondering how Fʀᴀᴄᴛᴀʟ handles circuits, the answer is that you have to convert them to R1CS first. Once you do, Fʀᴀᴄᴛᴀʟ has several desirable properties, including

  • Transparent setup. There is no trapdoor in the setup; it's based entirely on public randomness.
  • Recursive composability. Verification can be written as an R1CS instance, allowing Fʀᴀᴄᴛᴀʟ to verify another instance of Fʀᴀᴄᴛᴀʟ. Possible applications include blockchains like Coda, which allow the amount of data that any particular (light) node needs to store to remain essentially constant.
  • Security against quantum adversaries. Whereas some constructions are secure under classical intractability assumptions that don't hold against quantum computers, Fʀᴀᴄᴛᴀʟ is based on hash functions, for which no feasible quantum attacks are known. This actually makes Fʀᴀᴄᴛᴀʟ the first plausibly quantum-secure recursively composable proof system.
  • It uses only lightweight cryptography. Another benefit of avoiding those intractability assumptions is that Fʀᴀᴄᴛᴀʟ avoids the algebraic operations they involve, such as elliptic curve point addition at cryptographic sizes, which are computationally expensive compared to evaluating hash functions.

Asymptotically, for $N$ = #constraints ≈ #gates in the corresponding arithmetic circuit, the proving time is $O(N \log N)$, the argument size is $O(\log^2 N)$, and the verification time is $O(\log^2 N)$. Concretely, for ≈1 million constraints in this C++ implementation, the prover runs in ~6 minutes, the verifier runs in ~10 milliseconds, and the argument size is ~12 KB when configured for 128 bits of security.

General Structure of zk-SNARKs

Fʀᴀᴄᴛᴀʟ (and all SNARKs, really) could only be designed by standing on the shoulders of metaphorical giants. It's incredibly complex, and SNARKs are still very much a developing field, but a standard recipe is emerging: a combination of classical computer science techniques. More or less, SNARKs (and STARKs and SNARGs and...) that verify a computation follow this construction:

Starting with a circuit (or a computational trace in the case of a STARK), we convert it into an intermediate representation. The choice of representation is very important, as this is generally what the proof system actually works with. So even if an interactive proof is very efficient (which could mean it produces small proof strings, that the prover or verifier only needs to perform a very limited computation, or any of several other metrics), this won't help verify circuit computations if the complexity or size "blew up" during conversion to an inefficient representation.

Fʀᴀᴄᴛᴀʟ, as mentioned, assumes you've already converted your circuit to a Rank-1 Constraint System (R1CS) instance. An R1CS instance is a triple of square matrices $A$, $B$, and $C$ that we can think of as describing the gates in a circuit. A satisfying assignment is a vector $z = (x, w)$ composed of publicly known values $x$ and a secret "witness" vector $w$ that encodes the circuit input. The constraints are "satisfied" if the following equation holds

\[ Az \boxtimes Bz = Cz \]

where $\boxtimes$ represents the entry-wise product rather than the usual matrix multiplication.
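To make the satisfaction condition concrete, here is a minimal sketch in Python. The field, matrices, and assignment are toy values chosen for illustration; real instances use a large cryptographic field and many constraints:

```python
# Check R1CS satisfaction: Az ⊠ Bz = Cz entry-wise over a small prime field.
P = 97  # toy prime; real systems use large cryptographic fields

def mat_vec(M, v):
    """Multiply matrix M by vector v over F_P."""
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

def r1cs_satisfied(A, B, C, z):
    a, b, c = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
    return all((ai * bi) % P == ci for ai, bi, ci in zip(a, b, c))

# A 4x4 toy instance encoding the single constraint z_2 * z_3 = z_4;
# the remaining all-zero rows are padding that keeps the matrices square.
A = [[0, 1, 0, 0]] + [[0] * 4] * 3
B = [[0, 0, 1, 0]] + [[0] * 4] * 3
C = [[0, 0, 0, 1]] + [[0] * 4] * 3
z = [1, 3, 5, 15]  # satisfying: 3 * 5 = 15 (zero rows give 0 * 0 = 0)
print(r1cs_satisfied(A, B, C, z))  # True
```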

The rest of the post is dedicated to breaking down what's happening at each step of this pipeline.

Interactive Oracle Proofs

Interactive oracle proofs generalize probabilistically checkable proofs (PCPs) to an interactive setting. In a PCP, the prover outputs a single proof string and the verifier probabilistically checks it in only a few locations; in an IOP, the prover and verifier communicate back and forth as in a standard interactive proof system. Unlike in a standard interactive proof, however, the prover responds with "proof oracles": messages sent by the prover to convince the verifier [hence the "proof"], but constructed in such a way that the verifier is not required to read them in their entirety and can instead probabilistically choose snippets of the message to read [similar to randomly querying an oracle].

Reed-Solomon Encoded IOPs

Reed-Solomon encoded IOPs phrase interactive proofs in the language of coding theory. Rather than the usual "accept or reject" paradigm, the prover and verifier participate in an interactive protocol, after which the verifier outputs a rational constraint consisting of two arithmetic circuits, $p: \mathbb{F}^{n+1} \to \mathbb{F}$ and $q: \mathbb{F} \to \mathbb{F}$, and a degree bound $d \in \mathbb{N}$. A set of Reed-Solomon codewords $f_1, f_2, \ldots, f_n$ (functions on a subset $L \subseteq \mathbb{F}$) is said to satisfy the constraint if the function $h(x) := \frac{p(x, f_1(x), \ldots, f_n(x))}{q(x)}$ coincides with a polynomial function of degree $\leq d$. However, this is checked with a separate protocol called a low-degree test, which is not considered part of the RS-encoded IOP; the IOP only generates the constraints.

There are a couple of advantages to this. The first is that it simplifies the analysis of proof systems by breaking them down into modular components. Fʀᴀᴄᴛᴀʟ uses the FRI low-degree test, but it wouldn't be much trouble to replace it with another in the event that a significantly faster or more space-efficient low-degree test is discovered in the future. So for the purpose of explanation we can just use a black-box low-degree test. The other advantage is that considering it as a separate component allows us to cut down on the number of low-degree tests. Throughout the protocol, the verifier generates many rational constraints. If they all have degree bound $d$, they can be probabilistically checked with one test, because with high probability a random linear combination of rational functions on $\mathbb{F}$ will satisfy the degree constraint only if each of the components does.
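To see why one test suffices, here is a small illustration of the batching argument with plain polynomials given as coefficient lists (toy parameters; Fʀᴀᴄᴛᴀʟ batches rational constraints, but the principle is the same):

```python
# If any component polynomial violates the degree bound d, a random linear
# combination violates it too, unless the random coefficients happen to
# cancel the offending leading term (probability ~1/|F|). Illustrative only.
import random

P = 2**31 - 1  # a prime, standing in for the field F
d = 3          # the common degree bound

def degree(coeffs):
    """Degree of a polynomial given as coefficients, low-to-high."""
    while coeffs and coeffs[-1] % P == 0:
        coeffs = coeffs[:-1]
    return len(coeffs) - 1

f1 = [5, 0, 2, 7]      # degree 3: within the bound
f2 = [1, 4, 0, 0, 9]   # degree 4: violates the bound

r1, r2 = random.randrange(P), random.randrange(P)
combo = [(r1 * a + r2 * b) % P for a, b in zip(f1 + [0], f2)]  # pad f1
print(degree(combo) <= d)  # False, except with probability ~1/P
```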

A special type of rational constraint called a boundary constraint can be constructed to attest to statements of the form $\hat{f}(\alpha) = \beta$ for some $f$ defined on $L$. If indeed $\hat{f}(\alpha) = \beta$, then subtracting $\beta$ from both sides shows that $\hat{f}(x) - \beta$ has a root at $\alpha$, hence the polynomial $x-\alpha$ divides $\hat{f}(x)-\beta$, so the rational function defined by $\dfrac{\hat{f}(x)-\beta}{x-\alpha}$ should be interpolated by a polynomial of degree at most $\deg(\hat{f}) - 1$. These boundary constraints are useful for verifying that a known function produces a certain output on a particular input, as in the case of checking a hash.
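Here is the divisibility argument in a few lines of Python (toy field; `divide_by_linear` is an illustrative helper doing synthetic division):

```python
# If f(alpha) = beta, then (x - alpha) divides f(x) - beta exactly and the
# quotient has degree deg(f) - 1; a false claim leaves a nonzero remainder.
P = 97

def divide_by_linear(coeffs, alpha):
    """Divide a polynomial (coeffs high-to-low) by (x - alpha) over F_P
    via synthetic division; returns (quotient, remainder)."""
    q, acc = [], 0
    for c in coeffs:
        acc = (acc * alpha + c) % P
        q.append(acc)
    return q[:-1], q[-1]  # the remainder equals f(alpha)

f = [2, 0, 5, 1]                 # f(x) = 2x^3 + 5x + 1
alpha = 3
beta = (2 * 27 + 5 * 3 + 1) % P  # f(3) = 70, so the claim f(3) = 70 is true

shifted = f[:-1] + [(f[-1] - beta) % P]      # f(x) - beta
quotient, rem = divide_by_linear(shifted, alpha)
print(rem == 0, len(quotient) - 1)  # True, 2: degree dropped to deg(f) - 1
```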

R1CS to lin-check to sum-check

If we had a satisfying $z$ for an R1CS instance, we could compute an $a$ such that $Az = a$, a $b$ such that $Bz = b$, and a $c$ such that $Cz = c$. Fʀᴀᴄᴛᴀʟ works by verifying these three linear relations via a protocol called lin-check, and then verifying that $a \boxtimes b = c$ via a protocol called row-check. (Row-check uses standard PCP methods not new to Fʀᴀᴄᴛᴀʟ; notice that the component-wise product of Reed-Solomon codewords is the encoding of the product of the corresponding polynomials.) Overall, the underlying IOP (which is then turned into a non-interactive argument by a somewhat novel, but not Fʀᴀᴄᴛᴀʟ-specific, transformation) splits into the following sub-protocols: lin-check and row-check, which reduce to a univariate sum-check and, ultimately, to low-degree tests.

Again, the new and exciting parts are the lin-check and sum-check protocols, so that's where we keep our attention.
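Before turning to lin-check, the row-check observation above (that the entry-wise product of Reed-Solomon codewords encodes the product of the underlying polynomials) can be sanity-checked in a few lines, since evaluation at each point of the domain is a ring homomorphism. The field, domain, and polynomials below are arbitrary toy choices:

```python
# Entry-wise products of codewords vs. the codeword of the product polynomial.
# Toy parameters; coefficient lists are low-to-high degree.
P = 97
L = [2, 5, 11, 23, 41]  # an arbitrary evaluation domain in F_97

def evaluate(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

f, g = [1, 2, 3], [4, 0, 5]
fw = [evaluate(f, x) for x in L]               # codeword of f on L
gw = [evaluate(g, x) for x in L]               # codeword of g on L
pw = [evaluate(poly_mul(f, g), x) for x in L]  # codeword of f*g on L
print([(a * b) % P for a, b in zip(fw, gw)] == pw)  # True
```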

Consider a vector $u$ whose entries are linearly independent polynomials, and vectors $v_1, v_2$ whose entries are ordinary field elements. Then $\langle u, v_1 \rangle$ and $\langle u, v_2 \rangle$ are weighted sums of the polynomial entries of $u$, and since those entries are linearly independent, $\langle u, v_1 \rangle = \langle u, v_2 \rangle$ if and only if $v_1 = v_2$. So if $v_1 \neq v_2$, then $\langle u, v_1 \rangle$ and $\langle u, v_2 \rangle$ are distinct polynomials, and with high probability they differ when evaluated at a random point.

We could use this to test whether $Az = a$. In fact, for such a $u$, $Az = a$ if and only if
\[ u^T \cdot a = u^T \cdot A \cdot z. \]
With a little matrix algebra this becomes
\[ u^T \cdot a = (A^T \cdot u)^T \cdot z, \]
or equivalently
\[ u^T \cdot a - (A^T \cdot u)^T \cdot z = 0. \]
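Here is a toy version of this probe, with a random field-element vector $u$ standing in for Fʀᴀᴄᴛᴀʟ's vector of linearly independent polynomials (all values below are illustrative):

```python
# If a != Az, then u^T a != (A^T u)^T z for a random u, except with
# probability 1/|F|. A cheating prover's claimed a is caught almost surely.
import random

P = 97
A = [[1, 2], [3, 4]]
z = [5, 6]
true_a = [(1 * 5 + 2 * 6) % P, (3 * 5 + 4 * 6) % P]  # the honest a = Az
bad_a = [true_a[0], (true_a[1] + 1) % P]             # a false claim

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % P

u = [random.randrange(P) for _ in range(2)]
At_u = [inner([A[0][j], A[1][j]], u) for j in range(2)]  # A^T u, column by column
print(inner(u, true_a) == inner(At_u, z))  # True: u^T a = (A^T u)^T z
print(inner(u, bad_a) == inner(At_u, z))   # False with probability 1 - 1/P
```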

This is [almost] the condition we check. For the benefit of being able to use some algebraic results, we represent vectors as functions. But rather than doing it the obvious way, defining a function on the set $\{1, 2, \ldots, n\}$ with the first entry of the vector being $f(1)$, the second entry being $f(2)$, etc., we define the corresponding function over a multiplicative subgroup $H$ of the field of size $n$. As it turns out, such subgroups are cyclic, meaning every element is a power of some generator $g$. So we can define the function so that the first entry of the vector is $f(g)$, the second entry is $f(g^2)$, etc.

So we have a function defined by the values it takes on $n$ points, which then uniquely defines a polynomial of degree at most $n-1$ that interpolates it.
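As a concrete toy example (parameters chosen purely for illustration), here is this encoding in $\mathbb{F}_{17}$, where $g = 4$ generates a multiplicative subgroup of size 4; Lagrange interpolation recovers the vector's entries at the powers of $g$:

```python
# Encode a vector as a function on the multiplicative subgroup H = <g> of F_17,
# then check that the unique interpolating polynomial of degree <= 3
# reproduces the vector's entries at g, g^2, g^3, g^4. Toy parameters.
P = 17
g = 4
H = [pow(g, i, P) for i in range(1, 5)]  # [4, 16, 13, 1]
v = [7, 2, 9, 11]                        # the vector to encode

def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x over F_P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^(P-2) = 1/den
    return total

print([lagrange_eval(H, v, h) for h in H] == v)  # True: f(g^i) is the i-th entry
```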
So in the language of polynomial functions, what we want to check is that there exist univariate polynomials $f_z$, $f_a$, $f_b$, $f_c$ such that
\[ \forall h \in H, \ \ \ \sum_{k \in H} A_{hk} f_z(k) = f_a(h) \]
(And corresponding statements for $B$ and $C$). The vector $u$ and the matrices $A$, $B$, and $C$ will be represented by bivariate polynomials, but the specific details will be deferred to the second post in this series.

So, having determined a need to check whether a polynomial sums to a certain value on a given subset, we ask: how do we do it, without just computing the sum itself? If a SNARK verifier were to compute the sum, it would take time linear in the size of the subset being summed over. The size of this subset in an R1CS SNARK corresponds to the number of constraints, which roughly corresponds to the number of gates in a circuit, so we want to do better than linear verifier time. Fʀᴀᴄᴛᴀʟ gets around this by using a holographic lin-check protocol, meaning that the verifier no longer has direct access to the matrices $A$, $B$, and $C$ (or even a sparse encoding of them), but instead gets access to an oracle which the verifier only needs to query a few times.
Previous work found an if-and-only-if condition that makes our polynomial summation problem amenable to holographic IOPs when the summation domain $S$ has some additional structure.
If $S$ is a multiplicative coset and $f$ has degree less than $|S|$, then $\sum_{s \in S} f(s) = \sigma$ if and only if $f(x) = xg(x) + \sigma / |S|$, where $g(x)$ is a polynomial of degree strictly less than $|S|-1$. This fits in nicely with our proof strategy of polynomial commitments plus low-degree tests.
With a little tweaking, the authors extend the result to rational functions, and from there we can check that $\sum_{s \in S} f(s) = \sigma$ by generating a corresponding rational constraint and checking that it is satisfied.
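To see the identity in action, here is a small check over a multiplicative subgroup (toy parameters; real instances use large cosets). Writing $f(x) = xg(x) + c$ splits off the constant term $c$, and the sum picks out exactly $|S| \cdot c$, because every non-constant monomial sums to zero over $S$:

```python
# Verify: for deg(f) < |S|, the sum of f over a multiplicative subgroup S
# equals |S| times f's constant term, i.e. sigma = |S| * c in f(x) = x*g(x) + c.
P = 17
g = 4
S = [pow(g, i, P) for i in range(1, 5)]  # subgroup {4, 16, 13, 1}, |S| = 4
f = [3, 8, 0, 5]  # f(x) = 3 + 8x + 5x^3, degree < |S| (coeffs low-to-high)

def evaluate(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

sigma = sum(evaluate(f, s) for s in S) % P
c = f[0]  # the constant term of f
print(sigma == (len(S) * c) % P)               # True: sigma = |S| * c
print(c == sigma * pow(len(S), P - 2, P) % P)  # equivalently c = sigma / |S|
```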

Turning it into a NARG

If you're reading this, there's a good chance you're familiar with the Fiat-Shamir heuristic, which transforms interactive arguments into non-interactive ones by having the prover simulate the random choices of an interactive verifier with pseudorandomness generated from a hash function.
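For concreteness, here is a minimal Fiat-Shamir sketch. The transcript contents and hash choice below are illustrative, not Fʀᴀᴄᴛᴀʟ's actual format; the point is that each challenge is determined by hashing everything the prover has said so far:

```python
# Derive "verifier challenges" by hashing the transcript, so the prover
# cannot choose its messages after seeing the challenges. Illustrative only.
import hashlib

P = 2**61 - 1  # a prime, standing in for the field F

def challenge(transcript: bytes) -> int:
    """Derive a field element deterministically from the transcript so far."""
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P

transcript = b"commitment to first proof oracle"
r1 = challenge(transcript)   # simulates the verifier's first random challenge

transcript += b"prover response depending on r1"
r2 = challenge(transcript)   # the second challenge binds both messages
print(r1, r2)
```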

Standard PCPs are turned into a NARG via a different transformation, called the Micali transform. Like the Fiat-Shamir transform, the Micali transform's prover uses hash functions to heuristically simulate the random choices of a verifier in an interactive setting. In the Micali transform, hash functions also serve a second purpose: any transformation from an IOP to a non-interactive argument has the additional task of showing that the prover didn't falsify the answers that the oracle gave to the [prover simulating the] verifier. To do this, the prover computes a proof oracle as in the interactive setting, and then uses a hash function to create a Merkle tree from the proof oracle. Having committed to the oracle, the prover uses the root of this tree to simulate the verifier's random choice, and the simulated verifier queries the oracle at the appropriate random point. The prover provides the queried answer, along with an authentication path that can be checked against the Merkle root.
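Here is a sketch of that commit-and-open mechanism, simplified for illustration (power-of-two leaf count, no domain separation or salting, which a production system would add):

```python
# Merkle-commit to a "proof oracle", then open one position with an
# authentication path that anyone can verify against the root. Simplified.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, from hashed leaves up to the root."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def open_path(levels, index):
    """The authentication path: the sibling hash at each level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling of the current node
        index //= 2
    return path

def verify_path(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

oracle = [bytes([i]) * 8 for i in range(8)]  # the proof oracle's entries
levels = build_tree(oracle)
root = levels[-1][0]                         # the commitment sent to the verifier
i = 5                                        # a (simulated) random query
print(verify_path(root, oracle[i], i, open_path(levels, i)))  # True
```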

The transformation used in turning a holographic IOP into the preprocessing NARG that is Fʀᴀᴄᴛᴀʟ is a combination of these two. "Between rounds" we use the Fiat-Shamir transform to make sure that the prover honestly simulates random challenges by the verifier, and for each round we use a Micali transform to show that the prover honestly simulates random queries.

The verifier as a constraint system

The proofs of security are done in the random oracle model. As usual, the theoretical random oracle is instantiated with a cryptographic hash function. Most familiar cryptographic hash functions are designed to be computed quickly and efficiently in hardware, but the arithmetic circuits that compute them are far too complex. For Fʀᴀᴄᴛᴀʟ to verify another Fʀᴀᴄᴛᴀʟ instance that uses one of these hash functions, it would need to check the correctness of many hashes, which would amount to checking the correctness of many complex circuit evaluations. Instead, Fʀᴀᴄᴛᴀʟ uses a hash function called Rescue, which was recently created specifically to have low arithmetic complexity.

Written by Nat Bunner, zero-knowledge cryptography researcher & protocol developer at Metastate.


Metastate ceased its activities on the 31st of January 2021. The team members have joined a new company and are working on a new project. If you're interested in zero-knowledge cryptography and cutting-edge cryptographic protocol engineering positions in Rust, check out the open positions at Heliax.

If you have any feedback or questions on this article, please do not hesitate to contact us: hello@heliax.dev.