Introduction
Last post, we reviewed some basic principles which SNARKs are built upon. Fundamentally, we want to build (possibly interactive) protocols between a prover and a verifier where the prover sends some polynomials over some finite field to the verifier, and by evaluating at a few random points, the verifier will become convinced of the truth of some statement, with high probability. An exchange of polynomials and evaluations between a prover and verifier is called a polynomial interactive oracle proof (polynomial IOP) for some statement. Such a protocol can allow a verifier to confirm the truth of the statement while inspecting only a small part of the "proofs" (the evaluation points) but the polynomials generated by the prover may, in fact, be rather large. As we will see, by giving up statistical soundness (a prover can convince a verifier of a false statement only with small probability) for computational soundness (a computationally bounded prover can convince a verifier of a false statement only with small probability) we can achieve significantly smaller "proof" sizes.
Polynomial Commitment Schemes
In principle, the prover could send the polynomials in a polynomial IOP to the verifier directly, and the verifier could query any point they choose. However, this quickly becomes inefficient as the polynomials grow (particularly if there are many rounds of interaction). We can make polynomial IOPs more succinct (smaller proofs) by having the prover evaluate the verifier's oracle queries, instead of sending the entire polynomial to the verifier (this also saves verification time). To do so, we need to ensure that the prover actually evaluates the committed polynomial on the verifier's queries (as opposed to evaluating some other polynomial, or evaluating at some other point). We can achieve this if we sacrifice statistical soundness for computational soundness (i.e. we assume the prover is computationally limited) instead.
In order to ensure the prover honestly evaluates the polynomial at the correct value, we need a polynomial commitment scheme. Our goal is to allow the prover to construct a "commitment" to a low degree polynomial $f(x)$ with coefficients in $\mathbb{Z}_p$ such that, when the prover sends the commitment of $f(x)$ to the verifier, the verifier can use the commitment to verify that the prover correctly evaluates $f(x)$. The commitment may be much smaller than the actual polynomial, which means many polynomials may share the same commitment; however, this is acceptable as long as it is computationally hard for the prover to find and open any other polynomial $g(x)$ with the same commitment. This is why we must assume the prover is computationally bounded; but in exchange for this assumption, we can work with polynomial commitments instead of polynomials, with substantial size improvements.
Sonic, PLONK, Fractal, and Libra (among others) all use a polynomial commitment scheme applied to a polynomial IOP. Interestingly, the choice of polynomials and the commitment scheme used are largely independent of each other; therefore, a polynomial IOP from one SNARK can generally be used with a polynomial commitment scheme from a different SNARK. This means that development of polynomial IOPs and development of polynomial commitment schemes can progress separately, which is what happened with Supersonic.
The polynomial commitment scheme used in Supersonic (not previously used for SNARKs) is called DARK (Diophantine Arguments of Knowledge). DARK "compresses" a polynomial mod $p$ to a single integer, by lifting polynomials over $\mathbb{Z}_p$ to polynomials over $\mathbb{Z}$. Supersonic does not propose a new polynomial IOP, but rather uses an existing IOP; the primary new contribution in Supersonic is to combine existing IOPs with the new polynomial commitment scheme.
In principle, any existing polynomial IOP can be used with the DARK commitment scheme, but the best parameters are achieved by using the PLONK IOP (suggesting that the name "SuperPLONK" might have been more appropriate than Supersonic; "SuperPLONK", however, is not to be confused with TurboPLONK, an entirely separate improvement to PLONK).
Preliminaries
Let $p$ be a prime number. Then let $\mathbb{Z}_p = \{0, 1, 2, 3, \ldots, p-1\}$ be the integers mod $p$, which form a finite field under modular addition and multiplication. Let $\mathbb{G}$ be a cyclic group; the order of $\mathbb{G}$ is the number of elements in $\mathbb{G}$.
A generator $g$ of $\mathbb{G}$ is an element of $\mathbb{G}$ such that for every element $q \in \mathbb{G}$, there exists some integer $i$ such that $g^i = q$. That is, every element of $\mathbb{G}$ is some power of the generator $g$. Note that $g^{\text{order}(\mathbb{G})} = 1$, the identity element of $\mathbb{G}$.
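As a toy illustration (with tiny, insecure parameters chosen purely for this example), the multiplicative group $\mathbb{Z}_7^{\times}$ is cyclic of order 6 and is generated by 3:

```python
# Toy example with tiny parameters (illustration only): the multiplicative
# group Z_7^x = {1, ..., 6} has order 6 and is generated by g = 3.
p = 7
g = 3
powers = [pow(g, i, p) for i in range(1, 7)]  # g^1, ..., g^6 mod 7
assert sorted(powers) == [1, 2, 3, 4, 5, 6]   # every group element appears
assert pow(g, 6, p) == 1                      # g^order(G) is the identity
```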
For the purposes of Supersonic, we will care about groups of unknown order: groups $\mathbb{G}$ where $\text{order}(\mathbb{G})$ is hard to compute from the description of the group. The two main candidates are large cyclic subgroups of RSA groups and ideal class groups. An RSA group is the multiplicative group mod $pq$ where $p,q$ are large primes, denoted $\mathbb{Z}^{\times}_{pq}$. The order of an RSA group is difficult to compute without knowing $p,q$, and therefore if $p,q$ are discarded, the order is unknown (the "trusted" part of the setup is discarding $p,q$). Ideal class groups are discussed later, but also have unknown order, without any trusted setup.
Recently, another candidate group of unknown order was proposed: Jacobians of hyperelliptic curves of genus 2 or 3. The Jacobian of a hyperelliptic curve forms a group similar to the points on an elliptic curve, but the order of such (sub)groups appears to be harder to compute than in the elliptic curve case. While the Supersonic paper did not evaluate hyperelliptic curves as candidate groups of unknown order, they should have appealing properties in this application, such as smaller proof sizes and more efficient commitment and verification operations, without a trusted setup.
How commitments are used
Before we get into the details of the polynomial commitment scheme, let's discuss the important properties that we want from our polynomial commitment scheme. We want the prover to commit to $f(x)$ in a way that lets the verifier do some typical polynomial operations, given the commitments of $f(x), g(x)$:
- Addition of two committed low degree polynomials $f(x)$ and $g(x)$
- Multiplication of a committed low degree polynomial $f(x)$ by a (public) integer $a$ mod $p$
- Multiplication of a committed low degree polynomial $f(x)$ by a (public) monomial $x^i$
The verifier can do these polynomial operations by operating on the commitment. So given the commitments of $f(x)$ and $g(x)$, the verifier can compute a commitment of $f(x) + g(x)$; given $a \in \mathbb{Z}_p$ and the commitment of $f(x)$, the verifier can compute a commitment of $a f(x)$; and given $i$ (not too large) and the commitment of $f(x)$ the verifier can compute a commitment of $x^i f(x)$.
A polynomial commitment scheme with these properties is called homomorphic. We will now quickly observe that these properties are enough to allow the prover to evaluate $f(x)$ for the verifier and also confirm that $f(x)$ is low degree.
Let's look (at a high level) at how we might use a homomorphic polynomial commitment scheme to evaluate the polynomial $f(x) = 4x^3 + 3x^2 + 2x + 1$ at the value $10$, using only commitments and evaluations from the prover. Consider the following recursive protocol to evaluate $f(x)$ at $x=10$:
1. The prover commits to the polynomial $f(x)$, and also commits to the two polynomials $f_L(x) = 2x + 1$ and $f_R(x) = 4x + 3$. Notice that $f(x) = f_L(x) + x^2 f_R(x)$.
2. The verifier homomorphically computes a commitment of $x^2 f_R(x)$ from the commitment of $f_R(x)$
3. The verifier homomorphically computes a commitment of $f_L(x) + x^2 f_R(x)$ from the commitments of $f_L(x)$ and $x^2 f_R(x)$
4. The verifier compares the commitments of $f(x)$ and $f_L(x) + x^2 f_R(x)$ to check that they are equal
5. The prover sends the values $f(10) = 4321$, $f_L(10) = 21$, and $f_R(10) = 43$ to the verifier
6. The verifier asks the prover to evaluate $f_L(x)$ and $f_R(x)$ at $x=10$ using the same protocol (starting from step 1 with $f$ replaced by $f_L$ and $f_R$) and checks that the results are equal to the $f_L(10) = 21$ and $f_R(10) = 43$ sent above. When the recursion reaches degree 0 polynomials (constants), the prover simply opens the commitment to reveal the polynomial.
7. The verifier compares $f(10) = 4321$ to $f_L(10) + 10^2 f_R(10) = 21 + 10^2 \times 43$ and checks that they are equal
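The split-and-check arithmetic behind these steps can be sketched as follows (commitments are elided; the hypothetical helpers below only model the integer arithmetic the verifier checks):

```python
# A minimal sketch of the split used by the protocol. Helper names are
# illustrative, not from the paper; commitments are omitted entirely.
def split(coeffs):
    """Split f into f_L (low half) and f_R (high half) so that
    f(x) = f_L(x) + x^(len/2) * f_R(x)."""
    half = len(coeffs) // 2
    return coeffs[:half], coeffs[half:]

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

f = [1, 2, 3, 4]        # f(x) = 4x^3 + 3x^2 + 2x + 1, low coefficient first
f_L, f_R = split(f)     # f_L = 1 + 2x, f_R = 3 + 4x
half = len(f) // 2
# The consistency check the verifier performs on the claimed evaluations:
assert evaluate(f, 10) == evaluate(f_L, 10) + 10**half * evaluate(f_R, 10)
# i.e. 4321 == 21 + 100 * 43
```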
Since the recursion halves the degree of the polynomial at every step, the depth of this recursion is logarithmic in the degree of $f(x)$. However, because each step recursively evaluates two polynomials, the protocol as a whole has linear complexity. We can improve it to be overall logarithmic in the degree of $f(x)$: instead of recursively evaluating both $f_L(x)$ and $f_R(x)$ at $10$, the verifier asks the prover to evaluate the single polynomial $\alpha f_L(x) + f_R(x)$ for some random $\alpha$. Because the prover commits to $f_L(x)$ and $f_R(x)$, and sends $f_L(10)$ and $f_R(10)$, before the verifier chooses $\alpha$, with high probability the verifier will accept the prover's evaluations $f_L(10)$ and $f_R(10)$ only if they agree with the evaluation of $\alpha f_L(x) + f_R(x)$ at $x=10$:
1. The prover commits to $f(x)$, and also commits to the two polynomials $f_L(x) = 2x + 1$ and $f_R(x) = 4x + 3$
2. The verifier homomorphically computes a commitment of $x^2 f_R(x)$ from the commitment of $f_R(x)$
3. The verifier homomorphically computes a commitment of $f_L(x) + x^2 f_R(x)$ from the commitments of $f_L(x)$ and $x^2 f_R(x)$
4. The verifier compares the commitments of $f(x)$ and $f_L(x) + x^2 f_R(x)$ to check that they are equal
5. The prover sends the values $f(10) = 4321$, $f_L(10) = 21$, and $f_R(10) = 43$ to the verifier
6. The verifier randomly chooses some number $\alpha$, homomorphically computes a commitment of $\alpha f_L(x) + f_R(x)$, and sends $\alpha$ to the prover. (Let's assume $\alpha=2$ in our example.)
7. The protocol recursively evaluates $\alpha f_L(x) + f_R(x)$ at $x=10$ and checks that the result is equal to $2 f_L(10) + f_R(10) = 2 \times 21 + 43 = 85$, where $f_L(10) = 21$ and $f_R(10) = 43$ were provided by the prover before $\alpha=2$ was selected. When the recursion reaches a degree 0 polynomial (a constant), the prover simply opens the commitment to reveal the polynomial. If the recursive evaluation matches $2 f_L(10) + f_R(10)$, the verifier can be confident that the prover honestly provided $f_L(10)$ and $f_R(10)$
8. The verifier compares $f(10) = 4321$ to $f_L(10) + 10^2 f_R(10)$ and checks that they are equal
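The full recursion with random folding can be sketched as below. This is a bare-bones model under simplifying assumptions: commitments are elided, the coefficient count is assumed to be a power of two, and we only model the arithmetic the verifier checks at each level.

```python
import random

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

def eval_protocol(coeffs, x):
    """Return f(x), checking consistency of the halved polynomials at
    every level of the recursion, as the verifier would.
    Assumes len(coeffs) is a power of two."""
    if len(coeffs) == 1:
        return coeffs[0]             # degree 0: prover opens the commitment
    half = len(coeffs) // 2
    f_L, f_R = coeffs[:half], coeffs[half:]
    y_L, y_R = evaluate(f_L, x), evaluate(f_R, x)  # prover's claimed values
    alpha = random.randrange(1, 1000)              # verifier's challenge
    folded = [alpha * a + b for a, b in zip(f_L, f_R)]  # alpha*f_L + f_R
    # Recursively evaluate the folded polynomial and check the claims:
    assert eval_protocol(folded, x) == alpha * y_L + y_R
    return y_L + x**half * y_R

assert eval_protocol([1, 2, 3, 4], 10) == 4321
```

Only one recursive call is made per level, so the total work of the verifier is logarithmic in the degree.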
There are some technical details not described here (for example, the distribution of $\alpha$), but we will address those later. The important thing is that the prover can compute the values $f(10)$, $f_L(10)$, and $f_R(10)$ for the verifier, and the verifier can (recursively) check from the commitments that those values are accurate. Since the prover computed and sent $f_L(10)$ and $f_R(10)$ before $\alpha=2$ was chosen, if the prover had lied about the evaluations of $f_L(10)$ and $f_R(10)$, then it is very unlikely that $2 f_L(10) + f_R(10)$ would (by chance) be exactly the same number as the result of running the evaluation protocol on $\alpha f_L(x) + f_R(x)$ at $x=10$. Under the inductive assumption that the evaluation protocol works for lower degree polynomials (it certainly works for degree 0 polynomials, as the prover simply opens the commitments), the evaluation protocol works for higher degrees as well.
Some readers may note a similarity to the hash-based FRI polynomial commitment scheme used in STARKs, which divides polynomials into two polynomials of half degree based on even/odd powers.
Instantiation of a Homomorphic Polynomial Commitment Scheme
The astute reader may have noticed that when we evaluated $f(x) = 4x^3 + 3x^2 + 2x + 1$ at $x=10$, the resulting integer had the $i$'th coefficient of $f(x)$ placed in the $i$'th decimal place of $f(10)$. In general this will be true if the coefficients of $f(x)$ are all smaller than 10 (for the base-10 representation), and the analogue holds in base-$q$: if $q$ is larger than every coefficient of $f(x)$, then $f(q)$ will have exactly one coefficient in each $q$-ary place. Therefore, we can identify each polynomial $f(x)$ by its evaluation at a suitably large integer $q$; each polynomial $f(x)$ is encoded into the single integer $f(q)$.
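A small sketch of this encoding (the function names are illustrative, not from the paper; it assumes every coefficient satisfies $0 \le c < q$):

```python
# Encode a polynomial as a single integer by evaluating at q, and decode
# by reading off base-q digits. Assumes 0 <= coefficient < q.
def encode(coeffs, q):
    return sum(c * q**i for i, c in enumerate(coeffs))

def decode(n, q):
    coeffs = []
    while n > 0:
        n, c = divmod(n, q)
        coeffs.append(c)
    return coeffs

f = [1, 2, 3, 4]               # f(x) = 4x^3 + 3x^2 + 2x + 1
assert encode(f, 10) == 4321   # f(10): one coefficient per decimal place
assert decode(4321, 10) == f   # the encoding is reversible
```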
Note that generally our polynomial $f(x)$ has coefficients from $\mathbb{Z}_p$, a finite field. In order to evaluate $f(x)$ at an integer, we will "lift" every coefficient from $\mathbb{Z}_p$ to an equivalent element of $\mathbb{Z}$ which, when taken mod $p$, gives back the original coefficient. However, for technical reasons, we will not lift a coefficient mod $p$ to an integer in the range $\{0, 1, 2, \ldots, p-1\}$, but rather to an integer in the range $\{-\frac{p-1}{2}, \ldots, -1, 0, 1, \ldots, \frac{p-1}{2}\}$. This is because the coefficients may grow (in absolute value) every time we do a homomorphic operation, and when the prover finally opens the constant (degree 0) polynomial, the verifier can check from the opening that the coefficients were in the original range $\{-\frac{p-1}{2}, \ldots, -1, 0, 1, \ldots, \frac{p-1}{2}\}$. (The situation also becomes nuanced when ideal class groups are used; we ignore the technical details of the encoding and refer to the original paper.)
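A minimal sketch of this centered lift (the helper name is illustrative):

```python
# Lift a residue mod p to the "centered" integer representative in
# {-(p-1)/2, ..., (p-1)/2}, minimizing its absolute value.
def lift(c, p):
    c %= p
    return c - p if c > (p - 1) // 2 else c

p = 11
assert lift(3, p) == 3     # small residues stay as-is
assert lift(9, p) == -2    # 9 = -2 (mod 11): lifted to the negative rep
assert lift(-2, p) == -2   # already centered
# Lifting preserves the residue mod p:
assert all(lift(c, p) % p == c % p for c in range(p))
```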
Since we are now working with integers instead of polynomials, Supersonic introduces a new homomorphic polynomial commitment scheme based on DARK, an integer commitment scheme.
To summarize, based on our requirements for a homomorphic polynomial commitment scheme, we want a homomorphic integer commitment scheme that can support:
- Addition of two committed integers
- Multiplication of a committed integer by a (public) integer $a$
Then we want to ensure that our encoding of polynomials into integers causes the homomorphic integer commitment scheme to also function as a homomorphic polynomial commitment scheme:
- Addition of integer commitments yields addition of polynomial commitments: for example, if $f(x) = 4x^3 + 3x^2 + 2x + 1$ is encoded as $f(10) = 4321$ and $g(x) = x^3 + 2x^2 + 3x + 4$ is encoded as $g(10) = 1234$, then adding the encodings $f(10) + g(10) = 5555$ gives the encoding of $f(x) + g(x) = 5x^3 + 5x^2 + 5x + 5$.
- Multiplication of integer commitments by a public integer yields multiplication of polynomial commitments by public integers or monomials:
  - The commitment of $a \times f(x)$ is $a$ times the commitment of $f(x)$, or $a \times f(q)$
  - The commitment of $x \times f(x)$ is $q$ times the commitment of $f(x)$, or $q \times f(q)$
So, given an encoding of polynomials as integers, we want a homomorphic integer commitment scheme to go along with it.
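Continuing the toy base-10 example, the integer encoding respects these operations as long as no coefficient outgrows its digit:

```python
# The integer encoding respects the polynomial operations we need,
# provided no coefficient overflows its base-q digit.
q = 10
f, g = 4321, 1234     # encodings of 4x^3+3x^2+2x+1 and x^3+2x^2+3x+4
assert f + g == 5555  # encoding of (f+g)(x) = 5x^3+5x^2+5x+5
assert q * f == 43210 # encoding of x*f(x): coefficients shift one place
assert 2 * f == 8642  # encoding of 2*f(x): each coefficient doubled
```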
Enter the DARK integer commitment scheme, given a large cyclic (sub)group $\mathbb{G}$ and a generator $g \in \mathbb{G}$. To commit to an integer $i$, the prover computes $g^i$ and sends it to the verifier. Under plausible hardness assumptions in certain groups, it is computationally hard for the verifier to recover $i$ from $g, g^i$. The group operation also provides the homomorphic computations on commitments that we want: $g^i g^j = g^{i+j}$ and $(g^i)^a = g^{ai}$.
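A toy sketch of such a commitment, using a tiny, insecure RSA-style modulus chosen only for illustration (real parameters are thousands of bits, and $p, q$ must be discarded after setup):

```python
# Toy integer commitment in the style described above: commit(i) = g^i
# in a multiplicative group mod n. Parameters are far too small to be
# secure; they only illustrate the homomorphic identities.
p_, q_ = 1009, 1013   # would be secret and discarded in a real setup
n = p_ * q_           # public modulus; we work in Z_n^x
g = 3                 # a fixed public base

def commit(i):
    return pow(g, i, n)

i, j, a = 4321, 1234, 7
assert commit(i) * commit(j) % n == commit(i + j)  # additive homomorphism
assert pow(commit(i), a, n) == commit(a * i)       # scalar multiplication
```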
Now, we also need that once the prover commits to $i$, it is not computationally feasible for the prover to open $g^i$ to any other integer besides $i$. However, if the order of $\mathbb{G}$ is known, then the prover can open $g^i$ to any integer $i \pmod{\text{order}(\mathbb{G})}$, because $g^i = g^{i + \text{order}(\mathbb{G})}$. Therefore, $\mathbb{G}$ cannot be one of several commonly used cryptographic groups; most notably, it is computationally easy to compute the order of an elliptic curve group, so we cannot use elliptic curves here.
One possible group to use is the multiplicative group $\mathbb{Z}_n^{\times}$ where $n=pq$, as used in the RSA cryptosystem (or, more accurately, the group of quadratic residues mod $n$, which we will ignore for simplicity; see the paper for details). The RSA group has order $\phi = (p-1)(q-1)$. Computing this order is as hard as breaking RSA: if $\phi$ is known, then the RSA secret exponent is the inverse of the RSA public exponent mod $\phi$, so the extended Euclidean algorithm recovers the secret exponent from the public exponent and $\phi$; additionally, $p$ and $q$ can be computed from $\phi$. Note that the RSA group (even restricted to quadratic residues) is not cyclic, so we sample a random element $g$ and use the cyclic subgroup generated by $g$.
Therefore, if the prover does not know $\phi$, the prover can use this group to compute $g^i$ and cannot open $g^i$ to anything but $i$ under plausible hardness assumptions. However, to generate $n = pq$ in the first place, two large primes $p,q$ had to be generated and multiplied together. The primes $p,q$ can be discarded once $n$ is computed (it is not necessary to generate the RSA secret key either), but nevertheless a trusted setup is needed to generate $n$.
An exotic alternative is to use the ideal class group of an imaginary quadratic extension of the rationals. As a reminder, the fundamental theorem of arithmetic says (roughly) that every integer factors uniquely into a product of prime numbers (up to order and units). However, while this statement is true of the integers, it is not true of all quadratic extensions. The classic example involves $\sqrt{-5}$. Consider all numbers of the form $a + b\sqrt{-5}$ (denoted $\mathbb{Z}[\sqrt{-5}]$), where $a,b\in \mathbb{Z}$ are integers and $\sqrt{-5}$ is a complex number. Then the number $6$ factors in two different ways: $6 = 2 \times 3$ and $6 = (1 + \sqrt{-5}) \times (1 - \sqrt{-5})$.
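We can check the second factorization numerically by treating $\sqrt{-5}$ as the complex number $i\sqrt{5}$:

```python
# Verify the two factorizations of 6 in Z[sqrt(-5)] numerically,
# representing a + b*sqrt(-5) as a complex number.
import cmath

sqrt_m5 = cmath.sqrt(-5)               # = i * sqrt(5)
lhs = 2 * 3
rhs = (1 + sqrt_m5) * (1 - sqrt_m5)    # = 1 - (-5) = 6
assert abs(rhs - lhs) < 1e-9
# 2, 3, and 1 +/- sqrt(-5) are all irreducible in Z[sqrt(-5)], so
# neither factorization refines the other: unique factorization fails.
```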
(As an aside, there does exist an alternative to the fundamental theorem of arithmetic: every ideal in $\mathbb{Z}[\sqrt{-5}]$ uniquely factors into a product of prime ideals. But we will not need this fact here.)
The class group is nontrivial exactly when unique factorization fails; in the case of $\mathbb{Z}[\sqrt{-5}]$, the class group of $\mathbb{Q}[\sqrt{-5}]$ is isomorphic to $\mathbb{Z}_2$, which is related to the two possible factorizations of $6$ in this domain. In particular, the class group of $\mathbb{Q}[\sqrt{-p}]$ for a very large prime $p$ (subject to $p \equiv 3 \pmod{4}$) is easily constructed from the public $p$, yet has an order that is unknown and difficult to compute. Therefore, no trusted setup is required to instantiate a class group used for commitment. (Note again that we would sample a random element $g$, which generates a large cyclic subgroup of unknown order; the class group as a whole may not be cyclic.)
Technical details
This is almost enough: we have a group of unknown order, a strategy for homomorphically committing integers in that group, and a strategy for turning that homomorphic integer commitment scheme into a homomorphic polynomial commitment scheme. However, it is not that simple and a lot of technical details were omitted above.
The main details omitted:
- The polynomial $f(x)$ has coefficients that are integers mod $p$ (a large prime), and these coefficients are "lifted" to actual integers so that the polynomial can be evaluated at a $q$ larger than $p$ for commitment. The homomorphic properties do not completely work the way we would like, however: the addition and multiplication operations on commitments may cause the coefficients to grow larger than $p$, so $q$ must be much larger than $p$ (but not infeasibly larger) to give the coefficients room to grow in the encoding.
- The prover may attempt to cheat and commit to a polynomial with coefficients that are larger than allowed. This is fixable: we instead consider the coefficients of $f(x)$ mod $p$ to be values between $-(p-1)/2$ and $(p-1)/2$, and then analyze the growth of each coefficient when the commitments are opened. The details mean that the schemes presented above are not quite sufficient as stated.
- It may be computationally expensive for the verifier to compute exponentiations of a group element, as needed for performing the multiplication operation homomorphically; it is possible to outsource this computation to the prover.
- In the class group, square roots of arbitrary group elements are easy to compute. This means the prover can commit to coefficients that are not integers at all, but rather dyadic rationals (rational numbers whose denominator is a power of two), which then appear bounded in the protocol. Therefore, the encoding of polynomials needs to handle dyadic rational coefficients. The full scheme is in the paper.
- The polynomials that are evaluated can be blinded to achieve statistical zero knowledge.
Hiding and Zero Knowledge
It is possible to make this polynomial commitment scheme hiding (the verifier cannot distinguish the commitments of different polynomials) and zero knowledge (the verifier learns nothing about the polynomial other than the requested evaluations).
In order to make the polynomial commitment scheme hiding, we have to make sure that the verifier cannot check the commitment of $f(x)$ without the prover opening it. In other words, the verifier cannot distinguish the commitment of $f(x)$ from the commitment of some other polynomial $g(x)$, even if the verifier knows $f(x)$ or $g(x)$. This is possible (again, omitting some technical details) by adding a degree $d+1$ term to $f(x)$ with a very large random coefficient. As long as the random coefficient is extremely large (much larger than the order of the group, which is unknown but can be upper bounded), the commitment of $f(x)$ will be multiplied by an element of $\mathbb{G}$ that is unknown to the verifier, and therefore (plausibly) hides the actual commitment of $f(x)$ in such a way that the verifier cannot reproduce the same commitment without knowing the random coefficient.
Now that the commitment scheme is hiding, we can make the evaluation of $f(x)$ zero knowledge. If $f(x)$ has degree $d$, the prover samples some polynomial $r(x)$ with random coefficients and commits to both $f(x)$ and $r(x)$. In order to evaluate $f(10)$, the prover instead sends both $f(10)$ and $r(10)$. The verifier responds with a random challenge $c$, and the prover and verifier both compute a commitment of $s(x) = r(x) + c \times f(x)$ (since the verifier has a hiding commitment to both $r(x)$ and $f(x)$ and knows $c$, the verifier can use homomorphic operations to get a hiding commitment of $s(x)$).
Having a commitment to $s(x)$, the verifier asks the prover (using the evaluation protocol described above) to evaluate $s(10)$. Since the value $r(10)$ was chosen prior to the verifier revealing the challenge $c$, if $s(10) = r(10) + c \times f(10)$, with high probability, the true value of $f(x)$ evaluated at $x=10$ is actually the value $f(10)$ that was sent by the prover. If the prover had sent some other value, then it would be unlikely that the challenge $c$ would make $s(10) = r(10) + c \times m$ for some arbitrary other value $m \ne f(10)$.
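The linearity check at the heart of this masking step can be sketched as follows (commitments elided, coefficients treated as plain integers; the coefficient and challenge ranges are illustrative):

```python
# Sketch of the arithmetic behind the zero-knowledge evaluation:
# s(x) = r(x) + c*f(x), so s(10) must equal r(10) + c*f(10).
import random

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

f = [1, 2, 3, 4]                           # the prover's secret polynomial
r = [random.randrange(10**6) for _ in f]   # prover's random mask r(x)
y_f, y_r = evaluate(f, 10), evaluate(r, 10)  # sent before the challenge
c = random.randrange(1, 10**6)               # verifier's challenge
s = [ri + c * fi for ri, fi in zip(r, f)]    # s(x) = r(x) + c*f(x)
# The verifier evaluates s(10) via the evaluation protocol and checks:
assert evaluate(s, 10) == y_r + c * y_f
```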
Why is this evaluation zero knowledge? The verifier could choose their own random polynomial $s(x)$ and challenge $c$, and then easily compute a (hiding) commitment to $r(x) = s(x)  c \times f(x)$. Since $s(x)$ and the commitment of $r(x)$ are distributed the same as the actual polynomial $s(x)$ and actual (hiding) commitment of $r(x)$, the verifier learned (statistically) nothing from the prover's choice of $s(x)$ and commitment of $r(x)$.
Conclusion
Since polynomial commitments are much smaller than polynomials (for DARK, one group element per polynomial) we achieve significantly smaller "proof" sizes for a polynomial IOP, while introducing a new assumption that it is computationally infeasible for the prover to abuse the commitment scheme to produce "proofs" of false statements (they may exist, which is why "proofs" with only computational soundness are more properly called arguments; but as long as arguments of false statements are difficult to construct, they serve all purposes we want).
The interactive polynomial evaluation protocol described above can be turned into a non-interactive argument (similar to other SNARKs) by the Fiat-Shamir heuristic, which (essentially) hashes the transcript so far to simulate the verifier's public coins and generate random challenges. Since the prover cannot (generally) computationally control the output of the hash function, the challenges constructed in this way (generally) preserve the soundness of the interactive protocol against a computationally bounded adversary (although the details are quite subtle and addressed in other papers).
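A minimal sketch of deriving a Fiat-Shamir challenge by hashing the transcript (the function name and transcript format here are illustrative, not from the paper):

```python
# Derive a "verifier" challenge deterministically from the transcript,
# instead of an interactive message.
import hashlib

def fiat_shamir_challenge(transcript: bytes, bound: int) -> int:
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % bound

# The prover appends each protocol message to the transcript and hashes
# it to obtain the next challenge.
transcript = b"commitment:4321|f_L(10)=21|f_R(10)=43"
alpha = fiat_shamir_challenge(transcript, 2**128)
assert 0 <= alpha < 2**128
assert alpha == fiat_shamir_challenge(transcript, 2**128)  # deterministic
```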
While Supersonic does not (quite) produce proofs as small as pairing-based polynomial commitment schemes (or the ultra-small pairing-based linear commitment scheme of Groth16), it comes with the advantage of logarithmic verification time (a novel feature among SNARKs without trusted setup), and the proof size is still somewhat practical compared to other no-trusted-setup SNARKs. Supersonic also allows batchable proofs and a faster trusted-setup option (using RSA subgroups).
Further, since the DARK polynomial commitment scheme can be applied to any polynomial IOP, further optimizations of the IOP can result in immediate improvements of the SNARK.
Written by Joseph Bebel, zero-knowledge cryptography researcher & protocol developer at Metastate.
Image credits: Porcupine from New Atlas.
Metastate has ceased its activities on the 31st of January 2021. The team members have joined a new company and are working on a new project. If you're interested in zero-knowledge cryptography and cutting-edge cryptographic protocol engineering positions in Rust, check out the open positions at Heliax.
If you have any feedback or questions on this article, please do not hesitate to contact us: hello@heliax.dev.