We audited the Consensys/gnark repository at commit e204083. All the resolutions mentioned in this report are contained in commit 065027a, which is therefore the final version reviewed during this audit.
In scope were the following files:
backend/plonk
├── bn254
│ ├── marshal.go
│ ├── prove.go
│ ├── setup.go
│ ├── unmarshal.go
│ └── verify.go
└── plonk.go
We also audited the changes made to backend/plonk/bn254/solidity.go
between commits 3421eaa and e204083.
The in-scope files rely heavily on the proper implementation of functions and structs across a number of out-of-scope files. As such, we also reviewed parts of the out-of-scope files with regard to their interactions with the in-scope code to ensure correctness. However, our review of these files should not be considered an audit. For an inexhaustive list of additional files, refer to Appendix I - Out of Scope Files.
The Linea system and its modifications to the Plonk algorithm have been described in a previous audit report. While that audit focused on the Solidity verifier, this one reviewed the corresponding Go prover (and the associated Go verifier). The expected use case is to support the Linea rollup, but it is a standalone library that can be used for other Plonk circuits, which may also include the custom gate.
We also reviewed the changes required to upgrade from a previous version of the Plonk specification to the current one, as well as some encoding/decoding mechanisms to coordinate between the two codebases.
The provers and verifiers are standalone systems with no inherent trust requirements. Anyone can use them to construct and verify Plonk proofs for any circuit. As noted in our previous report, in practice, they should be configured to match a circuit encoding the Linea state transition function. We did not review the circuit construction and its relationship with Linea's particular use case. Moreover, the codebase assumes the existence of a Structured Reference String, which we assume was specified safely and correctly. Lastly, the Plonk prover codebase relies on an external cryptographic library to obtain random numbers. We assume these are generated and handled securely.
After constructing the shards of the quotient polynomial h(X), their KZG commitments are added to the proof without blinding, as required by the latest Plonk specification. As noted by Sefranek, this means they do not provide the statistical zero-knowledge property.
Consider blinding the individual shards in the proof.
Update: Resolved in pull request #14.
The issue was fixed by applying the blinding of the h shards (h1, h2, h3) according to the latest version of the PLONK paper (February 23, 2024). In addition, a global StatisticalZK option was introduced which, when turned off, switches back to the previous version of the code (i.e., the one that does not blind the shards).
This is desirable since the new blinding increases the memory consumption of the prover due to the necessity to reserve three additional arrays of size n (where n is the size of the circuit) to store the three blinded polynomials. Previously, they were just sliced from the (already allocated) h polynomial.
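The telescoping structure of this blinding can be illustrated with a toy sketch. The field (F_101), shard values, and helper names below are made up for illustration and are not the real BN254 scalar field or gnark code; only the blinding pattern (add b·X^n to one shard, subtract b from the constant term of the next) follows the scheme described above.

```go
package main

import "fmt"

// Toy sketch of quotient-shard blinding:
//   h1'(X) = h1(X) + b1·X^n
//   h2'(X) = h2(X) - b1 + b2·X^n
//   h3'(X) = h3(X) - b2
// The blinders telescope, so h1' + X^n·h2' + X^2n·h3' = h1 + X^n·h2 + X^2n·h3.

const p = 101 // toy prime, NOT the BN254 scalar-field modulus

func eval(coeffs []int, x int) int { // Horner evaluation mod p
	r := 0
	for i := len(coeffs) - 1; i >= 0; i-- {
		r = (r*x + coeffs[i]) % p
	}
	return r
}

// recombine evaluates h1 + X^n·h2 + X^2n·h3 at x.
func recombine(h1, h2, h3 []int, x, n int) int {
	xn := 1
	for i := 0; i < n; i++ {
		xn = xn * x % p
	}
	return (eval(h1, x) + xn*eval(h2, x)%p + xn*xn%p*eval(h3, x)) % p
}

// blind returns the blinded shards for blinders b1, b2 (fresh slices,
// so the inputs are left untouched).
func blind(h1, h2, h3 []int, b1, b2 int) (hb1, hb2, hb3 []int) {
	hb1 = append(append([]int{}, h1...), b1) // + b1·X^n
	hb2 = append(append([]int{}, h2...), b2) // + b2·X^n
	hb2[0] = ((hb2[0]-b1)%p + p) % p         // - b1
	hb3 = append([]int{}, h3...)
	hb3[0] = ((hb3[0]-b2)%p + p) % p // - b2
	return
}

func main() {
	h1, h2, h3 := []int{3, 1, 4, 1}, []int{5, 9, 2, 6}, []int{5, 3, 5, 8}
	hb1, hb2, hb3 := blind(h1, h2, h3, 42, 17)
	// The recombined value at any point is unchanged by the blinding.
	fmt.Println(recombine(h1, h2, h3, 7, 4) == recombine(hb1, hb2, hb3, 7, 4)) // true
}
```

Note how the fix's extra memory cost is visible even in the sketch: the blinded shards are freshly allocated slices rather than views into a single h array.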
Some recommendations to further improve the fix:

- Use the StatisticalZK switch to control all blindings (i.e., also of the a, b, c, z polynomials) and not just of the t shards. This will further ease the memory requirements on the prover when blinding is not desirable and will also simplify the implementation and improve consistency.
- Make the h3 polynomial the same size as the h1, h2 shards by setting its top coefficients to zero. This will remove the conditional logic in the innerComputeLinearizedPoly function (which is a bit unnatural, anyway), since all the hx[i] values will be valid, and will improve code readability.

The Linea team stated:
We made it an option to achieve zero knowledge-ness (see backend.WithStatisticalZeroKnowledge()) because it uses more memory. In cases where it is not needed, it is better to avoid it.
In order to blind the BSB22 preimage polynomials, they are constructed with random evaluations in two positions. These positions are carefully chosen so that the evaluations are unused. In particular:
However, if the BSB22 gate is the last gate added to the circuit, both of these will correspond to the same position. This means that the corresponding preimage polynomial will only be blinded in one position instead of two.
We believe that one position is sufficient to maintain zero knowledge in this case. Nevertheless, in the interest of predictability and consistency, consider ensuring that all BSB22 preimage polynomials are blinded twice. This could involve choosing a different (unused) evaluation position or ensuring that the last constraint does not match a BSB22 gate.
Update: Acknowledged, not resolved. The Linea team stated:
For the moment, the polynomials are blinded in at least one spot no matter what. This should be enough (even if one in every case is enough).
UnmarshalSolidity Function
The MarshalSolidity and UnmarshalSolidity functions are intended to encode and decode a Go Proof object so that it can be passed to a Solidity verifier contract. However, the MarshalSolidity function skips the opening of the linearized polynomial while the UnmarshalSolidity function does not. This means that the rest of the function will decode values into the wrong variables and will attempt to read past the end of the input.
Consider removing the linearized polynomial from the UnmarshalSolidity function so that it behaves as expected.
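A toy sketch of why encoder and decoder must skip the same fields (the proof struct and field names below are made up for illustration, not gnark's actual layout): if the encoder omits a field, a decoder that still expects it reads every later field from the wrong offset.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Toy proof: the linearized-polynomial opening is derivable by the
// verifier, so marshal skips it. unmarshal must skip it too, or every
// later field shifts and the decoder runs past the end of the input.
type proof struct {
	a, linearizedOpening, b uint64
}

func marshal(p proof) []byte {
	buf := make([]byte, 16)
	binary.BigEndian.PutUint64(buf[0:8], p.a)
	// p.linearizedOpening is intentionally not serialized
	binary.BigEndian.PutUint64(buf[8:16], p.b)
	return buf
}

func unmarshal(buf []byte) proof {
	var p proof
	p.a = binary.BigEndian.Uint64(buf[0:8])
	p.b = binary.BigEndian.Uint64(buf[8:16]) // mirrors marshal: no opening field
	return p
}

func main() {
	in := proof{a: 1, linearizedOpening: 7, b: 2}
	out := unmarshal(marshal(in))
	fmt.Println(out.a, out.b) // 1 2
}
```

Keeping the two layouts in lockstep, ideally via shared named offset constants, is what the fix in pull request #10 amounts to.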
Update: Resolved in pull request #10.
The code under audit updates the Plonk implementation from a previous version to the latest version. Specifically:
However, the transition is incomplete:
The code uses the old definition of r(X), which disregards the constant terms. To see this, note the following:
- The Go verifier still receives its opening under the old definition of r(X).
In addition, previous inconsistencies have been retained:
While internally consistent, this means that the code does not conform to either version of the Plonk protocol. It also uses terminology that spans both versions. This makes the codebase harder to understand, modify and maintain.
Consider strictly following the latest Plonk specification.
Update: Acknowledged, not resolved. The Linea team stated:
Acknowledged. We will leave it this way for the moment. The most important part is that there is no division anymore in the verifier since the quotient is included in the linearized polynomial. For the signs, it is arbitrary. For the degrees of the shards, it allows using a slightly smaller SRS.
The Verify function does not validate the following:
Fortunately, BN curves do not have small subgroups (and therefore are not subject to small-subgroup attacks), but this should be validated for the other curves in the library.
In the interest of reducing the attack surface and increasing compliance with the Plonk specification, consider introducing these checks.
Update: Partially resolved in pull request #11.
The fix validates that the commitments from the proof are elements of the correct subgroup. It does not check that the openings and the public values are valid field elements. The rationale is that these checks are done implicitly in Gnark during instantiation.
However, the implicit checks can still be bypassed (e.g., if one passes a 256-bit value with all bits set, as in a := fr.Element{0xffffffffffffffff, 0xffffffffffffffff, 0xffffffffffffffff, 0xffffffffffffffff}; that is a valid 256-bit constant, but it is not a BN254 field element).
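A minimal sketch of an explicit canonicity check, assuming values arrive as big-endian byte strings; the modulus constant below is the well-known BN254 scalar-field order r, and isCanonical is a hypothetical helper, not a gnark API:

```go
package main

import (
	"fmt"
	"math/big"
)

// rBN254 is the (well-known) BN254 scalar-field modulus r.
var rBN254, _ = new(big.Int).SetString(
	"30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f0000001", 16)

// isCanonical reports whether a big-endian byte string encodes a reduced
// BN254 scalar-field element, i.e., a value strictly less than r.
func isCanonical(b []byte) bool {
	return new(big.Int).SetBytes(b).Cmp(rBN254) < 0
}

func main() {
	// The all-ones 256-bit value from the finding: a valid constant,
	// but not a reduced field element.
	allOnes := make([]byte, 32)
	for i := range allOnes {
		allOnes[i] = 0xff
	}
	fmt.Println(isCanonical(allOnes))      // false: exceeds r
	fmt.Println(isCanonical([]byte{0x01})) // true: 1 < r
}
```

Performing this comparison on every opening and public value at the verifier boundary closes the bypass, at the cost of one big-integer comparison per input.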
The larger domain size is identified by a simple calculation, but is then manually separated into two cases. Moreover, when sizeSystem is 5, a direct calculation would result in a larger domain of size 32 rather than the currently computed 64.
Consider passing the desired number of evaluation points, (domain0.Cardinality + 2) * 3, in all cases.
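The suggested direct calculation can be sketched as follows. The concrete numbers are assumptions for illustration (a 5-constraint system with a smallest covering power-of-two domain of cardinality 8); the only real constraint used is that FFT domain cardinalities are powers of two.

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextPowerOfTwo returns the smallest power of two >= n (n >= 1).
func nextPowerOfTwo(n uint64) uint64 {
	return 1 << bits.Len64(n-1)
}

func main() {
	// Assumption for illustration: sizeSystem = 5, so domain0 is the
	// smallest power-of-two domain covering it, of cardinality 8.
	domain0Cardinality := nextPowerOfTwo(5)
	// Desired number of evaluation points, rounded up to the next
	// power-of-two domain size.
	evalPoints := (domain0Cardinality + 2) * 3
	fmt.Println(domain0Cardinality, evalPoints, nextPowerOfTwo(evalPoints)) // 8 30 32
}
```

Under these assumptions the direct calculation yields a domain of size 32, matching the discrepancy noted above, with no case split required.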
Update: Acknowledged, not resolved. The Linea team stated:
We prefer leaving it this way. We remember that it led to weird issues back when we implemented it.
In the UnmarshalSolidity function, there should be 6 rather than 7 positions reserved for standard batched polynomial openings. Similarly, the additional BSB22 openings should start at index 6. This is consistent with the proof construction.
Consider updating the index, ideally with a named global constant.
Update: Resolved in pull request #10.
Z Polynomial Inside innerComputeLinearizedPoly
The innerComputeLinearizedPoly function directly modifies the blindedZCanonical slice, which corresponds to the instance's blindedZ value, before assigning the result to the linearizedPolynomial variable.
Note that blindedZ is not part of the proof (which contains just the opening proof [Wζω] and the opening Z(ζω)). Thus, blindedZ getting modified inside innerComputeLinearizedPoly does not have adverse consequences (e.g., on the verifier side).
Consider cloning blindedZCanonical first so that blindedZ remains correct, or explicitly documenting that it is modified in place.
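The aliasing behavior behind this recommendation can be sketched in a few lines (cloneInts is a hypothetical helper, not gnark code): a Go slice assignment shares the backing array, so writes through one name are visible through the other, whereas a clone is independent.

```go
package main

import "fmt"

// cloneInts returns an independent copy of s (new backing array).
func cloneInts(s []int) []int {
	return append([]int(nil), s...)
}

func main() {
	blindedZ := []int{1, 2, 3}

	alias := blindedZ // shares the backing array with blindedZ
	alias[0] = 99     // visible through blindedZ as well

	clone := cloneInts(blindedZ)
	clone[1] = 77 // does NOT affect blindedZ

	fmt.Println(blindedZ) // [99 2 3]: mutated via the alias, not via the clone
}
```

Cloning costs an extra O(n) allocation per call, which is why documenting the in-place modification (the option taken in the fix) is a reasonable alternative.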
Update: Resolved in pull request #12. This issue was fixed by following our second suggestion, that is, by adding an explicit comment that the blindedZCanonical value is modified inside the function.
There are multiple instances of functions and variables with misleading names:

- divideByXMinusOne could be more appropriately renamed to divideByXnMinusOne or divideByZH.
- verify_opening_linearised_polynomial does not perform any verification and could more appropriately be renamed to compute_opening_linearised_polynomial.
- lagrangeOne is computed as (ζ^n − 1)/(n(ζ − 1)), which is the evaluation L0(ζ) of the 0-th Lagrange basis polynomial. It could more appropriately be renamed to lagrangeZero.
- Similarly, alphaSquareLagrangeOne could be more appropriately renamed to alphaSquareLagrangeZero.

Consider implementing the above renaming suggestions to improve code clarity.
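The identity behind the lagrangeOne/lagrangeZero rename, L0(ζ) = (ζ^n − 1)/(n(ζ − 1)), can be checked numerically over a toy field. F_17 and ω = 9 (a primitive 8th root of unity mod 17) are illustrative choices, not values from the codebase.

```go
package main

import "fmt"

// Check over F_17 with ω = 9 (primitive 8th root of unity) that the
// closed form (ζ^n − 1)/(n(ζ − 1)) equals the 0-th Lagrange basis
// polynomial L0(ζ) = Π_{j=1..n-1} (ζ − ω^j)/(1 − ω^j).

const p = 17

func pow(a, e int) int { // a^e mod p by square-and-multiply
	r := 1
	a = ((a % p) + p) % p
	for ; e > 0; e >>= 1 {
		if e&1 == 1 {
			r = r * a % p
		}
		a = a * a % p
	}
	return r
}

func inv(a int) int { return pow(a, p-2) } // Fermat inverse

// closedForm computes (ζ^n − 1)/(n(ζ − 1)) mod p.
func closedForm(zeta, n int) int {
	num := ((pow(zeta, n)-1)%p + p) % p
	den := n % p * (((zeta-1)%p + p) % p) % p
	return num * inv(den) % p
}

// lagrangeZero computes L0(ζ) directly from the product formula.
func lagrangeZero(zeta, omega, n int) int {
	r := 1
	for j := 1; j < n; j++ {
		wj := pow(omega, j)
		r = r * (((zeta-wj)%p + p) % p) % p * inv(((1-wj)%p+p)%p) % p
	}
	return r
}

func main() {
	fmt.Println(closedForm(3, 8), lagrangeZero(3, 9, 8)) // 2 2
}
```

Both routes agree, confirming that the value is the evaluation of the 0-th (not the first) Lagrange basis polynomial, hence the suggested lagrangeZero name.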
Update: Resolved in pull request #9.
The error associated with a failed MOD_EXP precompile appears to have been inadvertently removed.
Consider restoring the error.
Update: Resolved in pull request #8.
We have identified the following instances of misleading documentation:

- The solveConstraints comment says that the function computes the (LRO) polynomials in canonical form whereas they are in Lagrange form.
- kzg.Commit, defined in the gnark-crypto library at gnark-crypto/ecc/bn254/kzg/kzg.go:170, expects a polynomial in coefficient form according to the comment (in gnark-crypto kzg.go:169), but in prove.go, it can also be in Lagrange form.
- The domain size validation uses GetNbConstraints but the actual size includes the public inputs.
- The comment explaining the "gamma" encoding is out of place now that there is an FS_GAMMA constant.
- The fold_state comment still includes H(ζ), a remnant from the previous version.
- The STATE_FOLDED_DIGESTS_X comment still includes H.
- The comment explaining a preimage size says 0x17 instead of 0x14.
- The BSB22 proof layout comment is obsolete because the two preceding constants, namely PROOF_OPENING_QCP_AT_ZETA and PROOF_BSB_COMMITMENTS, already identify these values.
- The quotient_polynomial_at_zeta comment is obsolete since it has been removed from the batched proof.
- The getBlindedCoefficient function claims that it modifies the underlying polynomial coefficients. However, the append operation will resize the slice and so the original coefficients slice will remain unchanged.
- The wire_committed_commitments comment refers to the wrong loop.

To improve the clarity and readability of the codebase, consider addressing the aforementioned inconsistencies in the documentation.
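The append subtlety behind the getBlindedCoefficient item comes down to whether the slice has spare capacity; the following standalone sketch (not gnark code) shows both behaviors.

```go
package main

import "fmt"

func main() {
	// Full slice (len == cap): append must reallocate, so the original
	// stays unchanged, and its length is still 3 afterwards.
	p := make([]int, 3, 3)
	p[0], p[1], p[2] = 1, 2, 3
	q := append(p, 42)
	q[0] = 99
	fmt.Println(p) // [1 2 3]: untouched

	// Spare capacity (len < cap): append writes into the shared backing
	// array, so mutations through the new slice are visible in the old one.
	r := make([]int, 3, 4)
	r[0], r[1], r[2] = 1, 2, 3
	s := append(r, 42)
	s[0] = 99
	fmt.Println(r) // [99 2 3]: the original's elements were modified
}
```

Which case applies to getBlindedCoefficient depends on how its input slices are allocated, which is why a precise comment (neither "modifies the coefficients" nor "modifies the size") matters.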
Update: Partially resolved in pull request #7.
The following recommendations were not addressed:
- kzg.Commit, defined in the gnark-crypto library at gnark-crypto/ecc/bn254/kzg/kzg.go:170, expects a polynomial in coefficient form according to the comment (in gnark-crypto kzg.go:169). However, in prove.go, it can also be in Lagrange form.
- wire_committed_commitments refers to the wrong loop.
- The getBlindedCoefficient function claims that it modifies the underlying polynomial coefficients. However, the append operation will resize the slice and so the original coefficients slice will remain unchanged. Note that the fix amends the comment, saying that only the size is modified, which is also incorrect. Since the append operation makes a local copy of p (which is then modified), the size of p remains the same.

The following are some suggestions to simplify the code:
- The Coefficients function is used to retrieve evaluations (e.g., here) when the polynomial is in Lagrange form. For clarity, consider creating an Evaluations function to be used instead.
- The NewTrace function could read the domain's cardinality instead of recomputing the size.
- The MarshalSolidity function hardcodes the proof size. If there are more than two BSB22 commitment gates, this will need to be resized. Instead, the correct size could be computed dynamically.
- This if statement is redundant.
- The code could use add(state, STATE_FOLDED_DIGESTS_X) instead of state_folded_digests. Similarly, add(state, STATE_FOLDED_CLAIMED_VALUES) could be saved so that it does not need to be recomputed.
- The FoldProof function contains an if statement that could be naturally integrated into the following loop.
- The batchOpening function waits for the channels chLRO and chLinearizedPolynomial. However, the first select statement is redundant because chLinearizedPolynomial cannot close before chLRO.
- expo is recomputed unnecessarily.

Consider implementing the aforementioned code simplification suggestions to improve the clarity and maintainability of the codebase.
Update: Partially resolved in pull request #6.
The following recommendations were not addressed:
- The Coefficients function is used to retrieve evaluations when the polynomial is in Lagrange form. For clarity, consider creating an Evaluations function to be used instead.
- The MarshalSolidity function hardcodes the proof size. If there are more than two BSB22 commitment gates, this will need to be resized. Instead, the correct size could be computed dynamically.
- add(state, STATE_FOLDED_CLAIMED_VALUES) could be saved so that it does not need to be recomputed.
- The FoldProof function contains an if statement that could be naturally integrated into the following loop.

Our previous audit report included recommendations and fixes associated with the Solidity verifier. This corresponds to the output of the ExportSolidity Go function.
However, when reviewing the template, we identified the following regressions:
- The sum_pi_commit function contains a final unnecessary pointer update.
- The fold_h function uses m instead of n as the group size.
- A bound in the batch_compute_lagranges_at_z function is the number of public inputs, which conflicts with the domain size.
- Some constants should include com in their name to be consistent with the other constants.

Consider restoring the aforementioned corrections.
Update: Partially resolved in pull request #5.
The following regressions from the previous audit were not addressed:
- The b0 and b0 ^ b1 comments incorrectly describe them as occupying 64 bytes.
- "b2" is 16 bytes.
- Some names retain the "api" decorator, which is a reference to how they were constructed rather than what they represent.

In the computeNumerator function, the number of BSB gates is computed using the size of the memory location where the gates are stored. However, the +1 term in this calculation is incorrect. This does not introduce incorrect functionality because the last bit is discarded during the final division.
Nevertheless, in the interest of clarity and simplicity, consider removing this extra term. Alternatively, consider reading the number of gates directly from the commitmentInfo parameter.
Update: Resolved in pull request #2.
In the Solidity verifier, several constants for the precompiles are introduced. There are multiple instances where the constants should be used (e.g., 1, 2, 3, 4). In addition, there are several instances (e.g., 1, 2, 3 and 6 more places) where the hard-coded constant 0x2 for the SHA2 precompile is used. Finally, the fixed proof size should be its own constant, or one could use PROOF_OPENING_QCP_AT_ZETA instead of hardcoding 0x300.
To minimize the possibility of issues arising from the use of magic numbers, consider applying the following changes:
- Define a constant for the SHA2 precompile and use it instead of the hard-coded identifier 0x2.
- Use PROOF_OPENING_QCP_AT_ZETA instead of hardcoding 0x300, as noted above.

Update: Partially resolved in pull request #3.
The following recommendation has not been addressed:
The state variable in the point_acc_mul_calldata function is unused.
Consider removing the unused variable to improve code clarity.
Update: Resolved in pull request #4.
In the Solidity verifier, template offset variables are used to ensure that the constant offsets are specified correctly. Consider using the same mechanism for all buffers with hardcoded constants, such as the 544 and 576 "magic" constants.

Update: Partially resolved in pull request #1.
The following recommendation has not been addressed:
The Linea team stated:
In compute_kzg, we left it as it was since there is a jump on the offset due to the calldataload. We prefer to leave the constants hard-coded as they do not depend on the number of custom gates.
We identified the following typographical errors:
Consider fixing any typographical errors to improve the readability of the codebase.
Update: Resolved in pull request #13.
The audit covered the Linea Go implementation of a PLONK ZK prover and associated verifier, as well as the changes to a version of the verifier implemented in Solidity. The codebase is well-written and sufficiently documented.
One high-severity and four medium-severity issues were discovered, among other lower-severity issues. The high-severity issue concerned not blinding the three parts of the quotient polynomial computed by the prover, which effectively invalidates the statistical zero-knowledge property of the system. This, as well as some of the medium-severity issues, resulted from incomplete adherence to the latest version of the PLONK paper. Our overall recommendation is to make the audited implementation fully consistent with the latest version of the paper.
We would like to thank the Linea team, and Thomas Piellard in particular, for their responsiveness in promptly and thoroughly addressing all questions raised during the course of the audit.
Appendix I - Out of Scope Files
Inexhaustive list of out-of-scope files and functions that were reviewed in order to understand the in-scope files:
gnark/backend/backend.go:57:backend.ProverConfig
gnark-crypto/ecc/bn254/kzg/kzg.go:51:VerifyingKey
kzg.Commit from gnark-crypto/ecc/bn254/kzg/kzg.go:170
gnark-crypto/ecc/bn254/fr/iop/polynomial.go:49
gnark-crypto/ecc/bn254/fr/iop/polynomial.go:73:Polynomial
gnark-crypto/ecc/bn254/fr/iop/polynomial.go:222
gnark-crypto/ecc/bn254/fr/polynomial/polynomial.go:99:Add
gnark-crypto/ecc/bn254/kzg/kzg.go:58:SRS
gnark-crypto/ecc/bn254/fr/fft/domain.go:79:NewDomain(m)
cs.SparseR1C defined in gnark/constraint/r1cs_sparse.go:143
gnark-crypto/ecc/bn254/fr/generator.go:29:Generator
gnark-crypto/fiat-shamir/transcript.go:64:Bind
gnark-crypto/fiat-shamir/transcript.go:89:ComputeChallenge
gnark-crypto/fiat-shamir/transcript.go:49:NewTranscript
gnark-crypto/ecc/bn254/fr/iop/ratios.go:147:BuildRatioCopyConstraint
gnark-crypto/ecc/bn254/fr/iop/ratios.go:331:getSupportIdentityPermutation
gnark-crypto/ecc/bn254/kzg/kzg.go:301:BatchOpenSinglePoint
gnark-crypto/ecc/bn254/fr/fft/domain.go
gnark-crypto/ecc/bn254/fr/fft/domain.go:210:BuildExpTable
gnark-crypto/ecc/bn254/fr/iop/expressions.go:37:Evaluate
gnark-crypto/ecc/bn254/fr/iop/ratios.go:288:checkSize
gnark-crypto/ecc/bn254/multiexp.go:31:MultiExp
gnark-crypto/ecc/bn254/kzg/kzg.go:424-449:BatchVerifyMultiPoints
gnark-crypto/ecc/bn254/kzg/kzg.go:360:FoldProof
gnark-crypto/ecc/bn254/kzg/kzg.go:550:deriveGamma
gnark-crypto/ecc/bn254/kzg/marshal.go:151:ReadFrom
gnark-crypto/ecc/bn254/kzg/marshal.go:162:UnsafeReadFrom
gnark-crypto/ecc/bn254/marshal.go:377:NewEncoder
gnark-crypto/ecc/bn254/marshal.go:68:NewDecoder
gnark-crypto/ecc/bn254/kzg/marshal.go:27:WriteTo
gnark-crypto/ecc/bn254/marshal.go:730:Unmarshal
gnark-crypto/ecc/bn254/fp/element.go:912:SetBytes