## Summary

- **Type:** ZK Rollup
- **Timeline:** From 2023-08-03 to 2023-08-18
- **Languages:** Solidity
- **Total Issues:** 18 (11 resolved, 5 partially resolved)
- **Critical Severity Issues:** 1 (1 resolved)
- **High Severity Issues:** 0 (0 resolved)
- **Medium Severity Issues:** 3 (1 resolved, 2 partially resolved)
- **Low Severity Issues:** 6 (4 resolved, 1 partially resolved)
- **Notes & Additional Information:** 8 (5 resolved, 2 partially resolved)

## Scope

We audited the Consensys/linea-contracts-fix repository at the 04d6677 commit.

```
contracts
└── verifiers
    ├── PlonkVerifierFull.sol
    └── PlonkVerifierFullLarge.sol
```

## System Overview

The Linea blockchain (L2) is a ZK-rollup on top of the Ethereum mainnet (L1). The L1 periodically receives the latest L2 state, along with a succinct proof that the state transition is the result of the execution of a known list of transactions. This is achieved using a variant of the PLONK algorithm, and the code under review implements the verifier deployed to L1.

### Transpilation

The Solidity code is automatically generated from gnark, a zk-SNARK Go library used to create circuits. This means that our recommended changes should actually be implemented in the source file or the transpiler. Nevertheless, for simplicity, this report describes the suggested effect on the generated code.

One related architectural consideration is that the code makes extensive use of assembly for both low-level memory manipulation and high-level logic. Since the use of assembly discards many safety features provided by Solidity, it should usually be limited to small, well-defined code blocks. This would make the code more robust, readable and extensible. However, this may not be practical without significantly modifying the transpiler. Instead, we recommend significantly expanding the test suite to ensure consistency with the specification and validate all expected behaviors.

### PLONK Modification

The verifier code includes a custom gate, which is not part of the original PLONK specification. This is an optimization that allows the verifier to independently construct a hash using a subset of the circuit wires, and validate that these match a public input provided by the prover. Thus the circuit itself does not need to include a hash function, which is a costly operation. The Linea team has indicated that the underlying circuit implements the Vortex protocol and this hash is used as part of the Fiat-Shamir step.

## Security Model and Trust Assumptions

The verifier contracts are standalone systems with no inherent trust requirements. Anyone can use them to confirm that the supplied proof is valid for the configured circuit conditions. In practice, the particular verifiers under review should be configured to match a circuit encoding the Linea state transition function. They can then be used by the Linea L1 contracts to validate whether a proposed state root should be accepted.

However, the configuration is out of scope for the purposes of this audit. Each verifier contract contains a subset of the Structured Reference String, as well as cryptographic parameters and a verifying key that encodes the circuit. These are assumed to be specified safely and correctly. The circuit itself is also out of the scope of this audit and is assumed to function as specified.

## Critical Severity

### Incorrect Randomness Computation Allows Proof Forgery

The `PlonkVerifier` contract is called by the rollup to verify the validity proofs associated with blocks of transactions. The `Verify` function can be called with the proof and the public inputs, and outputs a boolean indicating that the proof is valid for the received public inputs. To do so, the verifier notably samples a random value `u` and does a batch evaluation to verify that all the openings match their respective commitments.

However, `u` is not sampled randomly (or in practice as the hash of the full transcript), resulting in the possibility of forged proofs passing the verification. More specifically, `u` does not depend on `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}`, meaning that it is possible for a malicious prover to change `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}` after seeing the value of `u`. Here is an example of an attack based on this observation:

1. The malicious prover `P` extracts `A = [W_{ζ}]_{1} + u*[W_{ζω}]_{1}` and `B = ζ*[W_{ζ}]_{1} + uζω*[W_{ζω}]_{1} + [F]_{1} - [E]_{1}`, obtained when any valid proof is submitted. `A` and `B` are by construction points on the elliptic curve for which `e(-A, [x]_{2}) * e(B, [1]_{2}) == 1`.
2. `P` sets the public input (or any proof commitment) to an arbitrary value. Any value besides `t(ζ)`, `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}` can be changed.
3. `P` computes `T = [ r(ζ) + PI(ζ) − ((a(ζ) + βs_{σ1}(ζ) + γ)(b(ζ) + βs_{σ2}(ζ) + γ)(c(ζ) + γ)z_{ω}(ζ))α − L1(ζ)α^2 ] / Z_{H}(ζ)` following step 8 of the PLONK verifier algorithm, and sets `t(ζ) = T` in the forged proof. This step is required to pass this check in the code.
4. `P` computes `u`, `ζ` and `ω` as done by the code.
5. `P` solves the equations `X + u*Y = A` and `ζ*X + ζωu*Y = C + B`, obtained by denoting `X = [W_{ζ}]_{1}` and `Y = [W_{ζω}]_{1}` in step 12 of the verifier algorithm. This system has for solutions `[W_{ζ}]_{1} = X = (-u)*Y + A` and `[W_{ζω}]_{1} = Y = 1/(ζu(ω - 1)) * (C + B - ζA)`.
6. `P` submits the proof to the verifier, replacing `t(ζ)` by `T`, and `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}` by the values computed in step 5.
7. The verifier computes `e(-A, [x]_{2}) * e(B, [1]_{2}) == 1` and accepts the proof.

The core of the attack is the ability of the prover to change `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}` after `u` has been sampled. This attack allows the prover to arbitrarily change the public inputs or any polynomial commitments and still submit a valid proof. In the case of Linea, this could be used by the prover to modify the state root and steal all the assets on the rollup.

Consider computing `u` as the hash of the transcript following the PLONK paper.
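As a rough illustration of the recommendation (a Python sketch using SHA-256 as a stand-in for the verifier's actual hash-to-field function; all names and inputs are illustrative), deriving `u` from a transcript that includes `[W_{ζ}]_{1}` and `[W_{ζω}]_{1}` means any change to either opening proof changes `u`:

```python
import hashlib

# BN254 scalar field modulus
R_MOD = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def hash_to_field(*elements: bytes) -> int:
    """Toy hash-to-field: SHA-256 of the concatenated transcript, reduced mod r.
    The real verifier uses its own hash-to-field; SHA-256 is only a stand-in."""
    h = hashlib.sha256()
    for e in elements:
        h.update(e)
    return int.from_bytes(h.digest(), "big") % R_MOD

def derive_u(transcript_so_far: bytes, w_zeta: bytes, w_zeta_omega: bytes) -> int:
    # u must bind the opening proofs [W_zeta]_1 and [W_zeta_omega]_1:
    # a prover who changes either commitment obtains a different u.
    return hash_to_field(transcript_so_far, w_zeta, w_zeta_omega)

transcript = b"...public inputs and earlier commitments..."
u1 = derive_u(transcript, b"W_zeta", b"W_zeta_omega")
u2 = derive_u(transcript, b"W_zeta_forged", b"W_zeta_omega")
assert u1 != u2  # tampering with [W_zeta]_1 changes u
```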

**Update:** Resolved in pull request #30.

## Medium Severity

### Missing Commitment in Challenge Computation

When following the non-interactive version of PLONK using Fiat-Shamir, the challenges are assumed to be derived from the transcript, defined as "the concatenation of the common preprocessed input, and public input, and the proof elements written by the prover up to a certain point in time". In the code, the following parameters are used to compute the `γ` and `β` challenges:

- The public inputs
- The commitments to the wire polynomials (`[a]_{1}`, `[b]_{1}`, `[c]_{1}`, `[P_{i}]_{1}`)
- The commitments to the selector polynomials (`[Sσ_{1}]_{1}`, `[Sσ_{2}]_{1}`, `[Sσ_{3}]_{1}`, `[Ql]_{1}`, `[Qr]_{1}`, `[Qm]_{1}`, `[Qo]_{1}`, `[Qk]_{1}`)

However, the commitments to the selector polynomials corresponding to the custom gate (`[Qci]_{1}`) are missing when deriving these challenges. While they are eventually included in the derivation of the `v` challenge, the other challenges (namely `γ`, `β`, `α`, `ζ` and `u`) do not depend on them.

Since they are hardcoded anyway, they cannot be used as a free parameter in frozen-heart-style attacks. Nevertheless, this means the system is out of compliance with the security analysis, which undermines the security guarantees.

Consider including all relevant parameters when deriving these challenges. In particular, consider adding the `[Qc_{i}]_{1}` commitments, the group parameters (generators, at least `[1]_{1}`, `[x]_{1}` and `[1]_{2}`, modulus and domain size), as well as the number of public inputs and the circuit size. Similarly, the Fiat-Shamir done inside of the circuit should also include all relevant parameters.

**Update:** Resolved in pull request #25. The Linea team stated:

The modifications are that now gamma depends on the Qcp (instead of the commitments to the wires associated with it). Beta is unchanged, and alpha depends on the commitments to the wires associated with the custom gate (it is at this point that the verifier should have sent alpha if the protocol were interactive).

### Deviations From So-Far-Digest Model

We are aware of only two provably secure transformations of a constant round argument of knowledge from its interactive to its non-interactive version. Those transformations are, in the terms used in this article, the so-far-transcript (i.e., the Fiat-Shamir transformation in its classical form) and the so-far-digest transformations.

The PLONK specification chooses pseudorandom challenges using the so-far-transcript method, where each challenge is generated from the content of the entire transcript so far. The Linea verifier uses the newly introduced so-far-digest method, where each challenge is generated only from the previous challenge and subsequent messages. However, there are several deviations:

- `compute_gamma_kzg`, corresponding to `v` in the PLONK specification, should strictly follow the so-far-digest transformation. In particular, it should depend on `ζ` and all the PLONK prover messages that are output in round 4 (i.e., the respective opening evaluations). Currently, it has several redundant dependencies but does not depend on `z(ζω)`.
- The `random` variable used in the KZG opening, corresponding to `u` in the PLONK specification, should also be computed as any challenge defined by the so-far-digest model. In particular, it should depend on challenge `v` (not `α`) and on the two KZG commitments that are output by the PLONK prover in round 5. Moreover, the use of `keccak256` for computing this challenge should be substituted by the same hash-to-field function as in the rest of the so-far-digest model.

Note that the definition of a challenge in the so-far-transcript and so-far-digest models is that of randomness computed by a non-interactive prover/verifier in lieu of the public coin randomness computed by an interactive verifier, so `u` in the PLONK specification is also a challenge.

- `ζ` depends on the non-reduced version of `α`, but in accordance with the so-far-digest transformation, we recommend making the dependency directly on `α`. The same holds for almost all other challenges.

In summary, in order to fall under the security guarantees of one of the two known secure interactive-to-non-interactive transformations for argument systems, once the so-far-digest transformation is started, it should be followed strictly, without interleaving it with any other transformation. Deviating from it may lead to transformations without established security proofs and may put the codebase at risk of attacks on the system's soundness.
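For illustration only, the two transformations can be contrasted with a minimal Python sketch (SHA-256 stands in for a proper hash-to-field function, and the round messages are placeholders): in the so-far-transcript model each challenge hashes everything so far, while in the so-far-digest model it hashes only the previous challenge and the new messages. In both cases, every challenge transitively depends on all earlier messages.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Stand-in hash; a real implementation uses a proper hash-to-field."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

# Prover messages per round (illustrative placeholders).
public_inputs = b"pi"
round_msgs = [b"round1", b"round2", b"round3"]

# So-far-transcript (classical Fiat-Shamir): each challenge hashes
# the entire transcript accumulated so far.
transcript = public_inputs
transcript_challenges = []
for m in round_msgs:
    transcript += m
    transcript_challenges.append(h(transcript))

# So-far-digest: each challenge hashes only the previous challenge
# and the new round messages.
digest_challenges = []
prev = h(public_inputs)
for m in round_msgs:
    prev = h(prev, m)
    digest_challenges.append(prev)

# Under both models, changing an early message changes all later challenges.
```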

Finally, note that we are in contact with the authors of this paper with comments and feedback regarding the security proof of the so-far-digest transformation.

**Update:** Partially resolved in pull request #30, except for `ζ`, which still depends on the non-reduced version of `α`. The issue regarding the computation of `random` in the second point was spun off as a separate issue during the fix review, as the vulnerability was found to be more serious than initially thought.

### Missing Validations

The `Verify` function performs sanity checks on the inputs, but some validations are still missing. In particular, it does not confirm the number of public inputs, or whether the commitments correspond to valid elliptic curve points.

In the interest of reducing the attack surface, consider including these validations. Note that checking that the commitments are valid elliptic curve points is a requirement of the PLONK specification in order to inherit the security proof.
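To illustrate what such a validation involves (a hypothetical Python sketch, not the verifier's code), a point can be checked against the alt_bn128 curve equation used by the EVM precompiles, y² = x³ + 3 over F_p:

```python
# Minimal sketch of the missing on-curve validation for BN254 (alt_bn128) G1
# points, the curve used by Ethereum's EC precompiles: y^2 = x^3 + 3 over F_p.
P_MOD = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def is_on_curve_g1(x: int, y: int) -> bool:
    """Return True if (x, y) is a valid affine BN254 G1 point (excluding infinity)."""
    if not (0 <= x < P_MOD and 0 <= y < P_MOD):
        return False
    return (y * y - (x * x * x + 3)) % P_MOD == 0

# The G1 generator (1, 2) is on the curve; a corrupted coordinate is not.
assert is_on_curve_g1(1, 2)
assert not is_on_curve_g1(1, 3)
```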

**Update:** Partially resolved in pull request #26. A check to explicitly restrict the number of public inputs was added. Regarding the elliptic curve checks, the Linea team stated:

It is implicitly tested when the EC precompiles are called - if a point is not on the curve, the precompile will revert. This avoids doing checks each time we verify a proof (there are a lot of points to verify), and the proofs that are received are likely to be correctly formatted so we believe the trade-off is better as it is now.

## Low Severity

### Incomplete Specialization of Custom Gate Support

The codebase is designed to support an arbitrary number of selector gates that are nevertheless fixed at compile time.

However, the `sum_pi_commit` function is specialized for the current configuration of one selector gate, which is inconsistent with the rest of the codebase. Moreover, the specialization is incomplete. In particular:

- It only hashes the first commitment and ignores any subsequent commitments.
- It unnecessarily adds the result to an unspecified value.
- It unnecessarily increases a pointer that is never reused.
- It is still named `sum` even though it does not return a summation.

Similarly, the `compute_gamma_kzg` function assumes there is only one custom gate, but then allows for multiple corresponding openings.

Consider generalizing the functions to support multiple selector gates. Alternatively, consider explicitly enforcing the single gate requirement and simplifying the functions accordingly.

**Update:** Resolved in pull request #27.

### Inconsistent Error Handling

The `pow`, `compute_gamma_kzg` and `batch_verify_multi_points` functions all use the `eq` instruction to check the error flag. To be consistent with the rest of the codebase, consider using the `iszero` instruction instead.

In addition, the final KZG check combines protocol logic and error handling. In particular, it validates the previously saved (quotient polynomial evaluation) status, the success of the precompile call, and the result of the pairing. To be consistent with the rest of the codebase, consider checking the precompile success flag independently and immediately after the call is made.

**Update:** Partially resolved in pull request #17. The `check_pairing_kzg` function still checks the precompile call success flag in the returned boolean instead of raising an error.

### Inconsistent Memory Management

The codebase adopts a convention where the memory at the free memory pointer is reserved for a shared state, and the subsequent block of memory is considered free. When a function wants to reserve additional memory, it explicitly passes a new free memory pointer (e.g., here) to the called function.

However, this violates Solidity's memory model, where reserved memory is "allocated" by updating the free memory pointer. Moreover, the `fold_state` function directly uses a buffer that was constructed in the `compute_gamma_kzg` function, even though it appears to be in "free" memory under both conventions.

To improve readability and limit potential inconsistencies, consider allocating memory whenever it is reserved. Additionally, consider explicitly documenting that the code is not memory-safe.

**Update:** Resolved in pull request #20.

### Misleading Documentation

There are several instances where the codebase's documentation could be improved:

- This comment references the Lagrange polynomial at index 1 instead of index 0.
- The explanation for the "beta" and "alpha" string offsets incorrectly references "gamma". Note that the "beta" comment also has the wrong offset.
- The polynomial openings are described as "wire values at zeta". It would be more accurate to describe them as "evaluations of wire polynomials at zeta".
- The `b0` and `b0 ^ b1` comments incorrectly describe them as occupying 64 bytes.
- The comment describing the `fold_h` function uses `m` instead of `n` as the group size.
- The `verify_quotient_poly_eval_at_zeta` function contains a commented-out instruction that does not match the rest of the code.
- This comment uses "mPtr[32:]" instead of "mPtr[:32]".
- This comment is unclear and incorrectly suggests that "b2" is 16 bytes.
- Some comments could use additional context. For example, this comment could indicate which step in the paper it corresponds to (step 8) and what it is computing (the `t(z)` expression).

Consider updating these comments to improve the clarity of the codebase.

**Update:** Resolved in pull request #16.

### Missing Docstrings

The code under review contains several functions with minimal or missing docstrings, which limits readability. This is compounded by the fact that many functions intermingle protocol logic with low-level memory manipulation in assembly.

Consider thoroughly documenting all functions (and their parameters). When writing docstrings, consider following the Ethereum Natural Specification Format (NatSpec).

**Update:** Resolved in pull request #18.

### Implementation Deviates From Paper

Some operations are done differently in the code compared to what is described in the paper:

- The permutation polynomial is defined using the Lagrange polynomial at index 0, which means the interpolation starts at the previous position when compared to the specification.
- Some signs have been inverted, for example when evaluating the quotient polynomial.
- Some computations are done differently, such as when computing the partial opening commitment `[D]_{1}`, where the multiplication by `v` and the addition of `u[z]_{1}` are done later.
- The components of the quotient polynomial are scaled by `ζ^{n+2}` instead of `ζ^{n}`.

Consider closely following the paper's implementation to reduce the likelihood of mistakes and improve the clarity of the codebase. When this is not feasible, consider clearly documenting the differences between the paper and the codebase.

**Update:** Acknowledged, not resolved. The first point was found to stem from the code implementing an older version of PLONK than the audit team was looking at, which in itself is not a security concern. The Linea team expressed that the other points were not resolved as, while the code differs from the specifications, it is consistent.

## Notes & Additional Information

### Complicated Bit Manipulations

The penultimate step of the `hash_fr` function is to retrieve the most significant 16 bytes of the 32-byte word `b2`. This is achieved by zeroing out the preceding 16 bytes one at a time, and then reading 32 bytes across the natural word boundary. Instead, the same effect can be achieved directly by right-shifting `b2` by 128 bits. Consider implementing this simplification.
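As a loose model of the simplification (Python standing in for the EVM assembly, with the masking route only approximating the original zero-and-read trick), both approaches yield the same high 16 bytes:

```python
# The high 16 bytes of a 32-byte word can be read with a single shift rather
# than by zeroing the low bytes and reading across a word boundary.
MASK128 = (1 << 128) - 1

def high_16_bytes_shift(b2: int) -> int:
    """Equivalent of `shr(128, b2)` in EVM assembly."""
    return b2 >> 128

def high_16_bytes_masking(b2: int) -> int:
    """Roundabout route: clear the low 128 bits, then renormalize."""
    return (b2 & ~MASK128) >> 128

b2 = 0x11223344556677889900AABBCCDDEEFF_00112233445566778899AABBCCDDEEFF
assert high_16_bytes_shift(b2) == high_16_bytes_masking(b2)
```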

**Update:** Resolved in pull request #29.

### Constants Not Using UPPER_CASE Format

Throughout the codebase, constants do not use the `UPPER_CASE` format. Consider following this convention, as recommended by the Solidity Style Guide.

**Update:** Resolved in pull request #24.

### Magic Numbers

The `batch_verify_multi_points` function hardcodes the group 1 generator with no explanation. To improve readability, consider introducing named constants to describe this value.

Similarly, consider replacing the hardcoded proof size offset with the positional constant that it represents.

Lastly, the size of the `ecPairing` buffer is assumed to be `0x180`, which is not apparent through contextual analysis because the buffer is constructed in the calling function. Consider passing the length to the `check_pairing_kzg` function.

**Update:** Partially resolved in pull request #23. Constants for the group 1 generator were introduced, but the proof size offset and the size of the elliptic curve pairing buffer are still hardcoded.

### Naming Suggestions

The following variables and constants could benefit from more descriptive names:

- The `n` parameter of the `batch_compute_lagranges_at_z` function is the number of public inputs, which conflicts with the domain size.
- The `h` commitments should include `com` or `commitment` in their name to be consistent with the other constants.
- Several variables could drop the "api" decorator, which is a reference to how they were constructed rather than what they represent.

Consider renaming these values accordingly.

**Update:** Resolved in pull request #22.

### Outdated Specification

The verifier implements an outdated version of the PLONK specification. If possible, consider implementing the latest version, which requires a smaller proof.

**Update:** Acknowledged, not resolved. The Linea team stated:

There is a release of Linea coming soon, we will potentially update our PLONK prover and verifier afterwards.

### Potentially Unnecessary Computation

There are multiple places where the verifier continues processing once the outcome is known:

- Several functions are called after the `verify_quotient_poly_eval_at_zeta` function saves its result, regardless of the outcome, even though a failure will always eventually revert.
- The `check_inputs_size` and `check_proof_openings_size` functions construct a combined error flag for each input, even though any failure ultimately results in a revert.

In the interest of code simplicity and limiting unnecessary computation, consider reverting as soon as a failure is detected.

**Update:** Partially resolved in pull request #28. An early revert was added when validating the inputs and openings. The Linea team stated:

For `verify_quotient_poly_eval_at_zeta`, we left the old version so that the verifier executes fully and returns either true or false according to the success.

### Reuse Cached Value

Several functions in the `fold_state` function recompute the pointer to free memory, such as here. Consider reusing the `mPtrOffset` variable.

**Update:** Resolved in pull request #20.

### Typographical Errors

The following typographical errors were identified in the codebase:

- "comming" should be "coming".
- "32byes" should be "32 bytes".
- "in" should be "is".
- "too" should be "to".

Consider resolving these typographical errors, as well as running an automated spelling/grammar checker on the codebase and correcting any identified errors.

**Update:** Resolved in pull request #21.

## Security Considerations in PLONK

### SRS and Transcript in Practice

As part of the Fiat-Shamir (FS) transformation from the interactive to the non-interactive version of a protocol, the full transcript should include all public parameters. In a S(N)ARK, the set of public parameters includes the structured reference string (i.e., the SRS).

In the following, we explain why it is necessary and sufficient to include just a small part of the SRS in the full transcript for the FS transformation.

Let O(n) be the size of the SNARK witness.

On the one hand, it is not feasible to include the full SRS as part of the full transcript to be later hashed, as this would constitute a computational cost of up to O(n) for both the non-interactive prover and the non-interactive verifier. Indeed, O(n) largely outweighs the upper bound of O(log n) as the desired computational cost for a non-interactive SNARK verifier.

On the other hand, one certainly needs to add at least some SRS points to the full transcript. We recommend adding [1]_{1}, [x]_{1} and [1]_{2}. Given these three elements, there exists a deterministic procedure that verifies the well-formedness of any other SRS element due to the bilinearity of the pairing. Moreover, this verification is valid only if the SRS elements match [1]_{1}, [x]_{1} and [1]_{2}. In short, the SRS elements are uniquely determined by fixing only 3 elements: [1]_{1}, [x]_{1} and [1]_{2}. The mathematical motivation is included below.

**Step 1:** Given [1]_{1}, [x]_{1} and [1]_{2}, one can check that a value y is indeed equal to [x]_{2} by checking the pairing equality e([x]_{1}, [1]_{2}) = e([1]_{1}, y). This equality holds if and only if y = [x]_{2}.

**Step 2 and onwards:** Given [1]_{2}, [x]_{2} and a previously verified [x^{i-1}]_{1}, we want to check that a value y is indeed equal to [x^{i}]_{1} for i > 1. This can be done by checking the pairing equality e(y, [1]_{2}) = e([x^{i-1}]_{1}, [x]_{2}), which (due to the non-degeneracy of the pairing) holds if and only if y = [x^{i}]_{1}.

**Observation 1:** In order to prove that e([x]_{1}, [1]_{2}) = e([1]_{1}, y) holds if and only if y = [x]_{2}, the harder part is the direction e([x]_{1}, [1]_{2}) = e([1]_{1}, y) (*) implies y = [x]_{2}. Assuming (*), we also know that e([x]_{1}, [1]_{2}) = e([1]_{1}, [x]_{2}) (**) holds. Dividing the two equations (*) and (**), we obtain e([1]_{1}, y - [x]_{2}) = 1. In turn, by the definition of a pairing (the non-degeneracy condition, see Pairing-Based Cryptography), this holds if and only if y = [x]_{2}.

**Observation 2:** Analogously to Observation 1, one can use the non-degeneracy of the pairing to motivate the reasoning in Step 2 and onwards.

**Observation 3:** Analogously to **Step 2 and onwards**, we have Step 2' and onwards: given [1]_{1}, [x]_{1} and a previously verified [x^{i-1}]_{2}, we now want to check that a value y is indeed equal to [x^{i}]_{2} for i > 1. This can be done by checking the pairing equality e([1]_{1}, y) = e([x]_{1}, [x^{i-1}]_{2}), which, with analogous reasoning as described in the previous observations, holds if and only if y = [x^{i}]_{2}.

In conclusion, using only [1]_{1}, [x]_{1} and [1]_{2}, one can check whether any other element of a PLONK SRS is well-formed. Hence, we have explained why including only [1]_{1}, [x]_{1} and [1]_{2} in the full transcript is both necessary and sufficient.
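The chain of checks above can be illustrated with a toy model (Python, representing group elements by their discrete logarithms so that the pairing becomes multiplication modulo a small prime; this only demonstrates the algebra, not real BN254 arithmetic):

```python
# Toy model of the SRS well-formedness checks: each group element is
# represented by its discrete log modulo a small prime R, so the pairing
# e([a]_1, [b]_2) becomes multiplication a*b mod R. Real checks use actual
# pairings on BN254.
R = 101  # toy scalar field modulus
x = 7    # toy trapdoor; in practice unknown, only its powers are published

def e(a: int, b: int) -> int:
    """Toy pairing on discrete logs: bilinearity becomes multiplication mod R."""
    return (a * b) % R

one_g1, x_g1, one_g2 = 1, x, 1   # [1]_1, [x]_1, [1]_2 (as discrete logs)

# Step 1: verify a candidate y for [x]_2.
def check_x_g2(y: int) -> bool:
    return e(x_g1, one_g2) == e(one_g1, y)

# Step 2 and onwards: verify a candidate y for [x^i]_1, given a previously
# verified [x^{i-1}]_1 and [x]_2.
def check_power_g1(y: int, x_im1_g1: int, x_g2: int) -> bool:
    return e(y, one_g2) == e(x_im1_g1, x_g2)

x_g2 = x  # honest [x]_2
assert check_x_g2(x_g2) and not check_x_g2(x_g2 + 1)

# Chain-verify the honest powers [x^2]_1, [x^3]_1, ...
prev = x_g1
for i in range(2, 6):
    y = pow(x, i, R)
    assert check_power_g1(y, prev, x_g2)
    prev = y
```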

### Batching KZG Proofs for Two Evaluation Points

The following sections address the missing security proof on page 13 of the PLONK paper, and how to implement the required randomness in practice.

**Section 1: Security Proof**

**Lemma 1:** *Let a, a', b, b' ∈ F be some given, fixed scalars, where F is a finite field of size O(2^{λ}) and λ is the security parameter. Let r' ∈ F be a scalar chosen uniformly at random in F and, thus, independently from a, a', b, b'. Then, it holds that a = a' and b = b' if and only if, up to a probability negligible in λ, a + r'·b = a' + r'·b'.*

*Proof:*

- "⇒": The implication from left to right holds trivially and with a probability of 1.
- "⇐": Assume a + r'·b = a' + r'·b' or, equivalently, (a − a') + r'·(b − b') = 0. For a, a', b, b' ∈ F fixed and r' chosen uniformly at random in F, applying the Schwartz-Zippel Lemma implies, with overwhelming probability in λ, that a = a' and b = b'. This concludes the proof.
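Lemma 1 can be illustrated numerically (a Python sketch over a toy field; a real instantiation uses a field of size around 2^{254}):

```python
# Numerical illustration of Lemma 1: for fixed (a, b) != (a2, b2), the batched
# equality a + r*b == a2 + r*b2 holds for at most one r in F (Schwartz-Zippel
# on a degree-1 polynomial), so a uniformly random r exposes a mismatch with
# probability 1 - 1/|F|.
import random

P = 2**31 - 1  # toy prime field; far smaller than a real ~2^254 field

def batched_check(a: int, b: int, a2: int, b2: int, r: int) -> bool:
    return (a + r * b) % P == (a2 + r * b2) % P

rng = random.Random(0)
a, b = 3, 5
a2, b2 = 4, 6  # a mismatch: a != a2 and b != b2

# Honest case: the batched check always passes.
assert all(batched_check(a, b, a, b, rng.randrange(P)) for _ in range(100))

# Dishonest case: only the single bad r solving (a - a2) + r*(b - b2) = 0
# passes, so random r almost surely fails the check.
failures = sum(not batched_check(a, b, a2, b2, rng.randrange(P)) for _ in range(100))
assert failures >= 99
```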

**Instantiation 1:** *Next, we apply Lemma 1 for the special case of the scheme describing batching KZG proofs for two evaluation points (page 13 in PLONK). In order to do that, we instantiate:*

- a = e(F + z·W, [1]_{2}), where F = ∑_{i=1}^{t1} γ^{i-1}·cm_{i} − [∑_{i=1}^{t1} γ^{i-1}·s_{i}]_{1}, using the PLONK paper (pages 11-13) notation for γ, cm_{i}, s_{i} and t1.
- b = e(W, [x]_{2}).
- a' = e(F' + z'·W', [1]_{2}), where F' = ∑_{i=1}^{t2} γ'^{i-1}·cm'_{i} − [∑_{i=1}^{t2} γ'^{i-1}·s'_{i}]_{1}.
- b' = e(W', [x]_{2}).

Hence, we can conclude that, given the security proof for the KZG scheme for one evaluation point (i.e., the KZG scheme described in Section 3.1, pages 11-12 of PLONK), and using Instantiation 1, the security of the KZG scheme described on page 13 of PLONK holds.

Note that in order to conclude the above security proof, we made explicit use of the Schwartz-Zippel Lemma in the proof of Lemma 1. If not all preconditions are met, such as if the ratio of the degree of the polynomial to the size of the field F is not negligible, or if any of the coefficients of the polynomial to which we apply the Schwartz-Zippel Lemma are chosen after r' is chosen, then the security proof may no longer hold.

**Section 2: Concrete Implementation Details**

To protect itself from a malicious KZG prover, a KZG verifier has two options:

- In an ideal world, the verifier would choose r' uniformly at random from F. However, that is not efficient to implement in practice.
- In practice, and in support of an efficient implementation, the KZG verifier simply needs to compute the value r' as the hash of the entire transcript and all the public values used in the KZG scheme up to that point. This is equivalent to the Fiat-Shamir transformation for a multi-round protocol, with the last round in this case being the computation by the verifier of some publicly used randomness and the final KZG pairing check. From a theoretical perspective, this is secure in the random oracle model.

Note that when extrapolated to PLONK, for example, everything we have seen above points to the fact that a PLONK verifier, in step 12 of the protocol, should compute the challenge u as the hash of the entire transcript of the interaction so far. Only then does one fall under the coverage of a known security proof.

### Open Questions and Remarks Regarding the Security Proof of the So-Far-Digest Model

Non-interactivity is a desirable and sometimes necessary property of blockchain protocols. In a non-interactive protocol, each party completes its part only via local computation and the outcome of that computation is eventually made public. Thus, there is only one round of communication from each party towards all the others. Certain secure and interactive cryptographic protocols (e.g., public-coin protocols with a constant number of communication rounds) can be converted into non-interactive protocols while preserving their security properties via the Fiat-Shamir transformation. Essentially, the public coin (i.e., the challenges) in the non-interactive version can be computed by any party as the hash of the entire transcript of the communication up to that point. The transcript includes all the public inputs and public parameters necessary during the interaction, as mentioned before.

We noticed that the Linea verifiers' implementations do not use the standard Fiat-Shamir transformation. We have traced the actual transformation used to this recent article, where it is detailed in the ninth section of the appendix (i.e., Appendix I). The article introduces a more efficient alternative for computing the Fiat-Shamir challenges, which the authors call the so-far-digest model. We have reviewed the proposed security proof (i.e., the proof of Theorem 4, Appendix I) in detail for this model and have some comments, suggestions and questions about it. We have previously attempted to ask the authors directly and have not received an answer, but we believe it would be valuable to have them addressed.

- With respect to attacker B, how is the starting value h_{0} defined, and how are the values h_{1}, h_{2}, ... defined?
- Do you implicitly assume/restrict the attacker B to only submit values in the set {(m_{1}, h_{0}), (m_{2}, h_{1}), (m_{3}, h_{2}), ...} as RO-queries to their respective oracles? If yes, does your security proof work without this assumption?
- If we think about the classical Fiat-Shamir transformation and how the first challenge is obtained, namely as the hash of all the public inputs (without including any message from the prover or any challenge so far, as there is none yet), how is this modelled for B? Can the security proof of *Theorem 4* work in all its generality without taking this point into account?
- Regarding the sentence "**Bad Event**. is when B sends a RO-query on (m, h) to H_{FS}, for m that has not been the response for any round [...]", do you actually mean "any round so far", since you also write "later during the i-th round it sends m_{i} = m as its message"?
- Is it implicitly true that every h used by B (including but not limited to h_{0}, h_{1}, h_{2}, ...) is the output of a RO-query, and hence is uniformly distributed? Is this why h = h_{i} for some round i holds with negligible probability for the right definition of RO using the security parameter?
- Is it possible that you are missing some steps or some description, either in the definition of B or in the use of B by A? Otherwise, why would it conflict with the simulation that the pair (m, h) is sent twice to the RO-query?
- The output of B does not seem to be explicitly defined. Is it an alleged proof? When does B win (is it when the output proof is an accepted proof)? Does the same hold for A?

## Conclusion

Three medium-severity issues were discovered among various lower-severity issues. While reviewing the fixes for these, one of the issues previously identified as a medium was found to be exploitable in practice by allowing proof forgery and was updated to a critical severity. Since the verifier is a complex but critical part of the rollup architecture, it was also recommended to add more extensive documentation to the codebase, notably with regard to the differences between the implementation and the original PLONK specification. We wish to extend our gratitude to the Linea team for their quick and helpful answers to our questions.