Here are my notes from the workshop. Apologies if I missed or misheard some parts of the discussion.

**Izaak**: A SNARK can include a SNARK verifier; this is called proof composition. Why would you want that? Let me give a motivation.

Say you have a map f: X \rightarrow X. You want to prove \exists n . f^n(x_0) = x_n. If you use pairing-based SNARKs, you need to write down a circuit in advance, so you need a bound on n.

The statement can be proved inductively. It’s true if either x_0 = x_n, or \exists y such that the statement holds for y (i.e., \exists n . f^n(x_0) = y) and f(y) = x_n. So instead of proving the statement directly, we can say that \exists y and a proof \pi such that \mathrm{verify}_\mathrm{me}(y, \pi) = \top and f(y) = x_n, where \mathrm{verify}_\mathrm{me} is the verifier for this very statement.
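The inductive decomposition can be sketched concretely. Below is a minimal toy in Python, with a placeholder step function f (the SNARK machinery is elided; `holds_inductively` plays the role of the recursive statement, with y acting as the witness):

```python
def f(x):
    """Placeholder step function (an assumption for illustration)."""
    return x + 3

def holds_directly(x0, xn, max_n=1000):
    """Brute-force check of the statement: exists n <= max_n with f^n(x0) = xn."""
    x = x0
    for _ in range(max_n + 1):
        if x == xn:
            return True
        x = f(x)
    return False

def holds_inductively(x0, xn, depth):
    """The inductive form: either x0 == xn, or some witness y satisfies
    the same statement (checked recursively) and f(y) == xn."""
    if x0 == xn:
        return True
    if depth == 0:
        return False
    # In the SNARK, y and an inner proof come from the witness; here we
    # brute-force over a small candidate range instead.
    return any(f(y) == xn and holds_inductively(x0, y, depth - 1)
               for y in range(x0, xn))
```

Both formulations agree; the point is that the inductive one only ever refers to one application of f plus a check of "the same statement" on a smaller instance, which is what lets a fixed-size circuit cover unbounded n.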

How do we pass the verification key into the SNARK? We have two ways of binding variables in SNARKs: the instance and the witness. You can try putting the verification key in the instance, but that doesn’t work, because the size of the verification key depends on the size of the instance; a key placed in the instance would therefore depend on its own size.

We need to make sure that the inner proof was verified with the right verification key, not just some key corresponding to zero constraints. So we include \operatorname{hash}(x, y, \mathit{vk}) in the instance. In the circuit, we require that \exists \mathit{vk}, x, y, \pi such that \operatorname{hash}(x, y, \mathit{vk}) = h and \operatorname{verifier}_\mathit{vk}(\operatorname{hash}(x, y, \mathit{vk}), \pi) = \top.
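The hashing trick can be sketched as follows; the hash choice, encoding, and verifier callback are illustrative assumptions, not any real SNARK API:

```python
import hashlib

def bind(x, y, vk):
    """The sole public input: h = hash(x, y, vk)."""
    data = f"{x}|{y}|{vk}".encode()
    return hashlib.sha256(data).hexdigest()

def circuit_check(h, x, y, vk, verifier):
    """What the circuit enforces: the witness (x, y, vk) opens the public
    hash h, and the inner proof verifies under that same vk."""
    if bind(x, y, vk) != h:
        return False
    # `verifier` stands in for verifier_vk applied to the inner instance.
    return verifier(vk, h)
```

Because vk is hashed into h, a prover cannot silently swap in a trivial verification key: the circuit checks the inner proof against exactly the vk that the instance commits to.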

**Eran**: There is a useful abstraction called proof-carrying data (PCD). You have various parties that communicate data to one another. The first party sends message m_1 to the second party, who sends m_2 to the third party, and so on. The messages could be blocks in a blockchain.

We want to show that all messages are compliant with some predicate. We attach a proof to each message, with two parts: one showing that the message is valid, and another showing that the previous message had a valid proof. By verifying the final proof, you can verify compliance of the entire chain. This is implemented in libsnark. We can write the message compliance predicate in R1CS.
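The shape of the abstraction can be sketched with a toy stand-in in which the "proof" is just a hash chain, so it is neither succinct nor zero knowledge and the whole chain must be replayed to verify (a real PCD scheme checks only the final proof). The compliance predicate here is an arbitrary example:

```python
import hashlib

def compliant(prev_msg, msg):
    """Example compliance predicate: each message must extend the previous."""
    return prev_msg is None or msg.startswith(prev_msg)

def prove(prev_msg, prev_proof, msg):
    """Attach a 'proof' binding this message to the previous proof."""
    if not compliant(prev_msg, msg):
        raise ValueError("non-compliant message")
    data = f"{prev_proof}|{msg}".encode()
    return hashlib.sha256(data).hexdigest()

def verify_chain(msgs):
    """Re-derive the proof for the whole chain. With real PCD, only the
    final proof would need to be checked."""
    proof, prev = "", None
    for m in msgs:
        proof = prove(prev, proof, m)
        prev = m
    return proof
```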

A simple example is provenance. We might want to verify that some image file, text file, etc. evolves in a permissible way. We start with authentic images, then require that edits abide by certain rules. Maybe the only permissible modifications are cropping, adjusting brightness, etc.
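As an illustration of such a rule, here is a toy compliance predicate in which the only permitted edit is cropping; images are plain 2-D lists, and everything here is an assumption for the sake of example:

```python
def is_crop(original, edited):
    """True if `edited` is a contiguous subrectangle of `original`."""
    H, W = len(original), len(original[0])
    h, w = len(edited), len(edited[0])
    # Try every possible placement of the smaller rectangle.
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            if all(original[top + i][left + j] == edited[i][j]
                   for i in range(h) for j in range(w)):
                return True
    return False
```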

Another application is scalable SNARKs. If our computation involves many steps, we might run out of memory if we try to create one huge SNARK. Using PCD, we can break up the computation into chunks, where each chunk verifies one clock cycle of a larger computation.
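A sketch of the chunking idea, with the proof side elided: each chunk takes a state in and produces a state out, and in real IVC each chunk's circuit would additionally verify the previous chunk's proof. The step function is a toy assumption.

```python
def step(state):
    """One clock cycle of the larger computation (toy example)."""
    return state * 2 + 1

def run_chunk(state_in, cycles):
    """Execute `cycles` steps; in real IVC this whole chunk is one circuit."""
    s = state_in
    for _ in range(cycles):
        s = step(s)
    return s

def run_chunked(state0, total_cycles, chunk_size):
    """Same result as running all cycles at once, but each chunk is a
    bounded piece of work (and, in PCD, a bounded-size proof)."""
    s, remaining = state0, total_cycles
    while remaining > 0:
        c = min(chunk_size, remaining)
        s = run_chunk(s, c)
        remaining -= c
    return s
```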

A third application is succinctly verifiable blockchains, like CODA.

**Izaak**: Succinct blockchains use the same idea as PCD. The message predicate verifies VRFs etc. to ensure that a block is correct.

# Discussion

**???**: Some computations don’t have a recursive structure.

**Izaak**: The scheme is flexible; you can stick any verifiers inside verifiers, and you can merge proofs.

**???**: Why are you restricting proofs to be non-interactive?

**Eran**: A naive answer is that the person verifying one message can’t look at previous interactions. There might be some possible schemes though.

**Alessandro**: Even with public coin arguments, you have to be there to witness the randomness.

**Alessandro**: Could work with designated verifier proofs.

**???**: If you want to merge multiple proofs, you don’t know how many there will be, so you aggregate them in a tree. Is it more efficient to have one circuit verify several nested proofs?

**Eran**: You can do either. Depends on the constants.

**???**: How does composition affect security?

**Eran**: You can recursively extract the whole initial witness, but parameters deteriorate with recursion depth. You need strong, non-falsifiable assumptions about the efficiency of a knowledge extractor. That said, no attack is known.

**???**: Knowledge extractor may not run in polynomial time. Can I still compose arguments?

**Alessandro**: We can look at it in two ways: attacks or analysis. From the perspective of attacks, a fresh proof looks the same as a recursive one. I wouldn’t know how to attack it. From an analysis perspective, we have a spectrum of knowledge assumptions one can make. The weakest would only allow constant recursion depth; others would allow logarithmic depth; others would allow arbitrary polynomial depth. I would feel comfortable with constant or logarithmic depth.

**???**: It would be nice to get an idea of the proving time, verification time, field sizes, etc.

**Izaak**: The only known, decently efficient construction involves cycles of elliptic curves. You have two pairing-friendly curves: E_1(F_q) with order r, and E_2(F_r) with order q. The problem is that the only known cycles have low embedding degrees: one curve has embedding degree 4 and the other 6. With 768-bit fields, we get a claimed 131 bits of security. You also need high 2-adicity. If you don’t care about 2-adicity, you can generate these cycles on your laptop.
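The defining condition of a cycle (#E_1(F_q) = r and #E_2(F_r) = q, with q and r both prime) can be made concrete with a toy example over tiny fields, verified by brute-force point counting. The curves below are for illustration only; they have nothing like cryptographic size, and no pairing-friendliness or 2-adicity is demanded of them:

```python
def curve_order(p, a, b):
    """Count points on y^2 = x^3 + a*x + b over F_p, including infinity."""
    assert (4 * a**3 + 27 * b**2) % p != 0, "singular curve"
    count = 1  # the point at infinity
    squares = {(y * y) % p for y in range(p)}
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        if rhs == 0:
            count += 1          # one point with y = 0
        elif rhs in squares:
            count += 2          # two points, +/- y
    return count

# Toy 2-cycle: E1: y^2 = x^3 + 2x + 1 over F_5, E2: y^2 = x^3 + x + 1 over F_7
q, r = 5, 7
assert curve_order(q, 2, 1) == r   # #E1(F_5) = 7
assert curve_order(r, 1, 1) == q   # #E2(F_7) = 5
```

Real deployments use the ~768-bit cycles mentioned above; tiny cycles like this one are easy to find precisely because none of the cryptographic requirements are imposed.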

**???**: You’re talking about a cycle of two MNT curves, but we could also use a cycle of curves from different families.

**Alessandro**: Would love to see more cycles of any reasonable length. It’s an open question. We want a minimum embedding degree of 10–12. Can you demonstrate or rule out the existence of such cycles? The MNT curves are the only known type that forms cycles. We can rule out certain combinations. Both curves have to be of prime order.

**Izaak**: And besides cycles, we could also have a graph of related curves. The graph could have a small curve, say with a 2^{384}-size field, for leaf SNARKs, then a cycle of curves with low embedding degree for composition.

**???**: CODA is using recursive SNARKs to get a constant size blockchain, which doesn’t need zero knowledge. What about applications which require zero knowledge?

**Izaak**: We could add private transactions to CODA.

**Eran**: Also image authenticity – you might not want to reveal which regions were cropped out.

**???**: I might have process loops that run an arbitrary number of times. Are there ways to stop a proof from continuing forward without affecting soundness? E.g., proving how many maintenance cycles a car has undergone.

**Eran**: You can always recurse, but with security implications.

**???**: Are there other argument systems that have this property?

**Eran**: We need succinctness, non-interactivity, etc. so any such system would be a SNARK.

**Izaak**: Systems other than pairing based SNARKs?

**Eran**: Composition has only been implemented for QAP-based SNARKs so far, but we could use STARKs, etc. Any argument system with sublinear verification could work.

**Eran**: One could also ask, do we need a SNARK for all of NP? The answer is no: we just need a SNARK for the language consisting of the SNARK verifier plus whatever you put in your compliance predicate.

**???**: Are cycles of curves used only for performance?

**Eran**: Yes.

**???**: Can we compose Fiat-Shamir-based non-interactive arguments?

**Alessandro**: Formally, all SNARKs compose. Concretely, we have to pay a cost for Fiat-Shamir-based arguments.

**???**: You don’t need a SNARK at the top level; it could be non-succinct.

**Eran**: SHARK is an example.

**???**: We often need to verify signatures; does the cycles scheme imply what signature scheme we should be using?

**Eran**: We want the signatures to be SNARK-friendly, i.e. expressible with few constraints over the SNARK’s field.

**Eran**: What should go into the community reference? Proposals for the next iteration?

**???**: What curves to use.

**???**: Collaboration with programming languages people.

**???**: Guidelines for how to combine argument systems without breaking security.