The ZKProof Community

genSTARK: a JavaScript zk-STARK generation framework

I’ve put together a JavaScript library that can help people generate STARK-based proofs of computation. The goal is to take care of as much boilerplate code as possible, and make creating new STARKs simple and “easy.”

The library is in this GitHub repo and it is also published on NPM. It is largely based on Vitalik Buterin’s zk-STARK/MiMC tutorial - but it is highly generalized. For example, defining a MiMC STARK takes just 14 lines of code:

const mimcStark = new Stark({
    field: new PrimeField(2n ** 256n - 351n * 2n ** 32n + 1n),
    tExpressions: {
        'n0': 'r0^3 + k0'
    },
    tConstraints: [
        'n0 - (r0^3 + k0)'
    ],
    tConstraintDegree: 3,
    constants: [{
        values  : roundConstants,
        pattern : 'repeat'
    }]
});
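To make the transition expression concrete, here is a plain-JavaScript sketch (not part of the genSTARK API; `mimcSequence` is a hypothetical name) of the computation that `'n0': 'r0^3 + k0'` encodes: each step cubes the register and adds the current round constant, modulo the field prime, with the constants cycling per the `'repeat'` pattern.

```javascript
// Hypothetical sketch of the MiMC transition function, assuming the same
// prime field as in the STARK definition above.
const modulus = 2n ** 256n - 351n * 2n ** 32n + 1n;

function mimcSequence(seed, steps, roundConstants) {
    let r0 = seed % modulus;
    // one transition per step: r0 <- r0^3 + k0 (mod p)
    for (let i = 0; i < steps - 1; i++) {
        const k0 = roundConstants[i % roundConstants.length]; // 'repeat' pattern
        r0 = (r0 ** 3n + k0) % modulus;
    }
    return r0;
}
```

The STARK then proves that this sequence of transitions was carried out correctly, without the verifier re-running all the steps.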

Defining a STARK to prove Fibonacci computation is only 12 lines:

const fibStark = new Stark({
    field: new PrimeField(2n**32n - 3n * 2n**25n + 1n),
    tExpressions: {
        'n0': 'r0 + r1',
        'n1': 'r1 + (r0 + r1)'
    },
    tConstraints: [
        'n0 - (r0 + r1)',
        'n1 - (r1 + r0 + r1)'
    ],
    tConstraintDegree: 1
});

(Here, we need to set up 2 registers because the framework can only relate 2 consecutive states, while validating a Fibonacci sequence requires 3 consecutive terms.)
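The two-register trick can be seen in a plain-JavaScript sketch (`fibTrace` is a hypothetical name, not genSTARK API): each step maps `[r0, r1]` to `[r0 + r1, r1 + (r0 + r1)]`, so the state advances the Fibonacci sequence by two terms per step and every transition only needs the previous state.

```javascript
// Hypothetical sketch of the two-register Fibonacci trace: after step k,
// r0 holds fib(2k+1) and r1 holds fib(2k+2) (with fib(1) = fib(2) = 1).
function fibTrace(steps) {
    let r0 = 1n, r1 = 1n;              // fib(1), fib(2)
    const trace = [[r0, r1]];
    for (let i = 1; i < steps; i++) {
        const n0 = r0 + r1;            // next Fibonacci term
        const n1 = r1 + n0;            // the term after that
        [r0, r1] = [n0, n1];
        trace.push([r0, r1]);
    }
    return trace;
}
```

For example, `fibTrace(4)` ends at the state `[13n, 21n]`, i.e. fib(7) and fib(8). Field reduction is omitted here for brevity; in the actual STARK all arithmetic happens in the prime field.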

Once you’ve defined a STARK, you can use it to make proofs and verify computations like so:

const steps = 2**13;
const result = 95224774355499767951968048714566316597785297695903697235130434363122555476056n;
const assertions = [
    { step: 0, register: 0, value: inputs[0] },         // value at first step is equal to the input
    { step: steps - 1, register: 0, value: result }     // value at last step is equal to the result
];

let proof = mimcStark.prove(assertions, steps, inputs);     // create a proof
let isValid = mimcStark.verify(assertions, proof, steps);   // verify the proof
console.log(isValid); // true

The project is in its infancy right now, and there are still many things to fix and optimize (see the issues in the repo). So, I would appreciate any feedback, help, and support.


This is a really cool work, and I appreciate how well written the documentation is.

A few questions:

  1. What are your next steps (are you thinking of optimizing code further?)
  2. What are the known security issues that exist in the code (hash function security, fiat-shamir instantiation, …)?
  3. Can you briefly share the main lessons you learned from working through this?

Thanks for sharing!

Thank you!

I’ve built this primarily to learn how STARKs work - so, for me personally, there have been a lot of lessons learned 🙂. One big takeaway is that doing STARKs “at scale” will probably be as much of a hardware play as a software play (you’ll need massive parallelization).

There is a ton of technical optimization that can be done (moving computationally-heavy stuff to WASM, parallelizing etc.) - but it might be a bit premature to work on that. The things I’d like to do next are:

  • Define a STARK that proves the preimage of a hash function (I have issue 1 open for that).
  • Define a STARK that proves membership of a value in a Merkle tree (issue 6).

Putting these together will help figure out if the library is missing any high-level features. And once these are done, technical optimization would be a natural next step. Having said that, there is some algorithmic optimization that can be done right now. For example, simplifying how linear combinations are done would make proof size independent of the number of constraints (issue 5 and issue 7).

The main security issue (aside from potential bugs etc.) is that I’m not sure how to estimate the security level of a STARK with given parameters. I have an issue open for this (issue 3) and hopefully someone who does cryptography for a living can give me some guidance.


Thanks again for sharing.

Can you elaborate a bit more on this? It seems to me that one of the advantages of STARKs is that you have an inherent way of parallelizing recursive (or uniform) computations, so for those you would not need the hardware. Do you mean, then, the cases where (and if) you use STARKs to prove statements that are not uniform?

I probably should have been more specific about what I meant by “at scale”. In my mind, it is something close to the ability to prove membership of ~10K values in Merkle trees per second. Right now, STARKs are several orders of magnitude away from this, and getting there will require massive parallelism (100s of cores). This can be done with general-purpose hardware, but my thought was that creating specialized hardware might be significantly cheaper.
