noir-lang/noir

Usage of BigInts in the stdlib is inefficient

Problem

This is an example of how bigints are currently used in the stdlib:

struct Secpk1Fq {
    array: [u8; 32],
}


impl Add for Secpk1Fq {
    fn add(self, other: Secpk1Fq) -> Secpk1Fq {
        let a = BigInt::from_le_bytes(self.array.as_slice(), secpk1_fq);
        let b = BigInt::from_le_bytes(other.array.as_slice(), secpk1_fq);
        Secpk1Fq {
            array: a.bigint_add(b).to_le_bytes()
        }
    }
}

Every operation creates fresh bigints from the underlying bytes and serializes the result back into the array. When operations are chained, this inserts a redundant serialize/deserialize round-trip between every pair of operations.
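For example, chaining two additions with the current byte-backed struct round-trips through bytes at every step (a hypothetical sketch based on the `Add` impl above; `chained` is not a real stdlib function):

```rust
fn chained(a: Secpk1Fq, b: Secpk1Fq, c: Secpk1Fq) -> Secpk1Fq {
    // (a + b): two from_le_bytes calls plus one to_le_bytes call.
    // (.. + c): two more from_le_bytes calls plus another to_le_bytes,
    // even though the intermediate sum was serialized only to be
    // immediately deserialized again.
    a + b + c
}
```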

Happy Case

Ideally, usage would look like this:

struct Secpk1Fq {
    storage: BigInt,
}


impl Add for Secpk1Fq {
    fn add(self, other: Secpk1Fq) -> Secpk1Fq {
        Secpk1Fq {
            storage: self.storage.bigint_add(other.storage)
        }
    }
}

impl Secpk1Fq {
    fn to_le_bytes(self) -> [u8; 32] {
        self.storage.to_le_bytes()
    }
}

Here the struct is backed directly by the BigInt (i.e. its internal id), so chained operations stay in bigint form and avoid the unnecessary conversions.
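With the BigInt-backed struct, the same chain of additions operates on the stored bigints throughout and serializes once at the end (again a hypothetical sketch; `chained` is illustrative only):

```rust
fn chained(a: Secpk1Fq, b: Secpk1Fq, c: Secpk1Fq) -> [u8; 32] {
    // Each addition passes the underlying BigInt along; no intermediate
    // byte round-trips occur. Bytes are produced exactly once, here.
    (a + b + c).to_le_bytes()
}
```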

Project Impact

None

Impact Context

No response

Workaround

None

Workaround Description

No response

Additional Context

No response

Would you like to submit a PR for this Issue?

None

Support Needs

No response