// nomos-node/nomos-da/kzgrs/benches/kzg.rs
use ark_bls12_381::{Bls12_381, Fr};
use ark_poly::univariate::DensePolynomial;
use ark_poly::{EvaluationDomain, GeneralEvaluationDomain};
use ark_poly_commit::kzg10::{UniversalParams, KZG10};
use divan::{black_box, counter::ItemsCount, Bencher};
use once_cell::sync::Lazy;
use rand::RngCore;
use rayon::iter::{IntoParallelIterator, ParallelIterator};
use kzgrs::{common::bytes_to_polynomial_unchecked, kzg::*};
fn main() {
divan::main()
}
// This allocator profiler doesn't seem to work on Windows. It is disabled for now,
// but kept here in case it's needed at some point (re-add the `divan::AllocProfiler`
// import when enabling it).
// #[global_allocator]
// static ALLOC: AllocProfiler = AllocProfiler::system();
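
// KZG trusted setup (powers of tau) over BLS12-381, generated once from a local
// RNG and shared by all benchmarks. This is a test-only setup, not a real
// trusted-setup ceremony; its size (4096) matches the largest benchmarked
// polynomial.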
static GLOBAL_PARAMETERS: Lazy<UniversalParams<Bls12_381>> = Lazy::new(|| {
let mut rng = rand::thread_rng();
KZG10::<Bls12_381, DensePolynomial<Fr>>::setup(4096, true, &mut rng).unwrap()
});
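
// Fill a buffer with `elements_count * chunk_size` random bytes, i.e. one random
// `chunk_size`-byte chunk per field element.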
fn rand_data_elements(elements_count: usize, chunk_size: usize) -> Vec<u8> {
let mut buff = vec![0u8; elements_count * chunk_size];
rand::thread_rng().fill_bytes(&mut buff);
buff
}
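
// 31-byte chunks, so that each chunk fits safely into a BLS12-381 scalar field
// element (Fr holds just under 32 bytes).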
const CHUNK_SIZE: usize = 31;
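
// Commit to a single polynomial interpolated from `element_count` random chunks.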
#[allow(non_snake_case)]
#[divan::bench(args = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096])]
fn commit_single_polynomial_with_element_count(bencher: Bencher, element_count: usize) {
bencher
.with_inputs(|| {
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain)
})
.input_counter(move |(_evals, _poly)| ItemsCount::new(1usize))
.bench_refs(|(_evals, poly)| black_box(commit_polynomial(poly, &GLOBAL_PARAMETERS)));
}
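
// Commit to the same polynomial from 8 rayon tasks at once, to gauge how well
// commitment computation scales across threads.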
#[allow(non_snake_case)]
#[divan::bench(args = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096])]
fn commit_polynomial_with_element_count_parallelized(bencher: Bencher, element_count: usize) {
let threads = 8usize;
bencher
.with_inputs(|| {
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain)
})
.input_counter(move |(_evals, _poly)| ItemsCount::new(threads))
        .bench_refs(|(_evals, poly)| {
            black_box(
                (0..threads)
                    .into_par_iter()
                    .map(|_| commit_polynomial(poly, &GLOBAL_PARAMETERS))
                    .collect::<Vec<_>>(),
            )
        });
}
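
// Generate a proof for a single element of the polynomial.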
#[allow(non_snake_case)]
#[divan::bench(args = [128, 256, 512, 1024, 2048, 4096])]
fn compute_single_proof(bencher: Bencher, element_count: usize) {
bencher
.with_inputs(|| {
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
(
bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain),
domain,
)
})
.input_counter(|_| ItemsCount::new(1usize))
.bench_refs(|((evals, poly), domain)| {
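            // Prove a fixed, arbitrary element; index 7 is always in range since
            // the smallest benchmarked size is 128.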
black_box(generate_element_proof(
7,
poly,
evals,
&GLOBAL_PARAMETERS,
*domain,
))
});
}
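
// Generate proofs for every element of the polynomial, one after another.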
#[allow(non_snake_case)]
#[divan::bench(args = [128, 256, 512, 1024], sample_count = 3, sample_size = 5)]
fn compute_batch_proofs(bencher: Bencher, element_count: usize) {
bencher
.with_inputs(|| {
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
(
bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain),
domain,
)
})
.input_counter(move |_| ItemsCount::new(element_count))
.bench_refs(|((evals, poly), domain)| {
for i in 0..element_count {
black_box(
generate_element_proof(i, poly, evals, &GLOBAL_PARAMETERS, *domain).unwrap(),
);
}
});
}
// This benchmark measures how proof generation performs with a rayon wrapper on top.
// The ark libraries already use rayon underneath, so no great improvement is expected
// from this; however, reusing the same thread pool for all jobs should save a little time.
#[allow(non_snake_case)]
#[divan::bench(args = [128, 256, 512, 1024], sample_count = 3, sample_size = 5)]
fn compute_parallelize_batch_proofs(bencher: Bencher, element_count: usize) {
bencher
.with_inputs(|| {
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
(
bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain),
domain,
)
})
.input_counter(move |_| ItemsCount::new(element_count))
.bench_refs(|((evals, poly), domain)| {
black_box((0..element_count).into_par_iter().for_each(|i| {
generate_element_proof(i, poly, evals, &GLOBAL_PARAMETERS, *domain).unwrap();
}));
});
}
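
// Verify a single element proof against the polynomial commitment.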
#[allow(non_snake_case)]
#[divan::bench]
fn verify_single_proof(bencher: Bencher) {
bencher
.with_inputs(|| {
let element_count = 10;
let domain = GeneralEvaluationDomain::new(element_count).unwrap();
let data = rand_data_elements(element_count, CHUNK_SIZE);
let (eval, poly) = bytes_to_polynomial_unchecked::<CHUNK_SIZE>(&data, domain);
let commitment = commit_polynomial(&poly, &GLOBAL_PARAMETERS).unwrap();
let proof =
generate_element_proof(0, &poly, &eval, &GLOBAL_PARAMETERS, domain).unwrap();
(0usize, eval.evals[0], commitment, proof, domain)
})
.input_counter(|_| ItemsCount::new(1usize))
        .bench_refs(|(index, element, commitment, proof, domain)| {
            black_box(verify_element_proof(
                *index,
                element,
                commitment,
                proof,
commitment,
proof,
*domain,
&GLOBAL_PARAMETERS,
))
});
}