- May 23, 2024
-
-
STEVAN Antoine authored
this MR:
- refactors the "inbreeding" example into `examples/inbreeding/`
- adds `--strategy` and `--environment`
- `Strategy::draw` will draw the number of shards to keep for recoding
- `Environment::update` will update the pool of shards by losing some of them
-
- May 21, 2024
-
-
STEVAN Antoine authored
- update `benches/README.md` to use `cargo run --release --example ...`
- add `build-examples` to `Makefile` to build all examples in release

### minor change
add two `eprintln!` in `inbreeding.rs` to show the experiment parameters
-
STEVAN Antoine authored
- new `scripts/plot.nu` with common tools and options
- better sets of parameters
- better commands in `benches/README.md`
-
- May 13, 2024
-
-
STEVAN Antoine authored
this MR makes the plot a bit nicer.

## new figures
-
- May 02, 2024
-
-
STEVAN Antoine authored
this MR adds `examples/inbreeding.rs` which makes it possible to do two things
- _naive recoding_: in order to generate a new random shard, we first $k$-decode the whole data and then $1$-encode a single shard
- _true recoding_: to achieve the same goal, we directly $k$-recode shards into a new one

## the scenario
regardless of the _recoding strategy_, the scenario is the same
1. the data is split into $k$ shards and $n$ original shards are generated
2. for a given number of steps $s$, $k$ shards are drawn randomly with replacement and we count the number of successful decodings, which gives a measure of the _diversity_ $$\delta = \frac{\#\text{success}}{\#\text{attempts}}$$
3. a new _recoded shard_ is created and added to the $n$ previous ones, i.e. $n$ increases by one
4. repeat steps 2. and 3. as long as you want

## results
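the scenario above can be sketched in a few lines of Rust, using integers modulo a small prime as a stand-in for the real scalar field: decoding $k$ drawn shards succeeds iff their coefficient vectors are linearly independent. everything below (the prime, the tiny LCG, the function names) is illustrative and is not Komodo's actual implementation.

```rust
/// small prime standing in for the real scalar field (illustrative only)
const P: u64 = 65_537;

/// tiny deterministic LCG so the sketch has no external dependencies
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

/// b^e mod P, by square-and-multiply
fn mod_pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// rank of a matrix over F_P by Gaussian elimination (entries assumed < P)
fn rank_mod_p(mut m: Vec<Vec<u64>>) -> usize {
    if m.is_empty() {
        return 0;
    }
    let (rows, cols) = (m.len(), m[0].len());
    let mut rank = 0;
    for c in 0..cols {
        if rank == rows {
            break;
        }
        if let Some(pivot) = (rank..rows).find(|&r| m[r][c] != 0) {
            m.swap(rank, pivot);
            let inv = mod_pow(m[rank][c], P - 2); // Fermat inverse, P is prime
            for r in (rank + 1)..rows {
                let factor = m[r][c] * inv % P;
                for k in c..cols {
                    m[r][k] = (m[r][k] + (P - factor) * m[rank][k]) % P;
                }
            }
            rank += 1;
        }
    }
    rank
}

/// estimate delta = #success / #attempts when drawing k shards with
/// replacement: decoding succeeds iff the drawn shards have rank k
fn diversity(shards: &[Vec<u64>], k: usize, attempts: usize, rng: &mut Lcg) -> f64 {
    let mut successes = 0;
    for _ in 0..attempts {
        let drawn: Vec<Vec<u64>> = (0..k)
            .map(|_| shards[(rng.next() as usize) % shards.len()].clone())
            .collect();
        if rank_mod_p(drawn) == k {
            successes += 1;
        }
    }
    successes as f64 / attempts as f64
}
```

with such a sketch, step 3 of the scenario amounts to pushing a new linear combination of the existing shards into `shards` and measuring `diversity` again.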
-
- Apr 29, 2024
-
-
we want to compare
- _naive recoding_: $k$-decoding followed by $1$-encoding
- _komodo recoding_: $k$-recoding with $k = \#\text{shards}$

# results

> **Note**
> we see that the _naive recoding_ is around 100 times slower compared to the _komodo recoding_

> **Note**
> the format of the labels is always `{curve} / {k}`
-
STEVAN Antoine authored
i've moved the plotting scripts to [GPLT](https://gitlab.isae-supaero.fr/a.stevan/gplt) which makes it possible to install a single command, called `gplt`, with two subcommands
- `gplt plot`, which is the same as the old `python scripts/plot/plot.py`
- `gplt multi_bar`, which is the same as the old `python scripts/plot/multi_bar.py`
-
STEVAN Antoine authored
otherwise, k doesn't play any role in the "recoding" benchmark
-
- Apr 26, 2024
-
-
STEVAN Antoine authored
this MR adds
- `examples/benches/bench_fec.rs` to the list of example benches
- instructions on how to run the new benchmark and plot the results

## results
-
STEVAN Antoine authored
- fix the path to the "bench" readme and remove it from the plot scripts
- rename "BLS-12-381" to "BLS12-381" for consistency
-
STEVAN Antoine authored
this MR goes from
```nushell
let xs = seq 0 5 | each { 2 ** $in } | wrap x
let twice = $xs | insert measurement { 2 * $in.x } | insert error { 0.1 + 0.5 * $in.x }
let square = $xs | insert measurement { $in.x ** 2 } | insert error { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { group: "x ^ 2", items: $square },
    { group: "2 * x", items: $twice }
] | to json)
```
to
```nushell
let xs = seq 0 5 | each { 2 ** $in }
let twice = $xs | wrap x | insert y { 2 * $in.x } | insert e { 0.1 + 0.5 * $in.x }
let square = $xs | wrap x | insert y { $in.x ** 2 } | insert e { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { name: "x ^ 2", points: $square },
    { name: "2 * x", points: $twice }
] | to json)
```
it also updates the "bench" README and adds type annotations to the `plot.py` script.
-
STEVAN Antoine authored
this MR
- moves the last "recoding" benchmark to `examples/benches/`
- moves the README, which is now all alone, to `examples/benches/`
- adds a mention of `examples/benches/README.md` in `README.md`
- some minor improvements to the bench README

## TODO
- [x] find a way to plot the "recoding" results (thanks to !90)
-
STEVAN Antoine authored
> **Note**
> - in the following examples, any part of the `$.style` specification is optional and can either be omitted or set to `null`
> - the default values for `$.style` are given in `plot.py --help`

```nushell
let xs = seq 0 5 | each { 2 ** $in } | wrap x
let twice = $xs | insert measurement { 2 * $in.x } | insert error { 0.1 + 0.5 * $in.x }
let square = $xs | insert measurement { $in.x ** 2 } | insert error { 1 + 1.5 * $in.x }
```
and try
```nushell
python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { group: "x ^ 2", items: $square },
    { group: "2 * x", items: $twice }
] | to json)
```
vs
```nushell
python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    {
        group: "x ^ 2",
        items: $square,
        style: {
            color: "red",
            alpha: 0.5,
            line: { marker: 's', width: 2, type: "dashed" },
        }
    },
    {
        group: "2 * x",
        items: $twice,
        style: {
            color: "purple",
            alpha: 0.1,
            line: { marker: 'o', width: 5, type: "dotted" },
        }
    }
] | to json)
```
-
- Apr 25, 2024
-
-
STEVAN Antoine authored
## changelog
- use `out> *.ndjson` in the README to simplify running the benchmarks
- create a `scripts/math.nu` module with `ns-to-ms` and `compute-stats` to refactor some of the most common operations
- add `--fullscreen` to `plot.py` and `multi_bar.py`
- add `--x-scale` and `--y-scale` to `plot.py`
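for reference, the two helpers could look like this in Rust; this is a hedged sketch of the same logic, the real `ns-to-ms` and `compute-stats` are Nushell commands in `scripts/math.nu` and their exact output shape may differ.

```rust
/// convert a duration in nanoseconds to milliseconds
fn ns_to_ms(ns: f64) -> f64 {
    ns / 1_000_000.0
}

/// basic statistics over a list of measurements
struct Stats {
    mean: f64,
    stddev: f64,
}

/// population mean and standard deviation of the samples
fn compute_stats(samples: &[f64]) -> Stats {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    let var = samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    Stats { mean, stddev: var.sqrt() }
}
```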
-
STEVAN Antoine authored
## changelog
- benchmarks
  - _commit_ has been removed in favor of `examples/benches/commit.rs`
  - _linalg_ has been migrated to `examples/benches/` as `bench_linalg`
  - _setup_ has been migrated to `examples/benches/` as `bench_setup`
- the `read-atomic-ops` command has been moved to the `scripts/parse.nu` module
- `scripts/plot/bench_commit.py` has been made more general and renamed to `scripts/plot/plot.py`
- `scripts/plot/benches.py` has been removed because it's not required anymore => `plot.py` and `multi_bar.py` are general enough
-
STEVAN Antoine authored
this MR
- bumps PLNK to 0.6.0
- updates all existing code
- uses the PLNK lib in `examples/benches/commit.rs`
- fixes the y label of the plot in `scripts/plot/bench_commit.py`: was _ns_, should be _ms_
-
- Apr 24, 2024
-
-
STEVAN Antoine authored
i've basically refactored the whole "bench" framework that was inlined in `examples/benches/operations/field.rs` and `examples/benches/operations/curve_group.rs` into a new repo called [PLNK](https://gitlab.isae-supaero.fr/a.stevan/plnk). nothing effectively changes on the side of Komodo but now the code is much simpler here :)
-
STEVAN Antoine authored
the idea is to not use `criterion` and to measure exactly what we want

## results
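a minimal sketch of what such manual measurement can look like; this is illustrative, not PLNK's or Komodo's actual bench code.

```rust
use std::time::Instant;

/// run `f` a few times to warm up, then time `runs` executions and return
/// the mean duration of one execution, in nanoseconds
fn measure(warmup: u32, runs: u32, mut f: impl FnMut()) -> f64 {
    for _ in 0..warmup {
        f();
    }
    let start = Instant::now();
    for _ in 0..runs {
        f();
    }
    start.elapsed().as_nanos() as f64 / runs as f64
}
```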
-
STEVAN Antoine authored
we see that the Arkworks / Komodo versions of the same curves perform basically the same.
-
- Apr 23, 2024
-
-
STEVAN Antoine authored
as per title
-
- Apr 22, 2024
-
-
STEVAN Antoine authored
this MR improves the "_atomic_" script in `benches/README.md` to allow filtering which _species_ to show in the _multibar_ plot.

in addition to this, the warmup time and the number of samples of Criterion have been increased back to 3sec and 100 respectively.

> **Note**
> the benchmarks take 15min on my machine, i.e. by running the following two commands in Nushell
> ```bash
> cargo criterion --output-format verbose --message-format json --bench field_operations out> field.ndjson
> cargo criterion --output-format verbose --message-format json --bench curve_group_operations out> curve.ndjson
> ```

## results
-
STEVAN Antoine authored
this MR adds two new benchmarks
- `field_operations` in `benches/operations/field.rs`
- `curve_group_operations` in `benches/operations/curve_group.rs`

as well as `scripts/plot/multi_bar.py` to plot the results; see `benches/README.md` for the commands to run.

## results
-
STEVAN Antoine authored
this MR
- adds an Arkworks bench oneshot function to the `bench_commit` example
- adapts the `measure!` macro to pass a _pairing-friendly_ curve
- gives different linestyles to curves in the Python script

## example measurements
-
- Apr 15, 2024
-
-
STEVAN Antoine authored
add "unchecked" versions of `Matrix::{vandermonde,from_vec_vec}` and test both matrices (dragoon/komodo!75)

## changelog
- replace `Matrix::vandermonde` with `Matrix::vandermonde_unchecked`
- add a new `Matrix::vandermonde` which calls `Matrix::vandermonde_unchecked` after checking that the seed points are distinct, otherwise it gives a `KomodoError::InvalidVandermonde` error
- same with `Matrix::from_vec_vec` and `Matrix::from_vec_vec_unchecked`
- add documentation tests for the two "checked" functions
- run the main lib tests on both a random and a Vandermonde matrix, just to be sure we do not take advantage of the Vandermonde structure
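the checked/unchecked split can be illustrated as follows, over integers modulo a small prime instead of a generic field; the names mirror the changelog but the bodies are hypothetical, not Komodo's implementation.

```rust
/// small prime standing in for the real field (illustrative only)
const P: u64 = 65_537;

#[derive(Debug, PartialEq)]
enum KomodoError {
    InvalidVandermonde,
}

/// b^e mod P, by square-and-multiply
fn mod_pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// build the Vandermonde matrix without any check: row i is [p^i for p in points]
fn vandermonde_unchecked(points: &[u64], rows: usize) -> Vec<Vec<u64>> {
    (0..rows)
        .map(|i| points.iter().map(|&p| mod_pow(p, i as u64)).collect())
        .collect()
}

/// the checked version: make sure the seed points are pairwise distinct first
fn vandermonde(points: &[u64], rows: usize) -> Result<Vec<Vec<u64>>, KomodoError> {
    for (i, p) in points.iter().enumerate() {
        if points[i + 1..].contains(p) {
            return Err(KomodoError::InvalidVandermonde);
        }
    }
    Ok(vandermonde_unchecked(points, rows))
}
```

the distinctness check matters because a Vandermonde matrix with repeated seed points has two identical columns, hence is not invertible.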
-
- Apr 12, 2024
-
-
STEVAN Antoine authored
-
STEVAN Antoine authored
## changelog
- rename the `encode` function to `prove` and have it take _shards_ instead of an _encoding matrix_: this is to isolate the "encoding" process inside the `fec` module and have the main `komodo::prove` only compute the "proof", i.e. the commits of the data, going from
```rust
fn encode<F, G, P>(
    bytes: &[u8],
    encoding_mat: &Matrix<F>,
    powers: &Powers<F, G>,
) -> Result<Vec<Block<F, G>>, KomodoError>
```
to
```rust
fn prove<F, G, P>(
    bytes: &[u8],
    powers: &Powers<F, G>,
    k: usize,
) -> Result<Vec<Commitment<F, G>>, KomodoError>
```
- rename `fec::Shard.combine` to `fec::Shard.recode_with` to get rid of "combine"
- rename `fec::recode` to `fec::recode_with_coeffs` to show that this version takes a list of coefficients
- rename `Block.commit` to `Block.proof`: "commit" should be "commits" and it's usually referred to as "proof"
- split `prove` further into `prove` and `build`: `prove` now outputs a `Vec<Commitment<F>>`, `build` simply takes a `Vec<Shard<F>>` and a `Vec<Commitment<F>>` and outputs a `Vec<Block<F>>`
- add `fec::recode_random` that does the "shard" part of `recode` to wrap around `fec::recode_with_coeffs`
- remove `R: RngCore` from the signature of `zk::setup`, to avoid having to pass a generic `_` annotation everywhere `zk::setup` is used; the same change has been applied to `recode` and `generate_random_powers` in `main.rs`, going from
```rust
fn setup<R: RngCore, F: PrimeField, G: CurveGroup<ScalarField = F>>(
    max_degree: usize,
    rng: &mut R,
) -> Result<Powers<F, G>, KomodoError> {
```
to
```rust
fn setup<F: PrimeField, G: CurveGroup<ScalarField = F>>(
    max_degree: usize,
    rng: &mut impl RngCore,
) -> Result<Powers<F, G>, KomodoError> {
```

### some extra minor changes
- remove some useless generic type annotations, e.g. `prove::<F, G, P>` can become a simpler `prove` most of the time, i.e. when there is at least one generic annotation somewhere in the scope
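conceptually, `fec::recode_with_coeffs` computes a linear combination of the given shards with the given coefficients; here is a hedged sketch of that idea over integers modulo a small prime, not the actual field and `Shard` types.

```rust
/// small prime standing in for the real field (illustrative only)
const P: u64 = 65_537;

/// combine shards linearly with the given coefficients, returning `None`
/// when the lengths do not match or there is nothing to combine
fn recode_with_coeffs(shards: &[Vec<u64>], coeffs: &[u64]) -> Option<Vec<u64>> {
    if shards.len() != coeffs.len() || shards.is_empty() {
        return None;
    }
    let mut out = vec![0u64; shards[0].len()];
    for (shard, &c) in shards.iter().zip(coeffs) {
        for (o, &s) in out.iter_mut().zip(shard) {
            *o = (*o + (c % P) * (s % P)) % P;
        }
    }
    Some(out)
}
```

a `recode_random`-style wrapper would then simply draw the coefficients at random before calling this function.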
-
- Apr 11, 2024
-
-
STEVAN Antoine authored
## changelog
* eb1b1381 don't use a T in the lib `run_template` test
* fbd503c6 remove the useless unwrap and TODO
* b550d712 remove some pub
* 339c3038 remove useless `.iter()`
* 537993f0 remove useless `.add(...)`
* d7720907 remove hiding_bound from timer in commit
* eecab5a6 move `commit` to inlined `zk::batch_commit`
- Apr 10, 2024
-
-
STEVAN Antoine authored
i ended up adding a bunch of changes to the benchmarks
## changelog
* 805a2454 reduce the number of loops and the warmup time
* f7ce05c3 don't serialize for real to save time
* 37a2a7e2 don't try to compress with validation
* 409f3e3c don't multiply degree by 1_024
* 610024a9 fix setup
* 3d5e7c58 fix setup
* 3d3167fb run benchmarks on BLS12-381, BN-254 and PALLAS
* da2a71a1 pass name of the curve as parameter
* 954fd6d3 plot commit for all curves
* f980b30f plot all curves in linalg
* 5e41df1d rename `labels` to `keys` in commit
* 8bb64f99 filter setup by curves
* 0163c8f9 plot all curves in setup
* 8c91c6d8 split the setup of Komodo and the serde benchmarks
* 0784f294 add a manual benchmark to measure the commit
* 608a3fd1 move the "example benches" to `examples/benches/`
* 10f9a37c add a script to plot results from `bench_commit`
* 6d512fa6 move plot script from `benches/` to `scripts/plot/`
* a4e6ffbc measure VESTA
-
- Apr 09, 2024
-
-
STEVAN Antoine authored
-
STEVAN Antoine authored
... instead of the number of bytes, which does not really make any sense because we only consider one polynomial, i.e. only one column / shard of data.

the following relation between the degree of the polynomial and the size of the data still holds $$\text{deg}(P) = \frac{\#\text{bytes}}{k \times |f_r|}$$ where $|f_r|$ is the size of an element of the scalar finite prime field $F_r$ of the elliptic curve and $P$ is the polynomial of a single column / shard of data
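the relation above, written as a small helper; this function is hypothetical and not part of the code base, with `field_element_size` being $|f_r|$ in bytes (e.g. 32 on BLS12-381).

```rust
/// deg(P) = #bytes / (k * |f_r|), the degree of the polynomial behind a
/// single column / shard of data (integer division, sizes in bytes)
fn polynomial_degree(nb_bytes: usize, k: usize, field_element_size: usize) -> usize {
    nb_bytes / (k * field_element_size)
}
```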
-
- Apr 08, 2024
-
-
STEVAN Antoine authored
should address #8

## changelog
- move the internal `rng` to an argument of type `R: RngCore` for the following functions
  - `recode` in `lib.rs`
  - `linalg::Matrix::random`
  - `generate_random_setup` in `main.rs`
- make sure
  - `ark_std::test_rng` is only used in test modules
  - `rand::thread_rng` is used in benchmarks, examples and `main.rs`
-
STEVAN Antoine authored
## changelog
- add `(komodo)` to the benchmark names for commit and setup for easier parsing
- add `--bench commit` to plot the commit times
- make `plot.py` more robust
  - use `ns_to_ms` and `b_to_kb` to convert times and file sizes
  - remove the prefix from bench IDs and always take the first space-separated token as the input data size
  - remove the "bounds" from the labels
  - remove the "mean" from the labels when there's only the mean to show
  - plot the figures in fullscreen
- add `--save` to save the figures directly to disk with a little message
- add `--all` to plot / save all the figures at once

## examples
-
STEVAN Antoine authored
this MR adds a benchmark for
- the KZG10 trusted setup creation of `ark-poly-commit`
- the KZG10 commit of `ark-poly-commit`
- our own implementation of the commit in `zk::commit`

there is also a slight improvement to the previous benchmarking of our `zk::setup`: the degree of the _trusted setup_ is now computed once and for all before the benchmarking loop starts; because it's not what is of interest, let's not benchmark it.
-
STEVAN Antoine authored
there were some missing parts from recent commits and also a dead link to `setup::setup`, which is now `zk::setup`.
-
- Apr 05, 2024
-
-
STEVAN Antoine authored
in 3c91ef12 and !54, a new implementation of the creation of the _trusted setup_ has been introduced, that gets rid of the `E: Pairing` requirement with a more general `<F: PrimeField, G: CurveGroup<_>>`. however, the size of the _trusted setup_ was incorrect: `zk::setup` requires the _maximum degree_ of the _trusted setup_, however, the number of bytes `nb_bytes` was consistently being given to it throughout the code base...

this MR
- introduces a new `zk::nb_elements_in_setup` that converts a number of bytes to the associated number of _trusted setup_ elements
- uses that new `zk` function before calling `zk::setup` in all the code base

## results

> **Note**
> !58 is required for the whole table to be used easily

> **Note**
> here is how to run the benchmarks in Nushell
> ```bash
> let bad_mr = "3c91ef12"
> let fix = "fix-setup-size"
>
> git co $"($bad_mr)^"
> cargo criterion --output-format verbose --message-format json out> benches/results/before.ndjson
> cargo run --example bench_setup_size out>> benches/results/before.ndjson
>
> git co $bad_mr
> cargo criterion --output-format verbose --message-format json out> benches/results/after.ndjson
> cargo run --example bench_setup_size out>> benches/results/after.ndjson
>
> git co $fix
> cargo criterion --output-format verbose --message-format json out> benches/results/fix.ndjson
> cargo run --example bench_setup_size out>> benches/results/fix.ndjson
> ```
> and here is the script used to generate that table:
> ```bash
> def "parse bench-file" []: table<reason: string, id: string, mean: any> -> table<id: string, mean: float> {
>     where reason == "benchmark-complete"
>     | select id mean
>     # NOTE: because `bench_setup_size.rs` outputs `record<reason: string, id: string, mean: float>`
>     | update mean { if ($in | describe) == int { $in } else { $in.estimate } }
>     # NOTE: addressed in `!58`
>     | update id {|it|
>         if ($it.id | str starts-with "recoding") {
>             $it.id ++ " on some curve"
>         } else {
>             $it.id
>         }
>     }
>     | update mean { into int }
>     | update id { parse "{id} on {curve}" | into record | get id }
> }
>
> let before = open benches/results/before.ndjson | parse bench-file
> let after = open benches/results/after.ndjson | parse bench-file
> let fix = open benches/results/fix.ndjson | parse bench-file
>
> $before
> | join $after id
> | rename --column { mean: "before", mean_: "after" }
> | join $fix id
> | rename --column { mean: "fix" }
> | insert b->a {|it| $it.after / $it.before | math round --precision 2 }
> | insert a->f {|it| $it.fix / $it.after | math round --precision 2 }
> | insert b->f {|it| $it.fix / $it.before | math round --precision 2 }
> | select id before b->a after a->f fix b->f
> | to md --pretty
> ```

> **Important**
> before this very MR, i.e. on `3c91ef12`, there was a factor of 15x between _before_ and _after_, meaning that the _trusted setups_ were 15 times larger and longer to serde
>
> this can be explained by the following facts
> - due to the bad sizes given to the _trusted setup_ building function, the setups were around 30 times larger, 30 being close to the size of a field element on BLS-12-381
> - because the `zk::setup` function only creates half of what its Arkworks counterpart does, the setups were at the same time around 2 times smaller
>
> combining these two, we get a factor of 15x!!
>
> now, with this MR, we get rid of the first factor and are left with _trusted setups_ twice as small and twice as fast to serde

| id | before | b->a | after | a->f | fix | b->f |
| --- | --- | --- | --- | --- | --- | --- |
| inverse 10x10 | 336359 | 0.93 | 313852 | 1.05 | 329191 | 0.98 |
| inverse 15x15 | 811018 | 0.99 | 800064 | 1.01 | 807417 | 1 |
| inverse 20x20 | 1511592 | 1 | 1508034 | 1.02 | 1542538 | 1.02 |
| inverse 30x30 | 3703750 | 1.01 | 3731380 | 1.02 | 3793071 | 1.02 |
| inverse 40x40 | 7163839 | 1 | 7145015 | 1.03 | 7336996 | 1.02 |
| inverse 60x60 | 18620089 | 1 | 18625577 | 1.02 | 18922329 | 1.02 |
| inverse 80x80 | 37571610 | 1 | 37643906 | 1.02 | 38306236 | 1.02 |
| inverse 120x120 | 105404054 | 1 | 105281874 | 1.01 | 106797441 | 1.01 |
| inverse 160x160 | 224332257 | 1 | 224092724 | 1.01 | 227066824 | 1.01 |
| inverse 240x240 | 671096671 | 1 | 671005055 | 1.01 | 679280010 | 1.01 |
| inverse 320x320 | 1487909175 | 1 | 1488534950 | 1.01 | 1506027089 | 1.01 |
| transpose 10x10 | 87 | 0.93 | 81 | 1 | 81 | 0.93 |
| transpose 15x15 | 175 | 0.96 | 168 | 1 | 168 | 0.96 |
| transpose 20x20 | 284 | 1.03 | 293 | 0.95 | 279 | 0.98 |
| transpose 30x30 | 759 | 1.22 | 924 | 0.89 | 823 | 1.08 |
| transpose 40x40 | 1798 | 1.63 | 2935 | 0.98 | 2887 | 1.61 |
| transpose 60x60 | 3830 | 1.67 | 6378 | 1.01 | 6468 | 1.69 |
| transpose 80x80 | 7720 | 1.5 | 11548 | 0.99 | 11470 | 1.49 |
| transpose 120x120 | 16365 | 1.5 | 24572 | 0.98 | 24059 | 1.47 |
| transpose 160x160 | 42764 | 1.18 | 50453 | 1.07 | 54189 | 1.27 |
| transpose 240x240 | 119435 | 1.18 | 141357 | 1 | 140752 | 1.18 |
| transpose 320x320 | 218674 | 1.13 | 246262 | 1 | 247167 | 1.13 |
| mul 10x10 | 15499 | 1 | 15474 | 1 | 15527 | 1 |
| mul 15x15 | 51800 | 1 | 51913 | 1 | 51772 | 1 |
| mul 20x20 | 122399 | 1 | 122390 | 1.01 | 123248 | 1.01 |
| mul 30x30 | 499047 | 0.95 | 474740 | 1.01 | 481756 | 0.97 |
| mul 40x40 | 1224755 | 0.98 | 1203588 | 1.01 | 1211995 | 0.99 |
| mul 60x60 | 4166589 | 0.99 | 4122003 | 1 | 4139839 | 0.99 |
| mul 80x80 | 9942560 | 0.99 | 9870864 | 1 | 9912815 | 1 |
| mul 120x120 | 33706366 | 0.99 | 33458234 | 1.01 | 33680802 | 1 |
| mul 160x160 | 79645646 | 1 | 79974020 | 1.01 | 80469214 | 1.01 |
| mul 240x240 | 277091998 | 0.99 | 274638961 | 1.01 | 276412347 | 1 |
| mul 320x320 | 664942845 | 1 | 662229758 | 1.02 | 676065811 | 1.02 |
| recoding 1 bytes and 2 shards with k = 2 | 124 | 1 | 124 | 1.02 | 127 | 1.02 |
| recoding 1 bytes and 2 shards with k = 4 | 179 | 0.99 | 178 | 1.01 | 180 | 1.01 |
| recoding 1 bytes and 2 shards with k = 8 | 284 | 1 | 284 | 1 | 285 | 1 |
| recoding 1 bytes and 2 shards with k = 16 | 496 | 1.01 | 499 | 1.01 | 505 | 1.02 |
| recoding 1 bytes and 4 shards with k = 2 | 347 | 1.01 | 349 | 0.99 | 347 | 1 |
| recoding 1 bytes and 4 shards with k = 4 | 505 | 1 | 505 | 1 | 507 | 1 |
| recoding 1 bytes and 4 shards with k = 8 | 821 | 1 | 825 | 1 | 825 | 1 |
| recoding 1 bytes and 4 shards with k = 16 | 1451 | 1 | 1454 | 1.01 | 1464 | 1.01 |
| recoding 1 bytes and 8 shards with k = 2 | 792 | 1 | 791 | 1 | 792 | 1 |
| recoding 1 bytes and 8 shards with k = 4 | 1162 | 1 | 1163 | 1.01 | 1169 | 1.01 |
| recoding 1 bytes and 8 shards with k = 8 | 1884 | 1.01 | 1897 | 1 | 1902 | 1.01 |
| recoding 1 bytes and 8 shards with k = 16 | 3361 | 1 | 3368 | 1.02 | 3446 | 1.03 |
| recoding 1 bytes and 16 shards with k = 2 | 1680 | 1 | 1679 | 1.01 | 1699 | 1.01 |
| recoding 1 bytes and 16 shards with k = 4 | 2472 | 1 | 2475 | 1 | 2468 | 1 |
| recoding 1 bytes and 16 shards with k = 8 | 4034 | 1 | 4033 | 1.01 | 4060 | 1.01 |
| recoding 1 bytes and 16 shards with k = 16 | 7187 | 1 | 7173 | 1.02 | 7331 | 1.02 |
| recoding 1024 bytes and 2 shards with k = 2 | 1020 | 1 | 1020 | 1 | 1017 | 1 |
| recoding 1024 bytes and 2 shards with k = 4 | 1079 | 1 | 1081 | 0.98 | 1064 | 0.99 |
| recoding 1024 bytes and 2 shards with k = 8 | 1186 | 0.98 | 1167 | 1 | 1166 | 0.98 |
| recoding 1024 bytes and 2 shards with k = 16 | 1386 | 1 | 1392 | 0.99 | 1383 | 1 |
| recoding 1024 bytes and 4 shards with k = 2 | 2978 | 1 | 2968 | 1 | 2970 | 1 |
| recoding 1024 bytes and 4 shards with k = 4 | 3120 | 1 | 3113 | 1 | 3113 | 1 |
| recoding 1024 bytes and 4 shards with k = 8 | 3438 | 1 | 3445 | 1 | 3447 | 1 |
| recoding 1024 bytes and 4 shards with k = 16 | 4056 | 1 | 4071 | 1 | 4051 | 1 |
| recoding 1024 bytes and 8 shards with k = 2 | 6905 | 1 | 6879 | 1 | 6861 | 0.99 |
| recoding 1024 bytes and 8 shards with k = 4 | 7236 | 1 | 7216 | 1 | 7227 | 1 |
| recoding 1024 bytes and 8 shards with k = 8 | 7969 | 1 | 7986 | 1 | 7962 | 1 |
| recoding 1024 bytes and 8 shards with k = 16 | 9455 | 1 | 9427 | 1 | 9442 | 1 |
| recoding 1024 bytes and 16 shards with k = 2 | 14746 | 1 | 14760 | 0.99 | 14686 | 1 |
| recoding 1024 bytes and 16 shards with k = 4 | 15516 | 1 | 15493 | 1 | 15538 | 1 |
| recoding 1024 bytes and 16 shards with k = 8 | 17112 | 1 | 17097 | 1 | 17078 | 1 |
| recoding 1024 bytes and 16 shards with k = 16 | 20237 | 1 | 20284 | 1 | 20295 | 1 |
| recoding 1048576 bytes and 2 shards with k = 2 | 1427516 | 1.01 | 1441658 | 0.99 | 1424866 | 1 |
| recoding 1048576 bytes and 2 shards with k = 4 | 1083761 | 1.01 | 1094451 | 1 | 1089954 | 1.01 |
| recoding 1048576 bytes and 2 shards with k = 8 | 1087564 | 0.99 | 1076515 | 1.02 | 1094795 | 1.01 |
| recoding 1048576 bytes and 2 shards with k = 16 | 1089556 | 0.99 | 1078406 | 1.03 | 1105840 | 1.01 |
| recoding 1048576 bytes and 4 shards with k = 2 | 3256507 | 1 | 3250060 | 1.04 | 3370007 | 1.03 |
| recoding 1048576 bytes and 4 shards with k = 4 | 3259079 | 1.01 | 3285892 | 1 | 3297768 | 1.01 |
| recoding 1048576 bytes and 4 shards with k = 8 | 3235697 | 1 | 3244151 | 1.01 | 3278027 | 1.01 |
| recoding 1048576 bytes and 4 shards with k = 16 | 3240586 | 1.01 | 3264910 | 1.01 | 3284101 | 1.01 |
| recoding 1048576 bytes and 8 shards with k = 2 | 7580388 | 1 | 7576306 | 1.02 | 7732461 | 1.02 |
| recoding 1048576 bytes and 8 shards with k = 4 | 7567385 | 1.01 | 7614250 | 1.01 | 7699032 | 1.02 |
| recoding 1048576 bytes and 8 shards with k = 8 | 7589588 | 1 | 7584071 | 1.01 | 7643021 | 1.01 |
| recoding 1048576 bytes and 8 shards with k = 16 | 7572517 | 1 | 7596138 | 1.01 | 7637596 | 1.01 |
| recoding 1048576 bytes and 16 shards with k = 2 | 16248634 | 1 | 16245477 | 1.01 | 16450530 | 1.01 |
| recoding 1048576 bytes and 16 shards with k = 4 | 16253850 | 1 | 16299266 | 1.01 | 16458170 | 1.01 |
| recoding 1048576 bytes and 16 shards with k = 8 | 16240827 | 1 | 16265027 | 1 | 16256734 | 1 |
| recoding 1048576 bytes and 16 shards with k = 16 | 16229981 | 1 | 16307729 | 1 | 16265882 | 1 |
| setup/setup 1024 | 8934763 | 2.12 | 18942383 | 0.11 | 2175852 | 0.24 |
| setup/serializing with compression 1024 | 4194 | 15.82 | 66364 | 0.03 | 2100 | 0.5 |
| setup/serializing with no compression 1024 | 4953 | 16.04 | 79451 | 0.03 | 2501 | 0.5 |
| setup/deserializing with compression and validation 1024 | 3644409 | 15.18 | 55337980 | 0.03 | 1809773 | 0.5 |
| setup/deserializing with compression and no validation 1024 | 1065186 | 15.74 | 16762363 | 0.03 | 544255 | 0.51 |
| setup/deserializing with no compression and validation 1024 | 2566945 | 15.17 | 38931135 | 0.03 | 1258935 | 0.49 |
| setup/deserializing with no compression and no validation 1024 | 6722 | 14.84 | 99769 | 0.03 | 3235 | 0.48 |
| setup/setup 2048 | 9092980 | 3.63 | 33024605 | 0.09 | 2909175 | 0.32 |
| setup/serializing with compression 2048 | 8240 | 16.32 | 134437 | 0.03 | 4141 | 0.5 |
| setup/serializing with no compression 2048 | 9767 | 16.41 | 160306 | 0.03 | 4976 | 0.51 |
| setup/deserializing with compression and validation 2048 | 7239787 | 15.32 | 110931280 | 0.03 | 3639477 | 0.5 |
| setup/deserializing with compression and no validation 2048 | 2113330 | 15.93 | 33674890 | 0.03 | 1084243 | 0.51 |
| setup/deserializing with no compression and validation 2048 | 5081373 | 15.25 | 77482178 | 0.03 | 2537317 | 0.5 |
| setup/deserializing with no compression and no validation 2048 | 13079 | 15.14 | 198034 | 0.03 | 6479 | 0.5 |
| setup/setup 4096 | 9731992 | 6.14 | 59757543 | 0.07 | 4328023 | 0.44 |
| setup/serializing with compression 4096 | 16462 | 16.44 | 270647 | 0.03 | 8407 | 0.51 |
| setup/serializing with no compression 4096 | 19654 | 16.4 | 322264 | 0.03 | 9854 | 0.5 |
| setup/deserializing with compression and validation 4096 | 14330104 | 15.47 | 221659652 | 0.03 | 7227388 | 0.5 |
| setup/deserializing with compression and no validation 4096 | 4214098 | 15.79 | 66537465 | 0.03 | 2137818 | 0.51 |
| setup/deserializing with no compression and validation 4096 | 10095359 | 15.33 | 154755178 | 0.03 | 5037809 | 0.5 |
| setup/deserializing with no compression and no validation 4096 | 26192 | 14.94 | 391397 | 0.03 | 12862 | 0.49 |
| setup/setup 8192 | 9594720 | 11.35 | 108884342 | 0.06 | 6893620 | 0.72 |
| setup/serializing with compression 8192 | 33114 | 16.42 | 543855 | 0.03 | 16713 | 0.5 |
| setup/serializing with no compression 8192 | 39992 | 16.17 | 646576 | 0.03 | 19983 | 0.5 |
| setup/deserializing with compression and validation 8192 | 28578044 | 15.55 | 444525236 | 0.03 | 14337421 | 0.5 |
| setup/deserializing with compression and no validation 8192 | 8417684 | 15.93 | 134082205 | 0.03 | 4309633 | 0.51 |
| setup/deserializing with no compression and validation 8192 | 20134851 | 15.39 | 309785238 | 0.03 | 10066797 | 0.5 |
| setup/deserializing with no compression and no validation 8192 | 51832 | 15.06 | 780369 | 0.03 | 25710 | 0.5 |
| setup/setup 16384 | 10096523 | 19.72 | 199105054 | 0.06 | 11317161 | 1.12 |
| setup/serializing with compression 16384 | 67050 | 16.28 | 1091282 | 0.03 | 33502 | 0.5 |
| setup/serializing with no compression 16384 | 80269 | 16.2 | 1300111 | 0.03 | 40785 | 0.51 |
| setup/deserializing with compression and validation 16384 | 56905556 | 15.56 | 885542593 | 0.03 | 28622218 | 0.5 |
| setup/deserializing with compression and no validation 16384 | 16829951 | 15.96 | 268660355 | 0.03 | 8607645 | 0.51 |
| setup/deserializing with no compression and validation 16384 | 40158772 | 15.44 | 619890738 | 0.03 | 20006634 | 0.5 |
| setup/deserializing with no compression and no validation 16384 | 103242 | 15.07 | 1555913 | 0.03 | 51533 | 0.5 |
| serialized size with compression and validation 1024 | 3280 | 15 | 49208 | 0.03 | 1640 | 0.5 |
| serialized size with compression and no validation 1024 | 3280 | 15 | 49208 | 0.03 | 1640 | 0.5 |
| serialized size with no compression and validation 1024 | 6544 | 15.04 | 98408 | 0.03 | 3272 | 0.5 |
| serialized size with no compression and no validation 1024 | 6544 | 15.04 | 98408 | 0.03 | 3272 | 0.5 |
| serialized size with compression and validation 2048 | 6448 | 15.25 | 98360 | 0.03 | 3224 | 0.5 |
| serialized size with compression and no validation 2048 | 6448 | 15.25 | 98360 | 0.03 | 3224 | 0.5 |
| serialized size with no compression and validation 2048 | 12880 | 15.27 | 196712 | 0.03 | 6440 | 0.5 |
| serialized size with no compression and no validation 2048 | 12880 | 15.27 | 196712 | 0.03 | 6440 | 0.5 |
| serialized size with compression and validation 4096 | 12784 | 15.38 | 196664 | 0.03 | 6392 | 0.5 |
| serialized size with compression and no validation 4096 | 12784 | 15.38 | 196664 | 0.03 | 6392 | 0.5 |
| serialized size with no compression and validation 4096 | 25552 | 15.39 | 393320 | 0.03 | 12776 | 0.5 |
| serialized size with no compression and no validation 4096 | 25552 | 15.39 | 393320 | 0.03 | 12776 | 0.5 |
| serialized size with compression and validation 8192 | 25456 | 15.45 | 393272 | 0.03 | 12728 | 0.5 |
| serialized size with compression and no validation 8192 | 25456 | 15.45 | 393272 | 0.03 | 12728 | 0.5 |
| serialized size with no compression and validation 8192 | 50896 | 15.45 | 786536 | 0.03 | 25448 | 0.5 |
| serialized size with no compression and no validation 8192 | 50896 | 15.45 | 786536 | 0.03 | 25448 | 0.5 |
| serialized size with compression and validation 16384 | 50800 | 15.48 | 786488 | 0.03 | 25400 | 0.5 |
| serialized size with compression and no validation 16384 | 50800 | 15.48 | 786488 | 0.03 | 25400 | 0.5 |
| serialized size with no compression and validation 16384 | 101584 | 15.48 | 1572968 | 0.03 | 50792 | 0.5 |
| serialized size with no compression and no validation 16384 | 101584 | 15.48 | 1572968 | 0.03 | 50792 | 0.5 |
-
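assuming one setup element per scalar field element, a hypothetical version of `zk::nb_elements_in_setup` could be a simple ceiling division; the real function's exact formula may differ, this only illustrates the bytes-to-elements conversion the MR describes.

```rust
/// hypothetical sketch: number of trusted-setup elements needed to cover
/// `nb_bytes` of data, with one element per field element of
/// `field_element_size` bytes (rounding up)
fn nb_elements_in_setup(nb_bytes: usize, field_element_size: usize) -> usize {
    (nb_bytes + field_element_size - 1) / field_element_size
}
```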
STEVAN Antoine authored
woopsie, it was missing from !54
-
STEVAN Antoine authored
this is a minor proposition: get rid of the `UniPoly12_381` or `UniPoly381` type aliases, which are just `DensePolynomial<Fr>`. now it's enough to just change the import of `Fr` to another crate / another curve, without having an inconsistent mention of BLS-12-381 in the name of the _dense polynomial_.
-
STEVAN Antoine authored
a small change to make all the benchmarks consistent with each other, i.e. the ID of the bench itself and then the name of the curve / the field. this should make the parsing of the ID much simpler.
-