- Sep 23, 2024
-
-
STEVAN Antoine authored
## changelog
- _semi\_avid_, _kzg_ and _aplonk_ examples have been added
- the `fs` module has been hidden behind an `fs` feature
- the `conversions` module has been properly hidden behind the `test` config feature
- the documentation has been completed
- some error messages have been improved

> **Note**
>
> the documentation of aPlonK has been left as-is for now
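for context, a minimal sketch of what hiding modules behind a Cargo feature and behind the test config typically looks like in Rust; the module names match the changelog, but their contents and the exact `cfg` attributes used in Komodo are assumptions here.

```rust
// lib.rs sketch (not the actual Komodo source): `fs` is compiled only when the
// `fs` Cargo feature is enabled, `conversions` only when compiling tests.
// the Cargo.toml side would declare the feature, e.g. `[features] fs = []`.

#[cfg(feature = "fs")]
pub mod fs {
    // e.g. helpers that dump/read blocks on disk
    pub fn placeholder() {}
}

#[cfg(test)]
mod conversions {
    // test-only conversion helpers
    #[allow(dead_code)]
    pub fn placeholder() {}
}
```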
-
- Aug 01, 2024
-
-
STEVAN Antoine authored
## changelog
- `src/main.rs` has been moved to a new crate: `bins/saclin`, which stands for **S**emi-**A**VID **CLI** in **N**ushell
- dependencies of `komodo` have been fixed
- Nushell and Rust tests have been split in the Makefile: by default, only Rust tests will run locally, and Nushell tests and examples can be run manually if desired. The CI will still run everything.
- the README has been updated
- test images have been moved to `assets/`
- the majority of the old `./nu-utils/` module has been moved to internals of `./benchmarks/` and imports have been fixed
- `cargo.nu` has been moved to `./bins/` and a new `./bins/README.md` mentions it
- `./bins/saclin/` has been created and should be a self-contained Rust crate + Nushell module
-
- Jul 12, 2024
-
-
STEVAN Antoine authored
this MR is two-fold
- it restructures the two main Nushell modules so that they are easier to read and use
- it improves the "run" and "plot" modules for the benchmarks

## changelog
- `.nushell/` is now renamed to `nu-utils/`
- `benchmarks/` is now a valid Nushell module which exports a bunch of modules
  - `benchmarks linalg`: measure and plot linear algebra operations
  - `benchmarks setup`: measure and plot trusted setup building
  - `benchmarks commit`: measure and plot crafting commitments
  - `benchmarks recoding`: measure and plot the recoding of shards
  - `benchmarks fec`: measure and plot FEC operations, such as encoding and recoding, and allow combining these results with the pure recoding ones
- the submodules of `benchmarks` typically have a `run` and a `plot` command, with the exception of `benchmarks fec`, which has a `run` module and multiple "plot" commands in `benchmarks fec plot`
- the "run" commands will create a random temp file by default, and otherwise ask for confirmation if the given output file already exists, unless `--force` is used
- snippets in `benchmarks/README.md` have been updated
-
- May 29, 2024
-
-
STEVAN Antoine authored
this MR turns `./.nushell/` into a directory module by
- adding `mod.nu`
- exporting all the modules

all uses of `.nushell/` have been fixed to not mention `.nu` internal modules anymore.

> **Note**
> the `.nushell venv` module has been removed because, when the `$venv.VENV` activation script is not there, Nushell can't parse the whole `.nushell` module; it is very annoying to have to rely on the state of the external filesystem to be able to simply parse a module...
-
- May 28, 2024
-
-
STEVAN Antoine authored
## new structure for the repository
- benchmarks are in `./benchmarks/` and can be run with either `cargo run --package benchmarks --bin <bench>` or the commands in `./benchmarks/README.md`
  ```
  ├── Cargo.toml
  ├── README.md
  └── src
      └── bin
          ├── commit.rs
          ├── fec.rs
          ├── linalg.rs
          ├── operations
          │   ├── curve_group.rs
          │   └── field.rs
          ├── recoding.rs
          ├── setup.rs
          └── setup_size.rs
  ```
- examples are now in `./bins/` as standalone binaries and can be run either with `cargo run --package <pkg>` or with the help of the `cargo bin` command from `.nushell/cargo.nu`
  ```
  ├── curves
  │   ├── Cargo.toml
  │   ├── README.md
  │   └── src
  │       └── main.rs
  ├── inbreeding
  │   ├── build.nu
  │   ├── Cargo.toml
  │   ├── consts.nu
  │   ├── mod.nu
  │   ├── plot.nu
  │   ├── README.md
  │   ├── run.nu
  │   └── src
  │       ├── environment.rs
  │       ├── main.rs
  │       └── strategy.rs
  ├── rank
  │   ├── Cargo.toml
  │   └── src
  │       └── main.rs
  └── rng
      ├── Cargo.toml
      └── src
          └── main.rs
  ```
- Nushell modules are now located in `./.nushell/`

## changelog
apart from the changes to the general structure of the repo:
- `binary.nu` -> `.nushell/binary.nu`
- new `cargo bin` command from `.nushell/cargo.nu`
- `error throw` is now defined in `.nushell/error.nu`
- main TOML has been greatly simplified because the dependencies of "examples" have been moved to the associated crates
- the rest is basically the same but in the new structure
-
STEVAN Antoine authored
related to
- dragoon/komodo!113

this MR makes sure that the seeds given to each "strategy + scenario" loop are different by generating a bunch of unique seeds per strategy.
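a hypothetical sketch, not the actual `inbreeding` code: a master RNG derives one distinct seed per "strategy + scenario" loop, so no two loops share one (the strategy strings are taken from the other entries of this log).

```rust
use rand::{rngs::StdRng, Rng, RngCore, SeedableRng};
use std::collections::HashSet;

/// draw `n` distinct 64-bit seeds from a master RNG
fn unique_seeds(master: &mut impl RngCore, n: usize) -> Vec<u64> {
    let mut seen = HashSet::new();
    let mut seeds = Vec::with_capacity(n);
    while seeds.len() < n {
        let s = master.next_u64();
        if seen.insert(s) {
            seeds.push(s);
        }
    }
    seeds
}

fn main() {
    let strategies = ["single:1", "double:0.5:1:2", "single:2"];
    let nb_scenarii = 10;

    let mut master = StdRng::seed_from_u64(123);
    for strategy in strategies {
        for (i, seed) in unique_seeds(&mut master, nb_scenarii).into_iter().enumerate() {
            // each scenario of each strategy gets its own deterministic RNG
            let mut rng = StdRng::seed_from_u64(seed);
            println!("{strategy}, scenario {i}: first draw {:.3}", rng.gen::<f64>());
        }
    }
}
```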
-
- May 27, 2024
-
-
STEVAN Antoine authored
- add `--prng-seed: u8` to fix the random number generator seed (see the sketch after this entry)

## example
by running the following snippet, we get
- `first.123.png` and `second.123.png` with `--prng-seed 123`, which are the same
- `first.111.png` and `second.111.png` with `--prng-seed 111`, which are the same
- `first.111.png` and `first.123.png` are different

```bash
use ./scripts/inbreeding

const OPTS = {
    nb_bytes: (10 * 1_024),
    k: 10,
    n: 20,
    nb_scenarii: 10,
    nb_measurements: 10,
    measurement_schedule: 1,
    measurement_schedule_start: 0,
    max_t: 50,
    strategies: [
        "single:1",
        "double:0.5:1:2", "single:2"
        "double:0.5:2:3", "single:3"
        "single:5"
        "single:10",
    ],
    environment: "random-fixed:0.5:1",
}

inbreeding build

inbreeding run --options $OPTS --prng-seed 123 --output /tmp/first.123.nuon
inbreeding plot /tmp/first.123.nuon --options { k: $OPTS.k } --save /tmp/first.123.png
inbreeding run --options $OPTS --prng-seed 123 --output /tmp/second.123.nuon
inbreeding plot /tmp/second.123.nuon --options { k: $OPTS.k } --save /tmp/second.123.png

inbreeding run --options $OPTS --prng-seed 111 --output /tmp/first.111.nuon
inbreeding plot /tmp/first.111.nuon --options { k: $OPTS.k } --save /tmp/first.111.png
inbreeding run --options $OPTS --prng-seed 111 --output /tmp/second.111.nuon
inbreeding plot /tmp/second.111.nuon --options { k: $OPTS.k } --save /tmp/second.111.png
```

| seed | first | second |
| ---- | ----- | ------ |
| 123  |       |        |
| 111  |       |        |
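a minimal sketch, assuming the `rand` crate's `StdRng` rather than the actual `inbreeding` internals, of why a fixed `--prng-seed` makes two runs identical while different seeds diverge:

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

// stand-in for one experiment run driven by `--prng-seed`
fn run(prng_seed: u8) -> Vec<u32> {
    let mut rng = StdRng::seed_from_u64(prng_seed as u64);
    (0..5).map(|_| rng.gen_range(0..100)).collect()
}

fn main() {
    assert_eq!(run(123), run(123)); // same seed => same "measurements"
    assert_ne!(run(123), run(111)); // different seeds => different runs
    println!("{:?} vs {:?}", run(123), run(111));
}
```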
-
STEVAN Antoine authored
- add a timestamp to all the measurements of the _diversity_ from `inbreeding/mod.rs`
- allow delaying the start of the measurements with `--measurement-schedule-start`, to help complete already existing measurements

> **Important**
> existing measurement files will have to change shape from
> ```
> table<strategy: string, diversity: list<float>>
> ```
> to
> ```
> table<strategy: string, diversity: table<t: int, diversity: float>>
> ```
-
STEVAN Antoine authored
makes sure
- the "inbreeding" experiment quits when there are fewer than $k$ shards
- `fec::decode` returns `KomodoError::TooFewShards` when no shards are provided
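a hedged sketch of the kind of guard this adds; `Shard` is reduced to a stub and the real `KomodoError::TooFewShards` payload may differ from the one assumed here.

```rust
// illustrative stand-ins only, not the real `komodo::fec` types
#[derive(Debug, PartialEq)]
enum KomodoError {
    TooFewShards(usize, usize), // (number of shards received, number needed)
}

struct Shard {
    k: usize, // the threshold carried by each shard
}

fn decode(shards: Vec<Shard>) -> Result<Vec<u8>, KomodoError> {
    let Some(first) = shards.first() else {
        // no shards at all: we cannot even know `k`
        return Err(KomodoError::TooFewShards(0, 0));
    };
    if shards.len() < first.k {
        return Err(KomodoError::TooFewShards(shards.len(), first.k));
    }
    // ... the actual FEC decoding would happen here
    Ok(vec![])
}

fn main() {
    assert_eq!(decode(vec![]), Err(KomodoError::TooFewShards(0, 0)));
    assert!(decode(vec![Shard { k: 2 }]).is_err());
}
```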
-
- May 24, 2024
-
-
STEVAN Antoine authored
just a small QoL improvement
-
STEVAN Antoine authored
this MR is two-fold
- refactor `run.nu` and `plot.nu` from `scripts/inbreeding/` into Nushell modules with `--options` as argument instead of `options.nu` (a7cebb95, 6b72191f and 5f1c4963)
- introduce another level of depth to the measurements (a0e52e95)

> **Note**
> in the table below
> - $s$ is the number of recoding scenarii averaged together
> - $m$ is the number of measurements per point
> - two iterations of the same experiment are shown side by side for comparison

  s  |  m   | . | .
:---:|:----:|:-:|:-:
  1  |  10  |   |
  1  | 100  |   |
  1  | 1000 |   |
 10  | 100  |   |
 100 |  10  |   |
 100 | 100  |   |

we can see that
- the smaller the $s$, the more different the two figures are on each line -> this is likely because, if only one recoding scenario is used, repeating the same experiment will give very different results and measurements. Running the same experiment $s$ times and averaging helps reduce the variance along this axis
- the smaller the $m$, the noisier the measurements at each point -> this is simply because, when $m$ is small, the variance of the empirical means measured for each point is higher

## final results
-
- May 23, 2024
-
-
STEVAN Antoine authored
up until now, elliptic curves have been hardcoded in the benchmarks, forcing them to run on all supported curves... this MR makes it possible to use only a subset of curves.

> **Note**
> when running the same commands from !104, minus the "inbreeding" ones which are not affected by this MR, the time goes from 12min 33sec to 4min 28sec

## TODO
- [x] setup
- [x] commit
- [x] recoding
- [x] fec
- [ ] linalg
- [ ] setup size
- [ ] field operations
- [ ] group operations

> **Note**
> because all the unticked bullet points above are far from critical to the paper and do require measuring all curves, i won't change these for now
-
STEVAN Antoine authored
this MR moves run and plot commands from `examples/benches/README.md` to
- `scripts/setup/`: `run.nu` and `plot.nu`
- `scripts/commit/`: `run.nu` and `plot.nu`
- `scripts/recoding/`: `run.nu` and `plot.nu`
- `scripts/fec/`: `run.nu` and `plot.nu`
- `scripts/inbreeding/`: `build.nu`, `run.nu` and `plot.nu`

to generate all the figures at once
```bash
use scripts/setup/run.nu; seq 0 13 | each { 2 ** $in } | run --output data/setup.ndjson
use ./scripts/setup/plot.nu; plot data/setup.ndjson --save ~/setup.pdf

use scripts/commit/run.nu; seq 0 13 | each { 2 ** $in } | run --output data/commit.ndjson
use ./scripts/commit/plot.nu; plot data/commit.ndjson --save ~/commit.pdf

use scripts/recoding/run.nu; seq 0 18 | each { 512 * 2 ** $in } | run --ks [2, 4, 8, 16] --output data/recoding.ndjson
use ./scripts/recoding/plot.nu; plot data/recoding.ndjson --save ~/recoding.pdf

use scripts/fec/run.nu; seq 0 18 | each { 512 * 2 ** $in } | run --ks [2, 4, 8, 16] --output data/fec.ndjson
use ./scripts/fec/plot.nu; plot encoding data/fec.ndjson --save ~/encoding.pdf
use ./scripts/fec/plot.nu; plot decoding data/fec.ndjson --save ~/decoding.pdf
use ./scripts/fec/plot.nu; plot e2e data/fec.ndjson --save ~/e2e.pdf
use ./scripts/fec/plot.nu; plot combined data/fec.ndjson --recoding data/recoding.ndjson --save ~/comparison.pdf
use ./scripts/fec/plot.nu; plot ratio data/fec.ndjson --recoding data/recoding.ndjson --save ~/ratio.pdf

./scripts/inbreeding/build.nu
./scripts/inbreeding/run.nu --output data/inbreeding.nuon
./scripts/inbreeding/plot.nu data/inbreeding.nuon --save ~/inbreeding.pdf
```

> **Note**
> this took around 27min 18sec in total on my machine, with 14min 45sec for the inbreeding section only and 12min 33sec for the rest
-
STEVAN Antoine authored
this MR:
- refactors the "inbreeding" example into `examples/inbreeding/`
- adds `--strategy` and `--environment`
- `Strategy::draw` will draw the number of shards to keep for recoding
- `Environment::update` will update the pool of shards by losing some of them
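a hedged Rust sketch of the two hooks named above; the variants and parameters (e.g. reading `double:p:a:b` as "keep `a` shards with probability `p`, else `b`") are assumptions based on the strategy strings used elsewhere in this log, not the actual `inbreeding` types.

```rust
use rand::{rngs::StdRng, seq::SliceRandom, Rng, SeedableRng};

enum Strategy {
    /// always keep the same number of shards
    Single(usize),
    /// keep `a` shards with probability `p`, otherwise keep `b`
    Double(f64, usize, usize),
}

impl Strategy {
    /// draw the number of shards to keep for the next recoding
    fn draw(&self, rng: &mut StdRng) -> usize {
        match *self {
            Strategy::Single(n) => n,
            Strategy::Double(p, a, b) => {
                if rng.gen_bool(p) {
                    a
                } else {
                    b
                }
            }
        }
    }
}

struct Environment {
    /// probability for each shard to be lost at every step
    loss_rate: f64,
}

impl Environment {
    /// update the pool of shards by randomly losing some of them
    fn update(&self, pool: &mut Vec<u32>, rng: &mut StdRng) {
        pool.retain(|_| !rng.gen_bool(self.loss_rate));
    }
}

fn main() {
    let mut rng = StdRng::seed_from_u64(0);
    let mut pool: Vec<u32> = (0..20).collect();

    let strategy = Strategy::Double(0.5, 1, 2);
    let environment = Environment { loss_rate: 0.1 };

    let k = strategy.draw(&mut rng);
    let kept: Vec<&u32> = pool.choose_multiple(&mut rng, k).collect();
    println!("recoding from {} shards: {:?}", k, kept);

    environment.update(&mut pool, &mut rng);
    println!("{} shards remain in the pool", pool.len());
}
```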
-
- May 21, 2024
-
-
STEVAN Antoine authored
- update `benches/README.md` to use `cargo run --release --example ...`
- add `build-examples` to `Makefile` to build all examples in release

### minor change
add two `eprintln!` in `inbreeding.rs` to show the experiment parameters
-
STEVAN Antoine authored
- new `scripts/plot.nu` with common tools and options
- better sets of parameters
- better commands in `benches/README.md`
-
- May 13, 2024
-
-
STEVAN Antoine authored
this MR makes the plot a bit nicer.

## new figures
-
- May 02, 2024
-
-
STEVAN Antoine authored
this MR adds `examples/inbreeding.rs`, which allows doing two things
- _naive recoding_: in order to generate a new random shard, we first $k$-decode the whole data and then $1$-encode a single shard
- _true recoding_: to achieve the same goal, we directly $k$-recode shards into a new one

## the scenario
regardless of the _recoding strategy_, the scenario is the same (a sketch of this loop follows this entry)
1. data is split into $k$ shards and $n$ original shards are generated
2. for a given number of steps $s$, $k$ shards are drawn randomly with replacement and we count the number of successful decodings, giving a measure of the _diversity_,
   $$\delta = \frac{\#\text{success}}{\#\text{attempts}}$$
3. create a new _recoded shard_ and add it to the $n$ previous ones, i.e. $n$ increases by one
4. repeat steps 2. and 3. as long as you want

## results
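an illustrative sketch of the measurement loop from "the scenario" above; `Shard`, `try_decode` and `recode` are placeholders, not the real FEC primitives from `examples/inbreeding.rs`.

```rust
use rand::{rngs::StdRng, seq::SliceRandom, SeedableRng};

#[derive(Clone)]
struct Shard; // stand-in for `fec::Shard`

fn try_decode(_sample: &[&Shard], _k: usize) -> bool {
    // the real code would run a k-decoding on the sample and check it succeeds
    true
}

fn recode(_sample: &[&Shard]) -> Shard {
    // the real code would k-recode the drawn shards into a fresh one
    Shard
}

fn main() {
    let (k, n, steps, attempts) = (3, 6, 10, 1_000);
    let mut rng = StdRng::seed_from_u64(0);
    let mut pool = vec![Shard; n];

    for t in 0..steps {
        // step 2: draw k shards with replacement and measure the diversity
        let successes = (0..attempts)
            .filter(|_| {
                let sample: Vec<&Shard> =
                    (0..k).map(|_| pool.choose(&mut rng).unwrap()).collect();
                try_decode(&sample, k)
            })
            .count();
        println!("t = {t}, diversity = {}", successes as f64 / attempts as f64);

        // step 3: recode a new shard and add it to the pool, i.e. n grows by one
        let new_shard = {
            let sample: Vec<&Shard> = pool.choose_multiple(&mut rng, k).collect();
            recode(&sample)
        };
        pool.push(new_shard);
    }
}
```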
-
- Apr 29, 2024
-
-
we want to compare
- _naive recoding_: $k$-decoding followed by $1$-encoding
- _komodo recoding_: $k$-recoding with $k = \#\text{shards}$

## results

> **Note**
> we see that the _naive recoding_ is around 100 times slower than the _komodo recoding_

> **Note**
> the format of the labels is always `{curve} / {k}`
-
STEVAN Antoine authored
i've moved the plotting scripts to [GPLT](https://gitlab.isae-supaero.fr/a.stevan/gplt), which allows installing a single command, called `gplt`, with two subcommands
- `gplt plot`, which is the same as the old `python scripts/plot/plot.py`
- `gplt multi_bar`, which is the same as the old `python scripts/plot/multi_bar.py`
-
STEVAN Antoine authored
otherwise, k doesn't play any role in the "recoding" benchmark
-
- Apr 26, 2024
-
-
STEVAN Antoine authored
this MR adds
- `examples/benches/bench_fec.rs` to the list of example benches
- instructions on how to run the new benchmark and plot the results

## results
-
STEVAN Antoine authored
- fix the path to the "bench" readme and remove it from the plot scripts
- "BLS-12-381" to "BLS12-381" for consistency
-
STEVAN Antoine authored
this MR goes from
```nushell
let xs = seq 0 5 | each { 2 ** $in } | wrap x
let twice = $xs | insert measurement { 2 * $in.x } | insert error { 0.1 + 0.5 * $in.x }
let square = $xs | insert measurement { $in.x ** 2 } | insert error { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { group: "x ^ 2", items: $square },
    { group: "2 * x", items: $twice }
] | to json)
```
to
```nushell
let xs = seq 0 5 | each { 2 ** $in }
let twice = $xs | wrap x | insert y { 2 * $in.x } | insert e { 0.1 + 0.5 * $in.x }
let square = $xs | wrap x | insert y { $in.x ** 2 } | insert e { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { name: "x ^ 2", points: $square },
    { name: "2 * x", points: $twice }
] | to json)
```
it also updates the "bench" README and adds type annotations to the `plot.py` script.
-
STEVAN Antoine authored
this MR
- moves the last "recoding" benchmark to `examples/benches/`
- moves the README, which is now all alone, to `examples/benches/`
- adds a mention to `examples/benches/README.md` in `README.md`
- some minor improvements to the bench README

## TODO
- [x] find a way to plot the "recoding" results (thanks to !90)
-
- Apr 25, 2024
-
-
STEVAN Antoine authored
## changelog
- benchmarks
  - _commit_ has been removed in favor of `examples/benches/commit.rs`
  - _linalg_ has been migrated to `examples/benches/` as `bench_linalg`
  - _setup_ has been migrated to `examples/benches/` as `bench_setup`
- `read-atomic-ops` command has been moved to the `scripts/parse.nu` module
- `scripts/plot/bench_commit.py` has been made more general and renamed to `scripts/plot/plot.py`
- `scripts/plot/benches.py` has been removed because it's not required anymore => `plot.py` and `multi_bar.py` are general enough
-
STEVAN Antoine authored
this MR
- bumps PLNK to 0.6.0
- updates all existing code
- uses the PLNK lib in `examples/benches/commit.rs`
- fixes the y label of the plot in `scripts/plot/bench_commit.py`: it was _ns_, should be _ms_
-
- Apr 24, 2024
-
-
STEVAN Antoine authored
i've basically refactored the whole "bench" framework that was inlined in `examples/benches/operations/field.rs` and `examples/benches/operations/curve_group.rs` into a new repo called [PLNK](https://gitlab.isae-supaero.fr/a.stevan/plnk). nothing effectively changes on the side of Komodo but now the code is much simpler here :)
-
STEVAN Antoine authored
the idea is to not use `criterion` and to measure exactly what we want (sketched below)

## results
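for illustration, a minimal sketch of this kind of hand-rolled measurement, using plain `std::time::Instant`; this is an assumption about the approach, not the actual code that later became PLNK.

```rust
use std::time::{Duration, Instant};

/// time `f` over `iterations` runs and report the mean duration
fn measure<F: FnMut()>(label: &str, iterations: u32, mut f: F) {
    // a short warmup so caches and allocators settle
    for _ in 0..10 {
        f();
    }

    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    let mean: Duration = start.elapsed() / iterations;
    println!("{label}: {mean:?} per iteration");
}

fn main() {
    measure("sum of 10_000 integers", 1_000, || {
        let v: Vec<u64> = (0..10_000).collect();
        std::hint::black_box(v.iter().sum::<u64>());
    });
}
```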
-
- Apr 23, 2024
-
-
STEVAN Antoine authored
as per title
-
- Apr 22, 2024
-
-
STEVAN Antoine authored
this MR
- adds an Arkworks bench oneshot function to the `bench_commit` example
- adapts the `measure!` macro to pass a _pairing-friendly_ curve (see the sketch below)
- gives different linestyles to curves in the Python script

## example measurements
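a hedged sketch of what passing a pairing-friendly curve to a measurement helper can look like; it assumes arkworks 0.4-style APIs (`ark_ec::pairing::Pairing`, `ark_bls12_381`) and is not the actual `measure!` macro.

```rust
use ark_ec::pairing::Pairing;
use ark_std::UniformRand;
use std::time::Instant;

/// measure the mean cost of a pairing on any pairing-friendly curve `E`
fn bench_pairing<E: Pairing>(label: &str, n: u32) {
    let mut rng = ark_std::test_rng();
    let g1 = E::G1::rand(&mut rng);
    let g2 = E::G2::rand(&mut rng);

    let start = Instant::now();
    for _ in 0..n {
        std::hint::black_box(E::pairing(g1, g2));
    }
    println!("{label}: {:?} per pairing", start.elapsed() / n);
}

fn main() {
    bench_pairing::<ark_bls12_381::Bls12_381>("BLS12-381", 100);
}
```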
-
- Apr 12, 2024
-
-
STEVAN Antoine authored
-
STEVAN Antoine authored
## changelog
- rename the `encode` function to `prove` and have it take _shards_ instead of an _encoding matrix_: this is to isolate the "encoding" process inside the `fec` module and let the main `komodo::prove` only compute the "proof", i.e. the commits of the data, going from
  ```rust
  fn encode<F, G, P>(
      bytes: &[u8],
      encoding_mat: &Matrix<F>,
      powers: &Powers<F, G>,
  ) -> Result<Vec<Block<F, G>>, KomodoError>
  ```
  to
  ```rust
  fn prove<F, G, P>(
      bytes: &[u8],
      powers: &Powers<F, G>,
      k: usize,
  ) -> Result<Vec<Commitment<F, G>>, KomodoError>
  ```
- rename `fec::Shard.combine` to `fec::Shard.recode_with` to get rid of "combine"
- rename `fec::recode` to `fec::recode_with_coeffs` to show that this version takes a list of coefficients
- rename `Block.commit` to `Block.proof`: "commit" should be "commits" and it's usually referred to as "proof"
- split `prove` further into `prove` and `build`: `prove` now outputs a `Vec<Commitment<F>>`, `build` simply takes a `V...
-
- Apr 10, 2024
-
-
STEVAN Antoine authored
i ended up adding a bunch of changes to the benchmarks
## changelog
* 805a2454 reduce the number of loops and the warmup time
* f7ce05c3 don't serialize for real to save time
* 37a2a7e2 don't try to compress with validation
* 409f3e3c don't multiply degree by 1_024
* 610024a9 fix setup
* 3d5e7c58 fix setup
* 3d3167fb run benchmarks on BLS12-381, BN-254 and PALLAS
* da2a71a1 pass name of the curve as parameter
* 954fd6d3 plot commit for all curves
* f980b30f plot all curves in linalg
* 5e41df1d rename `labels` to `keys` in commit
* 8bb64f99 filter setup by curves
* 0163c8f9 plot all curves in setup
* 8c91c6d8 split the setup of Komodo and the serde benchmarks
* 0784f294 add a manual benchmark to measure the commit
* 608a3fd1 move the "example benches" to `examples/benches/`
* 10f9a37c add a script to plot results from `bench_commit`
* 6d512fa6 move plot script from `benches/` to `scripts/plot/`
* a4e6ffbc measure VESTA
-
- Apr 09, 2024
-
-
STEVAN Antoine authored
-
STEVAN Antoine authored
... instead of the number of bytes, which does not really make any sense because we only consider one polynomial, i.e. only one column / shard of data.

the following relation between the degree of the polynomials and the size of the data still holds
$$\text{deg}(P) = \frac{\#\text{bytes}}{k \times |f_r|}$$
where $|f_r|$ is the size of an element of the scalar finite prime field $F_r$ of the elliptic curve and $P$ is the polynomial of a single column / shard of data
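a small sketch of this relation in code, assuming arkworks' `PrimeField::MODULUS_BIT_SIZE` and taking $|f_r|$ as the number of whole bytes that fit in one scalar field element; the exact packing used by Komodo may differ.

```rust
use ark_bls12_381::Fr;
use ark_ff::PrimeField;

/// deg(P) = #bytes / (k * |f_r|), with |f_r| the bytes held by one scalar
fn degree(nb_bytes: usize, k: usize) -> usize {
    let f_r = Fr::MODULUS_BIT_SIZE as usize / 8;
    nb_bytes / (k * f_r)
}

fn main() {
    // e.g. 10 KiB of data split into k = 10 shards on BLS12-381 (|f_r| = 31 here)
    println!("deg(P) ~ {}", degree(10 * 1_024, 10));
}
```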
-
- Apr 08, 2024
-
-
STEVAN Antoine authored
should address #8

## changelog
- move the internal `rng` to an argument of type `R: RngCore` for the following functions
  - `recode` in `lib.rs`
  - `linalg::Matrix::random`
  - `generate_random_setup` in `main.rs`
- make sure
  - `ark_std::test_rng` is only used in test modules
  - `rand::thread_rng` is used in benchmarks, examples and `main.rs`
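a minimal sketch of the pattern adopted here, with a hypothetical `random_coeffs` standing in for functions like `recode` or `linalg::Matrix::random`: the RNG is an argument, so test modules can pass `ark_std::test_rng()` while examples, benchmarks and `main.rs` pass `rand::thread_rng()` (this assumes matching `rand` versions between the two crates).

```rust
use rand::RngCore;

/// hypothetical helper: the caller owns and passes the RNG
fn random_coeffs<R: RngCore>(n: usize, rng: &mut R) -> Vec<u64> {
    (0..n).map(|_| rng.next_u64()).collect()
}

fn main() {
    // in examples, benchmarks and `main.rs`
    let mut rng = rand::thread_rng();
    println!("{:?}", random_coeffs(3, &mut rng));
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn works_with_test_rng() {
        // in test modules, a reproducible RNG
        let mut rng = ark_std::test_rng();
        assert_eq!(random_coeffs(3, &mut rng).len(), 3);
    }
}
```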
-
- Apr 05, 2024
-
-
STEVAN Antoine authored
in 3c91ef12 and !54, a new implementation of the creation of the _trusted setup_ has been introduced, which gets rid of the `E: Pairing` requirement with a more general `<F: PrimeField, G: CurveGroup<_>>`. however, the size of the _trusted setup_ was incorrect: `zk::setup` requires the _maximum degree_ of the _trusted setup_, yet the number of bytes `nb_bytes` was consistently being given to it throughout the code base...

this MR
- introduces a new `zk::nb_elements_in_setup` that converts a number of bytes to the associated number of _trusted setup_ elements
- uses that new `zk` function before calling `zk::setup` in all the code base

## results

> **Note**
> !58 is required for the whole table to be used easily

> **Note**
> here is how to run the benchmarks in Nushell
> ```bash
> let bad_mr = "3c91ef12"
> let fix = "fix-setup-size"
>
> git co $"($bad_mr)^"
> cargo criterion --output-format verbose --message-format json out> benches/results/before.ndjson
> cargo run --example bench_setup_size out>> benches/results/before.ndjson
>
> git co $bad_mr
> cargo criterion --output-format verbose --message-format json out> benches/results/after.ndjson
> cargo run --example bench_setup_size out>> benches/results/after.ndjson
>
> git co $fix
> cargo criterion --output-format verbose --message-format json out> benches/results/fix.ndjson
> cargo run --example bench_setup_size out>> benches/results/fix.ndjson
> ```
> and here is the script used to generate that table:
> ```bash
> def "parse bench-file" []: table<reason: string, id: string, mean: any> -> table<id: string, mean: float> {
>     where reason == "benchmark-complete"
>     | select id mean
>     # NOTE: because `bench_setup_size.rs` outputs `record<reason: string, id: string, mean: float>`
>     | update mean { if ($in | describe) == int { $in } else { $in.estimate } }
>     # NOTE: addressed in `!58`
>     | update id {|it|
>         if ($it.id | str starts-with "recoding") {
>             $it.id ++ " on some curve"
>         } else {
>             $it.id
>         }
>     }
>     | update mean { into int }
>     | update id { parse "{id} on {curve}" | into record | get id }
> }
>
> let before = open benches/results/before.ndjson | parse bench-file
> let after = open benches/results/after.ndjson | parse bench-file
> let fix = open benches/results/fix.ndjson | parse bench-file
>
> $before
> | join $after id
> | rename --column { mean: "before", mean_: "after" }
> | join $fix id
> | rename --column { mean: "fix" }
> | insert b->a {|it| $it.after / $it.before | math round --precision 2 }
> | insert a->f {|it| $it.fix / $it.after | math round --precision 2 }
> | insert b->f {|it| $it.fix / $it.before | math round --precision 2 }
> | select id before b->a after a->f fix b->f
> | to md --pretty
> ```

> **Important**
> before this very MR, i.e. on `3c91ef12`, there was a factor of 15x between _before_ and _after_, meaning that the _trusted setups_ were 15 times larger and longer to serde
>
> this can be explained by the following facts
> - due to the bad sizes given to the _trusted setup_ building function, the setups were around 30 times larger, 30 being close to the size of a field element on BLS-12-381
> - because the `zk::setup` function only creates half of what its Arkworks counterpart does, the setups were at the same time around 2 times smaller
>
> combining these two, we get a factor of 15x!!
> > now, with this MR, we get rid of the first factor and are left with _trusted setups_ twice as small and twice as fast to serde | id | before | b->a | after | a->f | fix | b->f | | --------------------------------------------------------------- | ---------- | ------- | ---------- | ------- | ---------- | ------- | | inverse 10x10 | 336359 | 0.93 | 313852 | 1.05 | 329191 | 0.98 | | inverse 15x15 | 811018 | 0.99 | 800064 | 1.01 | 807417 | 1 | | inverse 20x20 | 1511592 | 1 | 1508034 | 1.02 | 1542538 | 1.02 | | inverse 30x30 | 3703750 | 1.01 | 3731380 | 1.02 | 3793071 | 1.02 | | inverse 40x40 | 7163839 | 1 | 7145015 | 1.03 | 7336996 | 1.02 | | inverse 60x60 | 18620089 | 1 | 18625577 | 1.02 | 18922329 | 1.02 | | inverse 80x80 | 37571610 | 1 | 37643906 | 1.02 | 38306236 | 1.02 | | inverse 120x120 | 105404054 | 1 | 105281874 | 1.01 | 106797441 | 1.01 | | inverse 160x160 | 224332257 | 1 | 224092724 | 1.01 | 227066824 | 1.01 | | inverse 240x240 | 671096671 | 1 | 671005055 | 1.01 | 679280010 | 1.01 | | inverse 320x320 | 1487909175 | 1 | 1488534950 | 1.01 | 1506027089 | 1.01 | | transpose 10x10 | 87 | 0.93 | 81 | 1 | 81 | 0.93 | | transpose 15x15 | 175 | 0.96 | 168 | 1 | 168 | 0.96 | | transpose 20x20 | 284 | 1.03 | 293 | 0.95 | 279 | 0.98 | | transpose 30x30 | 759 | 1.22 | 924 | 0.89 | 823 | 1.08 | | transpose 40x40 | 1798 | 1.63 | 2935 | 0.98 | 2887 | 1.61 | | transpose 60x60 | 3830 | 1.67 | 6378 | 1.01 | 6468 | 1.69 | | transpose 80x80 | 7720 | 1.5 | 11548 | 0.99 | 11470 | 1.49 | | transpose 120x120 | 16365 | 1.5 | 24572 | 0.98 | 24059 | 1.47 | | transpose 160x160 | 42764 | 1.18 | 50453 | 1.07 | 54189 | 1.27 | | transpose 240x240 | 119435 | 1.18 | 141357 | 1 | 140752 | 1.18 | | transpose 320x320 | 218674 | 1.13 | 246262 | 1 | 247167 | 1.13 | | mul 10x10 | 15499 | 1 | 15474 | 1 | 15527 | 1 | | mul 15x15 | 51800 | 1 | 51913 | 1 | 51772 | 1 | | mul 20x20 | 122399 | 1 | 122390 | 1.01 | 123248 | 1.01 | | mul 30x30 | 499047 | 0.95 | 474740 | 1.01 | 481756 | 0.97 | | mul 40x40 | 1224755 | 0.98 | 1203588 | 1.01 | 1211995 | 0.99 | | mul 60x60 | 4166589 | 0.99 | 4122003 | 1 | 4139839 | 0.99 | | mul 80x80 | 9942560 | 0.99 | 9870864 | 1 | 9912815 | 1 | | mul 120x120 | 33706366 | 0.99 | 33458234 | 1.01 | 33680802 | 1 | | mul 160x160 | 79645646 | 1 | 79974020 | 1.01 | 80469214 | 1.01 | | mul 240x240 | 277091998 | 0.99 | 274638961 | 1.01 | 276412347 | 1 | | mul 320x320 | 664942845 | 1 | 662229758 | 1.02 | 676065811 | 1.02 | | recoding 1 bytes and 2 shards with k = 2 | 124 | 1 | 124 | 1.02 | 127 | 1.02 | | recoding 1 bytes and 2 shards with k = 4 | 179 | 0.99 | 178 | 1.01 | 180 | 1.01 | | recoding 1 bytes and 2 shards with k = 8 | 284 | 1 | 284 | 1 | 285 | 1 | | recoding 1 bytes and 2 shards with k = 16 | 496 | 1.01 | 499 | 1.01 | 505 | 1.02 | | recoding 1 bytes and 4 shards with k = 2 | 347 | 1.01 | 349 | 0.99 | 347 | 1 | | recoding 1 bytes and 4 shards with k = 4 | 505 | 1 | 505 | 1 | 507 | 1 | | recoding 1 bytes and 4 shards with k = 8 | 821 | 1 | 825 | 1 | 825 | 1 | | recoding 1 bytes and 4 shards with k = 16 | 1451 | 1 | 1454 | 1.01 | 1464 | 1.01 | | recoding 1 bytes and 8 shards with k = 2 | 792 | 1 | 791 | 1 | 792 | 1 | | recoding 1 bytes and 8 shards with k = 4 | 1162 | 1 | 1163 | 1.01 | 1169 | 1.01 | | recoding 1 bytes and 8 shards with k = 8 | 1884 | 1.01 | 1897 | 1 | 1902 | 1.01 | | recoding 1 bytes and 8 shards with k = 16 | 3361 | 1 | 3368 | 1.02 | 3446 | 1.03 | | recoding 1 bytes and 16 shards with k = 2 | 1680 | 1 | 1679 | 1.01 | 1699 | 1.01 | | recoding 1 bytes and 16 shards with k = 4 | 2472 | 
1 | 2475 | 1 | 2468 | 1 | | recoding 1 bytes and 16 shards with k = 8 | 4034 | 1 | 4033 | 1.01 | 4060 | 1.01 | | recoding 1 bytes and 16 shards with k = 16 | 7187 | 1 | 7173 | 1.02 | 7331 | 1.02 | | recoding 1024 bytes and 2 shards with k = 2 | 1020 | 1 | 1020 | 1 | 1017 | 1 | | recoding 1024 bytes and 2 shards with k = 4 | 1079 | 1 | 1081 | 0.98 | 1064 | 0.99 | | recoding 1024 bytes and 2 shards with k = 8 | 1186 | 0.98 | 1167 | 1 | 1166 | 0.98 | | recoding 1024 bytes and 2 shards with k = 16 | 1386 | 1 | 1392 | 0.99 | 1383 | 1 | | recoding 1024 bytes and 4 shards with k = 2 | 2978 | 1 | 2968 | 1 | 2970 | 1 | | recoding 1024 bytes and 4 shards with k = 4 | 3120 | 1 | 3113 | 1 | 3113 | 1 | | recoding 1024 bytes and 4 shards with k = 8 | 3438 | 1 | 3445 | 1 | 3447 | 1 | | recoding 1024 bytes and 4 shards with k = 16 | 4056 | 1 | 4071 | 1 | 4051 | 1 | | recoding 1024 bytes and 8 shards with k = 2 | 6905 | 1 | 6879 | 1 | 6861 | 0.99 | | recoding 1024 bytes and 8 shards with k = 4 | 7236 | 1 | 7216 | 1 | 7227 | 1 | | recoding 1024 bytes and 8 shards with k = 8 | 7969 | 1 | 7986 | 1 | 7962 | 1 | | recoding 1024 bytes and 8 shards with k = 16 | 9455 | 1 | 9427 | 1 | 9442 | 1 | | recoding 1024 bytes and 16 shards with k = 2 | 14746 | 1 | 14760 | 0.99 | 14686 | 1 | | recoding 1024 bytes and 16 shards with k = 4 | 15516 | 1 | 15493 | 1 | 15538 | 1 | | recoding 1024 bytes and 16 shards with k = 8 | 17112 | 1 | 17097 | 1 | 17078 | 1 | | recoding 1024 bytes and 16 shards with k = 16 | 20237 | 1 | 20284 | 1 | 20295 | 1 | | recoding 1048576 bytes and 2 shards with k = 2 | 1427516 | 1.01 | 1441658 | 0.99 | 1424866 | 1 | | recoding 1048576 bytes and 2 shards with k = 4 | 1083761 | 1.01 | 1094451 | 1 | 1089954 | 1.01 | | recoding 1048576 bytes and 2 shards with k = 8 | 1087564 | 0.99 | 1076515 | 1.02 | 1094795 | 1.01 | | recoding 1048576 bytes and 2 shards with k = 16 | 1089556 | 0.99 | 1078406 | 1.03 | 1105840 | 1.01 | | recoding 1048576 bytes and 4 shards with k = 2 | 3256507 | 1 | 3250060 | 1.04 | 3370007 | 1.03 | | recoding 1048576 bytes and 4 shards with k = 4 | 3259079 | 1.01 | 3285892 | 1 | 3297768 | 1.01 | | recoding 1048576 bytes and 4 shards with k = 8 | 3235697 | 1 | 3244151 | 1.01 | 3278027 | 1.01 | | recoding 1048576 bytes and 4 shards with k = 16 | 3240586 | 1.01 | 3264910 | 1.01 | 3284101 | 1.01 | | recoding 1048576 bytes and 8 shards with k = 2 | 7580388 | 1 | 7576306 | 1.02 | 7732461 | 1.02 | | recoding 1048576 bytes and 8 shards with k = 4 | 7567385 | 1.01 | 7614250 | 1.01 | 7699032 | 1.02 | | recoding 1048576 bytes and 8 shards with k = 8 | 7589588 | 1 | 7584071 | 1.01 | 7643021 | 1.01 | | recoding 1048576 bytes and 8 shards with k = 16 | 7572517 | 1 | 7596138 | 1.01 | 7637596 | 1.01 | | recoding 1048576 bytes and 16 shards with k = 2 | 16248634 | 1 | 16245477 | 1.01 | 16450530 | 1.01 | | recoding 1048576 bytes and 16 shards with k = 4 | 16253850 | 1 | 16299266 | 1.01 | 16458170 | 1.01 | | recoding 1048576 bytes and 16 shards with k = 8 | 16240827 | 1 | 16265027 | 1 | 16256734 | 1 | | recoding 1048576 bytes and 16 shards with k = 16 | 16229981 | 1 | 16307729 | 1 | 16265882 | 1 | | setup/setup 1024 | 8934763 | 2.12 | 18942383 | 0.11 | 2175852 | 0.24 | | setup/serializing with compression 1024 | 4194 | 15.82 | 66364 | 0.03 | 2100 | 0.5 | | setup/serializing with no compression 1024 | 4953 | 16.04 | 79451 | 0.03 | 2501 | 0.5 | | setup/deserializing with compression and validation 1024 | 3644409 | 15.18 | 55337980 | 0.03 | 1809773 | 0.5 | | setup/deserializing with compression and no 
validation 1024 | 1065186 | 15.74 | 16762363 | 0.03 | 544255 | 0.51 | | setup/deserializing with no compression and validation 1024 | 2566945 | 15.17 | 38931135 | 0.03 | 1258935 | 0.49 | | setup/deserializing with no compression and no validation 1024 | 6722 | 14.84 | 99769 | 0.03 | 3235 | 0.48 | | setup/setup 2048 | 9092980 | 3.63 | 33024605 | 0.09 | 2909175 | 0.32 | | setup/serializing with compression 2048 | 8240 | 16.32 | 134437 | 0.03 | 4141 | 0.5 | | setup/serializing with no compression 2048 | 9767 | 16.41 | 160306 | 0.03 | 4976 | 0.51 | | setup/deserializing with compression and validation 2048 | 7239787 | 15.32 | 110931280 | 0.03 | 3639477 | 0.5 | | setup/deserializing with compression and no validation 2048 | 2113330 | 15.93 | 33674890 | 0.03 | 1084243 | 0.51 | | setup/deserializing with no compression and validation 2048 | 5081373 | 15.25 | 77482178 | 0.03 | 2537317 | 0.5 | | setup/deserializing with no compression and no validation 2048 | 13079 | 15.14 | 198034 | 0.03 | 6479 | 0.5 | | setup/setup 4096 | 9731992 | 6.14 | 59757543 | 0.07 | 4328023 | 0.44 | | setup/serializing with compression 4096 | 16462 | 16.44 | 270647 | 0.03 | 8407 | 0.51 | | setup/serializing with no compression 4096 | 19654 | 16.4 | 322264 | 0.03 | 9854 | 0.5 | | setup/deserializing with compression and validation 4096 | 14330104 | 15.47 | 221659652 | 0.03 | 7227388 | 0.5 | | setup/deserializing with compression and no validation 4096 | 4214098 | 15.79 | 66537465 | 0.03 | 2137818 | 0.51 | | setup/deserializing with no compression and validation 4096 | 10095359 | 15.33 | 154755178 | 0.03 | 5037809 | 0.5 | | setup/deserializing with no compression and no validation 4096 | 26192 | 14.94 | 391397 | 0.03 | 12862 | 0.49 | | setup/setup 8192 | 9594720 | 11.35 | 108884342 | 0.06 | 6893620 | 0.72 | | setup/serializing with compression 8192 | 33114 | 16.42 | 543855 | 0.03 | 16713 | 0.5 | | setup/serializing with no compression 8192 | 39992 | 16.17 | 646576 | 0.03 | 19983 | 0.5 | | setup/deserializing with compression and validation 8192 | 28578044 | 15.55 | 444525236 | 0.03 | 14337421 | 0.5 | | setup/deserializing with compression and no validation 8192 | 8417684 | 15.93 | 134082205 | 0.03 | 4309633 | 0.51 | | setup/deserializing with no compression and validation 8192 | 20134851 | 15.39 | 309785238 | 0.03 | 10066797 | 0.5 | | setup/deserializing with no compression and no validation 8192 | 51832 | 15.06 | 780369 | 0.03 | 25710 | 0.5 | | setup/setup 16384 | 10096523 | 19.72 | 199105054 | 0.06 | 11317161 | 1.12 | | setup/serializing with compression 16384 | 67050 | 16.28 | 1091282 | 0.03 | 33502 | 0.5 | | setup/serializing with no compression 16384 | 80269 | 16.2 | 1300111 | 0.03 | 40785 | 0.51 | | setup/deserializing with compression and validation 16384 | 56905556 | 15.56 | 885542593 | 0.03 | 28622218 | 0.5 | | setup/deserializing with compression and no validation 16384 | 16829951 | 15.96 | 268660355 | 0.03 | 8607645 | 0.51 | | setup/deserializing with no compression and validation 16384 | 40158772 | 15.44 | 619890738 | 0.03 | 20006634 | 0.5 | | setup/deserializing with no compression and no validation 16384 | 103242 | 15.07 | 1555913 | 0.03 | 51533 | 0.5 | | serialized size with compression and validation 1024 | 3280 | 15 | 49208 | 0.03 | 1640 | 0.5 | | serialized size with compression and no validation 1024 | 3280 | 15 | 49208 | 0.03 | 1640 | 0.5 | | serialized size with no compression and validation 1024 | 6544 | 15.04 | 98408 | 0.03 | 3272 | 0.5 | | serialized size with no compression and no validation 1024 | 
6544 | 15.04 | 98408 | 0.03 | 3272 | 0.5 | | serialized size with compression and validation 2048 | 6448 | 15.25 | 98360 | 0.03 | 3224 | 0.5 | | serialized size with compression and no validation 2048 | 6448 | 15.25 | 98360 | 0.03 | 3224 | 0.5 | | serialized size with no compression and validation 2048 | 12880 | 15.27 | 196712 | 0.03 | 6440 | 0.5 | | serialized size with no compression and no validation 2048 | 12880 | 15.27 | 196712 | 0.03 | 6440 | 0.5 | | serialized size with compression and validation 4096 | 12784 | 15.38 | 196664 | 0.03 | 6392 | 0.5 | | serialized size with compression and no validation 4096 | 12784 | 15.38 | 196664 | 0.03 | 6392 | 0.5 | | serialized size with no compression and validation 4096 | 25552 | 15.39 | 393320 | 0.03 | 12776 | 0.5 | | serialized size with no compression and no validation 4096 | 25552 | 15.39 | 393320 | 0.03 | 12776 | 0.5 | | serialized size with compression and validation 8192 | 25456 | 15.45 | 393272 | 0.03 | 12728 | 0.5 | | serialized size with compression and no validation 8192 | 25456 | 15.45 | 393272 | 0.03 | 12728 | 0.5 | | serialized size with no compression and validation 8192 | 50896 | 15.45 | 786536 | 0.03 | 25448 | 0.5 | | serialized size with no compression and no validation 8192 | 50896 | 15.45 | 786536 | 0.03 | 25448 | 0.5 | | serialized size with compression and validation 16384 | 50800 | 15.48 | 786488 | 0.03 | 25400 | 0.5 | | serialized size with compression and no validation 16384 | 50800 | 15.48 | 786488 | 0.03 | 25400 | 0.5 | | serialized size with no compression and validation 16384 | 101584 | 15.48 | 1572968 | 0.03 | 50792 | 0.5 | | serialized size with no compression and no validation 16384 | 101584 | 15.48 | 1572968 | 0.03 | 50792 | 0.5 | -
STEVAN Antoine authored
this is a minor proposal: get rid of the `UniPoly12_381` and `UniPoly381` type aliases, which are just `DensePolynomial<Fr>`. now, it's enough to just change the import of `Fr` to another crate / another curve, without having an inconsistent mention of BLS-12-381 in the name of the _dense polynomial_.
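a small sketch of what this looks like in practice; the module paths assume arkworks 0.4 and are not taken from the Komodo source.

```rust
// before: `type UniPoly381 = DensePolynomial<Fr>;` tied BLS12-381 to the alias name
// after: use `DensePolynomial<Fr>` directly and only swap the `Fr` import to change curve
use ark_bls12_381::Fr; // e.g. switch to another curve's `Fr` here
use ark_poly::{univariate::DensePolynomial, DenseUVPolynomial, Polynomial};

fn constant_poly(c: Fr) -> DensePolynomial<Fr> {
    DensePolynomial::from_coefficients_vec(vec![c])
}

fn main() {
    let p = constant_poly(Fr::from(42u64));
    println!("degree = {}", p.degree());
}
```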
-