- May 27, 2024
-
-
STEVAN Antoine authored
- define `scripts/color.nu` to manipulate RGB colors, especially to mix two colors together
- compute the color of _hybrid recoding strategies_ as a weighted sum of the two _simple recoding strategies_ involved, e.g. if the strategy is "10% of the time recode 2 shards and 90% of the time recode 3", then the color of that curve will be 10% the color of the simple strategy recoding 2 shards and 90% the color of the other simple strategy recoding 3 shards
- make the _hybrid_ curves transparent and dashed

## example
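as an illustration, the weighted mix described above could look like the following Rust sketch (the real implementation lives in `scripts/color.nu` and is written in Nushell; the `mix` helper below is hypothetical):

```rust
/// Mix two RGB colors: `w` is the weight given to `a`, and `b` gets the
/// complement `1 - w`. A hybrid strategy "10% recode 2, 90% recode 3" would
/// call this with w = 0.1 on the two simple strategies' colors.
fn mix(a: (u8, u8, u8), b: (u8, u8, u8), w: f64) -> (u8, u8, u8) {
    let blend = |x: u8, y: u8| (w * x as f64 + (1.0 - w) * y as f64).round() as u8;
    (blend(a.0, b.0), blend(a.1, b.1), blend(a.2, b.2))
}
```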
-
STEVAN Antoine authored
- add a timestamp to all the measurements of the _diversity_ from `inbreeding/mod.rs`
- allow delaying the start of the measurements with `--measurement-schedule-start`, to help complete already existing measurements

> **Important**
> existing measurement files will have to change shape from
> ```
> table<strategy: string, diversity: list<float>>
> ```
> to
> ```
> table<strategy: string, diversity: table<t: int, diversity: float>>
> ```
STEVAN Antoine authored
makes sure that
- the "inbreeding" experiment quits when there are fewer than $k$ shards
- `fec::decode` returns `KomodoError::TooFewShards` when no shards are provided
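a simplified sketch of the new guard in `fec::decode` (the error enum is reduced to the one relevant variant and the actual decoding is elided):

```rust
/// Decoding k-of-n erasure-coded data needs at least k shards, so `decode`
/// should fail fast otherwise; this covers the empty-input case as well.
#[derive(Debug, PartialEq)]
enum KomodoError {
    TooFewShards(usize, usize),
}

fn decode(shards: &[Vec<u8>], k: usize) -> Result<(), KomodoError> {
    if shards.len() < k {
        return Err(KomodoError::TooFewShards(shards.len(), k));
    }
    Ok(()) // actual interpolation / decoding would happen here
}
```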
-
- May 24, 2024
-
-
STEVAN Antoine authored
just a small QoL improvement
-
STEVAN Antoine authored
this MR is two-fold
- refactor `run.nu` and `plot.nu` from `scripts/inbreeding/` into Nushell modules with `--options` as argument instead of `options.nu` (a7cebb95, 6b72191f and 5f1c4963)
- introduce another level of depth to the measurements (a0e52e95)

> **Note**
> in the table below
> - $s$ is the number of recoding scenarii averaged together
> - $m$ is the number of measurements per point
> - two iterations of the same experiment are shown side by side for comparison

  s  |  m   | . | .
:---:|:----:|:-:|:-:
  1  |  10  |   |
  1  | 100  |   |
  1  | 1000 |   |
 10  | 100  |   |
 100 |  10  |   |
 100 | 100  |   |

we can see that
- the smaller the $s$, the more different the two figures are on each line -> this is likely due to the fact that, if only one recoding scenario is used, then repeating the same experiment will result in very different results and measurements. Running the same experiment $s$ times and averaging helps reduce the variance along this axis
- the smaller the $m$, the noisier the measurements at each point -> this is simply because, when $m$ is small, the variance of the empirical means measured for each point is higher

## final results
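the point-wise averaging over $s$ scenarii can be sketched as follows (the `average_scenarii` helper is hypothetical; the real logic lives in the Nushell scripts):

```rust
/// Point-wise average of `s` recoding scenarii: each inner vector is one
/// scenario's diversity curve, and the result is the mean curve. Averaging
/// over more scenarii is what smooths out the run-to-run variance noted above.
fn average_scenarii(runs: &[Vec<f64>]) -> Vec<f64> {
    let s = runs.len() as f64;
    let len = runs[0].len();
    (0..len)
        .map(|i| runs.iter().map(|r| r[i]).sum::<f64>() / s)
        .collect()
}
```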
STEVAN Antoine authored
-
STEVAN Antoine authored
this will use PLNK version 0.7.0 with prettier progress bars.
-
- May 23, 2024
-
-
STEVAN Antoine authored
check for empty inputs in the `run.nu` scripts
-
STEVAN Antoine authored
up until now, elliptic curves have been hardcoded in the benchmarks, forcing them to be run on all supported curves... this MR makes it possible to use only a subset of curves.

> **Note**
> when running the same commands from !104, minus the "inbreeding" ones which are not affected by this MR, the time goes from 12min 33sec to 4min 28sec

## TODO
- [x] setup
- [x] commit
- [x] recoding
- [x] fec
- [ ] linalg
- [ ] setup size
- [ ] field operations
- [ ] group operations

> **Note**
> because all the unticked bullet points above are far from critical to the paper and do require measuring all curves, i won't change these for now
-
STEVAN Antoine authored
this MR moves run and plot commands from `examples/benches/README.md` to
- `scripts/setup/`: `run.nu` and `plot.nu`
- `scripts/commit/`: `run.nu` and `plot.nu`
- `scripts/recoding/`: `run.nu` and `plot.nu`
- `scripts/fec/`: `run.nu` and `plot.nu`
- `scripts/inbreeding/`: `build.nu`, `run.nu` and `plot.nu`

to generate all the figures at once
```bash
use scripts/setup/run.nu; seq 0 13 | each { 2 ** $in } | run --output data/setup.ndjson
use ./scripts/setup/plot.nu; plot data/setup.ndjson --save ~/setup.pdf

use scripts/commit/run.nu; seq 0 13 | each { 2 ** $in } | run --output data/commit.ndjson
use ./scripts/commit/plot.nu; plot data/commit.ndjson --save ~/commit.pdf

use scripts/recoding/run.nu; seq 0 18 | each { 512 * 2 ** $in } | run --ks [2, 4, 8, 16] --output data/recoding.ndjson
use ./scripts/recoding/plot.nu; plot data/recoding.ndjson --save ~/recoding.pdf

use scripts/fec/run.nu; seq 0 18 | each { 512 * 2 ** $in } | run --ks [2, 4, 8, 16] --output data/fec.ndjson
use ./scripts/fec/plot.nu; plot encoding data/fec.ndjson --save ~/encoding.pdf
use ./scripts/fec/plot.nu; plot decoding data/fec.ndjson --save ~/decoding.pdf
use ./scripts/fec/plot.nu; plot e2e data/fec.ndjson --save ~/e2e.pdf
use ./scripts/fec/plot.nu; plot combined data/fec.ndjson --recoding data/recoding.ndjson --save ~/comparison.pdf
use ./scripts/fec/plot.nu; plot ratio data/fec.ndjson --recoding data/recoding.ndjson --save ~/ratio.pdf

./scripts/inbreeding/build.nu
./scripts/inbreeding/run.nu --output data/inbreeding.nuon
./scripts/inbreeding/plot.nu data/inbreeding.nuon --save ~/inbreeding.pdf
```

> **Note**
> this took around 27min 18sec in total on my machine, with 14min 45sec for the inbreeding section only and 12min 33sec for the rest
-
STEVAN Antoine authored
this MR:
- refactors the "inbreeding" example into `examples/inbreeding/`
- adds `--strategy` and `--environment`
- `Strategy::draw` will draw the number of shards to keep for recoding
- `Environment::update` will update the pool of shards by losing some of them
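a rough sketch of the two hooks above (the variants and fields are illustrative, not the real API):

```rust
/// How many shards to draw and recode together at each step.
enum Strategy {
    /// always recode with a fixed number of shards
    Single(usize),
}

/// How the pool of shards evolves between steps.
enum Environment {
    /// lose a fixed number of shards at every step
    Fixed(usize),
}

impl Strategy {
    /// draw the number of shards to keep for recoding
    fn draw(&self) -> usize {
        match self {
            Strategy::Single(n) => *n,
        }
    }
}

impl Environment {
    /// update the pool of shards by losing some of them
    fn update(&self, pool: &mut Vec<u32>) {
        match self {
            Environment::Fixed(losses) => {
                for _ in 0..*losses {
                    pool.pop();
                }
            }
        }
    }
}
```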
-
- May 21, 2024
-
-
STEVAN Antoine authored
- update `benches/README.md` to use `cargo run --release --example ...`
- add `build-examples` to `Makefile` to build all examples in release

### minor change
add two `eprintln!` in `inbreeding.rs` to show the experiment parameters
-
STEVAN Antoine authored
- new `scripts/plot.nu` with common tools and options
- better sets of parameters
- better commands in `benches/README.md`
-
- May 13, 2024
-
-
STEVAN Antoine authored
this MR makes the plots a bit nicer.

## new figures
-
- May 02, 2024
-
-
STEVAN Antoine authored
this MR adds `examples/inbreeding.rs` which makes it possible to do two things
- _naive recoding_: in order to generate a new random shard, we first $k$-decode the whole data and then $1$-encode a single shard
- _true recoding_: to achieve the same goal, we directly $k$-recode shards into a new one

## the scenario
regardless of the _recoding strategy_, the scenario is the same
1. data is split into $k$ shards and $n$ original shards are generated
2. for a given number of steps $s$, $k$ shards are drawn randomly with replacement and we count the number of successful decodings, giving a measure of the _diversity_, $$\delta = \frac{\#\text{success}}{\#\text{attempts}}$$
3. create a new _recoded shard_ and add it to the $n$ previous ones, i.e. $n$ increases by one
4. repeat steps 2. and 3. as long as you want

## results
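step 2 of the scenario can be sketched as follows (the draw and the decodability test are abstracted behind closures here; this is not the real `inbreeding` code):

```rust
/// Estimate the diversity δ = #success / #attempts of a pool of shards:
/// draw `k` shards at random `attempts` times and count how often the draw
/// decodes successfully.
fn diversity(
    attempts: usize,
    mut draw_k_shards: impl FnMut() -> Vec<u32>,
    decodes: impl Fn(&[u32]) -> bool,
) -> f64 {
    let successes = (0..attempts)
        .filter(|_| decodes(&draw_k_shards()))
        .count();
    successes as f64 / attempts as f64
}
```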
-
- Apr 29, 2024
-
-
we want to compare
- _naive recoding_: $k$-decoding followed by $1$-encoding
- _komodo recoding_: $k$-recoding with $k = \#\text{shards}$

# results

> **Note**
> we see that the _naive recoding_ is around 100 times slower compared to the _komodo recoding_

> **Note**
> the format of the labels is always `{curve} / {k}`
-
STEVAN Antoine authored
i've moved the plotting scripts to [GPLT](https://gitlab.isae-supaero.fr/a.stevan/gplt) which makes it possible to install a single command, called `gplt`, with two subcommands
- `gplt plot`, which is the same as the old `python scripts/plot/plot.py`
- `gplt multi_bar`, which is the same as the old `python scripts/plot/multi_bar.py`
-
STEVAN Antoine authored
otherwise, k doesn't play any role in the "recoding" benchmark
-
- Apr 26, 2024
-
-
STEVAN Antoine authored
this MR adds
- `examples/benches/bench_fec.rs` to the list of example benches
- instructions on how to run the new benchmark and plot the results

## results
-
STEVAN Antoine authored
- fix the path to the "bench" readme and remove it from the plot scripts
- change "BLS-12-381" to "BLS12-381" for consistency
-
STEVAN Antoine authored
this MR goes from
```nushell
let xs = seq 0 5 | each { 2 ** $in } | wrap x
let twice = $xs | insert measurement { 2 * $in.x } | insert error { 0.1 + 0.5 * $in.x }
let square = $xs | insert measurement { $in.x ** 2 } | insert error { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { group: "x ^ 2", items: $square },
    { group: "2 * x", items: $twice }
] | to json)
```
to
```nushell
let xs = seq 0 5 | each { 2 ** $in }
let twice = $xs | wrap x | insert y { 2 * $in.x } | insert e { 0.1 + 0.5 * $in.x }
let square = $xs | wrap x | insert y { $in.x ** 2 } | insert e { 1 + 1.5 * $in.x }

python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { name: "x ^ 2", points: $square },
    { name: "2 * x", points: $twice }
] | to json)
```
updates the "bench" README and adds type annotations to the `plot.py` script.
-
STEVAN Antoine authored
this MR
- moves the last "recoding" benchmark to `examples/benches/`
- moves the README, which is now all alone, to `examples/benches/`
- adds a mention of `examples/benches/README.md` to `README.md`
- some minor improvements to the bench README

## TODO
- [x] find a way to plot the "recoding" results (thanks to !90)
-
STEVAN Antoine authored
> **Note**
> - in the following examples, any part of the `$.style` specification is optional and can either be omitted or set to `null`
> - the default values for `$.style` are given in `plot.py --help`

```nushell
let xs = seq 0 5 | each { 2 ** $in } | wrap x
let twice = $xs | insert measurement { 2 * $in.x } | insert error { 0.1 + 0.5 * $in.x }
let square = $xs | insert measurement { $in.x ** 2 } | insert error { 1 + 1.5 * $in.x }
```
and try
```nushell
python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    { group: "x ^ 2", items: $square },
    { group: "2 * x", items: $twice }
] | to json)
```
vs
```nushell
python scripts/plot/plot.py --title title --x-label x --y-label y --fullscreen ([
    {
        group: "x ^ 2",
        items: $square,
        style: {
            color: "red",
            alpha: 0.5,
            line: { marker: 's', width: 2, type: "dashed" },
        }
    },
    {
        group: "2 * x",
        items: $twice,
        style: {
            color: "purple",
            alpha: 0.1,
            line: { marker: 'o', width: 5, type: "dotted" },
        }
    }
] | to json)
```
-
- Apr 25, 2024
-
-
STEVAN Antoine authored
## changelog
- use `out> *.ndjson` in the README to simplify running the benchmarks
- create a `scripts/math.nu` module with `ns-to-ms` and `compute-stats` to refactor some of the most common operations
- add `--fullscreen` to `plot.py` and `multi_bar.py`
- add `--x-scale` and `--y-scale` to `plot.py`
-
STEVAN Antoine authored
## changelog
- benchmarks
  - _commit_ has been removed in favor of `examples/benches/commit.rs`
  - _linalg_ has been migrated to `examples/benches/` as `bench_linalg`
  - _setup_ has been migrated to `examples/benches/` as `bench_setup`
- the `read-atomic-ops` command has been moved to the `scripts/parse.nu` module
- `scripts/plot/bench_commit.py` has been made more general and renamed to `scripts/plot/plot.py`
- `scripts/plot/benches.py` has been removed because it's not required anymore => `plot.py` and `multi_bar.py` are general enough
-
STEVAN Antoine authored
this MR
- bumps PLNK to 0.6.0
- updates all existing code
- uses the PLNK lib in `examples/benches/commit.rs`
- fixes the y label of the plot in `scripts/plot/bench_commit.py`: was _ns_, should be _ms_
-
- Apr 24, 2024
-
-
STEVAN Antoine authored
i've basically refactored the whole "bench" framework that was inlined in `examples/benches/operations/field.rs` and `examples/benches/operations/curve_group.rs` into a new repo called [PLNK](https://gitlab.isae-supaero.fr/a.stevan/plnk). nothing effectively changes on the side of Komodo but now the code is much simpler here :)
-
STEVAN Antoine authored
the idea is to not use `criterion` and to measure exactly what we want

## results
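the gist of measuring without `criterion`, as a hedged sketch (the `measure` helper below is illustrative; the real PLNK API is richer):

```rust
use std::time::{Duration, Instant};

/// Time `runs` executions of `f` and return the mean duration: no harness,
/// no statistics machinery, just exactly what we want to measure.
fn measure(runs: u32, mut f: impl FnMut()) -> Duration {
    let start = Instant::now();
    for _ in 0..runs {
        f();
    }
    start.elapsed() / runs
}
```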
-
STEVAN Antoine authored
we see that the Arkworks / Komodo versions of the same curves are basically the same. 
-
- Apr 23, 2024
-
-
STEVAN Antoine authored
as per title
-
- Apr 22, 2024
-
-
STEVAN Antoine authored
this MR improves the "_atomic_" script in `benches/README.md` to allow filtering the _species_ to show in the _multibar_ plot.

in addition to this, the warmup time and the number of samples of Criterion have been increased back to 3sec and 100 respectively.

> **Note**
> the benchmarks take 15min on my machine, i.e. by running the following two commands in Nushell
> ```bash
> cargo criterion --output-format verbose --message-format json --bench field_operations out> field.ndjson
> cargo criterion --output-format verbose --message-format json --bench curve_group_operations out> curve.ndjson
> ```

## results
-
STEVAN Antoine authored
this MR adds two new benchmarks
- `field_operations` in `benches/operations/field.rs`
- `curve_group_operations` in `benches/operations/curve_group.rs`

as well as `scripts/plot/multi_bar.py` to plot the results; see `benches/README.md` for the commands to run.

## results
-
STEVAN Antoine authored
this MR
- adds an Arkworks bench oneshot function to the `bench_commit` example
- adapts the `measure!` macro to pass a _pairing-friendly_ curve
- gives different linestyles to curves in the Python script

## example measurements
-
- Apr 15, 2024
-
-
STEVAN Antoine authored
add "unchecked" versions of `Matrix::{vandermonde,from_vec_vec}` and test both matrices (dragoon/komodo!75)

## changelog
- replace `Matrix::vandermonde` with `Matrix::vandermonde_unchecked`
- add a new `Matrix::vandermonde` which calls `Matrix::vandermonde_unchecked` after checking that the seed points are distinct, otherwise it returns a `KomodoError::InvalidVandermonde` error
- same with `Matrix::from_vec_vec` and `Matrix::from_vec_vec_unchecked`
- add documentation tests for the two "checked" functions
- run the main lib tests on both a random and a Vandermonde matrix, just to be sure we do not take advantage of the Vandermonde structure
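the checked/unchecked split can be illustrated on a toy integer Vandermonde matrix (the real code works over a finite field and returns `KomodoError::InvalidVandermonde`; plain `i64` and a `String` error are used here for brevity):

```rust
/// Build the Vandermonde matrix with `height` rows: row r holds p^r for
/// each seed point p. No validation is performed here.
fn vandermonde_unchecked(points: &[i64], height: usize) -> Vec<Vec<i64>> {
    (0..height)
        .map(|row| points.iter().map(|p| p.pow(row as u32)).collect())
        .collect()
}

/// Checked constructor: verify the seed points are pairwise distinct before
/// delegating to the unchecked one, mirroring the new `Matrix::vandermonde`.
fn vandermonde(points: &[i64], height: usize) -> Result<Vec<Vec<i64>>, String> {
    for (i, p) in points.iter().enumerate() {
        if points[..i].contains(p) {
            return Err(format!("duplicated seed point {p}"));
        }
    }
    Ok(vandermonde_unchecked(points, height))
}
```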
-
- Apr 12, 2024
-
-
STEVAN Antoine authored
-
STEVAN Antoine authored
## changelog
- rename the `encode` function to `prove` and have it take _shards_ instead of an _encoding matrix_: this is to isolate the "encoding" process inside the `fec` module and leave the main `komodo::prove` to only compute the "proof", i.e. the commits of the data, from
  ```rust
  fn encode<F, G, P>(
      bytes: &[u8],
      encoding_mat: &Matrix<F>,
      powers: &Powers<F, G>,
  ) -> Result<Vec<Block<F, G>>, KomodoError>
  ```
  to
  ```rust
  fn prove<F, G, P>(
      bytes: &[u8],
      powers: &Powers<F, G>,
      k: usize,
  ) -> Result<Vec<Commitment<F, G>>, KomodoError>
  ```
- rename `fec::Shard.combine` to `fec::Shard.recode_with` to get rid of "combine"
- rename `fec::recode` to `fec::recode_with_coeffs` to show that this version takes a list of coefficients
- rename `Block.commit` to `Block.proof`: "commit" should be "commits" and it's usually referred to as "proof"
- split `prove` further into `prove` and `build`: `prove` now outputs a `Vec<Commitment<F>>`, `build` simply takes a `Vec<Shard<F>>` and a `Vec<Commitment<F>>` and outputs a `Vec<Block<F>>`
- add `fec::recode_random` that does the "shard" part of `recode` to wrap around `fec::recode_with_coeffs`
- remove `R: RngCore` from the signature of `zk::setup`, to avoid having to pass a generic `_` annotation everywhere `zk::setup` is used; the same change has been applied to `recode` and `generate_random_powers` in `main.rs`, from
  ```rust
  fn setup<R: RngCore, F: PrimeField, G: CurveGroup<ScalarField = F>>(
      max_degree: usize,
      rng: &mut R,
  ) -> Result<Powers<F, G>, KomodoError> {
  ```
  to
  ```rust
  fn setup<F: PrimeField, G: CurveGroup<ScalarField = F>>(
      max_degree: usize,
      rng: &mut impl RngCore,
  ) -> Result<Powers<F, G>, KomodoError> {
  ```

### some extra minor changes
- remove some useless generic type annotations, e.g. `prove::<F, G, P>` can become a simpler `prove` most of the time, i.e. when there is at least one generic annotation somewhere in the scope
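as a toy sketch of what `fec::recode_with_coeffs` computes: a recoded shard is the linear combination of the input shards with the given coefficients (here over the prime field Z/257 for illustration; the real shards live over an elliptic-curve scalar field and the signature differs):

```rust
const P: u64 = 257; // toy prime modulus standing in for the scalar field

/// Combine `shards` linearly with `coeffs`, component-wise, modulo P.
/// Returns None on empty input or mismatched lengths.
fn recode_with_coeffs(shards: &[Vec<u64>], coeffs: &[u64]) -> Option<Vec<u64>> {
    if shards.is_empty() || shards.len() != coeffs.len() {
        return None;
    }
    let len = shards[0].len();
    Some(
        (0..len)
            .map(|i| {
                shards
                    .iter()
                    .zip(coeffs)
                    .map(|(s, c)| s[i] * c % P)
                    .sum::<u64>()
                    % P
            })
            .collect(),
    )
}
```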
-
- Apr 11, 2024
-
-
STEVAN Antoine authored
## changelog
* eb1b1381 don't use a T in the lib `run_template` test
* fbd503c6 remove the useless unwrap and TODO
* b550d712 remove some pub
* 339c3038 remove useless `.iter()`
* 537993f0 remove useless `.add(...)`
* d7720907 remove hiding_bound from timer in commit
* eecab5a6 move `commit` to inlined `zk::batch_commit`
- Apr 10, 2024
-
-
STEVAN Antoine authored
i ended up adding a bunch of changes to the benchmarks
## changelog
* 805a2454 reduce the number of loops and the warmup time
* f7ce05c3 don't serialize for real to save time
* 37a2a7e2 don't try to compress with validation
* 409f3e3c don't multiply degree by 1_024
* 610024a9 fix setup
* 3d5e7c58 fix setup
* 3d3167fb run benchmarks on BLS12-381, BN-254 and PALLAS
* da2a71a1 pass name of the curve as parameter
* 954fd6d3 plot commit for all curves
* f980b30f plot all curves in linalg
* 5e41df1d rename `labels` to `keys` in commit
* 8bb64f99 filter setup by curves
* 0163c8f9 plot all curves in setup
* 8c91c6d8 split the setup of Komodo and the serde benchmarks
* 0784f294 add a manual benchmark to measure the commit
* 608a3fd1 move the "example benches" to `examples/benches/`
* 10f9a37c add a script to plot results from `bench_commit`
* 6d512fa6 move plot script from `benches/` to `scripts/plot/`
* a4e6ffbc measure VESTA
-