Conversation
Added a small tweak in this file to allow reusing the `ed25519_prepare_presign` function.
For simplicity I am just comparing runs with the same total number of participants; I don't mind modifying this to cover more special cases, here or in a follow-up.
The core of the PoC is this file, especially the functions `run_simulation` and `drain_poke`.
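For readers skimming the thread, the shape of those two functions can be sketched roughly as follows. This is a hypothetical simplification, not the PR's actual code: `Message`, `Participant`, and `Stats` here are stand-in types, and a real participant would process each delivered message rather than just count it.

```rust
// Hypothetical sketch of a single-threaded protocol simulator in the
// spirit of the PoC's `run_simulation` / `drain_poke`.
use std::collections::VecDeque;

type Participant = usize;

struct Message {
    to: Participant,
    payload: Vec<u8>,
}

#[derive(Default)]
struct Stats {
    total_messages_received: u64,
    total_bytes_received: u64,
}

/// Move every message a participant has queued ("poke" it until its
/// outbox is empty) onto the simulated network.
fn drain_poke(outbox: &mut VecDeque<Message>, network: &mut VecDeque<Message>) {
    while let Some(msg) = outbox.pop_front() {
        network.push_back(msg);
    }
}

/// Drive all participants on one thread, delivering messages in FIFO
/// order until the network is quiescent, and collect statistics.
fn run_simulation(mut outboxes: Vec<VecDeque<Message>>) -> Stats {
    let mut network = VecDeque::new();
    for outbox in &mut outboxes {
        drain_poke(outbox, &mut network);
    }
    let mut stats = Stats::default();
    while let Some(msg) = network.pop_front() {
        // A real simulator would hand `msg` to participant `msg.to`
        // and drain any responses; here we only account for it.
        let _ = msg.to;
        stats.total_messages_received += 1;
        stats.total_bytes_received += msg.payload.len() as u64;
    }
    stats
}
```

The single-threaded, deterministic delivery order is what makes statistics such as `total_messages_received` reproducible across runs.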
PR title type suggestion: This PR adds benchmarking infrastructure and test utilities, which are development tools rather than user-facing functionality. Consider using … Suggested title: …
Code Review: feat: add benchmarks with latency PoC. No critical issues found. The simulation core (…)
A few minor observations (non-blocking): …
✅ Approved — bench-only code, no production changes, simulation logic is sound.
pub total_messages_received: u64,
pub bytes_sent_per_participant: HashMap<Participant, u64>,
pub bytes_received_per_participant: HashMap<Participant, u64>,
pub wall_clock_elapsed: Duration,
This one (`wall_clock_elapsed`) is not displayed now, so I am fine with removing it. I initially thought the runs would take a long time, but it turns out they are relatively fast.
SimonRastikian
left a comment
Please add comments to these functions to help readability.
netrome
left a comment
Really nice stuff. Interestingly, this becomes very slow for me when I increase the participant count (tried with 20 participants and a threshold of 15).
Haven't gone deep into the code yet, but while playing with the benches I've observed this bug:
NUM_PARTICIPANTS=20 THRESHOLD=15 LATENCY_MS=0 cargo bench -p threshold-signatures --bench simulate_dkg
Finished `bench` profile [optimized] target(s) in 0.17s
Running benches/simulate_dkg.rs (target/release/deps/simulate_dkg-5b4dacfa91efffed)
Protocol simulation: DKG
Participants: 20, threshold: 15, latency: 0ms, samples: 15
Warming up for 3s... done
thread 'main' panicked at crates/threshold-signatures/src/test_utils/simulator_bench.rs:93:9:
assertion `left == right` failed: secp256k1: total_messages_received changed
left: 68202
right: 67731
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I saw this with latency enabled a couple of times, but it happened consistently when running without latency, suggesting something is wrong in the current implementation.
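The failing assertion is a determinism check: the bench apparently runs the same protocol more than once and requires the message totals not to change between samples. A hypothetical helper in that spirit (not the PR's actual code at `simulator_bench.rs:93`) would look like:

```rust
// Hypothetical determinism check: run the same simulation twice and
// require identical message totals, panicking with a message like the
// one in the bench output above if they differ.
fn assert_deterministic<F: Fn() -> u64>(scheme: &str, run: F) {
    let first = run();
    let second = run();
    assert_eq!(
        first, second,
        "{scheme}: total_messages_received changed"
    );
}
```

A genuinely single-threaded, latency-free simulation should always pass such a check, which is why the failure without latency points at a real bug rather than scheduling noise.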
For the slowness, Claude suspects echo broadcast is the culprit. This would explain why our DKG gets stuck forever when we run with higher participant counts.
Fixed in abb8e36 for reference; it does not get stuck. Here are values for 40 participants: Details
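For intuition on why echo broadcast gets expensive, here is a rough cost model. It assumes the textbook echo pattern, where every receiver re-sends each broadcast to all other participants; the PR's actual broadcast implementation may differ.

```rust
// One echo-broadcast round among n participants: the sender sends to
// n-1 peers, and each of the n-1 receivers echoes to the other n-1
// participants, so the message count grows quadratically in n.
fn echo_broadcast_messages(n: u64) -> u64 {
    assert!(n >= 2, "need at least a sender and a receiver");
    (n - 1) + (n - 1) * (n - 1)
}
```

Under this model, going from 20 to 40 participants roughly quadruples the per-round traffic, consistent with the slowdown observed at higher participant counts.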
netrome
left a comment
A bit annoyed by all the trait objects. Haven't gone deep into the code, but since these are standalone benchmarks that seem to work well, I'm happy to do a semi-blind approval to get them in.
SimonRastikian
left a comment
Why is this syntax different from the other legacy benchmarks, namely in main? Could you make main a one-liner and move the rest into a proper function?
Also, I see quite some repetition in the printed configs; this could benefit from a helper function in bench_utils.
I also suggest uniformizing the implementations (in the previous legacy examples) that deduce num_participants from max_malicious. You can otherwise modify the previous implementations to accept an arbitrary num_participants. The important thing is that running the benches is uniform across the schemes.
Final nit: please mind the order of functions, from most important to least, i.e. starting with the functions that use criterion and going down to the least important.
I cannot make full sense of this, as I am not using criterion. Please point out where the requested changes need to happen. As for making the benchmarks uniform, I can do that in a follow-up, just to avoid making this too bloated. Agreed that taking num_participants and threshold explicitly is better.
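On the repeated printed configs, a small `bench_utils` helper along these lines could remove the duplication. The name and signature here are hypothetical; the format mirrors the banner shown in the bench output earlier in the thread.

```rust
/// Hypothetical helper for `bench_utils`: format the configuration
/// banner that each bench currently prints by hand.
fn format_bench_config(
    protocol: &str,
    participants: usize,
    threshold: usize,
    latency_ms: u64,
    samples: usize,
) -> String {
    format!(
        "Protocol simulation: {protocol}\n\
         Participants: {participants}, threshold: {threshold}, \
         latency: {latency_ms}ms, samples: {samples}"
    )
}
```

Each bench would then print `format_bench_config(...)` once at startup, keeping the output uniform across schemes.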
netrome
left a comment
Thanks for the update, some new questions.
Closes #2243
Some initial results:
DKG
ECDSA
FROST