feat: add benchmarks, expanded proptest, and cargo-fuzz targets #9
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds benches/cursor_write.rs with cursor put, del, append, and append_dup benchmarks in both sync and unsync (single-thread) variants. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds four new criterion benchmark suites: reserve (put vs with_reservation for 64–4096 byte values), nested_txn (flat baseline + nested commit depths 1–3 + write-then-read in child), concurrent (N-reader no-writer, N-reader one-writer, sync vs unsync single-thread), and scaling (sequential get, random get, full iteration, append-ordered put at 100–100k entries).
…d and new tests
Add missing cursor_set_range_correctness tests to proptest_cursor. Remove proptest_inputs.rs now that all tests are migrated.
- Standardize keys to 32 bytes (key + 28-digit zero-padded int)
- Add value size parameterization (32, 128, 512 bytes) to scaling benchmarks
- Expand concurrent reader counts to include 32 and 128
- Standardize concurrent bench values to 128 bytes
- Use get_key() for cursor_write append benchmarks
- Add PARITY comments linking to evmdb equivalents
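A minimal sketch of the standardized 32-byte key scheme. Note the commit message says "key + 28-digit zero-padded int", which totals 31 bytes with a bare `key` prefix; this sketch assumes a 4-byte `key_` prefix so the arithmetic reaches 32, and the helper name `bench_key` is illustrative, not the crate's actual code.

```rust
// Hypothetical sketch: a 32-byte benchmark key built from a short ASCII
// prefix plus a zero-padded integer. The exact "key_" prefix is an assumption.
fn bench_key(i: u64) -> Vec<u8> {
    // 4-byte prefix + 28 zero-padded decimal digits = 32 bytes total.
    format!("key_{:028}", i).into_bytes()
}

fn main() {
    assert_eq!(bench_key(42).len(), 32);
    // Zero-padding makes lexicographic byte order match numeric order,
    // which append-mode puts rely on.
    assert!(bench_key(9) < bench_key(10));
    println!("{}", String::from_utf8(bench_key(42)).unwrap());
}
```

Keeping keys fixed-width and order-preserving is what allows the same corpus to drive both random-get and append-ordered-put benchmarks.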
Fix key_validation fuzz target to only feed valid-length keys to INTEGER_KEY databases (MDBX aborts on invalid sizes).
MDBX aborts the process on certain constraint violations (e.g. invalid INTEGER_KEY sizes). Document our debug-only validation model prominently in both the README and crate-level rustdoc.
- Standardize get_data() to 128-byte values (was variable ~7-14 bytes)
- Change cursor bench entry count from 100 to 1000
- Add PARITY comments for cursor_seek_first_iterate
- Fix type signatures for String -> Vec&lt;u8&gt; value change
Measures commit() time separately from write time, parameterized over entry count (10-10K) and value size (32/128/512 bytes).
All benchmarks now use quick criterion config (10 samples, 1s warmup). Scaling benchmarks skip 100K entries and concurrent benchmarks skip 128 readers unless BENCH_FULL=1 is set. Skips print noisy warnings.
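The BENCH_FULL=1 gating described above could look like the following sketch; the helper names and the exact entry counts gated are illustrative assumptions, not the crate's actual code.

```rust
// Pure helper so the gating logic is testable without touching the process env.
fn bench_full_from(v: Option<&str>) -> bool {
    v == Some("1")
}

// Reads the BENCH_FULL environment variable (assumed helper name).
fn bench_full() -> bool {
    bench_full_from(std::env::var("BENCH_FULL").ok().as_deref())
}

fn main() {
    let mut entry_counts = vec![100, 1_000, 10_000];
    if bench_full() {
        entry_counts.push(100_000); // slow case, only measured on request
    } else {
        // The "noisy warning" mentioned in the commit message.
        eprintln!("skipping 100K entries; set BENCH_FULL=1 to enable");
    }
    println!("{:?}", entry_counts);
}
```

Gating the slow cases behind an env var keeps the default `cargo bench` run fast while preserving a one-flag path to the full matrix.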
- Add cold_random_get and cold_sequential_scan parity benchmarks (posix_fadvise FADV_DONTNEED for cache clearing, no-op on macOS)
- Align all parity bench key/value encoding with evmdb: 32-byte binary keys (parity_key) and 128-byte binary values (parity_value)
- Add parity benchmark instructions and table to CLAUDE.md
- Use 10k rows for cold benchmarks
Force-pushed from e800bef to 2ac4f42.
Set max_readers = 256 on the mdbx environment for concurrent benchmarks to prevent ReadersFull panics at 128 readers. Hardcode READER_COUNTS to [1, 4, 8, 32, 128] (removing BENCH_FULL gating), add black_box to prevent dead-code elimination, and use parity key encoding throughout.
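An illustrative, self-contained example (not taken from the PR) of why `black_box` matters in the reader loops: without it the optimizer can see that a read's result is never used and delete the read entirely, so the loop would measure nothing.

```rust
use std::hint::black_box;

fn sum_reads(values: &[u64]) -> u64 {
    let mut total = 0u64;
    for &v in values {
        // black_box marks the value as observed, blocking dead-code
        // elimination of the read.
        total += black_box(v);
    }
    total
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    // black_box on the input also stops the compiler from constant-folding
    // the whole loop at compile time.
    println!("{}", sum_reads(black_box(&data)));
}
```

`std::hint::black_box` is the same primitive criterion's own `black_box` re-exports, so applying it to each retrieved value inside the benchmarked closure is the standard fix.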
Adds put+commit and append+commit benchmarks that include txn.commit() in the measured closure, with both MDBX_SYNC_DURABLE and MDBX_SAFE_NOSYNC variants for parity comparison with evmdb's commit_blocking_durable and commit_blocking.
- Move open_db out of timed regions (real apps open once at startup)
- Advance key base across write iterations (measure tree growth, not overwrites)
- Read full values instead of ObjectLength in get benchmarks
- Use ObjectLength for iteration benchmarks (no per-entry Vec allocation)
- New read txn each iteration in cursor iteration bench
- Add value size matrix (32B, 128B, 512B, 4096B) to scaling benches; 4096B hits overflow pages on both engines
- Fix writer key to N_ROWS+1 in readers_with_writer
- Add parity_value_sized and setup_parity_env_sized helpers
Matches evmdb change — 4096B triggers a known multi-page overflow bug on evmdb and is not representative of real EVM workload value sizes.
Replace posix_fadvise on live mmap fd (unreliable — kernel ignores FADV_DONTNEED when active mappings pin pages) with close→fadvise on plain fd→reopen. Bump dataset to 1M rows / 1k lookups for realistic tree depth.
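A sketch of the close→fadvise-on-plain-fd→reopen pattern described above. The helper name `drop_page_cache` and the raw `posix_fadvise` declaration are illustrative assumptions, not the crate's actual code; the real benchmark would close the mdbx environment before calling this.

```rust
use std::fs::File;
use std::path::Path;

#[cfg(target_os = "linux")]
fn drop_page_cache(path: &Path) -> std::io::Result<()> {
    use std::os::unix::io::AsRawFd;
    // POSIX_FADV_DONTNEED (4 on Linux) asks the kernel to evict this file's
    // cached pages. It must be issued on a plain fd with no live mmap of the
    // file, since active mappings pin pages and the advice is then ignored.
    const POSIX_FADV_DONTNEED: i32 = 4;
    extern "C" {
        fn posix_fadvise(fd: i32, offset: i64, len: i64, advice: i32) -> i32;
    }
    let f = File::open(path)?;
    // offset = 0, len = 0 means "the whole file".
    let rc = unsafe { posix_fadvise(f.as_raw_fd(), 0, 0, POSIX_FADV_DONTNEED) };
    if rc != 0 {
        // posix_fadvise returns the error number directly, not -1/errno.
        return Err(std::io::Error::from_raw_os_error(rc));
    }
    Ok(())
}

#[cfg(not(target_os = "linux"))]
fn drop_page_cache(_path: &Path) -> std::io::Result<()> {
    Ok(()) // no-op on macOS and other platforms, as in the PR
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("fadvise_demo.bin");
    std::fs::write(&path, vec![7u8; 4096])?;
    // Cold-read pattern: close the environment, drop the cache, reopen, read.
    drop_page_cache(&path)?;
    let data = std::fs::read(&path)?;
    assert_eq!(data.len(), 4096);
    std::fs::remove_file(&path)?;
    println!("cold read ok");
    Ok(())
}
```

Dropping the cache between setup and measurement is what makes the "cold" benchmarks actually exercise disk reads and full tree-depth traversals rather than the page cache.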
Rename parity_* helpers to bench_* and strip all PARITY comment annotations. The benchmarks are standalone and no longer need to track an external project.
[Claude Code] Code review: No issues found. Checked for bugs and CLAUDE.md compliance. 🤖 Generated with Claude Code
Summary
- Split proptest_inputs.rs into 6 domain-focused files (kv, cursor, dupsort, dupfixed, iter, nested). Migrate all existing tests and add new cases for large values, multi-db isolation, DUPFIXED page spanning, nested txn semantics, and more.
- Add cargo-fuzz with 6 targets focused on FFI/unsafe boundaries — Cow&lt;[u8]&gt; decode paths, fixed-size array decode, dirty page roundtrip, DUPFIXED page decode, and key validation.
- return-borrowed feature documentation from the iter module.

Test plan
- cargo t
- cargo bench
- --all-targets
- cargo +nightly fmt applied
- cargo +nightly fuzz run &lt;target&gt;

🤖 Generated with Claude Code