Benchmark Track: Stable dart:io (AOT, fletch-style) vs Experimental dart-zig #692
kartikey321 wants to merge 4 commits into MDA2AV:main
Conversation
/benchmark -f dart-io
/benchmark -f dart-zig
@MDA2AV I was trying to bring your (or any other contributor's) attention to this PR. I have not created this PR to add new backends or to merge into main; I created it just to benchmark the difference between dart:io and an experimental version where all the HTTP parsing happens in Zig. That is the motivation behind it, and also why I call it experimental. If the current purpose of HttpArena does not match this PR's goal of storing benchmark record diffs for experimental projects, just notify me and I will be happy to arrange other means of generating and storing benchmark records.
Benchmark Results
Framework:
Full log

Benchmark Results
Framework:
Full log
Yeah, this is perfectly fine and exactly what the engine section is for.
Purpose
This PR is a benchmark-track PR only and is not intended to be merged into main.

Goal: benchmark stable dart:io against experimental dart-zig under HttpArena using the same benchmark profiles.

What This PR Changes
How dart-zig Differs from dart:io

- Replaces the dart:io socket/event pipeline with direct kernel backends (io_uring on Linux, kqueue on macOS).
- Calls into native code through the Dart VM's native entry points (the @pragma('vm:external-name') path); a sketch follows this list.
- Changes the accept/worker model (SO_REUSEPORT semantics, worker topology experiments).
- Ships in two run modes (dart-zig + .dill vs dart-zig-aot + .so).
- Does not aim for dart:io parity across all APIs.
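For context, here is a minimal sketch of what the @pragma('vm:external-name') path looks like on the Dart side. The symbol name and signature below are hypothetical, not taken from dart-zig, and a call like this only resolves when the embedder registers a matching native function with the VM:

```dart
// Hypothetical binding sketch: 'DartZig_ParseRequest' is a made-up
// symbol name; the actual dart-zig entry points are not shown here.
// The VM resolves this external function only if the embedder has
// registered a native function under that name (for example via
// Dart_SetNativeResolver on the C side).
@pragma('vm:external-name', 'DartZig_ParseRequest')
external int parseRequest(int socketFd);
```

Benchmark Scope (current)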
Profiles in this track:

- baseline
- pipelined
- limited-conn
- json

(Advanced profiles such as async-db / json-tls / json-comp are intentionally out of scope for this PR.)
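For reference, a minimal sketch of the kind of stable dart:io handler the baseline and json profiles exercise. The port and payload are illustrative, not taken from the HttpArena harness:

```dart
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  // shared: true lets several isolates accept on the same port,
  // similar in spirit to the SO_REUSEPORT sharding mentioned above.
  final server =
      await HttpServer.bind(InternetAddress.anyIPv4, 8080, shared: true);
  await for (final HttpRequest request in server) {
    final response = request.response
      ..headers.contentType = ContentType.json
      ..write(jsonEncode({'message': 'Hello, World!'}));
    await response.close();
  }
}
```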
Validation
- bash scripts/validate.sh dart-io ✅
- bash scripts/validate.sh dart-zig ✅ (on this branch setup)

Notes
PR Commands (comment on this PR to trigger; requires collaborator approval):

- /benchmark -f <framework>
- /benchmark -f <framework> -t <test>
- /benchmark -f <framework> --save

Always specify -f <framework>. Results are automatically compared against the current leaderboard.

Run benchmarks locally
You can validate and benchmark your framework locally with the lite script — no CPU pinning, fixed connection counts, all load generators run in Docker.
Requirements: Docker Engine on Linux. Load generators (gcannon, h2load, h2load-h3, wrk, ghz) are built as self-contained Docker images on first run.