Automated comparison of Owl Browser against vanilla Playwright and Puppeteer on CreepJS, the industry-standard fingerprint detection tool. Run it per Owl Browser release to verify detection results; a separate performance benchmark mode is also included.
Results are displayed at owlbrowser.net/benchmark.
The detection report produces screenshots and parsed fingerprint data showing:
- Playwright & Puppeteer have identical fingerprint hashes (canvas, WebGL, audio, fonts) — they leak the real device
- Owl Browser has completely different hashes per OS profile — genuine C++ source-level spoofing
- Headless detection: Playwright 100%, Puppeteer 100%, Owl Browser 0%
- GPU: Playwright/Puppeteer expose SwiftShader (a dead giveaway); Owl shows real GPU profiles
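The GPU signal above comes from WebGL's `WEBGL_debug_renderer_info` extension: in-page, `gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL)` returns the renderer string, and software renderers mark a headless environment. A minimal sketch of the classification (a hypothetical helper, not the repo's actual parsing code):

```typescript
// Classify an UNMASKED_RENDERER_WEBGL string as software or hardware rendering.
// Software renderers (SwiftShader, llvmpipe) are a strong headless/automation signal,
// while real GPU profiles report an ANGLE/vendor string.
function isSoftwareRenderer(renderer: string): boolean {
  return /swiftshader|llvmpipe|software/i.test(renderer);
}
```

For example, vanilla headless Chromium typically reports something like `Google SwiftShader`, which this predicate flags, while a hardware profile such as `ANGLE (NVIDIA GeForce RTX 3080 Direct3D11 vs_5_0 ps_5_0)` passes.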
The benchmark times cold start, navigation, screenshot, and full cycle for all three browsers:
- 10 sequential iterations per browser
- Statistics: min, max, avg, median, p95
- Raw timing data included for reproducibility
- Same machine, same container, same network — fair comparison
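The summary statistics can be derived from the raw timing arrays in a few lines. A minimal sketch using nearest-rank percentiles (the repo's actual implementation may differ):

```typescript
// Summarize a raw timing array (milliseconds) into the statistics the report uses.
function summarize(samples: number[]) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile: the smallest sample with at least q of the data at or below it.
  const at = (q: number) =>
    sorted[Math.min(sorted.length - 1, Math.ceil(q * sorted.length) - 1)];
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    avg: samples.reduce((sum, x) => sum + x, 0) / samples.length,
    median: at(0.5),
    p95: at(0.95),
  };
}
```

With 10 iterations per browser, p95 under nearest-rank is simply the slowest sample, which is why the raw arrays are included for anyone who prefers a different percentile method.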
```bash
docker pull ghcr.io/olib-ai/owl-detection-report:latest
cp .env.example .env
# Edit with your values
```

```
OWL_BROWSER_URL=http://your-owl-instance:8080
OWL_BROWSER_TOKEN=your-token

# Optional: S3 upload
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
S3_BUCKET=your-bucket
S3_PREFIX=detection-reports
```

Detection report:

```bash
docker run --rm \
  --network host \
  --env-file .env \
  -v $(pwd)/output:/output \
  ghcr.io/olib-ai/owl-detection-report:latest
```

Benchmark:

```bash
docker run --rm \
  --network host \
  --env-file .env \
  -v $(pwd)/output:/output \
  ghcr.io/olib-ai/owl-detection-report:latest --benchmark
```

Concurrency only (keeps existing sequential data, only updates concurrency results):

```bash
docker run --rm \
  --network host \
  --env-file .env \
  -v $(pwd)/output:/output \
  ghcr.io/olib-ai/owl-detection-report:latest --concurrency
```

Run the detection report and benchmark per Owl Browser release; results only change when the browser updates, so there is no need for a daily cron.
When using S3 upload, the `-v` volume mount is optional since files go directly to S3.
```bash
git clone https://github.com/Olib-AI/owl-detection-report.git
cd owl-detection-report
docker build -t owl-detection-report .

# Detection report
docker run --rm --network host --env-file .env -v $(pwd)/output:/output owl-detection-report

# Benchmark
docker run --rm --network host --env-file .env -v $(pwd)/output:/output owl-detection-report --benchmark
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `OWL_BROWSER_URL` | Yes | — | Owl Browser REST API endpoint (e.g. `http://localhost:8080`) |
| `OWL_BROWSER_TOKEN` | Yes | — | Owl Browser API token |
| `OUTPUT_DIR` | No | `/output` | Output directory inside the container |
| `AWS_ACCESS_KEY_ID` | No | — | AWS credentials for S3 upload |
| `AWS_SECRET_ACCESS_KEY` | No | — | AWS credentials for S3 upload |
| `S3_BUCKET` | No | — | S3 bucket name |
| `S3_PREFIX` | No | `detection-reports/` | S3 key prefix |
| `AWS_REGION` | No | `us-east-1` | AWS region |
| `CLOUDFRONT_DISTRIBUTION_ID` | No | — | CloudFront distribution for cache invalidation |
```
/output/
  report.json           # Parsed metrics + screenshot paths
  screenshots/
    playwright.webp     # Vanilla Playwright baseline
    puppeteer.webp      # Vanilla Puppeteer baseline
    owl-windows.webp    # Owl Browser with Windows profile
    owl-macos.webp      # Owl Browser with macOS profile
    owl-linux.webp      # Owl Browser with Linux profile
```

Benchmark mode writes:

```
/output/
  benchmark.json        # Timing data for all three browsers
```
Both files are overwritten on each run — no historical data stored.
All three browsers are benchmarked sequentially in the same Docker container on the same machine:
- Cold start — Playwright/Puppeteer: launch a new browser process + create page. Owl Browser: create a new context within the running engine.
- Navigation — Navigate to `https://example.com` and wait for `networkidle`.
- Screenshot — Capture a viewport screenshot.
- Full cycle — Create → navigate → screenshot → close.
Each step is timed individually. 10 iterations per browser. Results include min, max, avg, median, p95, and raw timing arrays so anyone can verify.
The architectural difference: Playwright and Puppeteer launch a new OS process for each browser instance. Owl Browser creates lightweight contexts within an already-running engine — no process spawn overhead.
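The process-spawn overhead is easy to demonstrate in isolation. This sketch (illustrative only, not the benchmark's code) times launching a fresh OS process versus doing equivalent trivial work in the already-running process:

```typescript
import { spawnSync } from "node:child_process";
import { performance } from "node:perf_hooks";

// Time a synchronous operation in milliseconds.
function timeMs(fn: () => void): number {
  const t0 = performance.now();
  fn();
  return performance.now() - t0;
}

// Spawning a fresh process: roughly what every Playwright/Puppeteer
// cold start pays before the browser even begins its own startup.
const spawnCost = timeMs(() => spawnSync(process.execPath, ["-e", ""]));

// In-process work: roughly what creating a context inside an
// already-running engine resembles.
const inProcessCost = timeMs(() => JSON.parse('{"a":1}'));

console.log({ spawnCost, inProcessCost });
```

On typical hardware the spawn path costs tens of milliseconds while the in-process path costs microseconds, and a real browser process startup is far heavier than an empty Node process.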
```bash
# Login to GitHub Container Registry
echo "YOUR_GITHUB_PAT" | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

# Create env file
sudo nano /etc/owl-report.env
# Paste your production config (OWL_BROWSER_URL, token, AWS creds)

# Test detection report
docker pull ghcr.io/olib-ai/owl-detection-report:latest
docker run --rm --network host --env-file /etc/owl-report.env ghcr.io/olib-ai/owl-detection-report:latest

# Test benchmark
docker run --rm --network host --env-file /etc/owl-report.env ghcr.io/olib-ai/owl-detection-report:latest --benchmark

# Run per release, no cron needed
docker pull ghcr.io/olib-ai/owl-detection-report:latest
```