From 5aeaf864c9c10eb1554b19edb9012c110c2d092e Mon Sep 17 00:00:00 2001 From: vetler Date: Wed, 15 Apr 2026 10:46:08 +0200 Subject: [PATCH 01/30] Add async-profiler skill Add skill for installing, running, and analyzing async-profiler for Java. Covers CPU/memory/allocation profiling, flamegraph capture and interpretation, JFR recordings, and common setup errors (e.g. perf_events). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/README.skills.md | 1 + skills/async-profiler/README.md | 72 +++ skills/async-profiler/SKILL.md | 136 ++++++ skills/async-profiler/analyze/SKILL.md | 364 +++++++++++++++ skills/async-profiler/profile/SKILL.md | 414 ++++++++++++++++++ .../scripts/analyze_collapsed.py | 243 ++++++++++ skills/async-profiler/scripts/collect.sh | 364 +++++++++++++++ skills/async-profiler/scripts/install.sh | 147 +++++++ skills/async-profiler/scripts/run_profile.sh | 253 +++++++++++ skills/async-profiler/setup/SKILL.md | 199 +++++++++ 10 files changed, 2193 insertions(+) create mode 100644 skills/async-profiler/README.md create mode 100644 skills/async-profiler/SKILL.md create mode 100644 skills/async-profiler/analyze/SKILL.md create mode 100644 skills/async-profiler/profile/SKILL.md create mode 100644 skills/async-profiler/scripts/analyze_collapsed.py create mode 100755 skills/async-profiler/scripts/collect.sh create mode 100644 skills/async-profiler/scripts/install.sh create mode 100644 skills/async-profiler/scripts/run_profile.sh create mode 100644 skills/async-profiler/setup/SKILL.md diff --git a/docs/README.skills.md b/docs/README.skills.md index 400eb0554..1acad7dfa 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -48,6 +48,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to | [arize-trace](../skills/arize-trace/SKILL.md) | INVOKE THIS SKILL when downloading or exporting Arize traces and spans. 
Covers exporting traces by ID, sessions by ID, and debugging LLM application issues using the ax CLI. | `references/ax-profiles.md`
`references/ax-setup.md` | | [aspire](../skills/aspire/SKILL.md) | Aspire skill covering the Aspire CLI, AppHost orchestration, service discovery, integrations, MCP server, VS Code extension, Dev Containers, GitHub Codespaces, templates, dashboard, and deployment. Use when the user asks to create, run, debug, configure, deploy, or troubleshoot an Aspire distributed application. | `references/architecture.md`
`references/cli-reference.md`
`references/dashboard.md`
`references/deployment.md`
`references/integrations-catalog.md`
`references/mcp-server.md`
`references/polyglot-apis.md`
`references/testing.md`
`references/troubleshooting.md` | | [aspnet-minimal-api-openapi](../skills/aspnet-minimal-api-openapi/SKILL.md) | Create ASP.NET Minimal API endpoints with proper OpenAPI documentation | None | +| [async-profiler](../skills/async-profiler/SKILL.md) | Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what's causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance. | `README.md`
`analyze`
`profile`
`scripts/analyze_collapsed.py`
`scripts/collect.sh`
`scripts/install.sh`
`scripts/run_profile.sh`
`setup` | | [automate-this](../skills/automate-this/SKILL.md) | Analyze a screen recording of a manual process and produce targeted, working automation scripts. Extracts frames and audio narration from video files, reconstructs the step-by-step workflow, and proposes automation at multiple complexity levels using tools already installed on the user machine. | None | | [autoresearch](../skills/autoresearch/SKILL.md) | Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric. | None | | [aws-cdk-python-setup](../skills/aws-cdk-python-setup/SKILL.md) | Setup and initialization guide for developing AWS CDK (Cloud Development Kit) applications in Python. This skill enables users to configure environment prerequisites, create new CDK projects, manage dependencies, and deploy to AWS. | None | diff --git a/skills/async-profiler/README.md b/skills/async-profiler/README.md new file mode 100644 index 000000000..99f89fc7f --- /dev/null +++ b/skills/async-profiler/README.md @@ -0,0 +1,72 @@ +# async-profiler + +Install, run, and analyze async-profiler for Java — a low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. 
+ +## What it does + +- Installs async-profiler automatically for macOS or Linux +- Captures CPU time, heap allocations, wall-clock time, and lock contention +- Produces interactive flamegraphs, JFR recordings, and collapsed stack traces +- Interprets profiling output: identifies hotspots, GC pressure, lock contention, N+1 Hibernate patterns + +## Compatibility + +Requires Python 3.7+ for the analysis script. async-profiler works on macOS and Linux with a running JVM process. + +## Installation + +**GitHub Copilot CLI:** + +Point Copilot at the skill directory from within a session: +``` +/skills add /path/to/async-profiler +``` + +Or copy manually to your personal skills directory (`~/.copilot/skills/` or `~/.agents/skills/` depending on your version): +```bash +cp -r async-profiler ~/.copilot/skills/ +# or +cp -r async-profiler ~/.agents/skills/ +``` + +**Claude Code:** +```bash +cp -r async-profiler ~/.claude/skills/async-profiler +``` + +**OpenCode:** +```bash +cp -r async-profiler ~/.config/opencode/skills/async-profiler +``` + +## Trigger phrases + +- "install async-profiler" +- "capture a flamegraph" +- "profile my Spring Boot app" +- "heap keeps growing" +- "what does this flamegraph mean" +- "I see a lot of GC in my profile" + +## Bundled scripts + +| Script | Purpose | +|---|---| +| `scripts/install.sh` | Auto-detect platform, download and verify async-profiler | +| `scripts/run_profile.sh` | Wrap `asprof` with defaults, timestamp output | +| `scripts/collect.sh` | Background collection: start all-event profiling, stop and retrieve flamegraphs | +| `scripts/analyze_collapsed.py` | Ranked self-time/inclusive-time table for `.collapsed` files | + +## Directory structure + +``` +async-profiler/ +├── SKILL.md # Entry point — routes to sub-guides +├── scripts/ # Bundled scripts +├── setup/ +│ └── SKILL.md # Installation and configuration +├── profile/ +│ └── SKILL.md # Running profiling sessions +└── analyze/ + └── SKILL.md # Interpreting profiling output +``` 
diff --git a/skills/async-profiler/SKILL.md b/skills/async-profiler/SKILL.md new file mode 100644 index 000000000..d1c2a7f56 --- /dev/null +++ b/skills/async-profiler/SKILL.md @@ -0,0 +1,136 @@ +--- +name: async-profiler +description: 'Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what''s causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance.' +compatibility: Requires Python 3.7+ for the analyze_collapsed.py script. +--- + +# async-profiler + +async-profiler is a production-safe, low-overhead sampling profiler for Java +that avoids the safepoint bias of standard JVM profilers. It can capture CPU +time, heap allocations, wall-clock time, and lock contention, and produce +interactive flamegraphs, JFR recordings, and collapsed stack traces. + +## Installing this skill + +### IntelliJ IDEA (Junie or GitHub Copilot) + +Skills live in a `.claude/skills/`, `.agents/skills/`, or `.github/skills/` +directory, either in your project repo or in your home directory. 
+ +**Project-level — recommended for teams** (commit so everyone gets it): +```bash +# From your project root: +mkdir -p .github/skills +cd .github/skills +unzip /path/to/async-profiler.skill +git add async-profiler +git commit -m "Add async-profiler skill" +``` + +**Global — personal use across all projects:** +```bash +mkdir -p ~/.claude/skills +cd ~/.claude/skills +unzip /path/to/async-profiler.skill +``` + +> **Note for GitHub Copilot users:** There is a known issue where the Copilot +> JetBrains plugin does not reliably pick up skills from the global `~/.copilot/skills` +> directory. Use the project-level `.github/skills/` location to be safe. + +Alternatively, install the **Agent Skills Manager** plugin from the JetBrains +Marketplace (*Settings → Plugins → Marketplace* → "Agent Skills Manager") for +a UI that installs skills without unzipping manually. + +--- + +## Using this skill in IntelliJ IDEA + +### With Junie (JetBrains AI) + +Junie is JetBrains' native coding agent, available in the AI Chat panel. + +1. Open the AI Chat panel (*View → Tool Windows → AI Chat*, or the chat icon + in the right toolbar) +2. In the agent dropdown at the top of the chat, select **Junie** +3. Choose a mode: + - **Code mode** — Junie can run terminal commands, write files, and execute + the profiling scripts directly. Use this when you want it to actually run + `scripts/install.sh` or `scripts/run_profile.sh` for you. + - **Ask mode** — read-only; Junie analyzes and explains but won't touch + files. Use this when you want help interpreting a flamegraph or JFR file. +4. Just ask naturally — Junie loads the skill automatically when your question + matches the description. You don't need to invoke it by name. + +Example prompts that will trigger this skill in Junie: +- *"My Spring Boot app is using too much CPU. 
Help me capture a flamegraph."* +- *"I have this JFR file — open it and tell me what's slow."* +- *"Install async-profiler on this machine and set up the JVM flags."* + +In Code mode, Junie will run `scripts/install.sh`, execute `scripts/run_profile.sh` +with the right flags, and then walk you through the results — all without +leaving IntelliJ. + +### With GitHub Copilot in IntelliJ + +1. Enable agent mode: *Settings → GitHub Copilot → Chat → Agent* → turn on + **Agent mode** and **Agent Skills** +2. Open the Copilot Chat panel and make sure the mode selector shows **Agent** +3. Ask naturally — Copilot loads the skill when your prompt matches + +Example prompts: +- *"Profile my running Java app and show me where the CPU is going."* +- *"Analyze this collapsed stack file and tell me what's allocating the most."* + +GitHub Copilot's agent mode can also run the bundled scripts on your behalf — +it will propose the terminal command and ask for confirmation before executing. + +### GitHub Copilot CLI + +```bash +# Copilot CLI +mkdir -p ~/.copilot/skills +cd ~/.copilot/skills +unzip /path/to/async-profiler.skill + +# Or, if your version uses ~/.agents/skills/: +mkdir -p ~/.agents/skills +cd ~/.agents/skills +unzip /path/to/async-profiler.skill +``` + +Run `/skills list` to confirm it loaded. Then just ask naturally in the terminal. 
+ +--- + +## Bundled scripts + +This skill includes four ready-to-run scripts in `scripts/`: + +| Script | What it does | +|---|---| +| `scripts/install.sh` | Auto-detects platform, downloads the right binary, verifies install | +| `scripts/run_profile.sh` | Wraps `asprof` with defaults, timestamps output, prints opening instructions | +| `scripts/collect.sh` | Agent-friendly background collection: start all-event profiling, do other work, then stop and get all flamegraphs | +| `scripts/analyze_collapsed.py` | Ranked self-time / inclusive-time table for `.collapsed` files, with filters | + +Always offer to run these scripts on the user's behalf when relevant. + +## How to use this skill + +This skill has three sub-guides. Read the one that matches what the user needs: + +| Situation | Read | +|---|---| +| User needs to install or configure async-profiler, or is hitting setup errors | `setup/SKILL.md` | +| User wants to run a profiling session (capture flamegraph, JFR, etc.) | `profile/SKILL.md` | +| User has profiling output and wants to understand or interpret it | `analyze/SKILL.md` | + +**When the conversation spans multiple phases** (e.g., the user just ran a +profile and now wants to understand the output), read whichever sub-guide is +most relevant to the current question. If the user needs both setup *and* +profiling guidance in one message, read `setup/SKILL.md` first and summarize +the setup steps before moving to `profile/SKILL.md`. + +Read the relevant sub-guide now before responding. diff --git a/skills/async-profiler/analyze/SKILL.md b/skills/async-profiler/analyze/SKILL.md new file mode 100644 index 000000000..b7f11e414 --- /dev/null +++ b/skills/async-profiler/analyze/SKILL.md @@ -0,0 +1,364 @@ +--- +name: async-profiler-analyze +description: 'Interpret and analyze async-profiler output: flamegraph HTML/SVG files, JFR recordings, and collapsed stack traces. 
Use this skill whenever a Java developer shares profiler output or wants help understanding profiling results. Trigger for: "what does this flamegraph mean", "how do I read this JFR", "what''s causing my CPU hotspot", "interpret my profiling results", "analyze this flamegraph", "what should I look for in my profile", "the wide frame in my flamegraph is X, what does that mean", "I see a lot of GC in my profile", "my profile shows 80% in X, is that bad", or whenever someone pastes or describes profiling output. Also trigger proactively when the async-profiler-profile skill just produced output and the user seems to want to understand it.' +compatibility: Requires Python 3.7+ for the analyze_collapsed.py script. +--- + +# async-profiler Output Analysis + +The three main output formats — flamegraph HTML, JFR recordings, and collapsed +stacks — each tell a different story. This skill walks you through reading each +one and turning the visual patterns into concrete action. + +--- + +## Flamegraphs (HTML or SVG) + +Open `.html` output in any browser. It's interactive: hover to see exact sample +counts, click to zoom into a subtree, press Escape or click "Reset Zoom" to go back. + +### How to read a flamegraph + +``` +▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔ ← leaf frames (actual CPU consumers) + doWork() + processItem() + handleRequest() + run() +▔▔▔▔▔▔▔▔▔▔▔▔▔ ← base (thread entry points) +``` + +- **Width = time (or allocation volume)**. A frame that's wide consumed a lot + of the profiled resource. This is the most important thing to look at. +- **Height = call depth**. Taller stacks just mean more levels of method calls — + depth by itself isn't a problem. +- **X-axis is NOT time**. The horizontal position has no meaning; similar frames + are sorted alphabetically to make identical paths merge visually. +- **Leaf frames (top of each column)** are where execution actually spent time. + Wide leaf frames = actual hotspots. +- **Intermediate frames** show the call path to the hotspot. 
A wide intermediate + frame with a narrow leaf means the cost is spread across many callees, which + is harder to optimize than a single wide leaf. + +### What to look for first + +1. **Wide frames near the top** — these are your primary optimization targets. + If `serialize()` is 40% wide at the top, start there. + +2. **Plateau patterns** — a wide frame that suddenly narrows just above it + (like a plateau). The plateau frame is spending most of its time directly + in itself (i.e., not calling further). Classic hotspot. + +3. **Tall, narrow spikes** — deep call stacks that are thin. Usually framework + overhead (reflection, proxies, Spring AOP) or recursive algorithms. Often + hard to optimize directly. + +4. **Unexpected runtime/framework frames** — if 30% of your CPU flamegraph is + in GC or JIT compilation (`[Unknown]`, `Compiler::compile`, etc.), that's a + signal of memory pressure or cold-start behavior, not application logic. + +### Color coding + +Colors in the default scheme encode frame type, not performance severity: + +| Color | Frame type | +|---|---| +| Green | Java methods | +| Yellow / orange | JVM internal / native | +| Red | Kernel frames | +| Purple | C++ (JVM internals) | +| Grey | Inlined frames | + +Don't read too much into colors beyond type classification. + +### CPU vs allocation vs wall-clock flamegraphs + +The visual grammar is identical, but what "width" means differs: + +- **CPU flamegraph**: width = CPU sample count = time on-CPU +- **Allocation flamegraph**: width = bytes allocated from that call path +- **Wall-clock flamegraph**: width = wall-clock samples = total elapsed time + including blocking + +For latency investigations, compare CPU and wall-clock side by side: +- Frames that appear wide in wall-clock but narrow in CPU are spending time + *blocked* (waiting on I/O, locks, sleep) — these are candidates for async + refactoring or reducing external dependencies. 
+- Frames wide in CPU but narrow in wall-clock are actually compute-heavy. + +### Common flamegraph patterns and what they mean + +**"My app is mostly GC"** +Wide `GarbageCollector` or `ZGC` or `G1GC` frames in a CPU profile indicate +the JVM is spending significant CPU collecting heap. Switch to an allocation +profile (`-e alloc`) to find the code paths generating garbage. + +**"I see a lot of `Object.wait` or `LockSupport.park`"** +These show up in wall-clock profiles as threads blocking. Look at the frames +just below `park`/`wait` in the stack — those are the callers waiting on +something (a queue, a lock, a CompletableFuture). That's where to investigate. + +**"Everything is in reflection or proxies"** +Frames like `sun.reflect.GeneratedMethodAccessor`, Spring AOP proxies, or +Jackson deserializers. This is usually framework overhead and often not worth +optimizing unless it's genuinely dominant (>20%). Consider warming strategies +or native compilation (GraalVM). + +**"Wide frame is in a library I don't control"** +Look at *your* code just below it. Can you call this library less often? Can +you cache results? Can you batch calls? The library frame tells you what's +expensive; the frames below it tell you who's calling it. + +--- + +## JFR Files (Java Flight Recorder) + +JFR files are richer than flamegraphs — they contain timestamped events, +multiple event types, JVM metrics, and more. You need a viewer to explore them. + +### Opening JFR files in IntelliJ IDEA (recommended) + +IntelliJ IDEA is the richest viewer for async-profiler output and the +recommended choice for day-to-day profiling work. + +**IntelliJ IDEA Ultimate — built-in profiler** + +Ultimate has async-profiler and JFR support built in, no plugins needed. + +Open a captured `.jfr` file: +- *Run → Open Profiler Snapshot…* → select the file +- Or drag the `.jfr` file directly onto the editor + +You'll see five views across the top tab bar. Start here: + +1. 
**Flame Graph** — same reading rules as HTML flamegraphs (width = time). + Use the search box (Ctrl/Cmd+F) to highlight all frames matching a class + or package. Right-click any frame to jump to source. + +2. **Call Tree** — hierarchical breakdown. Expand hotspots top-down to see + exactly which call path is responsible. The "%" column shows inclusive time. + +3. **Method List** — flat ranked list of methods by self-time. The fastest + way to answer "what is the single hottest method?" Sort by *Self* to find + direct CPU consumers; sort by *Total* for inclusive time. + +4. **Timeline** — thread activity over the profiling window. Each thread is a + row; colours show running vs. blocked vs. waiting state. Use this to spot + contention (many threads blocked at the same moment) or to correlate a + spike with a specific time window. + +5. **Events** — raw JFR event log. Useful for GC events, class loading, JIT + compilations, and socket I/O — things that don't show up in the flame graph. + +**IntelliJ IDEA Community — install the Java JFR Profiler plugin** + +Search the Marketplace for **"Java JFR Profiler"** (by parttimenerd). It adds +full JFR and async-profiler support including flame graph, call tree, and +Firefox Profiler integration: +- *Settings → Plugins → Marketplace* → search "Java JFR Profiler" → Install +- After restart: *Tools → Open JFR File…* or drag the file into the editor + +**Launching async-profiler directly from IntelliJ (Ultimate)** + +You can skip the terminal entirely and profile a run configuration from inside +the IDE: +- Open *Run → Edit Configurations…* +- Select your run configuration → switch to the **Profiler** tab +- Choose **Async Profiler** from the dropdown +- Click the profile button (▶ with the flame icon) instead of the normal run +- IntelliJ attaches async-profiler automatically and opens results when done + +Configure which events to capture at *Settings → Build, Execution, Deployment +→ Java Profiler*. 
+ +**Navigating from flamegraph frame → source** + +In IntelliJ's flamegraph view, right-clicking any frame shows: +- *Navigate to Source* — jumps directly to the method in the editor +- *Find Usages* — shows callers +- *Filter* — narrows the flame to stacks containing this frame + +This makes it much faster to go from "this frame is hot" to "here's the code" +compared to any other viewer. + +--- + +### Other viewers + +**JDK Mission Control (JMC)** +- Download from https://adoptium.net/jmc/ +- Strength: *Automated Analysis Report* — runs heuristics and flags findings + with explanations. Good for a second opinion or sharing with someone who + doesn't have IntelliJ. +- *File → Open File* or drag-and-drop + +**Command-line `jfr` utility** (ships with JDK 14+) +```bash +jfr summary recording.jfr # what event types are present +jfr print --events jdk.ExecutionSample recording.jfr # raw CPU samples +``` + +**`jfrconv`** (bundled with async-profiler — convert to flamegraph HTML) +```bash +jfrconv recording.jfr flamegraph.html # full flamegraph +jfrconv --alloc recording.jfr alloc.html # allocation-only flamegraph +jfrconv recording.jfr collapsed.txt # collapsed stacks for scripting +``` + +### What to examine in JMC / IntelliJ + +After opening a JFR file, prioritize these views: + +1. **Automated Analysis** (JMC only) — runs heuristics and flags findings + automatically. Always start here. + +2. **Method Profiling** → flame graph view of CPU samples + +3. **Memory** → allocation sites, heap occupancy over time, GC events + +4. **Threads** → thread states over time (runnable vs. blocked vs. waiting) + — useful for spotting lock contention + +5. **Lock Instances** → which monitors had the most contention + +6. **I/O** → socket and file read/write events with durations + +### Reading JFR from a `--all` combined profile + +When you capture with `--all`, the JFR contains multiple event streams. In JMC, +each event type appears as a separate section. 
Compare: +- CPU samples vs. wall-clock: identifies blocking vs. compute-bound time +- Allocation events: find garbage-producing call paths +- Lock events: find synchronization bottlenecks + +--- + +## Collapsed Stacks + +Collapsed stack files are plain text in the format: +``` +com/example/App.main;com/example/Service.process;java/util/HashMap.get 42 +com/example/App.main;com/example/Service.process;java/util/HashMap.put 18 +``` + +Each line is a semicolon-separated call stack (bottom to top) followed by +a sample count. They're the input format for the original +[FlameGraph scripts](https://github.com/brendangregg/FlameGraph) and useful +for programmatic analysis. + +### Quick analysis with the bundled script + +`scripts/analyze_collapsed.py` produces a ranked table of self-time and +inclusive-time frames, with percentage bars and filter support: + +```bash +# Top 20 self-time and inclusive frames +python3 scripts/analyze_collapsed.py profile.collapsed + +# Filter to your own code only +python3 scripts/analyze_collapsed.py profile.collapsed --grep 'com/yourcompany' + +# Group by package instead of method +python3 scripts/analyze_collapsed.py profile.collapsed --packages + +# Exclude framework noise +python3 scripts/analyze_collapsed.py profile.collapsed --exclude 'sun/reflect|\$\$Lambda' + +# Top 40 self-time frames as CSV (for further analysis) +python3 scripts/analyze_collapsed.py profile.collapsed --self-time --top 40 --csv +``` + +### Manual analysis with grep/awk + +```bash +# How much time in any HashMap operation? 
+awk '{if ($1 ~ /HashMap/) total += $NF} END {print total}' profile.collapsed + +# Everything involving serialization +grep -i "serial\|jackson\|json" profile.collapsed | awk '{sum+=$NF} END{print sum}' +``` + +### Convert collapsed → flamegraph + +```bash +# Using async-profiler's jfrconv +jfrconv collapsed.txt flamegraph.html + +# Or using the original FlameGraph perl script (if installed) +flamegraph.pl profile.collapsed > flamegraph.svg +``` + +--- + +## Interpreting allocation profiles + +Allocation flamegraphs answer "where is memory being created?" not "what's +alive on the heap?" (for live object analysis, use `-e live` or look at JFR +heap snapshots). + +Key things to look for: + +- **`byte[]` or `char[]` at the top** — string manipulation, serialization, or + logging are common culprits. Look at the callers. +- **`Object[]` allocations** — often from collections growing (`ArrayList.grow`, + `HashMap.resize`). Pre-size collections if you know the expected cardinality. +- **Allocation spikes in request-handling code** — objects created per-request + that could be pooled or cached. +- **Framework allocations** — ORM, serialization libraries often allocate heavily. + Consider caching deserialized objects or using streaming APIs. + +--- + +## Interpreting lock / wall-clock profiles + +When wall-clock shows threads blocked: + +- **`LockSupport.park` + `AbstractQueuedSynchronizer`** — JUC locks + (`ReentrantLock`, semaphores, etc.). Look two frames up to see which lock. +- **`Object.wait`** — classic `synchronized` monitors. The caller is your target. +- **`sun.nio.ch.EPoll.wait` or similar** — network I/O wait. Thread is blocked + on the network. Is connection pool exhausted? Is a remote service slow? +- **`Thread.sleep`** — deliberate sleep (scheduled polling, backoff, etc.). + Usually expected, but verify the intervals are appropriate. 
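The "look at the callers just below the blocking frame" step can be scripted when the wall-clock profile is exported as collapsed stacks. Below is a minimal sketch — pure Python, with made-up `com/example` frame names — that attributes blocked samples to the nearest non-JDK caller beneath a `park`/`wait`/`sleep` leaf; it is an illustration of the idea, not part of the bundled scripts:

```python
from collections import Counter

# Leaf frames that indicate a blocked (not running) thread.
BLOCKING = (
    "Unsafe.park",
    "LockSupport.park",
    "Object.wait",
    "Thread.sleep",
)

def blocked_by_caller(collapsed_text):
    """Attribute samples of stacks ending in a blocking frame to the
    nearest application (non-java/jdk/sun) caller below the block."""
    totals = Counter()
    for line in collapsed_text.strip().splitlines():
        stack, _, count = line.rpartition(" ")
        frames = stack.split(";")
        if not any(b in frames[-1] for b in BLOCKING):
            continue  # leaf is on-CPU, not a blocking call
        for frame in reversed(frames[:-1]):
            if not frame.startswith(("java/", "jdk/", "sun/")):
                totals[frame] += int(count)
                break
    return totals

# Hypothetical wall-clock collapsed stacks:
sample = (
    "com/example/App.run;com/example/Queue.take;"
    "java/util/concurrent/locks/LockSupport.park 120\n"
    "com/example/App.run;com/example/Worker.process;com/example/Cpu.crunch 80\n"
)
print(blocked_by_caller(sample).most_common())
```

In this hypothetical profile the 120 blocked samples land on `com/example/Queue.take`, pointing at the queue as the wait site — exactly the frame the bullets above tell you to investigate.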
+ +--- + +## Worked example: reading a flamegraph + +Suppose you see this pattern in a CPU flamegraph: + +``` +processOrder() ← wide frame (45% of samples) + | + ├── ProductService.loadProduct() ← 30% (wide) + │ └── HibernateSession.find() ← 30% (leaf) + │ + └── TaxCalculator.calculate() ← 10% + └── BigDecimal.multiply() ← 10% (leaf) +``` + +**Diagnosis:** +- 30% of CPU in `HibernateSession.find()` — likely N+1 query problem. + Each `processOrder()` call loads a product via Hibernate one at a time. +- 10% in `BigDecimal.multiply()` — tax calculations using high-precision + arithmetic. Often fine, but if this is called thousands of times per second, + consider pre-computing or caching tax rates. + +**Next steps:** +1. Check if `loadProduct()` could be batched or pre-fetched (JPA `@BatchSize`, + fetch joins, or a bulk load before the loop). +2. Profile with `-e alloc` to see if Hibernate is also creating a lot of garbage. +3. If the fix is non-trivial, capture a JFR (`--all`) to get a fuller picture + before committing to an approach. + +--- + +## When to reach for each output format + +| Situation | Best format | +|---|---| +| Quick overview, share with team | HTML flamegraph | +| Need timestamped events, JVM metrics | JFR + JMC/IntelliJ | +| Scripted / automated analysis | Collapsed stacks | +| Multi-event combined analysis | JFR with `--all` | +| Share with someone without a viewer | HTML flamegraph | diff --git a/skills/async-profiler/profile/SKILL.md b/skills/async-profiler/profile/SKILL.md new file mode 100644 index 000000000..0a83dc565 --- /dev/null +++ b/skills/async-profiler/profile/SKILL.md @@ -0,0 +1,414 @@ +--- +name: async-profiler-profile +description: 'Run async-profiler against a live JVM process to capture CPU, memory allocation, wall-clock, or lock contention profiles and generate flamegraphs or JFR recordings. 
Use this skill whenever a Java developer wants to start a profiling session, capture a flamegraph, find CPU hotspots, identify memory allocation pressure, measure thread blocking or lock contention, or asks: "how do I profile my running Java app", "capture a flamegraph", "find what''s using CPU", "profile heap allocations", "measure lock contention", "generate a JFR recording", "profile for N seconds", "what''s slow in my app". Assumes async-profiler is already installed (see async-profiler-setup skill if not).' +--- + +# async-profiler — Running Profiles + +## Agent-driven background profiling + +Use `scripts/collect.sh` when you need to profile while simultaneously +reproducing the workload — the standard blocking `run_profile.sh` would make +that impossible because it holds the terminal for the full duration. + +`collect.sh` captures all event types (CPU, allocation, wall-clock, lock) in +a single JFR recording and produces four separate flamegraphs when done. + +### When to use `collect.sh` vs `run_profile.sh` + +| Scenario | Use | +|---|---| +| You need the terminal free to run load, tests, or other commands during the capture | `collect.sh start` / `collect.sh stop` | +| Fixed duration, you can background the call | `collect.sh timed -d &` | +| Simple timed capture, terminal can block | `run_profile.sh --comprehensive` | + +### `start` / `stop` workflow — full agent control + +```bash +# 1. Find the JVM process +jps -l + +# 2. Start profiling (returns immediately, saves session state) +bash scripts/collect.sh start + +# 3. Reproduce the problem — run load tests, make requests, etc. +# The profiler is attached and collecting. + +# 4. Stop profiling and generate all flamegraphs +bash scripts/collect.sh stop +``` + +> ⚠️ **macOS: `asprof stop -f ` silently ignores the output path.** +> The JFR is written to `/var/folders//T/_/.jfr` +> regardless of the `-f` argument. 
`collect.sh` handles this automatically by +> creating a sentinel file at `start` time and using `find -newer` to locate the +> JFR after `stop`. If you call `asprof stop` directly, find the file with: +> ```bash +> find /var/folders -maxdepth 8 -name "*.jfr" 2>/dev/null +> ``` + +Output is written to `profile--/` in the current directory. +The directory contains: +- `combined.jfr` — the raw multi-event recording +- `profile-cpu.html`, `profile-alloc.html`, `profile-wall.html`, `profile-lock.html` — interactive flamegraphs + +### `timed` workflow — fixed-duration background capture + +Use this when you know exactly how long the workload takes: + +```bash +# Start a 60-second capture in the background +bash scripts/collect.sh timed -d 60 & +PROF_PID=$! + +# Run your workload here while profiling is active +./run-load-test.sh + +# Wait for the profiler to finish (if workload finished faster) +wait $PROF_PID +``` + +`timed` blocks for the specified duration, so run it in the background with `&`. + +### After collecting + +Once `stop` or `timed` completes, offer to analyze the results immediately. +Read `analyze/SKILL.md` before interpreting the flamegraphs. Each `.html` file +can be opened directly in a browser; pass `.collapsed` files to +`scripts/analyze_collapsed.py` for a ranked self-time table. + +--- + +## IntelliJ IDEA Ultimate — no terminal needed + +If the process you want to profile was launched from IntelliJ, the fastest path +is to use the built-in integration: + +1. Click the **flame icon** next to the run/debug buttons (▶🔥), or use + *Run → Profile '[configuration name]'* +2. IntelliJ attaches async-profiler automatically and opens results when done +3. To choose which events to capture (CPU, allocation, wall-clock): + *Settings → Build, Execution, Deployment → Java Profiler* + +Results open directly in IntelliJ's viewer — see `analyze/SKILL.md` for how to +navigate the flame graph, call tree, and timeline tabs. 
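For a process that was not launched from the IDE, every terminal workflow begins with finding the target PID. `jps -l` prints one `<pid> <main class or jar>` pair per line, which is easy to script; here is a minimal sketch (pure stdlib; `com.example.MyApplication` is a hypothetical main class, not something this skill ships):

```python
import subprocess

def find_jvm_pid(main_class, jps_output=None):
    """Return the PID of the first JVM whose `jps -l` entry ends with
    main_class, or None. jps_output may be injected for testing."""
    if jps_output is None:
        jps_output = subprocess.run(
            ["jps", "-l"], capture_output=True, text=True, check=True
        ).stdout
    for line in jps_output.splitlines():
        pid, _, name = line.partition(" ")
        if name.endswith(main_class):
            return int(pid)
    return None

# `jps -l` prints one "<pid> <main class or jar>" per line:
example = "12345 com.example.MyApplication\n6789 jdk.jcmd/sun.tools.jps.Jps"
print(find_jvm_pid("MyApplication", example))
```

The same one-liner logic works with `pgrep -f java` output if `jps` is unavailable; only the parsing of the second field changes.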
+
+Use the terminal approach below when you need to profile a process that wasn't
+started from IntelliJ (a remote server, a running Docker container, a
+production JVM, etc.).
+
+---
+
+## Always start with `--all`
+
+**`asprof start --all` records CPU, allocation, wall-clock, and lock contention
+simultaneously in a single JFR file.** There is no meaningful overhead penalty
+for capturing all events together compared to capturing just one. You then split
+the JFR into separate flamegraphs with `jfrconv` after the fact.
+
+**Never run separate captures for each event type.** Each capture requires
+reproducing the workload, which is disruptive and often impossible for realistic
+or intermittent problems. Capture once, analyze everything.
+
+```bash
+# Direct asprof — capture all events, produce a single JFR
+jps -l                    # find your PID
+asprof start --all <pid>  # attach, collect everything
+# ... reproduce the problem ...
+asprof stop <pid>         # stop; JFR written to disk
+                          # ⚠️ macOS: see note below on output path
+
+# Then split into flamegraphs:
+jfrconv --cpu combined.jfr cpu.html
+jfrconv --alloc combined.jfr alloc.html
+jfrconv --wall combined.jfr wall.html
+jfrconv --lock combined.jfr lock.html
+```
+
+For agent-driven work, use `collect.sh` instead — it handles the macOS output
+path bug, session state, and the JFR split automatically:
+
+```bash
+bash scripts/collect.sh start <pid>
+# ... reproduce the problem ...
+bash scripts/collect.sh stop <pid>
+# → outputs cpu.html, alloc.html, wall.html, lock.html
+```
+
+---
+
+## Quick start (terminal / remote processes)
+
+### Using the bundled script
+
+`scripts/run_profile.sh` wraps `asprof` with sensible defaults and auto-timestamped
+output files.
+
+**Default — capture all events:**
+```bash
+# One 30s capture → four separate flamegraphs generated in parallel
+bash scripts/run_profile.sh --comprehensive -d 30 <pid>
+```
+This runs a single `--all` JFR capture, then uses `jfrconv` in parallel to
+split it into separate CPU, allocation, wall-clock, and lock flamegraphs.
+On macOS all four open in the browser automatically.
+
+**When you already know which event type to focus on:**
+```bash
+# Allocation only (heap pressure / GC churn)
+bash scripts/run_profile.sh -e alloc -d 60 <pid>
+
+# Wall-clock only (latency / blocking / I/O)
+bash scripts/run_profile.sh -e wall <pid>
+
+# Target by app name instead of PID
+bash scripts/run_profile.sh MyApplication
+```
+
+---
+
+## Choose the right flamegraph to read
+
+`--all` records everything — use `jfrconv` to pick the view that matches your
+symptom:
+
+| Symptom | `jfrconv` flag | What it shows |
+|---|---|---|
+| High CPU, slow throughput | `--cpu` | CPU time by call stack |
+| High GC pressure / heap churn | `--alloc` | Where objects are being allocated |
+| Threads are blocked / latency spikes | `--wall` | All threads regardless of state |
+| Slow synchronized methods | `--lock` | Java monitor contention time |
+
+All four are captured by `asprof start --all` — just open the flamegraph that
+matches your symptom. When in doubt, read the **wall-clock** view first: it
+shows blocked and sleeping threads that CPU profiling misses entirely.
+
+---
+
+## Common profiling scenarios
+
+### The standard approach — capture all, read what you need
+
+```bash
+# Capture everything in one session
+asprof start --all <pid>
+# ... reproduce the problem ...
+asprof stop <pid>
+# ⚠️ macOS: see output path note above — use collect.sh to handle this automatically
+
+# Generate whichever flamegraph(s) you need:
+jfrconv --cpu combined.jfr cpu.html      # CPU hotspots
+jfrconv --alloc combined.jfr alloc.html  # Garbage / allocation pressure
+jfrconv --wall combined.jfr wall.html    # Latency / blocking / I/O
+jfrconv --lock combined.jfr lock.html    # Lock contention
+```
+
+Open any `.html` in a browser. Wide frames at the top are your hotspots.
+
+**What each view shows:**
+- **CPU** — where the CPU is spending time; misses sleeping/blocked threads
+- **Alloc** — which call stacks produce the most heap; wide = large allocations
+- **Wall** — all threads regardless of state; best for latency/I/O investigations
+- **Lock** — time spent *waiting* to acquire monitors (not holding them)
+
+### Fixed-duration capture (blocks terminal)
+
+```bash
+# All events, 60 seconds, output to JFR
+asprof -d 60 --all -f combined.jfr <pid>
+```
+
+Use `collect.sh timed` or background with `&` if you need the terminal free.
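The backgrounding pattern itself is plain shell job control. Here is a minimal, runnable sketch of that pattern — `sleep` stands in for the blocking capture command, and the workload step is a placeholder comment:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the blocking capture, e.g.:
#   asprof -d 60 --all -f combined.jfr <pid>
sleep 1 &
PROF_PID=$!
echo "capture running in background (pid ${PROF_PID})"

# ... reproduce the workload here while sampling continues ...

# Block until the capture finishes, propagating its exit status
wait "${PROF_PID}"
echo "capture finished"
```

`wait` returns the background job's exit status, so under `set -e` a failed capture also fails the wrapping script instead of being silently ignored.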
+
+---
+
+## Key flags to know
+
+### Duration and output
+
+```bash
+-d N      # Profile for N seconds (e.g., -d 30)
+-f FILE   # Output file; extension sets format: .html, .jfr, .txt, .collapsed
+```
+
+File extension drives format automatically:
+- `.html` → interactive flamegraph (recommended for sharing)
+- `.jfr` → JFR recording (for IntelliJ / JDK Mission Control)
+- `.collapsed` → raw collapsed stacks (for FlameGraph scripts)
+- `.txt` → plain-text summary
+
+### Targeting
+
+```bash
+# Attach to a specific PID
+asprof -d 30 -f out.html 12345
+
+# Auto-detect if only one JVM is running
+asprof -d 30 -f out.html jps
+
+# Target by application name
+asprof -d 30 -f out.html MyApplication
+```
+
+### Thread-level breakdown
+
+```bash
+# Separate flame per thread (useful for pinpointing which thread is the culprit)
+asprof -d 30 -t -f out.html <pid>
+```
+
+### Sampling interval
+
+```bash
+# Sample every 1ms (default is ~10ms; lower = more detail but higher overhead)
+-i 1ms
+
+# Sample every N nanoseconds
+-i 500000   # 0.5ms
+```
+
+Note: on macOS with itimer, the minimum effective interval is ~10ms regardless
+of what you specify.
+
+### Stack depth and filtering
+
+```bash
+-j 512   # Max stack depth (default 2048; reduce if stacks are very deep)
+
+# Include only frames matching a pattern
+-I 'com/mycompany/*'
+
+# Exclude frames matching a pattern
+-X 'sun/reflect/*'
+```
+
+### Long-running or manual start/stop
+
+Sometimes you want to start profiling, do a specific action, then stop rather
+than time-boxing it:
+
+```bash
+# Start profiling (runs indefinitely)
+asprof start -e cpu <pid>
+
+# ... do your thing ...
+
+# Stop and write output
+asprof stop -f profile.html <pid>
+
+# Or dump a snapshot without stopping (live sampling continues)
+asprof dump -f snapshot.html <pid>
+```
+
+> ⚠️ **macOS only:** `asprof stop -f <file>` silently ignores the `-f` path.
+> Use `bash scripts/collect.sh start/stop` instead — it handles this automatically.
+> If calling `asprof` directly, find the output with:
+> ```bash
+> find /var/folders -maxdepth 8 -name "*.jfr" 2>/dev/null
+> ```
+
+---
+
+## Continuous profiling
+
+For finding intermittent regressions, profile in a loop and dump results
+periodically:
+
+```bash
+# Dump a new flamegraph every 60 seconds, cycling indefinitely
+# %t in the filename is replaced with a timestamp
+asprof -e cpu --loop 60s -f /tmp/profile-%t.html <pid>
+```
+
+---
+
+## Attach as Java agent (no dynamic attach)
+
+If the JVM doesn't allow dynamic attach (common in locked-down environments),
+use the agent at startup:
+
+```bash
+java -agentpath:/path/to/libasyncProfiler.so=start,event=cpu,interval=1ms,file=output.html,duration=60 \
+     -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints \
+     -jar myapp.jar
+```
+
+Agent options are comma-separated after the `=`. Duration is in seconds.
+
+---
+
+## macOS-specific notes
+
+- Default CPU engine is **itimer** — works without elevated privileges
+- No kernel frame collection (platform limitation, not a bug)
+- itimer has a known bias toward system calls; wall-clock (`--wall`) is often
+  more representative for latency investigations on macOS
+- Minimum sampling interval ~10ms (kernel timer resolution)
+
+These limitations don't make macOS profiling useless — CPU and wall-clock
+flamegraphs are still highly actionable for application-level code.
+
+---
+
+## Overhead and production use
+
+async-profiler is designed to be low-overhead:
+
+- **CPU profiling**: ~1-3% overhead at default intervals
+- **Allocation profiling**: ~1-5% depending on allocation rate (uses TLAB sampling)
+- **Wall-clock**: ~1% overhead (timer-based, not instruction-based)
+
+It's reasonable to run brief (30-60s) profiles in production. For longer sessions,
+use the `--memlimit` flag to cap memory usage:
+
+```bash
+asprof -d 300 --memlimit 256m -f profile.html <pid>
+```
+
+---
+
+## jfrconv syntax
+
+Convert a JFR recording to flamegraphs:
+
+```bash
+jfrconv --cpu combined.jfr cpu.html
+jfrconv --alloc combined.jfr alloc.html
+jfrconv --lock combined.jfr lock.html
+jfrconv --wall combined.jfr wall.html
+```
+
+> ⚠️ The event flag (`--cpu`, `--alloc`, etc.) must come **before** the input
+> file. The form `jfrconv input.jfr --event cpu output.html` does not work.
+
+---
+
+## Session layout (recommended)
+
+Store all output in one versioned directory per session:
+
+```
+profiling/
+  session-1/
+    combined.jfr
+    profile-cpu.html
+    profile-alloc.html
+    profile-wall.html
+    profile-lock.html
+    findings.md
+```
+
+---
+
+## After profiling: always offer to analyze
+
+Once a profile capture completes, **always offer to analyze the results
+immediately** — don't wait for the user to ask. Say something like:
+
+> "The profile is saved at `profile-all-20250409-143201-cpu.html`. Want me to
+> analyze it and identify the bottlenecks?"
+
+Then read `analyze/SKILL.md` and interpret the output. If it's a JFR file,
+offer to run `jfrconv` to extract flamegraphs first. If it's collapsed stacks,
+offer to run `scripts/analyze_collapsed.py`. The user has already done the hard
+part (reproducing the problem) — close the loop for them.
diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py
new file mode 100644
index 000000000..221636ef6
--- /dev/null
+++ b/skills/async-profiler/scripts/analyze_collapsed.py
@@ -0,0 +1,243 @@
+#!/usr/bin/env python3
+"""
+analyze_collapsed.py — Quick analysis of async-profiler collapsed stack output.
+
+Collapsed stack format: each line is a semicolon-separated call stack
+(bottom frame first) followed by a sample count:
+    com/example/App.main;com/example/Service.process;java/util/HashMap.get 42
+
+Usage:
+    python analyze_collapsed.py <file> [options]
+
+Options:
+    --top N            Show top N frames (default: 20)
+    --grep PATTERN     Filter: only include stacks matching PATTERN
+    --exclude PATTERN  Filter: exclude stacks matching PATTERN
+    --packages         Group results by top-level package instead of method
+    --self-time        Show only leaf (self-time) frames, not inclusive time
+    --csv              Output as CSV instead of table
+"""
+
+from __future__ import annotations
+
+import sys
+import re
+from collections import defaultdict
+from pathlib import Path
+
+
+def parse_collapsed(path: str) -> list[tuple[list[str], int]]:
+    """Parse a collapsed stack file into (frames, count) tuples."""
+    stacks = []
+    with open(path, "r", encoding="utf-8", errors="replace") as f:
+        for lineno, line in enumerate(f, 1):
+            line = line.strip()
+            if not line or line.startswith("#"):
+                continue
+            # Last token is the count; everything before is the stack
+            parts = line.rsplit(" ", 1)
+            if len(parts) != 2:
+                continue
+            try:
+                count = int(parts[1])
+            except ValueError:
+                continue
+            frames = parts[0].split(";")
+            stacks.append((frames, count))
+    return stacks
+
+
+def top_leaf_frames(stacks, n=20, grep=None, exclude=None):
+    """Count samples where each frame is the leaf (top of stack = actual work)."""
+    counts = defaultdict(int)
+    for frames, count in stacks:
+        if not frames:
+            continue
+        stack_str = ";".join(frames)
+        if grep and not re.search(grep, stack_str, re.IGNORECASE):
+            continue
+        if exclude and re.search(exclude, stack_str, re.IGNORECASE):
+            continue
+        leaf = frames[-1]
+        counts[leaf] += count
+    return sorted(counts.items(), key=lambda x: x[1], reverse=True)[:n]
+
+
+def top_inclusive_frames(stacks, n=20, grep=None, exclude=None):
+    """Count samples where each frame appears anywhere in the stack (inclusive time)."""
+ counts = defaultdict(int) + for frames, count in stacks: + stack_str = ";".join(frames) + if grep and not re.search(grep, stack_str, re.IGNORECASE): + continue + if exclude and re.search(exclude, stack_str, re.IGNORECASE): + continue + seen = set() + for frame in frames: + if frame not in seen: + counts[frame] += count + seen.add(frame) + return sorted(counts.items(), key=lambda x: x[1], reverse=True)[:n] + + +def top_packages(stacks, n=20, grep=None, exclude=None): + """Group inclusive time by top-level Java package.""" + counts = defaultdict(int) + for frames, count in stacks: + stack_str = ";".join(frames) + if grep and not re.search(grep, stack_str, re.IGNORECASE): + continue + if exclude and re.search(exclude, stack_str, re.IGNORECASE): + continue + seen_pkgs = set() + for frame in frames: + # Extract package: everything up to the last '/' before the class name + # e.g. "com/example/Service.process" → "com/example" + # e.g. "[vmlinux]" → "[kernel]" + if frame.startswith("["): + pkg = frame # kernel / JVM internal frame + elif "/" in frame: + pkg = frame.rsplit("/", 1)[0].replace("/", ".") + elif "." 
in frame: + pkg = frame.rsplit(".", 1)[0] + else: + pkg = frame + if pkg not in seen_pkgs: + counts[pkg] += count + seen_pkgs.add(pkg) + return sorted(counts.items(), key=lambda x: x[1], reverse=True)[:n] + + +def print_table(rows, total, header_left, header_right="Samples", csv_mode=False): + if csv_mode: + print(f"{header_left},{header_right},Pct") + for name, count in rows: + pct = 100.0 * count / total if total else 0 + print(f"{name},{count},{pct:.1f}") + return + + if not rows: + print(" (no data)") + return + + max_name = max(len(r[0]) for r in rows) + max_name = max(max_name, len(header_left)) + col_w = min(max_name, 80) + + bar_total = rows[0][1] if rows else 1 + print(f" {'─' * (col_w + 32)}") + print(f" {header_left:<{col_w}} {header_right:>8} {'%':>6} {'bar'}") + print(f" {'─' * (col_w + 32)}") + + for name, count in rows: + pct = 100.0 * count / total if total else 0 + bar_len = int(30 * count / bar_total) if bar_total else 0 + bar = "█" * bar_len + display = name if len(name) <= col_w else "…" + name[-(col_w - 1) :] + print(f" {display:<{col_w}} {count:>8,} {pct:>5.1f}% {bar}") + + print(f" {'─' * (col_w + 32)}") + + +def main(): + import argparse + + parser = argparse.ArgumentParser( + description="Analyze async-profiler collapsed stack output", + formatter_class=argparse.RawDescriptionHelpFormatter, + ) + parser.add_argument("file", help="Path to .collapsed stack file") + parser.add_argument( + "--top", type=int, default=20, help="Number of top frames to show" + ) + parser.add_argument( + "--grep", metavar="PATTERN", help="Only include stacks matching this regex" + ) + parser.add_argument( + "--exclude", metavar="PATTERN", help="Exclude stacks matching this regex" + ) + parser.add_argument( + "--packages", action="store_true", help="Group by package instead of method" + ) + parser.add_argument( + "--self-time", + action="store_true", + dest="self_time", + help="Show only leaf frames (self-time), not inclusive", + ) + parser.add_argument("--csv", 
action="store_true", help="Output as CSV") + args = parser.parse_args() + + path = args.file + if not Path(path).exists(): + print(f"❌ File not found: {path}", file=sys.stderr) + sys.exit(1) + + print("\n📊 async-profiler collapsed stack analysis") + print(f" File: {path}\n") + + stacks = parse_collapsed(path) + if not stacks: + print("❌ No stack data found. Is this a valid .collapsed file?") + sys.exit(1) + + total_samples = sum(c for _, c in stacks) + total_stacks = len(stacks) + + filters = "" + if args.grep: + filters += f" grep={args.grep}" + if args.exclude: + filters += f" exclude={args.exclude}" + if filters: + # count how many survive the filter + surviving = sum( + c + for frames, c in stacks + if (not args.grep or re.search(args.grep, ";".join(frames), re.IGNORECASE)) + and ( + not args.exclude + or not re.search(args.exclude, ";".join(frames), re.IGNORECASE) + ) + ) + matching_pct = 0.0 if total_samples == 0 else 100 * surviving / total_samples + print(f" Filters applied:{filters}") + print( + f" Matching samples: {surviving:,} / {total_samples:,} " + f"({matching_pct:.1f}%)\n" + ) + + print(f" Total samples : {total_samples:,}") + print(f" Unique stacks : {total_stacks:,}\n") + + if args.packages: + rows = top_packages(stacks, args.top, args.grep, args.exclude) + print(f" Top {args.top} packages by inclusive time:\n") + print_table(rows, total_samples, "Package", csv_mode=args.csv) + elif args.self_time: + rows = top_leaf_frames(stacks, args.top, args.grep, args.exclude) + print(f" Top {args.top} methods by self-time (leaf frames):\n") + print_table(rows, total_samples, "Method (leaf / self-time)", csv_mode=args.csv) + else: + # Default: show both self-time and inclusive for context + leaf_rows = top_leaf_frames(stacks, args.top, args.grep, args.exclude) + incl_rows = top_inclusive_frames(stacks, args.top, args.grep, args.exclude) + + print(f" Top {args.top} by self-time (leaf frames — actual CPU consumers):\n") + print_table(leaf_rows, total_samples, 
"Method (self-time)", csv_mode=args.csv) + print() + print(f" Top {args.top} by inclusive time (appears anywhere in stack):\n") + print_table(incl_rows, total_samples, "Method (inclusive)", csv_mode=args.csv) + + print() + print(" Tips:") + print(" • High self-time → direct optimization target") + print(" • High inclusive but low self-time → dispatcher/framework overhead") + print(" • Filter to your code: --grep 'com/yourcompany'") + print(" • Exclude noise: --exclude 'sun/reflect|\\$\\$Lambda'") + print(" • Group by package: --packages") + print() + + +if __name__ == "__main__": + main() diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh new file mode 100755 index 000000000..cb5e40fba --- /dev/null +++ b/skills/async-profiler/scripts/collect.sh @@ -0,0 +1,364 @@ +#!/usr/bin/env bash +# collect.sh — Agent-friendly async-profiler background collection. +# +# Designed for coding agents that need to start profiling without blocking +# so they can reproduce the problem, run load, or do other work while data +# is being collected. +# +# Usage: +# bash scripts/collect.sh start [--asprof PATH] +# bash scripts/collect.sh stop [--asprof PATH] +# bash scripts/collect.sh timed [-d N] [--asprof PATH] +# +# Subcommands: +# start Attach asprof and begin recording all events; returns immediately. +# Session state is saved in $XDG_RUNTIME_DIR when available, otherwise +# under /tmp, so 'stop' knows where to write output. +# stop Stop the active session, split the JFR into four per-event flamegraphs +# in parallel (cpu, alloc, wall, lock), then print paths to all outputs. +# timed Fixed-duration all-event capture that blocks for the duration. +# Run with & to let the agent continue working; then: wait $PROF_PID +# +# Agent workflow — start/stop (full control): +# bash scripts/collect.sh start 12345 +# # ... reproduce the problem, trigger load, wait for requests, etc. ... 
+# bash scripts/collect.sh stop 12345 +# +# Agent workflow — timed background: +# bash scripts/collect.sh timed -d 30 12345 & +# PROF_PID=$! +# # ... trigger load while profiling runs ... +# wait $PROF_PID +# +# Output layout: +# profile--/ +# combined.jfr — multi-event JFR (open in IntelliJ or JMC) +# profile-cpu.html — CPU flamegraph +# profile-alloc.html — allocation flamegraph +# profile-wall.html — wall-clock flamegraph +# profile-lock.html — lock contention flamegraph + +set -euo pipefail + +# ── Parse subcommand ────────────────────────────────────────────────────────── +if [[ $# -eq 0 ]]; then + sed -n '2,35p' "$0" | grep '^#' | sed 's/^# \?//' + exit 0 +fi + +SUBCMD="$1"; shift + +# ── Parse options ───────────────────────────────────────────────────────────── +DURATION=30 +TARGET="" +ASPROF_ARG="" + +while [[ $# -gt 0 ]]; do + case "$1" in + -d|--duration) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; DURATION="$2"; shift 2 ;; + --asprof) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; ASPROF_ARG="$2"; shift 2 ;; + -h|--help) + sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' + exit 0 + ;; + -*) + echo "❌ Unknown option: $1" >&2 + exit 1 + ;; + *) + TARGET="$1"; shift ;; + esac +done + +if [[ -z "$TARGET" && "$SUBCMD" != "help" ]]; then + echo "❌ No target specified. Provide a PID or app name." >&2 + echo " List Java processes: jps -l" >&2 + exit 1 +fi + +# ── Helpers ─────────────────────────────────────────────────────────────────── +locate_asprof() { + local asprof="" + if [[ -n "$ASPROF_ARG" ]]; then + asprof="$ASPROF_ARG" + elif command -v asprof &>/dev/null; then + asprof="$(command -v asprof)" + else + for candidate in \ + "$HOME/async-profiler-4.3/bin/asprof" \ + "$HOME/async-profiler/bin/asprof" \ + "/opt/async-profiler/bin/asprof" \ + "/usr/local/bin/asprof" + do + if [[ -x "$candidate" ]]; then + asprof="$candidate" + break + fi + done + fi + if [[ -z "$asprof" ]]; then + echo "❌ asprof not found. 
Install with: bash scripts/install.sh" >&2 + exit 1 + fi + echo "$asprof" +} + +locate_jfrconv() { + local asprof="$1" + if command -v jfrconv &>/dev/null; then + command -v jfrconv + elif [[ -x "$(dirname "$asprof")/jfrconv" ]]; then + echo "$(dirname "$asprof")/jfrconv" + else + echo "" + fi +} + +# Session state file — stores output path and asprof path between start/stop. +session_file() { + local safe uid state_dir + safe="${TARGET//[^a-zA-Z0-9_-]/_}" + uid="$(id -u)" + + if [[ -n "${XDG_RUNTIME_DIR:-}" && -d "${XDG_RUNTIME_DIR}" && -w "${XDG_RUNTIME_DIR}" ]]; then + state_dir="${XDG_RUNTIME_DIR}" + else + state_dir="/tmp" + fi + + echo "${state_dir}/asprof-session-${uid}-${safe}" +} + +split_jfr() { + local jfrconv="$1" + local jfr_path="$2" + local base="$3" + + local cpu_html="${base}-cpu.html" + local alloc_html="${base}-alloc.html" + local wall_html="${base}-wall.html" + local lock_html="${base}-lock.html" + + echo "Splitting JFR into per-event flamegraphs in parallel..." + # jfrconv: event flag must come FIRST, before the input file + "$jfrconv" --cpu "$jfr_path" "$cpu_html" & + local pid_cpu=$! + "$jfrconv" --alloc "$jfr_path" "$alloc_html" & + local pid_alloc=$! + "$jfrconv" --wall "$jfr_path" "$wall_html" & + local pid_wall=$! + "$jfrconv" --lock "$jfr_path" "$lock_html" & + local pid_lock=$! + local wait_failed=0 + local _pid _label + for _pid in "$pid_cpu" "$pid_alloc" "$pid_wall" "$pid_lock"; do + case "$_pid" in + "$pid_cpu") _label="cpu" ;; + "$pid_alloc") _label="alloc" ;; + "$pid_wall") _label="wall" ;; + "$pid_lock") _label="lock" ;; + esac + if ! wait "$_pid"; then + echo "ERROR: jfrconv ${_label} conversion failed." 
>&2 + wait_failed=1 + fi + done + if [[ "$wait_failed" -ne 0 ]]; then + return 1 + fi + + echo "" + echo "📊 Flamegraphs ready:" + echo " CPU time : $cpu_html" + echo " Allocations : $alloc_html" + echo " Wall-clock : $wall_html" + echo " Lock contention : $lock_html" + echo " Combined JFR : $jfr_path" + + if [[ "$(uname)" == "Darwin" ]]; then + echo "" + echo "Opening all flamegraphs in browser..." + open "$cpu_html" "$alloc_html" "$wall_html" "$lock_html" + fi + + local base_dir; base_dir="$(dirname "$jfr_path")" + echo "" + echo "💡 Next step: analyze results." + echo " For collapsed stack analysis (CPU):" + echo " jfrconv --cpu $jfr_path ${base}-cpu.collapsed" + echo " python3 scripts/analyze_collapsed.py ${base}-cpu.collapsed" +} + +# ── start ───────────────────────────────────────────────────────────────────── +cmd_start() { + local asprof; asprof="$(locate_asprof)" + local timestamp; timestamp="$(date +%Y%m%d-%H%M%S)" + local safe_target; safe_target="$(printf '%s' "$TARGET" | tr -c '[:alnum:]._-' '_')" + [[ -n "$safe_target" ]] || safe_target="unknown" + local outdir="profile-${safe_target}-${timestamp}" + mkdir -p "$outdir" + local jfr_path; jfr_path="$(pwd)/${outdir}/combined.jfr" + local sess; sess="$(session_file)" + + echo "▶ Starting all-event async-profiler on target: $TARGET" + echo " Binary : $asprof" + echo " Output dir: $outdir/" + echo " Events : cpu + alloc + wall + lock (combined JFR)" + echo "" + + # macOS: asprof stop ignores -f and writes to /var/folders instead. + # Create a sentinel so we can find the JFR after stop via find -newer. + local sentinel; sentinel="$(mktemp "/tmp/asprof-sentinel.XXXXXX")" + if [[ -L "$sentinel" ]]; then + echo "❌ mktemp created a symlink for the sentinel file: $sentinel" >&2 + exit 1 + fi + + "$asprof" start --all "$TARGET" + + # Save session state (jfr_path, asprof binary, sentinel path) + if [[ -L "$sess" ]]; then + echo "❌ Session file path is a symlink — refusing to use it." 
>&2 + rm -f "$sentinel"; exit 1 + fi + (umask 077; printf '%s\n%s\n%s\n' "$jfr_path" "$asprof" "$sentinel" > "$sess") + + echo "✅ Profiling started. Session state: $sess" + echo "" + echo "Now reproduce the problem — make requests, run load, wait for the" + echo "slow operation, etc. asprof is collecting all event types." + echo "" + echo "When ready to collect results:" + echo " bash scripts/collect.sh stop $TARGET" +} + +# ── stop ────────────────────────────────────────────────────────────────────── +cmd_stop() { + local sess; sess="$(session_file)" + + if [[ ! -f "$sess" ]]; then + echo "❌ No active session found for target '$TARGET'." >&2 + echo " Expected state file: $sess" >&2 + echo " Run first: bash scripts/collect.sh start $TARGET" >&2 + exit 1 + fi + + local jfr_path; jfr_path="$(sed -n '1p' "$sess")" + local asprof; asprof="$(sed -n '2p' "$sess")" + local sentinel; sentinel="$(sed -n '3p' "$sess")" + [[ -n "$ASPROF_ARG" ]] && asprof="$ASPROF_ARG" + + echo "⏹ Stopping profiler on target: $TARGET" + # Note: on macOS, -f is silently ignored by asprof stop — handled below. + "$asprof" stop -f "$jfr_path" "$TARGET" + # Session file is removed only after the JFR is confirmed written (see end of block). + + # ── macOS JFR path workaround ──────────────────────────────────────────── + # On macOS, asprof stop ignores -f and writes the JFR to: + # /var/folders//T/_/.jfr + # Use the sentinel (created at 'start') to find the file via find -newer. + if [[ "$(uname)" == "Darwin" ]] && [[ -n "$sentinel" ]] && [[ -f "$sentinel" ]]; then + echo "" + echo "⚠️ macOS: -f is ignored by asprof stop — locating JFR in /var/folders..." + local found_jfr="" + local -a jfr_matches=() + local jfr_candidate + while IFS= read -r -d '' jfr_candidate; do + jfr_matches+=("$jfr_candidate") + done < <(find /var/folders -maxdepth 8 -name "*.jfr" -newer "$sentinel" -print0 2>/dev/null) + + # Sort by mtime (newest first) to avoid picking up an unrelated recording. 
+ if [[ ${#jfr_matches[@]} -gt 0 ]]; then + found_jfr=$(ls -1t "${jfr_matches[@]}" 2>/dev/null | head -1) + fi + if [[ -n "$found_jfr" ]]; then + cp "$found_jfr" "$jfr_path" + rm -f "$sentinel" + echo " Found: $found_jfr" + echo " Copied to: $jfr_path" + else + echo "❌ Could not find JFR in /var/folders. Try:" + echo " find /var/folders -maxdepth 8 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + echo " (The JFR may still be there — copy it manually to $jfr_path)" + echo " Sentinel preserved at: $sentinel for retry" + echo " Session state preserved at: $sess" + exit 1 + fi + else + rm -f "$sentinel" 2>/dev/null || true + fi + # ──────────────────────────────────────────────────────────────────────── + if [[ ! -s "$jfr_path" ]]; then + echo "❌ Profiling stopped but expected JFR output is missing or empty: $jfr_path" + echo " Session state preserved at: $sess" + exit 1 + fi + rm -f "$sess" + + echo "" + echo "✅ Capture saved: $jfr_path" + echo "" + + local jfrconv; jfrconv="$(locate_jfrconv "$asprof")" + if [[ -z "$jfrconv" ]]; then + echo "⚠️ jfrconv not found — skipping flamegraph split." + echo " Convert manually: jfrconv --cpu $jfr_path cpu.html" + echo " Or open in IntelliJ IDEA or JDK Mission Control." 
+ return + fi + + local base; base="$(dirname "$jfr_path")/profile" + split_jfr "$jfrconv" "$jfr_path" "$base" +} + +# ── timed ───────────────────────────────────────────────────────────────────── +cmd_timed() { + local asprof; asprof="$(locate_asprof)" + local timestamp; timestamp="$(date +%Y%m%d-%H%M%S)" + local safe_target; safe_target="$(printf '%s' "$TARGET" | tr -c '[:alnum:]._-' '_')" + [[ -n "$safe_target" ]] || safe_target="unknown" + local outdir="profile-${safe_target}-${timestamp}" + mkdir -p "$outdir" + local jfr_path="${outdir}/combined.jfr" + + echo "⏱ ${DURATION}s all-event capture on target: $TARGET" + echo " Binary : $asprof" + echo " Output : $jfr_path" + echo " Events : cpu + alloc + wall + lock" + echo "" + echo "Running for ${DURATION}s — trigger your workload now." + echo "(If called with &, the agent can do other work and then: wait \$PROF_PID)" + echo "" + + "$asprof" -d "$DURATION" --all -f "$jfr_path" "$TARGET" + + echo "" + echo "✅ Capture complete: $jfr_path" + echo "" + + local jfrconv; jfrconv="$(locate_jfrconv "$asprof")" + if [[ -z "$jfrconv" ]]; then + echo "⚠️ jfrconv not found — skipping flamegraph split." + echo " Open $jfr_path in IntelliJ IDEA or JDK Mission Control." 
+ return + fi + + local base="${outdir}/profile" + split_jfr "$jfrconv" "$jfr_path" "$base" +} + +# ── Dispatch ────────────────────────────────────────────────────────────────── +case "$SUBCMD" in + start) cmd_start ;; + stop) cmd_stop ;; + timed) cmd_timed ;; + help|-h|--help) + sed -n '2,35p' "$0" | grep '^#' | sed 's/^# \?//' + exit 0 + ;; + *) + echo "❌ Unknown subcommand: '$SUBCMD'" >&2 + echo " Valid subcommands: start | stop | timed" >&2 + exit 1 + ;; +esac diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh new file mode 100644 index 000000000..2987eca76 --- /dev/null +++ b/skills/async-profiler/scripts/install.sh @@ -0,0 +1,147 @@ +#!/usr/bin/env bash +# install.sh — Download and install async-profiler for the current platform. +# +# Usage: +# ./install.sh # installs to ~/async-profiler-4.3 +# ./install.sh /opt/profilers # installs to /opt/profilers/async-profiler-4.3 +# ./install.sh --path-only # just prints the install path (for scripting) +# +# After install, the script prints the path to the asprof binary. 
+ +set -euo pipefail + +VERSION="4.3" +BASE_URL="https://github.com/async-profiler/async-profiler/releases/download/v${VERSION}" +INSTALL_PARENT="${1:-$HOME}" + +# --path-only: don't install, just print where asprof would end up +if [[ "${1:-}" == "--path-only" ]]; then + echo "$HOME/async-profiler-${VERSION}/bin/asprof" + exit 0 +fi + +# ── Detect platform ────────────────────────────────────────────────────────── +OS="$(uname -s)" +ARCH="$(uname -m)" + +case "$OS" in + Darwin) + PLATFORM="macos" + ;; + Linux) + PLATFORM="linux" + ;; + *) + echo "❌ Unsupported OS: $OS (async-profiler supports Linux and macOS)" + exit 1 + ;; +esac + +case "$ARCH" in + x86_64|amd64) ARCH_LABEL="x64" ;; + aarch64|arm64) ARCH_LABEL="arm64" ;; + *) + echo "❌ Unsupported architecture: $ARCH" + exit 1 + ;; +esac + +# macOS ships as a single universal binary (covers both x64 and arm64) +if [[ "$PLATFORM" == "macos" ]]; then + ARCHIVE="async-profiler-${VERSION}-macos.zip" + EXTRACTED_DIR="async-profiler-${VERSION}-macos" +else + ARCHIVE="async-profiler-${VERSION}-linux-${ARCH_LABEL}.tar.gz" + EXTRACTED_DIR="async-profiler-${VERSION}-linux-${ARCH_LABEL}" +fi + +INSTALL_DIR="${INSTALL_PARENT}/async-profiler-${VERSION}" +DOWNLOAD_URL="${BASE_URL}/${ARCHIVE}" + +# ── Already installed? ─────────────────────────────────────────────────────── +if [[ -x "${INSTALL_DIR}/bin/asprof" ]]; then + echo "✅ async-profiler ${VERSION} is already installed at: ${INSTALL_DIR}" + echo " Binary: ${INSTALL_DIR}/bin/asprof" + exit 0 +fi + +# Destination exists but is not a valid installation — refuse to clobber. 
+if [[ -e "${INSTALL_DIR}" ]]; then + echo "❌ Install destination already exists but does not appear to be a valid async-profiler installation:" + echo " ${INSTALL_DIR}" + echo " Expected executable: ${INSTALL_DIR}/bin/asprof" + echo " Remove it manually and re-run, or choose a different parent directory:" + echo " bash scripts/install.sh /path/to/dir" + exit 1 +fi + +# ── Download ───────────────────────────────────────────────────────────────── +echo "📦 Installing async-profiler ${VERSION} for ${PLATFORM}-${ARCH_LABEL}..." +echo " Downloading: ${DOWNLOAD_URL}" + +TMP_DIR="$(mktemp -d)" +trap 'rm -rf "$TMP_DIR"' EXIT + +cd "$TMP_DIR" + +if command -v curl &>/dev/null; then + curl -fsSL -o "$ARCHIVE" "$DOWNLOAD_URL" +elif command -v wget &>/dev/null; then + wget -q -O "$ARCHIVE" "$DOWNLOAD_URL" +else + echo "❌ Neither curl nor wget found. Install one and retry." + exit 1 +fi + +# ── Extract ────────────────────────────────────────────────────────────────── +echo " Extracting..." +if [[ "$ARCHIVE" == *.zip ]]; then + if ! command -v unzip &>/dev/null; then + echo "❌ 'unzip' is required to extract the macOS archive but was not found." + echo " Install it with: brew install unzip" + exit 1 + fi + unzip -q "$ARCHIVE" +else + tar xf "$ARCHIVE" +fi + +# Move into place +mkdir -p "$INSTALL_PARENT" +mv "$EXTRACTED_DIR" "$INSTALL_DIR" +chmod +x "${INSTALL_DIR}/bin/asprof" + +# macOS: remove quarantine flag so Gatekeeper doesn't block it +if [[ "$PLATFORM" == "macos" ]]; then + xattr -dr com.apple.quarantine "${INSTALL_DIR}" 2>/dev/null || true +fi + +# ── Verify ─────────────────────────────────────────────────────────────────── +ASPROF="${INSTALL_DIR}/bin/asprof" +if ! "$ASPROF" --version &>/dev/null; then + echo "❌ Installed but 'asprof --version' failed. Check $INSTALL_DIR" + exit 1 +fi + +INSTALLED_VERSION="$("$ASPROF" --version 2>&1 | head -1)" + +echo "" +echo "✅ async-profiler installed successfully!" 
+echo " Version : $INSTALLED_VERSION" +echo " Location: ${INSTALL_DIR}" +echo " Binary : ${ASPROF}" +echo "" +echo "To add asprof to your PATH, add this to ~/.zshrc or ~/.bashrc:" +echo " export PATH=\"${INSTALL_DIR}/bin:\$PATH\"" +echo "" + +# ── macOS: print limitation note ───────────────────────────────────────────── +if [[ "$PLATFORM" == "macos" ]]; then + echo "ℹ️ macOS note: async-profiler uses the itimer CPU engine on macOS." + echo " Kernel stack frames are not available (platform limitation)." + echo " CPU and allocation profiles are still highly useful." + echo "" +fi + +echo "Quick test (requires a running JVM — find PID with: jps -l):" +echo " asprof -d 5 " diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh new file mode 100644 index 000000000..09a6867a1 --- /dev/null +++ b/skills/async-profiler/scripts/run_profile.sh @@ -0,0 +1,253 @@ +#!/usr/bin/env bash +# run_profile.sh — Wrapper around asprof for common profiling scenarios. 
+#
+# Usage:
+# ./run_profile.sh [options] <pid-or-app-name>
+#
+# Options:
+# -e, --event cpu|alloc|wall|lock Single event (default: cpu)
+# -d, --duration N Seconds to profile (default: 30)
+# -f, --format html|jfr|collapsed Output format for single-event (default: html)
+# -o, --output FILE Output path (default: auto-named)
+# -t, --threads Profile threads separately
+# --all Capture all events to a JFR file
+# --comprehensive Capture all events AND split into per-event
+# flamegraphs in parallel (recommended for
+# diagnosis when you don't know the cause)
+# --asprof PATH Path to asprof binary (auto-detected)
+# -h, --help Show this help
+#
+# Examples:
+# ./run_profile.sh 12345 # 30s CPU flamegraph
+# ./run_profile.sh --comprehensive 12345 # all events, split into flamegraphs
+# ./run_profile.sh -e alloc -d 60 MyApp # 60s allocation flamegraph
+# ./run_profile.sh -e wall -f jfr 12345 # wall-clock JFR recording
+# ./run_profile.sh --all -d 120 12345 # all events, single JFR file
+
+set -euo pipefail
+
+# ── Defaults ─────────────────────────────────────────────────────────────────
+EVENT="cpu"
+DURATION=30
+FORMAT="html"
+OUTPUT=""
+THREADS=false
+ALL_EVENTS=false
+COMPREHENSIVE=false
+ASPROF=""
+TARGET=""
+
+# ── Parse arguments ───────────────────────────────────────────────────────────
+while [[ $# -gt 0 ]]; do
+ case "$1" in
+ -e|--event) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; EVENT="$2"; shift 2 ;;
+ -d|--duration) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; DURATION="$2"; shift 2 ;;
+ -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; shift 2 ;;
+ -o|--output) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; OUTPUT="$2"; shift 2 ;;
+ -t|--threads) THREADS=true; shift ;;
+ --all) ALL_EVENTS=true; FORMAT="jfr"; shift ;;
+ --comprehensive) COMPREHENSIVE=true; ALL_EVENTS=true; FORMAT="jfr"; shift ;;
+ --asprof) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 
1; }; ASPROF="$2"; shift 2 ;; + -h|--help) + sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' + exit 0 + ;; + -*) + echo "❌ Unknown option: $1" >&2 + exit 1 + ;; + *) + TARGET="$1" + shift + ;; + esac +done + +if [[ -z "$TARGET" ]]; then + echo "❌ No target specified. Provide a PID or app name." + echo " Usage: $0 [options] " + echo " List Java processes: jps -l" + exit 1 +fi + +# ── Locate asprof ───────────────────────────────────────────────────────────── +if [[ -z "$ASPROF" ]]; then + if command -v asprof &>/dev/null; then + ASPROF="$(command -v asprof)" + else + for candidate in \ + "$HOME/async-profiler-4.3/bin/asprof" \ + "$HOME/async-profiler/bin/asprof" \ + "/opt/async-profiler/bin/asprof" \ + "/usr/local/bin/asprof" + do + if [[ -x "$candidate" ]]; then + ASPROF="$candidate" + break + fi + done + fi +fi + +if [[ -z "$ASPROF" ]]; then + echo "❌ asprof not found. Install with: bash scripts/install.sh" + echo " Or specify path: --asprof /path/to/asprof" + exit 1 +fi + +# ── Build output filename ───────────────────────────────────────────────────── +TIMESTAMP="$(date +%Y%m%d-%H%M%S)" + +if [[ -z "$OUTPUT" ]]; then + if $ALL_EVENTS; then + OUTPUT="profile-all-${TIMESTAMP}.jfr" + else + EXT="$FORMAT" + OUTPUT="profile-${EVENT}-${TIMESTAMP}.${EXT}" + fi +fi + +# ── Build asprof command ────────────────────────────────────────────────────── +CMD=("$ASPROF" "-d" "$DURATION" "-f" "$OUTPUT") +$ALL_EVENTS && CMD+=("--all") || CMD+=("-e" "$EVENT") +$THREADS && CMD+=("-t") +CMD+=("$TARGET") + +# ── Print plan ──────────────────────────────────────────────────────────────── +echo "🔍 async-profiler run" +echo " Binary : $ASPROF" +echo " Target : $TARGET" +if $COMPREHENSIVE; then + echo " Mode : comprehensive (all events → JFR → split into flamegraphs)" +elif $ALL_EVENTS; then + echo " Events : all (cpu + alloc + wall + lock)" +else + echo " Event : $EVENT" +fi +echo " Duration: ${DURATION}s" +echo " Output : $OUTPUT" +$THREADS && echo " Threads : separate" +echo 
"" +echo "▶ ${CMD[*]}" +echo "Press Ctrl+C to stop early (partial results will be saved)." +echo "" + +# ── Execute ─────────────────────────────────────────────────────────────────── +"${CMD[@]}" + +echo "" +echo "✅ Capture complete: $OUTPUT" +echo "" + +# ── Comprehensive mode: split JFR into per-event flamegraphs in parallel ────── +if $COMPREHENSIVE; then + if ! command -v jfrconv &>/dev/null; then + # jfrconv ships alongside asprof + JFRCONV="$(dirname "$ASPROF")/jfrconv" + if [[ ! -x "$JFRCONV" ]]; then + echo "⚠️ jfrconv not found — skipping flamegraph split." + echo " You can convert manually: jfrconv $OUTPUT flamegraph.html" + COMPREHENSIVE=false + fi + else + JFRCONV="jfrconv" + fi +fi + +if $COMPREHENSIVE; then + BASE="${OUTPUT%.jfr}" + CPU_HTML="${BASE}-cpu.html" + ALLOC_HTML="${BASE}-alloc.html" + WALL_HTML="${BASE}-wall.html" + LOCK_HTML="${BASE}-lock.html" + + echo "Splitting into per-event flamegraphs in parallel..." + + "$JFRCONV" --cpu "$OUTPUT" "$CPU_HTML" & PID_CPU=$! + "$JFRCONV" --alloc "$OUTPUT" "$ALLOC_HTML" & PID_ALLOC=$! + "$JFRCONV" --wall "$OUTPUT" "$WALL_HTML" & PID_WALL=$! + "$JFRCONV" --lock "$OUTPUT" "$LOCK_HTML" & PID_LOCK=$! + + CONVERSION_FAILED=false + for pid in "$PID_CPU" "$PID_ALLOC" "$PID_WALL" "$PID_LOCK"; do + if ! wait "$pid"; then + CONVERSION_FAILED=true + fi + done + + if $CONVERSION_FAILED; then + echo "Error: one or more jfrconv conversions failed." >&2 + exit 1 + fi + + echo "" + echo "📊 Flamegraphs ready:" + echo " CPU time : $CPU_HTML" + echo " Allocations : $ALLOC_HTML" + echo " Wall-clock : $WALL_HTML" + echo " Lock contention: $LOCK_HTML" + echo " Combined JFR : $OUTPUT (open in IntelliJ or JDK Mission Control)" + echo "" + + # Open all flamegraphs at once if on macOS + if [[ "$(uname)" == "Darwin" ]]; then + echo "Opening all flamegraphs in browser..." 
+ open "$CPU_HTML" "$ALLOC_HTML" "$WALL_HTML" "$LOCK_HTML" + else + echo "Open flamegraphs with:" + echo " xdg-open $CPU_HTML" + echo " xdg-open $ALLOC_HTML" + echo " xdg-open $WALL_HTML" + echo " xdg-open $LOCK_HTML" + fi + + echo "" + echo "💡 Next step — analyze results:" + echo " Ask your AI assistant: 'Analyze these profiles and tell me where" + echo " to focus: $CPU_HTML, $ALLOC_HTML, $WALL_HTML, $LOCK_HTML'" + echo "" + echo " Or for collapsed stack analysis:" + echo " jfrconv $OUTPUT ${BASE}-cpu.collapsed" + echo " python3 scripts/analyze_collapsed.py ${BASE}-cpu.collapsed" + +else + # Single-event post-run guidance + case "$FORMAT" in + html) + echo "Open in browser:" + if [[ "$(uname)" == "Darwin" ]]; then + open "$OUTPUT" + else + echo " xdg-open $OUTPUT" + fi + echo "" + echo "What to look for:" + echo " • Wide frames near the top = hot code (primary optimization targets)" + echo " • Wide leaf frames = direct CPU/allocation consumers" + echo " • LockSupport.park / Object.wait (wall profile) = blocked threads" + echo "" + echo "💡 Next step — ask your AI assistant to analyze:" + echo " 'I have a flamegraph at $OUTPUT — what's causing the bottleneck?'" + ;; + jfr) + echo "Open in IntelliJ IDEA: File → Open → select $OUTPUT" + echo "Open in JDK Mission Control: File → Open File → select $OUTPUT" + echo "" + echo "Or convert to flamegraph:" + echo " jfrconv $OUTPUT flamegraph.html" + echo "" + echo "💡 Next step — ask your AI assistant to analyze:" + echo " 'I have a JFR recording at $OUTPUT — help me interpret it.'" + ;; + collapsed) + echo "Analyze with:" + echo " python3 scripts/analyze_collapsed.py $OUTPUT" + echo "" + echo "Or convert to flamegraph:" + echo " jfrconv $OUTPUT flamegraph.html" + echo "" + echo "💡 Next step — ask your AI assistant to analyze:" + echo " 'Run analyze_collapsed.py on $OUTPUT and tell me what's slow.'" + ;; + esac +fi diff --git a/skills/async-profiler/setup/SKILL.md b/skills/async-profiler/setup/SKILL.md new file mode 100644 
index 000000000..79f2fda40 --- /dev/null +++ b/skills/async-profiler/setup/SKILL.md @@ -0,0 +1,199 @@ +--- +name: async-profiler-setup +description: 'Install, configure, and verify async-profiler for Java on macOS or Linux. Use this skill whenever a Java developer wants to profile their JVM and needs to get async-profiler installed first. Trigger for: "install async-profiler", "how do I set up async-profiler", "get started with Java profiling", "async-profiler not found", "profiler setup", "download asprof", or any question about system requirements, permissions, or JVM flags for profiling. Also trigger when someone says "I want to profile my Java app" and hasn''t mentioned having async-profiler installed yet.' +--- + +# async-profiler Setup + +async-profiler (v4.3+) is a low-overhead sampling profiler for Java. It avoids the +"safepoint bias" of standard JVM profilers by using HotSpot-specific APIs, and it +can profile CPU, memory allocation, wall-clock time, and lock contention. + +## Do you need to install anything? + +**If you're using IntelliJ IDEA Ultimate**, async-profiler is already bundled — +no installation needed for profiling apps you run from the IDE. You can profile +any run configuration right now by clicking the flame icon (▶🔥) next to the run +button, or via *Run → Profile*. Jump straight to the **async-profiler-profile** +skill if that's your use case. + +You do still need a standalone install if you want to: +- Profile a process not launched from IntelliJ (remote server, Docker, SSH) +- Use `asprof` from the terminal or CI pipeline +- Run `scripts/run_profile.sh` or `scripts/analyze_collapsed.py` +- Use IntelliJ IDEA Community (no built-in profiler) + +**Everyone else** (Community edition, terminal-only, production servers): +continue below. + +--- + +## Step 1 — Download + +The latest stable release is **v4.3** (January 2025). The skill includes an +install script that handles everything automatically. 
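
As a sketch of how an agent might check before downloading anything: the script's `--path-only` flag (implemented in `scripts/install.sh` above) prints the expected binary location without installing. The fallback path below is an assumption for when the script isn't on disk:

```shell
# Ask install.sh where asprof would land, without installing anything.
# Falls back to the default install location if the script isn't present.
if [ -f scripts/install.sh ]; then
  EXPECTED="$(bash scripts/install.sh --path-only)"
else
  EXPECTED="$HOME/async-profiler-4.3/bin/asprof"
fi
if [ -x "$EXPECTED" ]; then
  echo "asprof already installed at: $EXPECTED"
else
  echo "asprof not installed yet; expected location: $EXPECTED"
fi
```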
+ +### Option A — use the bundled install script (recommended) + +`scripts/install.sh` auto-detects the platform (macOS arm64/x64, Linux x64/arm64), +downloads the right binary, removes the macOS Gatekeeper quarantine flag, and +verifies the install: + +```bash +bash scripts/install.sh # installs to ~/async-profiler-4.3/ +bash scripts/install.sh /opt # installs to /opt/async-profiler-4.3/ +``` + +It prints the exact binary path and a one-liner to add it to your PATH. + +### Option B — manual install + +**macOS (Intel or Apple Silicon):** +```bash +# Using Homebrew (easiest) +brew install async-profiler + +# Or download directly +curl -LO https://github.com/async-profiler/async-profiler/releases/download/v4.3/async-profiler-4.3-macos.zip +unzip async-profiler-4.3-macos.zip +``` + +**Linux x64:** +```bash +curl -LO https://github.com/async-profiler/async-profiler/releases/download/v4.3/async-profiler-4.3-linux-x64.tar.gz +tar xf async-profiler-4.3-linux-x64.tar.gz +``` + +**Linux arm64:** +```bash +curl -LO https://github.com/async-profiler/async-profiler/releases/download/v4.3/async-profiler-4.3-linux-arm64.tar.gz +tar xf async-profiler-4.3-linux-arm64.tar.gz +``` + +After extracting, add `bin/` to your PATH: +```bash +export PATH="$PWD/bin:$PATH" +# Or permanently in ~/.zshrc / ~/.bashrc +``` + +Verify: +```bash +asprof --version +``` + +## Step 2 — Platform-specific configuration + +### macOS + +On macOS, async-profiler works out of the box with no extra configuration. The +default CPU sampling engine is **itimer**, which works without elevated privileges. + +**Important limitation to communicate to the user:** On macOS, async-profiler +cannot collect kernel stack frames and the itimer engine has a known bias toward +system calls. CPU profiles are still very useful, but they reflect user-space +time more faithfully than kernel time. This is a platform constraint, not a bug. 
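
The engine choice can also be made explicit rather than relying on the default. A minimal sketch (the pid and output filename are placeholders, not values the installer produces):

```shell
# Pick a CPU sampling engine per platform before invoking asprof.
PID="${1:-12345}"             # placeholder: substitute a real pid from `jps -l`
case "$(uname -s)" in
  Darwin) ENGINE="itimer" ;;  # the only CPU engine on macOS; no kernel frames
  *)      ENGINE="cpu"    ;;  # Linux: perf_events when permitted, else ctimer
esac
echo "asprof -e ${ENGINE} -d 30 -f flame.html ${PID}"
```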
+
+### Linux — enabling kernel stack traces (optional but recommended)
+
+On Linux, async-profiler prefers the **perf_events** engine, which gives the most
+accurate profiles and includes kernel frames. It requires:
+
+```bash
+# Allow non-root perf_events (set once, persists until reboot)
+sudo sysctl kernel.perf_event_paranoid=1
+sudo sysctl kernel.kptr_restrict=0
+```
+
+To make these permanent across reboots, add to `/etc/sysctl.d/99-perf.conf`:
+```
+kernel.perf_event_paranoid=1
+kernel.kptr_restrict=0
+```
+
+If perf_events isn't available (e.g., inside a container), async-profiler
+automatically falls back to **ctimer** — no action needed.
+
+### Linux — container / Docker
+
+In containers, perf_events is typically restricted by seccomp. async-profiler
+still works via the itimer/ctimer fallback. If you want full perf_events inside a
+container, the container needs `--cap-add SYS_ADMIN` or `--privileged` (use
+judiciously in production).
+
+## Step 3 — Configure the JVM for better profiles
+
+Add these flags when starting your Java application. They're optional but make
+profiles significantly more accurate by allowing the JVM to provide stack frames
+even between safepoints:
+
+```bash
+java -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -jar myapp.jar
+```
+
+If you're using a framework that manages JVM startup (Spring Boot, Quarkus, etc.),
+set these in `JAVA_TOOL_OPTIONS`:
+```bash
+export JAVA_TOOL_OPTIONS="-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints"
+```
+
+## Step 4 — Verify everything works
+
+Find your Java process PID first:
+```bash
+jps -l # lists all JVM processes with their main class
+# or
+ps aux | grep java
+```
+
+Then run a quick 5-second test profile:
+```bash
+asprof -d 5 <pid>
+```
+
+You should see output like:
+```
+Profiling for 5 seconds
+--- Execution profile ---
+Total samples : 453
+...
+```
+
+If it works, you're ready to profile. If you hit errors, see the troubleshooting
+section below.
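
When scripting this verification, the pid lookup and the test run can be combined. A sketch, assuming the JDK's `jps` is on PATH and `MyApp` is a placeholder main-class name:

```shell
# Resolve a JVM pid by main-class substring, then suggest the test profile.
MATCH="${1:-MyApp}"   # placeholder: match against your app's main class
PID=""
if command -v jps >/dev/null 2>&1; then
  # jps -l prints "pid fully.qualified.MainClass" per JVM
  PID="$(jps -l 2>/dev/null | awk -v m="$MATCH" '$2 ~ m {print $1; exit}')"
fi
if [ -n "$PID" ]; then
  MSG="Found $MATCH (pid $PID): run asprof -d 5 $PID"
else
  MSG="No JVM matching '$MATCH' found; check 'jps -l' output"
fi
echo "$MSG"
```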
+
+## Troubleshooting common issues
+
+**"Could not attach to <pid>"**
+- The target JVM may have been started with `-XX:+DisableAttachMechanism`, or you
+ may lack permissions. Run as the same user that owns the JVM process.
+
+**"Failed to open perf_events"**
+- Run the sysctl commands in Step 2, or use `-e itimer` to force the itimer engine.
+
+**"No such process"**
+- Double-check the PID with `jps -l`. JVM processes can restart under a new PID.
+
+**Homebrew install on macOS says "permission denied" running asprof**
+- `chmod +x $(brew --prefix async-profiler)/bin/asprof`
+
+**macOS Gatekeeper blocks the binary**
+- `xattr -d com.apple.quarantine /path/to/asprof` (removes the quarantine flag)
+
+## Using async-profiler as a Java agent
+
+If you can't attach dynamically (e.g., the JVM was started with
+`-XX:+DisableAttachMechanism`), use the Java agent mode:
+
+```bash
+java -agentpath:/path/to/libasyncProfiler.so=start,event=cpu,file=profile.html \
+ -jar myapp.jar
+```
+
+This starts profiling from the first moment the JVM launches, which is useful
+for capturing startup performance.
+
+## What's next
+
+Once installed, use the **async-profiler-profile** skill to run a profiling
+session and choose the right event type for your problem (CPU, memory, wall-clock,
+or lock contention).

From 5804c27e53a3e61b94b28716126a4d53cdcf4394 Mon Sep 17 00:00:00 2001
From: Vetle Leinonen-Roeim 
Date: Sat, 2 May 2026 12:15:21 +0200
Subject: [PATCH 02/30] fix(async-profiler): align skill layout with spec

Move the setup, profile, and analyze guides into references/ so the skill
has a single root SKILL.md with supporting docs under references/, matching
the Agent Skills spec and the PR feedback.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/README.skills.md | 2 +- skills/async-profiler/README.md | 15 +++++++-------- skills/async-profiler/SKILL.md | 18 ++++++++++-------- .../SKILL.md => references/analyze.md} | 6 ------ .../SKILL.md => references/profile.md} | 15 +++++---------- .../{setup/SKILL.md => references/setup.md} | 15 +++++---------- 6 files changed, 28 insertions(+), 43 deletions(-) rename skills/async-profiler/{analyze/SKILL.md => references/analyze.md} (94%) rename skills/async-profiler/{profile/SKILL.md => references/profile.md} (92%) rename skills/async-profiler/{setup/SKILL.md => references/setup.md} (86%) diff --git a/docs/README.skills.md b/docs/README.skills.md index c79aa3d72..09d7b8260 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -49,7 +49,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to | [arize-trace](../skills/arize-trace/SKILL.md) | INVOKE THIS SKILL when downloading or exporting Arize traces and spans. Covers exporting traces by ID, sessions by ID, and debugging LLM application issues using the ax CLI. | `references/ax-profiles.md`
`references/ax-setup.md` | | [aspire](../skills/aspire/SKILL.md) | Aspire skill covering the Aspire CLI, AppHost orchestration, service discovery, integrations, MCP server, VS Code extension, Dev Containers, GitHub Codespaces, templates, dashboard, and deployment. Use when the user asks to create, run, debug, configure, deploy, or troubleshoot an Aspire distributed application. | `references/architecture.md`
`references/cli-reference.md`
`references/dashboard.md`
`references/deployment.md`
`references/integrations-catalog.md`
`references/mcp-server.md`
`references/polyglot-apis.md`
`references/testing.md`
`references/troubleshooting.md` | | [aspnet-minimal-api-openapi](../skills/aspnet-minimal-api-openapi/SKILL.md) | Create ASP.NET Minimal API endpoints with proper OpenAPI documentation | None | -| [async-profiler](../skills/async-profiler/SKILL.md) | Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what's causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance. | `README.md`
`analyze`
`profile`
`scripts/analyze_collapsed.py`
`scripts/collect.sh`
`scripts/install.sh`
`scripts/run_profile.sh`
`setup` | +| [async-profiler](../skills/async-profiler/SKILL.md) | Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what's causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance. | `README.md`
`references/analyze.md`
`references/profile.md`
`references/setup.md`
`scripts/analyze_collapsed.py`
`scripts/collect.sh`
`scripts/install.sh`
`scripts/run_profile.sh` | | [automate-this](../skills/automate-this/SKILL.md) | Analyze a screen recording of a manual process and produce targeted, working automation scripts. Extracts frames and audio narration from video files, reconstructs the step-by-step workflow, and proposes automation at multiple complexity levels using tools already installed on the user machine. | None | | [autoresearch](../skills/autoresearch/SKILL.md) | Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric. | None | | [aws-cdk-python-setup](../skills/aws-cdk-python-setup/SKILL.md) | Setup and initialization guide for developing AWS CDK (Cloud Development Kit) applications in Python. This skill enables users to configure environment prerequisites, create new CDK projects, manage dependencies, and deploy to AWS. 
| None | diff --git a/skills/async-profiler/README.md b/skills/async-profiler/README.md index 99f89fc7f..30002cc32 100644 --- a/skills/async-profiler/README.md +++ b/skills/async-profiler/README.md @@ -61,12 +61,11 @@ cp -r async-profiler ~/.config/opencode/skills/async-profiler ``` async-profiler/ -├── SKILL.md # Entry point — routes to sub-guides -├── scripts/ # Bundled scripts -├── setup/ -│ └── SKILL.md # Installation and configuration -├── profile/ -│ └── SKILL.md # Running profiling sessions -└── analyze/ - └── SKILL.md # Interpreting profiling output +├── SKILL.md # Entry point — routes to focused reference guides +├── README.md # Human-readable overview and installation help +├── references/ +│ ├── setup.md # Installation and configuration +│ ├── profile.md # Running profiling sessions +│ └── analyze.md # Interpreting profiling output +└── scripts/ # Bundled scripts ``` diff --git a/skills/async-profiler/SKILL.md b/skills/async-profiler/SKILL.md index d1c2a7f56..e20061ac6 100644 --- a/skills/async-profiler/SKILL.md +++ b/skills/async-profiler/SKILL.md @@ -119,18 +119,20 @@ Always offer to run these scripts on the user's behalf when relevant. ## How to use this skill -This skill has three sub-guides. Read the one that matches what the user needs: +This skill keeps detailed guidance in `references/` so the root `SKILL.md` +stays focused and loads quickly. Read only the guide that matches the user's +current need: | Situation | Read | |---|---| -| User needs to install or configure async-profiler, or is hitting setup errors | `setup/SKILL.md` | -| User wants to run a profiling session (capture flamegraph, JFR, etc.) | `profile/SKILL.md` | -| User has profiling output and wants to understand or interpret it | `analyze/SKILL.md` | +| User needs to install or configure async-profiler, or is hitting setup errors | `references/setup.md` | +| User wants to run a profiling session (capture flamegraph, JFR, etc.) 
| `references/profile.md` | +| User has profiling output and wants to understand or interpret it | `references/analyze.md` | **When the conversation spans multiple phases** (e.g., the user just ran a -profile and now wants to understand the output), read whichever sub-guide is +profile and now wants to understand the output), read whichever guide is most relevant to the current question. If the user needs both setup *and* -profiling guidance in one message, read `setup/SKILL.md` first and summarize -the setup steps before moving to `profile/SKILL.md`. +profiling guidance in one message, read `references/setup.md` first and +summarize the setup steps before moving to `references/profile.md`. -Read the relevant sub-guide now before responding. +Read the relevant reference now before responding. diff --git a/skills/async-profiler/analyze/SKILL.md b/skills/async-profiler/references/analyze.md similarity index 94% rename from skills/async-profiler/analyze/SKILL.md rename to skills/async-profiler/references/analyze.md index b7f11e414..4632c6a8e 100644 --- a/skills/async-profiler/analyze/SKILL.md +++ b/skills/async-profiler/references/analyze.md @@ -1,9 +1,3 @@ ---- -name: async-profiler-analyze -description: 'Interpret and analyze async-profiler output: flamegraph HTML/SVG files, JFR recordings, and collapsed stack traces. Use this skill whenever a Java developer shares profiler output or wants help understanding profiling results. Trigger for: "what does this flamegraph mean", "how do I read this JFR", "what''s causing my CPU hotspot", "interpret my profiling results", "analyze this flamegraph", "what should I look for in my profile", "the wide frame in my flamegraph is X, what does that mean", "I see a lot of GC in my profile", "my profile shows 80% in X, is that bad", or whenever someone pastes or describes profiling output. Also trigger proactively when the async-profiler-profile skill just produced output and the user seems to want to understand it.' 
-compatibility: Requires Python 3.7+ for the analyze_collapsed.py script. ---- - # async-profiler Output Analysis The three main output formats — flamegraph HTML, JFR recordings, and collapsed diff --git a/skills/async-profiler/profile/SKILL.md b/skills/async-profiler/references/profile.md similarity index 92% rename from skills/async-profiler/profile/SKILL.md rename to skills/async-profiler/references/profile.md index 0a83dc565..8d925a189 100644 --- a/skills/async-profiler/profile/SKILL.md +++ b/skills/async-profiler/references/profile.md @@ -1,8 +1,3 @@ ---- -name: async-profiler-profile -description: 'Run async-profiler against a live JVM process to capture CPU, memory allocation, wall-clock, or lock contention profiles and generate flamegraphs or JFR recordings. Use this skill whenever a Java developer wants to start a profiling session, capture a flamegraph, find CPU hotspots, identify memory allocation pressure, measure thread blocking or lock contention, or asks: "how do I profile my running Java app", "capture a flamegraph", "find what''s using CPU", "profile heap allocations", "measure lock contention", "generate a JFR recording", "profile for N seconds", "what''s slow in my app". Assumes async-profiler is already installed (see async-profiler-setup skill if not).' ---- - # async-profiler — Running Profiles ## Agent-driven background profiling @@ -73,8 +68,8 @@ wait $PROF_PID ### After collecting Once `stop` or `timed` completes, offer to analyze the results immediately. -Read `analyze/SKILL.md` before interpreting the flamegraphs. Each `.html` file -can be opened directly in a browser; pass `.collapsed` files to +Read `references/analyze.md` before interpreting the flamegraphs. Each `.html` +file can be opened directly in a browser; pass `.collapsed` files to `scripts/analyze_collapsed.py` for a ranked self-time table. --- @@ -90,7 +85,7 @@ is to use the built-in integration: 3. 
To choose which events to capture (CPU, allocation, wall-clock): *Settings → Build, Execution, Deployment → Java Profiler* -Results open directly in IntelliJ's viewer — see `analyze/SKILL.md` for how to +Results open directly in IntelliJ's viewer — see `references/analyze.md` for how to navigate the flame graph, call tree, and timeline tabs. Use the terminal approach below when you need to profile a process that wasn't @@ -132,7 +127,7 @@ path bug, session state, and the JFR split automatically: bash scripts/collect.sh start # ... reproduce the problem ... bash scripts/collect.sh stop -# → outputs cpu.html, alloc.html, wall.html, lock.html +# → outputs profile-cpu.html, profile-alloc.html, profile-wall.html, profile-lock.html ``` --- @@ -408,7 +403,7 @@ immediately** — don't wait for the user to ask. Say something like: > "The profile is saved at `profile-all-20250409-143201-cpu.html`. Want me to > analyze it and identify the bottlenecks?" -Then read `analyze/SKILL.md` and interpret the output. If it's a JFR file, +Then read `references/analyze.md` and interpret the output. If it's a JFR file, offer to run `jfrconv` to extract flamegraphs first. If it's collapsed stacks, offer to run `scripts/analyze_collapsed.py`. The user has already done the hard part (reproducing the problem) — close the loop for them. diff --git a/skills/async-profiler/setup/SKILL.md b/skills/async-profiler/references/setup.md similarity index 86% rename from skills/async-profiler/setup/SKILL.md rename to skills/async-profiler/references/setup.md index 79f2fda40..50fcebb23 100644 --- a/skills/async-profiler/setup/SKILL.md +++ b/skills/async-profiler/references/setup.md @@ -1,8 +1,3 @@ ---- -name: async-profiler-setup -description: 'Install, configure, and verify async-profiler for Java on macOS or Linux. Use this skill whenever a Java developer wants to profile their JVM and needs to get async-profiler installed first. 
Trigger for: "install async-profiler", "how do I set up async-profiler", "get started with Java profiling", "async-profiler not found", "profiler setup", "download asprof", or any question about system requirements, permissions, or JVM flags for profiling. Also trigger when someone says "I want to profile my Java app" and hasn''t mentioned having async-profiler installed yet.' ---- - # async-profiler Setup async-profiler (v4.3+) is a low-overhead sampling profiler for Java. It avoids the @@ -14,8 +9,8 @@ can profile CPU, memory allocation, wall-clock time, and lock contention. **If you're using IntelliJ IDEA Ultimate**, async-profiler is already bundled — no installation needed for profiling apps you run from the IDE. You can profile any run configuration right now by clicking the flame icon (▶🔥) next to the run -button, or via *Run → Profile*. Jump straight to the **async-profiler-profile** -skill if that's your use case. +button, or via *Run → Profile*. Jump straight to `references/profile.md` if +that's your use case. You do still need a standalone install if you want to: - Profile a process not launched from IntelliJ (remote server, Docker, SSH) @@ -194,6 +189,6 @@ for capturing startup performance. ## What's next -Once installed, use the **async-profiler-profile** skill to run a profiling -session and choose the right event type for your problem (CPU, memory, wall-clock, -or lock contention). +Once installed, move to `references/profile.md` to run a profiling session and +choose the right event type for your problem (CPU, memory, wall-clock, or lock +contention). 
From 461b66060a686d30100042b61e57ed33c48a06b4 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 12:34:56 +0200 Subject: [PATCH 03/30] fix(async-profiler): address review feedback Validate regex filters in analyze_collapsed.py, use csv.writer for CSV output, derive guidance format from the output extension in run_profile.sh, and make the macOS JFR lookup in collect.sh narrower and safer. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../scripts/analyze_collapsed.py | 62 +++++++++++-------- skills/async-profiler/scripts/collect.sh | 32 ++++++++-- skills/async-profiler/scripts/run_profile.sh | 28 +++++++++ 3 files changed, 92 insertions(+), 30 deletions(-) diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index 221636ef6..a8c67d349 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -20,6 +20,7 @@ from __future__ import annotations +import csv import sys import re from collections import defaultdict @@ -47,30 +48,43 @@ def parse_collapsed(path: str) -> list[tuple[list[str], int]]: return stacks -def top_leaf_frames(stacks, n=20, grep=None, exclude=None): +def compile_pattern(name: str, pattern: str | None): + if not pattern: + return None + try: + return re.compile(pattern, re.IGNORECASE) + except re.error as exc: + print(f"❌ Invalid regex for --{name}: {exc}", file=sys.stderr) + sys.exit(1) + + +def matches_filters(frames, grep_re=None, exclude_re=None): + stack_str = ";".join(frames) + if grep_re and not grep_re.search(stack_str): + return False + if exclude_re and exclude_re.search(stack_str): + return False + return True + + +def top_leaf_frames(stacks, n=20, grep_re=None, exclude_re=None): """Count samples where each frame is the leaf (top of stack = actual work).""" counts = defaultdict(int) for frames, count in stacks: if not frames: continue - stack_str = ";".join(frames) - 
if grep and not re.search(grep, stack_str, re.IGNORECASE): - continue - if exclude and re.search(exclude, stack_str, re.IGNORECASE): + if not matches_filters(frames, grep_re, exclude_re): continue leaf = frames[-1] counts[leaf] += count return sorted(counts.items(), key=lambda x: x[1], reverse=True)[:n] -def top_inclusive_frames(stacks, n=20, grep=None, exclude=None): +def top_inclusive_frames(stacks, n=20, grep_re=None, exclude_re=None): """Count samples where each frame appears anywhere in the stack (inclusive time).""" counts = defaultdict(int) for frames, count in stacks: - stack_str = ";".join(frames) - if grep and not re.search(grep, stack_str, re.IGNORECASE): - continue - if exclude and re.search(exclude, stack_str, re.IGNORECASE): + if not matches_filters(frames, grep_re, exclude_re): continue seen = set() for frame in frames: @@ -80,14 +94,11 @@ def top_inclusive_frames(stacks, n=20, grep=None, exclude=None): return sorted(counts.items(), key=lambda x: x[1], reverse=True)[:n] -def top_packages(stacks, n=20, grep=None, exclude=None): +def top_packages(stacks, n=20, grep_re=None, exclude_re=None): """Group inclusive time by top-level Java package.""" counts = defaultdict(int) for frames, count in stacks: - stack_str = ";".join(frames) - if grep and not re.search(grep, stack_str, re.IGNORECASE): - continue - if exclude and re.search(exclude, stack_str, re.IGNORECASE): + if not matches_filters(frames, grep_re, exclude_re): continue seen_pkgs = set() for frame in frames: @@ -110,10 +121,11 @@ def top_packages(stacks, n=20, grep=None, exclude=None): def print_table(rows, total, header_left, header_right="Samples", csv_mode=False): if csv_mode: - print(f"{header_left},{header_right},Pct") + writer = csv.writer(sys.stdout) + writer.writerow([header_left, header_right, "Pct"]) for name, count in rows: pct = 100.0 * count / total if total else 0 - print(f"{name},{count},{pct:.1f}") + writer.writerow([name, count, f"{pct:.1f}"]) return if not rows: @@ -167,6 +179,8 
@@ def main(): ) parser.add_argument("--csv", action="store_true", help="Output as CSV") args = parser.parse_args() + grep_re = compile_pattern("grep", args.grep) + exclude_re = compile_pattern("exclude", args.exclude) path = args.file if not Path(path).exists(): @@ -194,11 +208,7 @@ def main(): surviving = sum( c for frames, c in stacks - if (not args.grep or re.search(args.grep, ";".join(frames), re.IGNORECASE)) - and ( - not args.exclude - or not re.search(args.exclude, ";".join(frames), re.IGNORECASE) - ) + if matches_filters(frames, grep_re, exclude_re) ) matching_pct = 0.0 if total_samples == 0 else 100 * surviving / total_samples print(f" Filters applied:{filters}") @@ -211,17 +221,17 @@ def main(): print(f" Unique stacks : {total_stacks:,}\n") if args.packages: - rows = top_packages(stacks, args.top, args.grep, args.exclude) + rows = top_packages(stacks, args.top, grep_re, exclude_re) print(f" Top {args.top} packages by inclusive time:\n") print_table(rows, total_samples, "Package", csv_mode=args.csv) elif args.self_time: - rows = top_leaf_frames(stacks, args.top, args.grep, args.exclude) + rows = top_leaf_frames(stacks, args.top, grep_re, exclude_re) print(f" Top {args.top} methods by self-time (leaf frames):\n") print_table(rows, total_samples, "Method (leaf / self-time)", csv_mode=args.csv) else: # Default: show both self-time and inclusive for context - leaf_rows = top_leaf_frames(stacks, args.top, args.grep, args.exclude) - incl_rows = top_inclusive_frames(stacks, args.top, args.grep, args.exclude) + leaf_rows = top_leaf_frames(stacks, args.top, grep_re, exclude_re) + incl_rows = top_inclusive_frames(stacks, args.top, grep_re, exclude_re) print(f" Top {args.top} by self-time (leaf frames — actual CPU consumers):\n") print_table(leaf_rows, total_samples, "Method (self-time)", csv_mode=args.csv) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index cb5e40fba..5cd9939ce 100755 --- 
a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -114,6 +114,20 @@ locate_jfrconv() { fi } +newest_by_mtime() { + local newest="" + local newest_mtime=0 + local candidate mtime + for candidate in "$@"; do + mtime="$(stat -f '%m' "$candidate" 2>/dev/null || echo 0)" + if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then + newest="$candidate" + newest_mtime="$mtime" + fi + done + echo "$newest" +} + # Session state file — stores output path and asprof path between start/stop. session_file() { local safe uid state_dir @@ -261,15 +275,25 @@ cmd_stop() { echo "" echo "⚠️ macOS: -f is ignored by asprof stop — locating JFR in /var/folders..." local found_jfr="" + local search_maxdepth=2 + local search_hint="find /var/folders/*/*/T -maxdepth 2 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + local -a search_roots=() local -a jfr_matches=() local jfr_candidate + shopt -s nullglob + search_roots=(/var/folders/*/*/T) + shopt -u nullglob + if [[ ${#search_roots[@]} -eq 0 ]]; then + search_roots=(/var/folders) + search_maxdepth=8 + search_hint="find /var/folders -maxdepth 8 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + fi while IFS= read -r -d '' jfr_candidate; do jfr_matches+=("$jfr_candidate") - done < <(find /var/folders -maxdepth 8 -name "*.jfr" -newer "$sentinel" -print0 2>/dev/null) + done < <(find "${search_roots[@]}" -maxdepth "$search_maxdepth" -name "*.jfr" -newer "$sentinel" -print0 2>/dev/null) - # Sort by mtime (newest first) to avoid picking up an unrelated recording. if [[ ${#jfr_matches[@]} -gt 0 ]]; then - found_jfr=$(ls -1t "${jfr_matches[@]}" 2>/dev/null | head -1) + found_jfr="$(newest_by_mtime "${jfr_matches[@]}")" fi if [[ -n "$found_jfr" ]]; then cp "$found_jfr" "$jfr_path" @@ -278,7 +302,7 @@ cmd_stop() { echo " Copied to: $jfr_path" else echo "❌ Could not find JFR in /var/folders. 
Try:" - echo " find /var/folders -maxdepth 8 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + echo " $search_hint" echo " (The JFR may still be there — copy it manually to $jfr_path)" echo " Sentinel preserved at: $sentinel for retry" echo " Session state preserved at: $sess" diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 09a6867a1..9368d8e06 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -37,6 +37,14 @@ COMPREHENSIVE=false ASPROF="" TARGET="" +detect_format_from_output() { + local output_path="$1" + case "${output_path##*.}" in + html|jfr|collapsed|txt) echo "${output_path##*.}" ;; + *) echo "" ;; + esac +} + # ── Parse arguments ─────────────────────────────────────────────────────────── while [[ $# -gt 0 ]]; do case "$1" in @@ -107,6 +115,19 @@ if [[ -z "$OUTPUT" ]]; then fi fi +OUTPUT_FORMAT="$(detect_format_from_output "$OUTPUT")" +if [[ -z "$OUTPUT_FORMAT" ]]; then + echo "❌ Unsupported output extension in '$OUTPUT'." >&2 + echo " Use one of: .html, .jfr, .collapsed, .txt" >&2 + exit 1 +fi +if $ALL_EVENTS && [[ "$OUTPUT_FORMAT" != "jfr" ]]; then + echo "❌ --all/--comprehensive require a .jfr output file." 
>&2 + echo " Received: $OUTPUT" >&2 + exit 1 +fi +FORMAT="$OUTPUT_FORMAT" + # ── Build asprof command ────────────────────────────────────────────────────── CMD=("$ASPROF" "-d" "$DURATION" "-f" "$OUTPUT") $ALL_EVENTS && CMD+=("--all") || CMD+=("-e" "$EVENT") @@ -249,5 +270,12 @@ else echo "💡 Next step — ask your AI assistant to analyze:" echo " 'Run analyze_collapsed.py on $OUTPUT and tell me what's slow.'" ;; + txt) + echo "Plain-text summary saved at:" + echo " $OUTPUT" + echo "" + echo "Review with:" + echo " cat $OUTPUT" + ;; esac fi From 4ef1ab375721f1a7b1da5a87bf7acf0d1c201e23 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 12:38:54 +0200 Subject: [PATCH 04/30] fix(readme): make generated ordering deterministic Use normalized string comparison in the README generator so generated sections sort consistently across environments. Regenerate the affected agent and instruction README files. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/README.agents.md | 2 +- docs/README.instructions.md | 2 +- eng/update-readme.mjs | 32 ++++++++++++++++++++++++-------- 3 files changed, 26 insertions(+), 10 deletions(-) diff --git a/docs/README.agents.md b/docs/README.agents.md index 5c2e6f1db..6b37f6ba5 100644 --- a/docs/README.agents.md +++ b/docs/README.agents.md @@ -47,8 +47,8 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-agents) for guidelines on how to | [Azure Logic Apps Expert Mode](../agents/azure-logic-apps-expert.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-logic-apps-expert.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-logic-apps-expert.agent.md) | Expert guidance for Azure Logic Apps development focusing on workflow design, integration patterns, and JSON-based Workflow Definition Language. | | | [Azure Policy Analyzer](../agents/azure-policy-analyzer.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-policy-analyzer.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-policy-analyzer.agent.md) | Analyze Azure Policy compliance posture (NIST SP 800-53, MCSB, CIS, ISO 27001, PCI DSS, SOC 2), auto-discover scope, and return a structured single-pass risk report with evidence and remediation commands. | | | [Azure Principal Architect mode instructions](../agents/azure-principal-architect.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-principal-architect.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-principal-architect.agent.md) | Provide expert Azure Principal Architect guidance using Azure Well-Architected Framework principles and Microsoft best practices. | | -| [Azure Smart City IoT Architect](../agents/azure-smart-city-iot-architect.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-smart-city-iot-architect.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-smart-city-iot-architect.agent.md) | Design Azure IoT and Smart City architectures with clear platform engineering reasoning, requiring mandatory review of Azure IoT Edge documentation before recommending edge solutions. | | | [Azure SaaS Architect mode instructions](../agents/azure-saas-architect.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-saas-architect.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-saas-architect.agent.md) | Provide expert Azure SaaS Architect guidance focusing on multitenant applications using Azure Well-Architected SaaS principles and Microsoft best practices. | | +| [Azure Smart City IoT Architect](../agents/azure-smart-city-iot-architect.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-smart-city-iot-architect.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fazure-smart-city-iot-architect.agent.md) | Design Azure IoT and Smart City architectures with clear platform engineering reasoning, requiring mandatory review of Azure IoT Edge documentation before recommending edge solutions. | | | [Azure Terraform IaC Implementation Specialist](../agents/terraform-azure-implement.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fterraform-azure-implement.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fterraform-azure-implement.agent.md) | Act as an Azure Terraform Infrastructure as Code coding specialist that creates and reviews Terraform for Azure resources. | | | [Azure Terraform Infrastructure Planning](../agents/terraform-azure-planning.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fterraform-azure-planning.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fterraform-azure-planning.agent.md) | Act as implementation planner for your Azure Terraform Infrastructure as Code task. | | | [Bicep Planning](../agents/bicep-plan.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fbicep-plan.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fbicep-plan.agent.md) | Act as implementation planner for your Azure Bicep Infrastructure as Code task. | | diff --git a/docs/README.instructions.md b/docs/README.instructions.md index 7b953ff13..0c09edbcb 100644 --- a/docs/README.instructions.md +++ b/docs/README.instructions.md @@ -45,8 +45,8 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-instructions) for guidelines on | [Blazor](../instructions/blazor.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fblazor.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fblazor.instructions.md) | Blazor component and application patterns | | [C# Development](../instructions/csharp.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp.instructions.md) | Guidelines for building C# applications | | [C# MCP Server Development](../instructions/csharp-mcp-server.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-mcp-server.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-mcp-server.instructions.md) | Instructions for building Model Context Protocol (MCP) servers using the C# SDK | -| [C# 코드 작성 규칙](../instructions/csharp-ko.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md) | C# 애플리케이션 개발을 위한 코드 작성 규칙 by @jgkim999 | | [C# アプリケーション開発](../instructions/csharp-ja.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md) | C# アプリケーション構築指針 by @tsubakimoto | +| [C# 코드 작성 규칙](../instructions/csharp-ko.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md) | C# 애플리케이션 개발을 위한 코드 작성 규칙 by @jgkim999 | | [Caveman Mode](../instructions/caveman-mode.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcaveman-mode.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcaveman-mode.instructions.md) | Terse, low-token responses. Minimal words, no fluff. Full capabilities preserved. Use when: optimize token usage, low-token mode, concise output, caveman mode, reduce verbosity, token-efficient, brief responses. | | [CentOS Administration Guidelines](../instructions/centos-linux.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcentos-linux.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcentos-linux.instructions.md) | Guidance for CentOS administration, RHEL-compatible tooling, and SELinux-aware operations. | | [Clojure Development Instructions](../instructions/clojure.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fclojure.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fclojure.instructions.md) | Clojure-specific coding patterns, inline def usage, code block templates, and namespace handling for Clojure development. | diff --git a/eng/update-readme.mjs b/eng/update-readme.mjs index 147a91c14..c19e2553b 100644 --- a/eng/update-readme.mjs +++ b/eng/update-readme.mjs @@ -268,6 +268,20 @@ function formatTableCell(text) { return s.trim(); } +function compareNormalizedStrings(a, b) { + const left = String(a).toLowerCase(); + const right = String(b).toLowerCase(); + + if (left < right) return -1; + if (left > right) return 1; + + const originalLeft = String(a); + const originalRight = String(b); + if (originalLeft < originalRight) return -1; + if (originalLeft > originalRight) return 1; + return 0; +} + function makeBadges(link, type) { const aka = AKA_INSTALL_URLS[type] || AKA_INSTALL_URLS.instructions; @@ -303,7 +317,9 @@ function generateInstructionsSection(instructionsDir) { }); // Sort by title alphabetically - instructionEntries.sort((a, b) => a.title.localeCompare(b.title)); + instructionEntries.sort((a, b) => + compareNormalizedStrings(a.title, b.title) + ); console.log(`Found ${instructionEntries.length} instruction files`); @@ -492,7 +508,7 @@ function generateHooksSection(hooksDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => a.name.localeCompare(b.name)); + .sort((a, b) => compareNormalizedStrings(a.name, b.name)); console.log(`Found ${hookEntries.length} hook(s)`); @@ -551,7 +567,7 @@ function generateWorkflowsSection(workflowsDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => a.name.localeCompare(b.name)); + .sort((a, b) => 
compareNormalizedStrings(a.name, b.name)); console.log(`Found ${workflowEntries.length} workflow(s)`); @@ -607,7 +623,7 @@ function generateSkillsSection(skillsDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => a.name.localeCompare(b.name)); + .sort((a, b) => compareNormalizedStrings(a.name, b.name)); console.log(`Found ${skillEntries.length} skill(s)`); @@ -673,7 +689,7 @@ function generateUnifiedModeSection(cfg) { return { file, filePath, title: extractTitle(filePath) }; }); - entries.sort((a, b) => a.title.localeCompare(b.title)); + entries.sort((a, b) => compareNormalizedStrings(a.title, b.title)); console.log( `Unified mode generator: ${entries.length} files for extension ${extension}` ); @@ -760,8 +776,8 @@ function generatePluginsSection(pluginsDir) { const regularPlugins = pluginEntries.filter((entry) => !entry.isFeatured); // Sort each group alphabetically by name - featuredPlugins.sort((a, b) => a.name.localeCompare(b.name)); - regularPlugins.sort((a, b) => a.name.localeCompare(b.name)); + featuredPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); + regularPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); // Combine: featured first, then regular const sortedEntries = [...featuredPlugins, ...regularPlugins]; @@ -852,7 +868,7 @@ function generateFeaturedPluginsSection(pluginsDir) { .filter((entry) => entry !== null); // Sort by name alphabetically - featuredPlugins.sort((a, b) => a.name.localeCompare(b.name)); + featuredPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); console.log(`Found ${featuredPlugins.length} featured plugin(s)`); From 4e80619657f0a9d68b9497a93e68bfe5f5a60f2f Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 12:40:30 +0200 Subject: [PATCH 05/30] revert(readme): keep generator unchanged Keep this branch limited to the async-profiler skill changes and the generated README outputs required by CI. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- eng/update-readme.mjs | 32 ++++++++------------------------ 1 file changed, 8 insertions(+), 24 deletions(-) diff --git a/eng/update-readme.mjs b/eng/update-readme.mjs index c19e2553b..147a91c14 100644 --- a/eng/update-readme.mjs +++ b/eng/update-readme.mjs @@ -268,20 +268,6 @@ function formatTableCell(text) { return s.trim(); } -function compareNormalizedStrings(a, b) { - const left = String(a).toLowerCase(); - const right = String(b).toLowerCase(); - - if (left < right) return -1; - if (left > right) return 1; - - const originalLeft = String(a); - const originalRight = String(b); - if (originalLeft < originalRight) return -1; - if (originalLeft > originalRight) return 1; - return 0; -} - function makeBadges(link, type) { const aka = AKA_INSTALL_URLS[type] || AKA_INSTALL_URLS.instructions; @@ -317,9 +303,7 @@ function generateInstructionsSection(instructionsDir) { }); // Sort by title alphabetically - instructionEntries.sort((a, b) => - compareNormalizedStrings(a.title, b.title) - ); + instructionEntries.sort((a, b) => a.title.localeCompare(b.title)); console.log(`Found ${instructionEntries.length} instruction files`); @@ -508,7 +492,7 @@ function generateHooksSection(hooksDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => compareNormalizedStrings(a.name, b.name)); + .sort((a, b) => a.name.localeCompare(b.name)); console.log(`Found ${hookEntries.length} hook(s)`); @@ -567,7 +551,7 @@ function generateWorkflowsSection(workflowsDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => compareNormalizedStrings(a.name, b.name)); + .sort((a, b) => a.name.localeCompare(b.name)); console.log(`Found ${workflowEntries.length} workflow(s)`); @@ -623,7 +607,7 @@ function generateSkillsSection(skillsDir) { }; }) .filter((entry) => entry !== null) - .sort((a, b) => compareNormalizedStrings(a.name, b.name)); + .sort((a, b) => a.name.localeCompare(b.name)); console.log(`Found 
${skillEntries.length} skill(s)`); @@ -689,7 +673,7 @@ function generateUnifiedModeSection(cfg) { return { file, filePath, title: extractTitle(filePath) }; }); - entries.sort((a, b) => compareNormalizedStrings(a.title, b.title)); + entries.sort((a, b) => a.title.localeCompare(b.title)); console.log( `Unified mode generator: ${entries.length} files for extension ${extension}` ); @@ -776,8 +760,8 @@ function generatePluginsSection(pluginsDir) { const regularPlugins = pluginEntries.filter((entry) => !entry.isFeatured); // Sort each group alphabetically by name - featuredPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); - regularPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); + featuredPlugins.sort((a, b) => a.name.localeCompare(b.name)); + regularPlugins.sort((a, b) => a.name.localeCompare(b.name)); // Combine: featured first, then regular const sortedEntries = [...featuredPlugins, ...regularPlugins]; @@ -868,7 +852,7 @@ function generateFeaturedPluginsSection(pluginsDir) { .filter((entry) => entry !== null); // Sort by name alphabetically - featuredPlugins.sort((a, b) => compareNormalizedStrings(a.name, b.name)); + featuredPlugins.sort((a, b) => a.name.localeCompare(b.name)); console.log(`Found ${featuredPlugins.length} featured plugin(s)`); From 54b346f04e5b9ec9d6f49f3a2541f1698262478f Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 12:43:29 +0200 Subject: [PATCH 06/30] docs(readme): align generated instructions order Update the generated instructions README to match the order produced by CI without changing the generator. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/README.instructions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.instructions.md b/docs/README.instructions.md index 0c09edbcb..7b953ff13 100644 --- a/docs/README.instructions.md +++ b/docs/README.instructions.md @@ -45,8 +45,8 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-instructions) for guidelines on | [Blazor](../instructions/blazor.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fblazor.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fblazor.instructions.md) | Blazor component and application patterns | | [C# Development](../instructions/csharp.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp.instructions.md) | Guidelines for building C# applications | | [C# MCP Server Development](../instructions/csharp-mcp-server.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-mcp-server.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-mcp-server.instructions.md) | Instructions for building Model Context Protocol (MCP) servers using the C# SDK | -| [C# アプリケーション開発](../instructions/csharp-ja.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md) | C# アプリケーション構築指針 by @tsubakimoto | | [C# 코드 작성 규칙](../instructions/csharp-ko.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ko.instructions.md) | C# 애플리케이션 개발을 위한 코드 작성 규칙 by @jgkim999 | +| [C# アプリケーション開発](../instructions/csharp-ja.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcsharp-ja.instructions.md) | C# アプリケーション構築指針 by @tsubakimoto | | [Caveman Mode](../instructions/caveman-mode.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcaveman-mode.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcaveman-mode.instructions.md) | Terse, low-token responses. Minimal words, no fluff. Full capabilities preserved. Use when: optimize token usage, low-token mode, concise output, caveman mode, reduce verbosity, token-efficient, brief responses. | | [CentOS Administration Guidelines](../instructions/centos-linux.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcentos-linux.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fcentos-linux.instructions.md) | Guidance for CentOS administration, RHEL-compatible tooling, and SELinux-aware operations. | | [Clojure Development Instructions](../instructions/clojure.instructions.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fclojure.instructions.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/instructions?url=vscode-insiders%3Achat-instructions%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Finstructions%2Fclojure.instructions.md) | Clojure-specific coding patterns, inline def usage, code block templates, and namespace handling for Clojure development. | From d819191637403a7f817450afa937174c5e15c990 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 12:46:35 +0200 Subject: [PATCH 07/30] fix(async-profiler): address remaining review notes Complete collect.sh help output, document txt output in run_profile.sh, and restore Python 3.7-compatible type hints in analyze_collapsed.py. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/analyze_collapsed.py | 3 ++- skills/async-profiler/scripts/collect.sh | 4 ++-- skills/async-profiler/scripts/run_profile.sh | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index a8c67d349..5374d1ff9 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -25,6 +25,7 @@ import re from collections import defaultdict from pathlib import Path +from typing import Optional, Pattern def parse_collapsed(path: str) -> list[tuple[list[str], int]]: @@ -48,7 +49,7 @@ def parse_collapsed(path: str) -> list[tuple[list[str], int]]: return stacks -def compile_pattern(name: str, pattern: str | None): +def compile_pattern(name: str, pattern: Optional[str]) -> Optional[Pattern[str]]: if not pattern: return None try: diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 5cd9939ce..5ba02b3b0 100755 --- 
a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -42,7 +42,7 @@ set -euo pipefail # ── Parse subcommand ────────────────────────────────────────────────────────── if [[ $# -eq 0 ]]; then - sed -n '2,35p' "$0" | grep '^#' | sed 's/^# \?//' + sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' exit 0 fi @@ -377,7 +377,7 @@ case "$SUBCMD" in stop) cmd_stop ;; timed) cmd_timed ;; help|-h|--help) - sed -n '2,35p' "$0" | grep '^#' | sed 's/^# \?//' + sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' exit 0 ;; *) diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 9368d8e06..6ed9aec64 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -7,7 +7,7 @@ # Options: # -e, --event cpu|alloc|wall|lock Single event (default: cpu) # -d, --duration N Seconds to profile (default: 30) -# -f, --format html|jfr|collapsed Output format for single-event (default: html) +# -f, --format html|jfr|collapsed|txt Output format for single-event (default: html) # -o, --output FILE Output path (default: auto-named) # -t, --threads Profile threads separately # --all Capture all events to a JFR file From b36aef4818bea96cf9d4b9441979a3a5acd62d6d Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 13:15:04 +0200 Subject: [PATCH 08/30] fix(async-profiler): harden collect session state Store session state in a private per-user directory and validate ownership and permissions before reading it during stop. 
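The validation flow this commit describes — a private per-user directory, symlink refusal, ownership and mode checks — can be sketched in Python (an illustrative analogue of the shell `ensure_private_state_dir`, not the patch code itself):

```python
import os
import stat
from pathlib import Path

def ensure_private_state_dir(base: str) -> Path:
    """Sketch of the collect.sh hardening: a per-user state dir that
    must not be a symlink, must be owned by the current user, and
    must end up with mode 700."""
    state_dir = Path(base) / f"asprof-session-{os.getuid()}"
    if state_dir.is_symlink():
        raise RuntimeError(f"refusing symlinked state dir: {state_dir}")
    if state_dir.exists():
        st = state_dir.stat()
        if not stat.S_ISDIR(st.st_mode):
            raise RuntimeError(f"not a directory: {state_dir}")
        if st.st_uid != os.getuid():
            raise RuntimeError(f"not owned by current user: {state_dir}")
        state_dir.chmod(0o700)  # tighten a pre-existing dir
    else:
        state_dir.mkdir(mode=0o700)
    if stat.S_IMODE(state_dir.stat().st_mode) != 0o700:
        raise RuntimeError(f"state dir must be mode 700: {state_dir}")
    return state_dir
```

The symlink check runs before any other access so an attacker-planted link in a shared location like `/tmp` is rejected rather than followed.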
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 86 ++++++++++++++++++++++-- 1 file changed, 79 insertions(+), 7 deletions(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 5ba02b3b0..aed2eda2c 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -128,19 +128,89 @@ newest_by_mtime() { echo "$newest" } -# Session state file — stores output path and asprof path between start/stop. -session_file() { - local safe uid state_dir - safe="${TARGET//[^a-zA-Z0-9_-]/_}" +stat_uid() { + if [[ "$(uname)" == "Darwin" ]]; then + stat -f '%u' "$1" + else + stat -c '%u' "$1" + fi +} + +stat_mode() { + if [[ "$(uname)" == "Darwin" ]]; then + stat -f '%Lp' "$1" + else + stat -c '%a' "$1" + fi +} + +ensure_private_state_dir() { + local uid base_dir state_dir owner mode uid="$(id -u)" if [[ -n "${XDG_RUNTIME_DIR:-}" && -d "${XDG_RUNTIME_DIR}" && -w "${XDG_RUNTIME_DIR}" ]]; then - state_dir="${XDG_RUNTIME_DIR}" + base_dir="${XDG_RUNTIME_DIR}" + else + base_dir="/tmp" + fi + + state_dir="${base_dir}/asprof-session-${uid}" + if [[ -L "$state_dir" ]]; then + echo "❌ Session state directory is a symlink — refusing to use it: $state_dir" >&2 + exit 1 + fi + if [[ -e "$state_dir" ]]; then + if [[ ! 
-d "$state_dir" ]]; then + echo "❌ Session state path exists but is not a directory: $state_dir" >&2 + exit 1 + fi + owner="$(stat_uid "$state_dir")" + if [[ "$owner" != "$uid" ]]; then + echo "❌ Session state directory is not owned by the current user: $state_dir" >&2 + exit 1 + fi + chmod 700 "$state_dir" else - state_dir="/tmp" + mkdir -m 700 -p "$state_dir" fi - echo "${state_dir}/asprof-session-${uid}-${safe}" + mode="$(stat_mode "$state_dir")" + if [[ "$mode" != "700" ]]; then + echo "❌ Session state directory must have mode 700: $state_dir (found $mode)" >&2 + exit 1 + fi + + echo "$state_dir" +} + +validate_session_file() { + local sess="$1" uid owner mode + uid="$(id -u)" + + if [[ -L "$sess" ]]; then + echo "❌ Session file path is a symlink — refusing to use it: $sess" >&2 + exit 1 + fi + + owner="$(stat_uid "$sess")" + if [[ "$owner" != "$uid" ]]; then + echo "❌ Session file is not owned by the current user: $sess" >&2 + exit 1 + fi + + mode="$(stat_mode "$sess")" + if [[ "$mode" != "600" ]]; then + echo "❌ Session file must have mode 600: $sess (found $mode)" >&2 + exit 1 + fi +} + +# Session state file — stores output path and asprof path between start/stop. +session_file() { + local safe state_dir + safe="${TARGET//[^a-zA-Z0-9_-]/_}" + state_dir="$(ensure_private_state_dir)" + echo "${state_dir}/${safe}" } split_jfr() { @@ -236,6 +306,7 @@ cmd_start() { rm -f "$sentinel"; exit 1 fi (umask 077; printf '%s\n%s\n%s\n' "$jfr_path" "$asprof" "$sentinel" > "$sess") + chmod 600 "$sess" echo "✅ Profiling started. 
Session state: $sess" echo "" @@ -256,6 +327,7 @@ cmd_stop() { echo " Run first: bash scripts/collect.sh start $TARGET" >&2 exit 1 fi + validate_session_file "$sess" local jfr_path; jfr_path="$(sed -n '1p' "$sess")" local asprof; asprof="$(sed -n '2p' "$sess")" From 888feb207ccb3aabfad46879f9a54652a66d34fd Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 13:33:32 +0200 Subject: [PATCH 09/30] fix(async-profiler): address latest review feedback Improve collapsed-stack parsing, normalize the [vmlinux] package bucket, validate explicit --asprof paths, and reject conflicting --format/--output combinations. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/analyze_collapsed.py | 4 ++-- skills/async-profiler/scripts/collect.sh | 4 ++++ skills/async-profiler/scripts/run_profile.sh | 8 +++++++- 3 files changed, 13 insertions(+), 3 deletions(-) diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index 5374d1ff9..c2b1a325a 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -37,7 +37,7 @@ def parse_collapsed(path: str) -> list[tuple[list[str], int]]: if not line or line.startswith("#"): continue # Last token is the count; everything before is the stack - parts = line.rsplit(" ", 1) + parts = line.rsplit(None, 1) if len(parts) != 2: continue try: @@ -107,7 +107,7 @@ def top_packages(stacks, n=20, grep_re=None, exclude_re=None): # e.g. "com/example/Service.process" → "com/example" # e.g. "[vmlinux]" → "[kernel]" if frame.startswith("["): - pkg = frame # kernel / JVM internal frame + pkg = "[kernel]" if frame == "[vmlinux]" else frame elif "/" in frame: pkg = frame.rsplit("/", 1)[0].replace("/", ".") elif "." 
in frame: diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index aed2eda2c..4cd34894b 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -81,6 +81,10 @@ locate_asprof() { local asprof="" if [[ -n "$ASPROF_ARG" ]]; then asprof="$ASPROF_ARG" + if [[ ! -f "$asprof" || ! -x "$asprof" ]]; then + echo "❌ --asprof must point to an executable asprof binary: $asprof" >&2 + exit 1 + fi elif command -v asprof &>/dev/null; then asprof="$(command -v asprof)" else diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 6ed9aec64..c6d6abd20 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -30,6 +30,7 @@ set -euo pipefail EVENT="cpu" DURATION=30 FORMAT="html" +FORMAT_SET=false OUTPUT="" THREADS=false ALL_EVENTS=false @@ -50,7 +51,7 @@ while [[ $# -gt 0 ]]; do case "$1" in -e|--event) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; EVENT="$2"; shift 2 ;; -d|--duration) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; DURATION="$2"; shift 2 ;; - -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; shift 2 ;; + -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; FORMAT_SET=true; shift 2 ;; -o|--output) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; OUTPUT="$2"; shift 2 ;; -t|--threads) THREADS=true; shift ;; --all) ALL_EVENTS=true; FORMAT="jfr"; shift ;; @@ -121,6 +122,11 @@ if [[ -z "$OUTPUT_FORMAT" ]]; then echo " Use one of: .html, .jfr, .collapsed, .txt" >&2 exit 1 fi +if $FORMAT_SET && [[ "$FORMAT" != "$OUTPUT_FORMAT" ]]; then + echo "❌ --format '$FORMAT' conflicts with output extension '.$OUTPUT_FORMAT'." >&2 + echo " Use matching values or omit --format when --output already sets the format." 
>&2 + exit 1 +fi if $ALL_EVENTS && [[ "$OUTPUT_FORMAT" != "jfr" ]]; then echo "❌ --all/--comprehensive require a .jfr output file." >&2 echo " Received: $OUTPUT" >&2 From 586556c49f5c1e9226b3632dbf8053c88521c270 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 13:49:21 +0200 Subject: [PATCH 10/30] fix(async-profiler): align scripts with docs Add checksum verification to install.sh, align shell usage examples with the documented invocation style, support svg output in run_profile.sh, revalidate --asprof overrides in collect.sh, and correct the Linux perf_events guidance. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/setup.md | 6 ++-- skills/async-profiler/scripts/collect.sh | 4 ++- skills/async-profiler/scripts/install.sh | 35 ++++++++++++++++++-- skills/async-profiler/scripts/run_profile.sh | 20 +++++------ 4 files changed, 48 insertions(+), 17 deletions(-) diff --git a/skills/async-profiler/references/setup.md b/skills/async-profiler/references/setup.md index 50fcebb23..ebfa36453 100644 --- a/skills/async-profiler/references/setup.md +++ b/skills/async-profiler/references/setup.md @@ -94,14 +94,14 @@ On Linux, async-profiler prefers the **perf_events** engine, which gives the mos accurate profiles and includes kernel frames. 
It requires: ```bash -# Allow non-root perf_events (set once, persists until reboot) -sudo sysctl kernel.perf_event_paranoid=1 +# Allow non-root perf_events with kernel stack traces for processes you own +sudo sysctl kernel.perf_event_paranoid=0 sudo sysctl kernel.kptr_restrict=0 ``` To make these permanent across reboots, add to `/etc/sysctl.d/99-perf.conf`: ``` -kernel.perf_event_paranoid=1 +kernel.perf_event_paranoid=0 kernel.kptr_restrict=0 ``` diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 4cd34894b..722fbbfcd 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -336,7 +336,9 @@ cmd_stop() { local jfr_path; jfr_path="$(sed -n '1p' "$sess")" local asprof; asprof="$(sed -n '2p' "$sess")" local sentinel; sentinel="$(sed -n '3p' "$sess")" - [[ -n "$ASPROF_ARG" ]] && asprof="$ASPROF_ARG" + if [[ -n "$ASPROF_ARG" ]]; then + asprof="$(locate_asprof)" + fi echo "⏹ Stopping profiler on target: $TARGET" # Note: on macOS, -f is silently ignored by asprof stop — handled below. diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh index 2987eca76..4ebb9e73e 100644 --- a/skills/async-profiler/scripts/install.sh +++ b/skills/async-profiler/scripts/install.sh @@ -2,9 +2,9 @@ # install.sh — Download and install async-profiler for the current platform. # # Usage: -# ./install.sh # installs to ~/async-profiler-4.3 -# ./install.sh /opt/profilers # installs to /opt/profilers/async-profiler-4.3 -# ./install.sh --path-only # just prints the install path (for scripting) +# bash scripts/install.sh # installs to ~/async-profiler-4.3 +# bash scripts/install.sh /opt/profilers # installs to /opt/profilers/async-profiler-4.3 +# bash scripts/install.sh --path-only # just prints the install path (for scripting) # # After install, the script prints the path to the asprof binary. 
@@ -13,6 +13,9 @@ set -euo pipefail VERSION="4.3" BASE_URL="https://github.com/async-profiler/async-profiler/releases/download/v${VERSION}" INSTALL_PARENT="${1:-$HOME}" +MACOS_SHA256="8df875b8e40bd2d46bce0f07d3f78892f79791ea0b905c416817a7ae8b7bbcf7" +LINUX_X64_SHA256="69a16462c34c06ff55618f41653cffad1f8946822d30842512a3e0e774841c06" +LINUX_ARM64_SHA256="4f95e98ad12b8461386628d714e6a622f9d0b21bb7420004de0a9a3f7ea88131" # --path-only: don't install, just print where asprof would end up if [[ "${1:-}" == "--path-only" ]]; then @@ -50,9 +53,15 @@ esac if [[ "$PLATFORM" == "macos" ]]; then ARCHIVE="async-profiler-${VERSION}-macos.zip" EXTRACTED_DIR="async-profiler-${VERSION}-macos" + EXPECTED_SHA256="$MACOS_SHA256" else ARCHIVE="async-profiler-${VERSION}-linux-${ARCH_LABEL}.tar.gz" EXTRACTED_DIR="async-profiler-${VERSION}-linux-${ARCH_LABEL}" + if [[ "$ARCH_LABEL" == "x64" ]]; then + EXPECTED_SHA256="$LINUX_X64_SHA256" + else + EXPECTED_SHA256="$LINUX_ARM64_SHA256" + fi fi INSTALL_DIR="${INSTALL_PARENT}/async-profiler-${VERSION}" @@ -84,6 +93,17 @@ trap 'rm -rf "$TMP_DIR"' EXIT cd "$TMP_DIR" +sha256_file() { + if command -v shasum &>/dev/null; then + shasum -a 256 "$1" | awk '{print $1}' + elif command -v sha256sum &>/dev/null; then + sha256sum "$1" | awk '{print $1}' + else + echo "❌ Need shasum or sha256sum to verify the downloaded archive." >&2 + exit 1 + fi +} + if command -v curl &>/dev/null; then curl -fsSL -o "$ARCHIVE" "$DOWNLOAD_URL" elif command -v wget &>/dev/null; then @@ -93,6 +113,15 @@ else exit 1 fi +ACTUAL_SHA256="$(sha256_file "$ARCHIVE")" +if [[ "$ACTUAL_SHA256" != "$EXPECTED_SHA256" ]]; then + echo "❌ Downloaded archive checksum mismatch for $ARCHIVE" >&2 + echo " Expected: $EXPECTED_SHA256" >&2 + echo " Actual : $ACTUAL_SHA256" >&2 + exit 1 +fi +echo " SHA-256 verified: $ACTUAL_SHA256" + # ── Extract ────────────────────────────────────────────────────────────────── echo " Extracting..." 
if [[ "$ARCHIVE" == *.zip ]]; then diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index c6d6abd20..1359634ed 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -2,12 +2,12 @@ # run_profile.sh — Wrapper around asprof for common profiling scenarios. # # Usage: -# ./run_profile.sh [options] +# bash scripts/run_profile.sh [options] # # Options: # -e, --event cpu|alloc|wall|lock Single event (default: cpu) # -d, --duration N Seconds to profile (default: 30) -# -f, --format html|jfr|collapsed|txt Output format for single-event (default: html) +# -f, --format html|svg|jfr|collapsed|txt Output format for single-event (default: html) # -o, --output FILE Output path (default: auto-named) # -t, --threads Profile threads separately # --all Capture all events to a JFR file @@ -18,11 +18,11 @@ # -h, --help Show this help # # Examples: -# ./run_profile.sh 12345 # 30s CPU flamegraph -# ./run_profile.sh --comprehensive 12345 # all events, split into flamegraphs -# ./run_profile.sh -e alloc -d 60 MyApp # 60s allocation flamegraph -# ./run_profile.sh -e wall -f jfr 12345 # wall-clock JFR recording -# ./run_profile.sh --all -d 120 12345 # all events, single JFR file +# bash scripts/run_profile.sh 12345 # 30s CPU flamegraph +# bash scripts/run_profile.sh --comprehensive 12345 # all events, split into flamegraphs +# bash scripts/run_profile.sh -e alloc -d 60 MyApp # 60s allocation flamegraph +# bash scripts/run_profile.sh -e wall -f jfr 12345 # wall-clock JFR recording +# bash scripts/run_profile.sh --all -d 120 12345 # all events, single JFR file set -euo pipefail @@ -41,7 +41,7 @@ TARGET="" detect_format_from_output() { local output_path="$1" case "${output_path##*.}" in - html|jfr|collapsed|txt) echo "${output_path##*.}" ;; + html|svg|jfr|collapsed|txt) echo "${output_path##*.}" ;; *) echo "" ;; esac } @@ -119,7 +119,7 @@ fi OUTPUT_FORMAT="$(detect_format_from_output 
"$OUTPUT")" if [[ -z "$OUTPUT_FORMAT" ]]; then echo "❌ Unsupported output extension in '$OUTPUT'." >&2 - echo " Use one of: .html, .jfr, .collapsed, .txt" >&2 + echo " Use one of: .html, .svg, .jfr, .collapsed, .txt" >&2 exit 1 fi if $FORMAT_SET && [[ "$FORMAT" != "$OUTPUT_FORMAT" ]]; then @@ -240,7 +240,7 @@ if $COMPREHENSIVE; then else # Single-event post-run guidance case "$FORMAT" in - html) + html|svg) echo "Open in browser:" if [[ "$(uname)" == "Darwin" ]]; then open "$OUTPUT" From f6800625ca38d8ac30f4a64250956c574821cd90 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 14:10:45 +0200 Subject: [PATCH 11/30] fix(async-profiler): address latest copilot feedback Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../scripts/analyze_collapsed.py | 2 +- skills/async-profiler/scripts/collect.sh | 5 +++++ skills/async-profiler/scripts/run_profile.sh | 19 +++++++++++++++++++ 3 files changed, 25 insertions(+), 1 deletion(-) diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index c2b1a325a..ce6f98763 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -32,7 +32,7 @@ def parse_collapsed(path: str) -> list[tuple[list[str], int]]: """Parse a collapsed stack file into (frames, count) tuples.""" stacks = [] with open(path, "r", encoding="utf-8", errors="replace") as f: - for lineno, line in enumerate(f, 1): + for line in f: line = line.strip() if not line or line.startswith("#"): continue diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 722fbbfcd..656d1668e 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -66,6 +66,11 @@ while [[ $# -gt 0 ]]; do exit 1 ;; *) + if [[ -n "$TARGET" ]]; then + echo "❌ Multiple targets provided: '$TARGET' and '$1'." 
>&2 + echo " Provide exactly one PID or app name." >&2 + exit 1 + fi TARGET="$1"; shift ;; esac done diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 1359634ed..36ed23616 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -66,6 +66,11 @@ while [[ $# -gt 0 ]]; do exit 1 ;; *) + if [[ -n "$TARGET" ]]; then + echo "❌ Multiple targets provided: '$TARGET' and '$1'." >&2 + echo " Provide exactly one PID or app name." >&2 + exit 1 + fi TARGET="$1" shift ;; @@ -104,6 +109,11 @@ if [[ -z "$ASPROF" ]]; then exit 1 fi +if [[ ! -f "$ASPROF" || ! -x "$ASPROF" ]]; then + echo "❌ --asprof must point to an executable asprof binary: $ASPROF" >&2 + exit 1 +fi + # ── Build output filename ───────────────────────────────────────────────────── TIMESTAMP="$(date +%Y%m%d-%H%M%S)" @@ -122,6 +132,15 @@ if [[ -z "$OUTPUT_FORMAT" ]]; then echo " Use one of: .html, .svg, .jfr, .collapsed, .txt" >&2 exit 1 fi + +OUTPUT_DIR="$(dirname "$OUTPUT")" +if [[ "$OUTPUT_DIR" != "." ]] && [[ ! -d "$OUTPUT_DIR" ]]; then + mkdir -p "$OUTPUT_DIR" || { + echo "❌ Failed to create output directory: $OUTPUT_DIR" >&2 + exit 1 + } +fi + if $FORMAT_SET && [[ "$FORMAT" != "$OUTPUT_FORMAT" ]]; then echo "❌ --format '$FORMAT' conflicts with output extension '.$OUTPUT_FORMAT'." >&2 echo " Use matching values or omit --format when --output already sets the format." 
>&2 From 28f62dab3f762af8d1fdca2e0ecfd8c397128087 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 14:36:28 +0200 Subject: [PATCH 12/30] fix(async-profiler): address latest review feedback Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/setup.md | 7 ++++ .../scripts/analyze_collapsed.py | 2 +- skills/async-profiler/scripts/collect.sh | 13 +++++++- skills/async-profiler/scripts/run_profile.sh | 33 +++++++++++++++++-- 4 files changed, 51 insertions(+), 4 deletions(-) diff --git a/skills/async-profiler/references/setup.md b/skills/async-profiler/references/setup.md index ebfa36453..57dc40eed 100644 --- a/skills/async-profiler/references/setup.md +++ b/skills/async-profiler/references/setup.md @@ -99,6 +99,13 @@ sudo sysctl kernel.perf_event_paranoid=0 sudo sysctl kernel.kptr_restrict=0 ``` +These settings reduce host hardening, especially `kernel.kptr_restrict=0`, +which exposes real kernel pointers. Prefer this only on dev/staging systems or +for a short-lived profiling window, then restore your previous values +afterward. If you need a safer post-profiling baseline, many environments use +higher settings such as `kernel.perf_event_paranoid=2` and +`kernel.kptr_restrict=1` or stricter. 
+ To make these permanent across reboots, add to `/etc/sysctl.d/99-perf.conf`: ``` kernel.perf_event_paranoid=0 diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index ce6f98763..3e1117606 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -7,7 +7,7 @@ com/example/App.main;com/example/Service.process;java/util/HashMap.get 42 Usage: - python analyze_collapsed.py [options] + python3 analyze_collapsed.py [options] Options: --top N Show top N frames (default: 20) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 656d1668e..3b106f5e6 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -82,6 +82,15 @@ if [[ -z "$TARGET" && "$SUBCMD" != "help" ]]; then fi # ── Helpers ─────────────────────────────────────────────────────────────────── +default_installed_asprof() { + local script_dir install_script + script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + install_script="${script_dir}/install.sh" + if [[ -f "$install_script" ]]; then + bash "$install_script" --path-only 2>/dev/null || true + fi +} + locate_asprof() { local asprof="" if [[ -n "$ASPROF_ARG" ]]; then @@ -93,8 +102,10 @@ locate_asprof() { elif command -v asprof &>/dev/null; then asprof="$(command -v asprof)" else + local installed_asprof="" + installed_asprof="$(default_installed_asprof)" for candidate in \ - "$HOME/async-profiler-4.3/bin/asprof" \ + "$installed_asprof" \ "$HOME/async-profiler/bin/asprof" \ "/opt/async-profiler/bin/asprof" \ "/usr/local/bin/asprof" diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 36ed23616..03c99c307 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -46,6 +46,15 @@ detect_format_from_output() { esac } 
+default_installed_asprof() { + local script_dir install_script + script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + install_script="${script_dir}/install.sh" + if [[ -f "$install_script" ]]; then + bash "$install_script" --path-only 2>/dev/null || true + fi +} + # ── Parse arguments ─────────────────────────────────────────────────────────── while [[ $# -gt 0 ]]; do case "$1" in @@ -89,8 +98,9 @@ if [[ -z "$ASPROF" ]]; then if command -v asprof &>/dev/null; then ASPROF="$(command -v asprof)" else + INSTALLED_ASPROF="$(default_installed_asprof)" for candidate in \ - "$HOME/async-profiler-4.3/bin/asprof" \ + "$INSTALLED_ASPROF" \ "$HOME/async-profiler/bin/asprof" \ "/opt/async-profiler/bin/asprof" \ "/usr/local/bin/asprof" @@ -179,10 +189,29 @@ echo "Press Ctrl+C to stop early (partial results will be saved)." echo "" # ── Execute ─────────────────────────────────────────────────────────────────── +CAPTURE_INTERRUPTED=false +set +e "${CMD[@]}" +ASPROF_STATUS=$? +set -e + +if [[ "$ASPROF_STATUS" -eq 130 ]]; then + CAPTURE_INTERRUPTED=true + if [[ ! -f "$OUTPUT" ]]; then + echo "❌ Profiling was interrupted before async-profiler wrote output: $OUTPUT" >&2 + exit 130 + fi +elif [[ "$ASPROF_STATUS" -ne 0 ]]; then + echo "❌ async-profiler failed with exit code $ASPROF_STATUS." 
>&2 + exit "$ASPROF_STATUS" +fi echo "" -echo "✅ Capture complete: $OUTPUT" +if $CAPTURE_INTERRUPTED; then + echo "⚠️ Capture interrupted; using partial results: $OUTPUT" +else + echo "✅ Capture complete: $OUTPUT" +fi echo "" # ── Comprehensive mode: split JFR into per-event flamegraphs in parallel ────── From 1056f3790c5a85abb92122de8faea3db799b6a48 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 14:49:11 +0200 Subject: [PATCH 13/30] fix(async-profiler): improve install detection Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 31 +++++++++++++- skills/async-profiler/scripts/install.sh | 37 +++++++++++++--- skills/async-profiler/scripts/run_profile.sh | 45 +++++++++++++++++++- 3 files changed, 104 insertions(+), 9 deletions(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 3b106f5e6..a42f945eb 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -83,11 +83,31 @@ fi # ── Helpers ─────────────────────────────────────────────────────────────────── default_installed_asprof() { - local script_dir install_script + local script_dir install_script candidate newest_versioned="" + local -a versioned_candidates=() script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" install_script="${script_dir}/install.sh" if [[ -f "$install_script" ]]; then - bash "$install_script" --path-only 2>/dev/null || true + for candidate in \ + "$(bash "$install_script" --path-only 2>/dev/null || true)" \ + "$(bash "$install_script" /opt --path-only 2>/dev/null || true)" + do + if [[ -x "$candidate" ]]; then + echo "$candidate" + return 0 + fi + done + fi + + shopt -s nullglob + versioned_candidates=("$HOME"/async-profiler-*/bin/asprof /opt/async-profiler-*/bin/asprof) + shopt -u nullglob + if [[ ${#versioned_candidates[@]} -gt 0 ]]; then + newest_versioned="$(newest_by_mtime 
"${versioned_candidates[@]}")" + if [[ -x "$newest_versioned" ]]; then + echo "$newest_versioned" + return 0 + fi fi } @@ -310,6 +330,13 @@ cmd_start() { echo " Events : cpu + alloc + wall + lock (combined JFR)" echo "" + if [[ -e "$sess" ]]; then + echo "❌ An active session already exists for target '$TARGET'." >&2 + echo " Session state: $sess" >&2 + echo " Stop it first: bash scripts/collect.sh stop $TARGET" >&2 + exit 1 + fi + # macOS: asprof stop ignores -f and writes to /var/folders instead. # Create a sentinel so we can find the JFR after stop via find -newer. local sentinel; sentinel="$(mktemp "/tmp/asprof-sentinel.XXXXXX")" diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh index 4ebb9e73e..dc77a2663 100644 --- a/skills/async-profiler/scripts/install.sh +++ b/skills/async-profiler/scripts/install.sh @@ -4,7 +4,8 @@ # Usage: # bash scripts/install.sh # installs to ~/async-profiler-4.3 # bash scripts/install.sh /opt/profilers # installs to /opt/profilers/async-profiler-4.3 -# bash scripts/install.sh --path-only # just prints the install path (for scripting) +# bash scripts/install.sh --path-only # prints the default install path +# bash scripts/install.sh /opt --path-only # prints /opt/async-profiler-4.3/bin/asprof # # After install, the script prints the path to the asprof binary. 
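The `--path-only` contract described above boils down to a pure path derivation, sketched here with the currently pinned version. `asprof_install_path` is a hypothetical name, not a function in `install.sh`.

```bash
# Sketch of the derivation behind --path-only:
#   <install-parent>/async-profiler-<VERSION>/bin/asprof, defaulting to $HOME.
VERSION="4.3"   # pinned by the installer at the time of writing

asprof_install_path() {
  local parent="${1:-$HOME}"
  printf '%s/async-profiler-%s/bin/asprof\n' "$parent" "$VERSION"
}

asprof_install_path        # default parent: $HOME
asprof_install_path /opt   # mirrors: bash scripts/install.sh /opt --path-only
```

Keeping the derivation in one place is what lets the other scripts ask the installer where `asprof` would land without actually installing anything.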
@@ -12,14 +13,41 @@ set -euo pipefail VERSION="4.3" BASE_URL="https://github.com/async-profiler/async-profiler/releases/download/v${VERSION}" -INSTALL_PARENT="${1:-$HOME}" +INSTALL_PARENT="$HOME" +INSTALL_PARENT_SET=false +PATH_ONLY=false MACOS_SHA256="8df875b8e40bd2d46bce0f07d3f78892f79791ea0b905c416817a7ae8b7bbcf7" LINUX_X64_SHA256="69a16462c34c06ff55618f41653cffad1f8946822d30842512a3e0e774841c06" LINUX_ARM64_SHA256="4f95e98ad12b8461386628d714e6a622f9d0b21bb7420004de0a9a3f7ea88131" +while [[ $# -gt 0 ]]; do + case "$1" in + --path-only) + PATH_ONLY=true + shift + ;; + -*) + echo "❌ Unknown option: $1" >&2 + exit 1 + ;; + *) + if $INSTALL_PARENT_SET; then + echo "❌ Unexpected extra argument: $1" >&2 + echo " Usage: bash scripts/install.sh [install-parent] [--path-only]" >&2 + exit 1 + fi + INSTALL_PARENT="$1" + INSTALL_PARENT_SET=true + shift + ;; + esac +done + +INSTALL_DIR="${INSTALL_PARENT}/async-profiler-${VERSION}" + # --path-only: don't install, just print where asprof would end up -if [[ "${1:-}" == "--path-only" ]]; then - echo "$HOME/async-profiler-${VERSION}/bin/asprof" +if $PATH_ONLY; then + echo "${INSTALL_DIR}/bin/asprof" exit 0 fi @@ -64,7 +92,6 @@ else fi fi -INSTALL_DIR="${INSTALL_PARENT}/async-profiler-${VERSION}" DOWNLOAD_URL="${BASE_URL}/${ARCHIVE}" # ── Already installed? 
─────────────────────────────────────────────────────── diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 03c99c307..c92b6ecb3 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -46,12 +46,53 @@ detect_format_from_output() { esac } +stat_mtime() { + if [[ "$(uname)" == "Darwin" ]]; then + stat -f '%m' "$1" 2>/dev/null || echo 0 + else + stat -c '%Y' "$1" 2>/dev/null || echo 0 + fi +} + +newest_by_mtime() { + local newest="" newest_mtime=0 candidate mtime + for candidate in "$@"; do + [[ -n "$candidate" ]] || continue + mtime="$(stat_mtime "$candidate")" + if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then + newest="$candidate" + newest_mtime="$mtime" + fi + done + echo "$newest" +} + default_installed_asprof() { - local script_dir install_script + local script_dir install_script candidate newest_versioned="" + local -a versioned_candidates=() script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" install_script="${script_dir}/install.sh" if [[ -f "$install_script" ]]; then - bash "$install_script" --path-only 2>/dev/null || true + for candidate in \ + "$(bash "$install_script" --path-only 2>/dev/null || true)" \ + "$(bash "$install_script" /opt --path-only 2>/dev/null || true)" + do + if [[ -x "$candidate" ]]; then + echo "$candidate" + return 0 + fi + done + fi + + shopt -s nullglob + versioned_candidates=("$HOME"/async-profiler-*/bin/asprof /opt/async-profiler-*/bin/asprof) + shopt -u nullglob + if [[ ${#versioned_candidates[@]} -gt 0 ]]; then + newest_versioned="$(newest_by_mtime "${versioned_candidates[@]}")" + if [[ -x "$newest_versioned" ]]; then + echo "$newest_versioned" + return 0 + fi fi } From c3a883676351c9660c5cd71247858a7328bbae49 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 15:08:14 +0200 Subject: [PATCH 14/30] docs(async-profiler): refine profiling guidance Co-authored-by: Copilot 
<223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/analyze.md | 2 +- skills/async-profiler/references/profile.md | 17 +++++++++-------- skills/async-profiler/references/setup.md | 9 +++++---- skills/async-profiler/scripts/collect.sh | 6 +++++- 4 files changed, 20 insertions(+), 14 deletions(-) diff --git a/skills/async-profiler/references/analyze.md b/skills/async-profiler/references/analyze.md index 4632c6a8e..963086817 100644 --- a/skills/async-profiler/references/analyze.md +++ b/skills/async-profiler/references/analyze.md @@ -196,7 +196,7 @@ jfr print --events jdk.ExecutionSample recording.jfr # raw CPU samples ```bash jfrconv recording.jfr flamegraph.html # full flamegraph jfrconv --alloc recording.jfr alloc.html # allocation-only flamegraph -jfrconv recording.jfr collapsed.txt # collapsed stacks for scripting +jfrconv --cpu recording.jfr cpu.collapsed # collapsed stacks for scripting ``` ### What to examine in JMC / IntelliJ diff --git a/skills/async-profiler/references/profile.md b/skills/async-profiler/references/profile.md index 8d925a189..8f677dfcf 100644 --- a/skills/async-profiler/references/profile.md +++ b/skills/async-profiler/references/profile.md @@ -94,16 +94,17 @@ production JVM, etc.). --- -## Always start with `--all` +## Usually start with `--all` **`asprof start --all` records CPU, allocation, wall-clock, and lock contention -simultaneously in a single JFR file.** There is no meaningful overhead penalty -for capturing all events together compared to capturing just one. You then split -the JFR into separate flamegraphs with `jfrconv` after the fact. - -**Never run separate captures for each event type.** Each capture requires -reproducing the workload, which is disruptive and often impossible for realistic -or intermittent problems. Capture once, analyze everything. 
+simultaneously in a single JFR file.** That is usually the best default when +you can afford one richer capture and want optionality during analysis. You can +then split the JFR into separate flamegraphs with `jfrconv` after the fact. + +`--all` does trade a broader signal set for more output to store and post-process. +Start there for intermittent or one-shot reproductions; switch to single-event +captures when you already know the signal you need or must minimize overhead, +output size, or post-processing time. ```bash # Direct asprof — capture all events, produce a single JFR diff --git a/skills/async-profiler/references/setup.md b/skills/async-profiler/references/setup.md index 57dc40eed..c3353430b 100644 --- a/skills/async-profiler/references/setup.md +++ b/skills/async-profiler/references/setup.md @@ -25,8 +25,9 @@ continue below. ## Step 1 — Download -The latest stable release is **v4.3** (January 2025). The skill includes an -install script that handles everything automatically. +The bundled installer currently pins **async-profiler v4.3** by default. If +that changes, treat `scripts/install.sh` as the source of truth for the exact +version and install path. ### Option A — use the bundled install script (recommended) @@ -35,8 +36,8 @@ downloads the right binary, removes the macOS Gatekeeper quarantine flag, and verifies the install: ```bash -bash scripts/install.sh # installs to ~/async-profiler-4.3/ -bash scripts/install.sh /opt # installs to /opt/async-profiler-4.3/ +bash scripts/install.sh # installs to ~/async-profiler-<version>/ +bash scripts/install.sh /opt # installs to /opt/async-profiler-<version>/ ``` It prints the exact binary path and a one-liner to add it to your PATH.
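The capture-once, split-later flow from the profile guidance above can be sketched as follows. Emitting the conversion commands as text keeps the sketch runnable without a JVM or a recording on hand; treat the exact per-event `jfrconv` flag set as an assumption to check against your installed version.

```bash
# Sketch: one --all JFR capture, then one jfrconv invocation per event type.
# split_commands only prints the commands, so nothing here touches a live JVM.
split_commands() {
  local jfr="$1" ev
  for ev in cpu alloc wall lock; do
    printf 'jfrconv --%s "%s" "%s.html"\n' "$ev" "$jfr" "$ev"
  done
}

split_commands app.jfr
```

Piping the output through a review step (or just reading it) before running it is a cheap way to confirm the flags match the jfrconv build you have.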
diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index a42f945eb..3a5628c11 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -159,7 +159,11 @@ newest_by_mtime() { local newest_mtime=0 local candidate mtime for candidate in "$@"; do - mtime="$(stat -f '%m' "$candidate" 2>/dev/null || echo 0)" + if [[ "$(uname)" == "Darwin" ]]; then + mtime="$(stat -f '%m' "$candidate" 2>/dev/null || echo 0)" + else + mtime="$(stat -c '%Y' "$candidate" 2>/dev/null || echo 0)" + fi if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then newest="$candidate" newest_mtime="$mtime" From 3b0d2eefe1f9326274ae0c877c1d1cbf30d2425f Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 15:23:15 +0200 Subject: [PATCH 15/30] fix(async-profiler): tighten collapsed guidance Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/analyze.md | 8 ++++---- skills/async-profiler/scripts/collect.sh | 10 +++++++++- skills/async-profiler/scripts/run_profile.sh | 7 ++++--- 3 files changed, 17 insertions(+), 8 deletions(-) diff --git a/skills/async-profiler/references/analyze.md b/skills/async-profiler/references/analyze.md index 963086817..82e642150 100644 --- a/skills/async-profiler/references/analyze.md +++ b/skills/async-profiler/references/analyze.md @@ -275,11 +275,11 @@ grep -i "serial\|jackson\|json" profile.collapsed | awk '{sum+=$NF} END{print su ### Convert collapsed → flamegraph ```bash -# Using async-profiler's jfrconv -jfrconv collapsed.txt flamegraph.html - -# Or using the original FlameGraph perl script (if installed) +# Using the original FlameGraph perl script (if installed) flamegraph.pl profile.collapsed > flamegraph.svg + +# Or regenerate HTML directly from the original JFR recording +jfrconv --cpu recording.jfr cpu.html ``` --- diff --git a/skills/async-profiler/scripts/collect.sh 
b/skills/async-profiler/scripts/collect.sh index 3a5628c11..6746190d4 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -48,6 +48,13 @@ fi SUBCMD="$1"; shift +case "$SUBCMD" in + help|-h|--help) + sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' + exit 0 + ;; +esac + # ── Parse options ───────────────────────────────────────────────────────────── DURATION=30 TARGET="" @@ -310,11 +317,12 @@ split_jfr() { fi local base_dir; base_dir="$(dirname "$jfr_path")" + local script_dir; script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "" echo "💡 Next step: analyze results." echo " For collapsed stack analysis (CPU):" echo " jfrconv --cpu $jfr_path ${base}-cpu.collapsed" - echo " python3 scripts/analyze_collapsed.py ${base}-cpu.collapsed" + echo " python3 ${script_dir}/analyze_collapsed.py ${base}-cpu.collapsed" } # ── start ───────────────────────────────────────────────────────────────────── diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index c92b6ecb3..497be678c 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -356,11 +356,12 @@ else echo " 'I have a JFR recording at $OUTPUT — help me interpret it.'" ;; collapsed) + SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "Analyze with:" - echo " python3 scripts/analyze_collapsed.py $OUTPUT" + echo " python3 ${SCRIPT_DIR}/analyze_collapsed.py $OUTPUT" echo "" - echo "Or convert to flamegraph:" - echo " jfrconv $OUTPUT flamegraph.html" + echo "Or render an SVG flamegraph (if FlameGraph is installed):" + echo " flamegraph.pl $OUTPUT > flamegraph.svg" echo "" echo "💡 Next step — ask your AI assistant to analyze:" echo " 'Run analyze_collapsed.py on $OUTPUT and tell me what's slow.'" From fe61bec571d9d38df1be2ed2903414a3836f1255 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 16:52:52 +0200 Subject: 
[PATCH 16/30] fix(async-profiler): quote suggested file paths Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 6 ++--- skills/async-profiler/scripts/run_profile.sh | 23 ++++++++++---------- 2 files changed, 15 insertions(+), 14 deletions(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 6746190d4..36f937d03 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -321,8 +321,8 @@ split_jfr() { echo "" echo "💡 Next step: analyze results." echo " For collapsed stack analysis (CPU):" - echo " jfrconv --cpu $jfr_path ${base}-cpu.collapsed" - echo " python3 ${script_dir}/analyze_collapsed.py ${base}-cpu.collapsed" + echo " jfrconv --cpu \"$jfr_path\" \"${base}-cpu.collapsed\"" + echo " python3 \"${script_dir}/analyze_collapsed.py\" \"${base}-cpu.collapsed\"" } # ── start ───────────────────────────────────────────────────────────────────── @@ -459,7 +459,7 @@ cmd_stop() { local jfrconv; jfrconv="$(locate_jfrconv "$asprof")" if [[ -z "$jfrconv" ]]; then echo "⚠️ jfrconv not found — skipping flamegraph split." - echo " Convert manually: jfrconv --cpu $jfr_path cpu.html" + echo " Convert manually: jfrconv --cpu \"$jfr_path\" cpu.html" echo " Or open in IntelliJ IDEA or JDK Mission Control." 
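The reason the patch quotes the suggested file paths can be shown with a path that contains a space. `suggest_convert` is a hypothetical stand-in for the scripts' `echo` lines, not part of the skill itself.

```bash
# Unquoted, a suggested command like `jfrconv --cpu /tmp/my runs/app.jfr ...`
# would split "my runs" into two arguments when the user pastes it;
# embedding quotes keeps the path intact as a single argument.
suggest_convert() {
  printf 'jfrconv --cpu "%s" cpu.html\n' "$1"
}

suggest_convert "/tmp/my runs/app.jfr"
```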
return fi diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 497be678c..4eb5ca392 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -311,10 +311,10 @@ if $COMPREHENSIVE; then open "$CPU_HTML" "$ALLOC_HTML" "$WALL_HTML" "$LOCK_HTML" else echo "Open flamegraphs with:" - echo " xdg-open $CPU_HTML" - echo " xdg-open $ALLOC_HTML" - echo " xdg-open $WALL_HTML" - echo " xdg-open $LOCK_HTML" + echo " xdg-open \"$CPU_HTML\"" + echo " xdg-open \"$ALLOC_HTML\"" + echo " xdg-open \"$WALL_HTML\"" + echo " xdg-open \"$LOCK_HTML\"" fi echo "" @@ -323,8 +323,9 @@ if $COMPREHENSIVE; then echo " to focus: $CPU_HTML, $ALLOC_HTML, $WALL_HTML, $LOCK_HTML'" echo "" echo " Or for collapsed stack analysis:" - echo " jfrconv $OUTPUT ${BASE}-cpu.collapsed" - echo " python3 scripts/analyze_collapsed.py ${BASE}-cpu.collapsed" + SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + echo " jfrconv \"$OUTPUT\" \"${BASE}-cpu.collapsed\"" + echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"${BASE}-cpu.collapsed\"" else # Single-event post-run guidance @@ -334,7 +335,7 @@ else if [[ "$(uname)" == "Darwin" ]]; then open "$OUTPUT" else - echo " xdg-open $OUTPUT" + echo " xdg-open \"$OUTPUT\"" fi echo "" echo "What to look for:" @@ -350,7 +351,7 @@ else echo "Open in JDK Mission Control: File → Open File → select $OUTPUT" echo "" echo "Or convert to flamegraph:" - echo " jfrconv $OUTPUT flamegraph.html" + echo " jfrconv \"$OUTPUT\" flamegraph.html" echo "" echo "💡 Next step — ask your AI assistant to analyze:" echo " 'I have a JFR recording at $OUTPUT — help me interpret it.'" @@ -358,10 +359,10 @@ else collapsed) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "Analyze with:" - echo " python3 ${SCRIPT_DIR}/analyze_collapsed.py $OUTPUT" + echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"$OUTPUT\"" echo "" echo "Or render an SVG flamegraph (if 
FlameGraph is installed):" - echo " flamegraph.pl $OUTPUT > flamegraph.svg" + echo " flamegraph.pl \"$OUTPUT\" > flamegraph.svg" echo "" echo "💡 Next step — ask your AI assistant to analyze:" echo " 'Run analyze_collapsed.py on $OUTPUT and tell me what's slow.'" @@ -371,7 +372,7 @@ else echo " $OUTPUT" echo "" echo "Review with:" - echo " cat $OUTPUT" + echo " cat \"$OUTPUT\"" ;; esac fi From 910f76157e99a63d19fce93edec6d79196fb6814 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 17:10:26 +0200 Subject: [PATCH 17/30] fix(async-profiler): tolerate missing installed asprof Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 2 ++ skills/async-profiler/scripts/run_profile.sh | 2 ++ 2 files changed, 4 insertions(+) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 36f937d03..b0fc9893f 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -116,6 +116,8 @@ default_installed_asprof() { return 0 fi fi + + return 0 } locate_asprof() { diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 4eb5ca392..f73f1fe56 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -94,6 +94,8 @@ default_installed_asprof() { return 0 fi fi + + return 0 } # ── Parse arguments ─────────────────────────────────────────────────────────── From 89d453ce0be9a5013127660065aae481e6c961e0 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 17:29:37 +0200 Subject: [PATCH 18/30] fix(async-profiler): remove unused base_dir Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 1 - 1 file changed, 1 deletion(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 
b0fc9893f..2cd7680b1 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -318,7 +318,6 @@ split_jfr() { open "$cpu_html" "$alloc_html" "$wall_html" "$lock_html" fi - local base_dir; base_dir="$(dirname "$jfr_path")" local script_dir; script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "" echo "💡 Next step: analyze results." From ffa602130581ad6e6ce8731d2051756c291cea51 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 17:34:08 +0200 Subject: [PATCH 19/30] fix(async-profiler): reject incompatible all-event formats Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/run_profile.sh | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index f73f1fe56..ae5fed548 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -31,6 +31,7 @@ EVENT="cpu" DURATION=30 FORMAT="html" FORMAT_SET=false +REQUESTED_FORMAT="" OUTPUT="" THREADS=false ALL_EVENTS=false @@ -103,7 +104,7 @@ while [[ $# -gt 0 ]]; do case "$1" in -e|--event) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; EVENT="$2"; shift 2 ;; -d|--duration) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; DURATION="$2"; shift 2 ;; - -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; FORMAT_SET=true; shift 2 ;; + -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; REQUESTED_FORMAT="$2"; FORMAT_SET=true; shift 2 ;; -o|--output) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; OUTPUT="$2"; shift 2 ;; -t|--threads) THREADS=true; shift ;; --all) ALL_EVENTS=true; FORMAT="jfr"; shift ;; @@ -136,6 +137,15 @@ if [[ -z "$TARGET" ]]; then exit 1 fi +if $ALL_EVENTS; then + if $FORMAT_SET && [[ 
"$REQUESTED_FORMAT" != "jfr" ]]; then + echo "❌ --all/--comprehensive only support --format jfr." >&2 + echo " Received: --format $REQUESTED_FORMAT" >&2 + exit 1 + fi + FORMAT="jfr" +fi + # ── Locate asprof ───────────────────────────────────────────────────────────── if [[ -z "$ASPROF" ]]; then if command -v asprof &>/dev/null; then From 38a5beca851b767f555cfc722bb1e3689339f47e Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 17:46:18 +0200 Subject: [PATCH 20/30] Potential fix for pull request finding Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/run_profile.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index ae5fed548..f1c08564a 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -336,7 +336,7 @@ if $COMPREHENSIVE; then echo "" echo " Or for collapsed stack analysis:" SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" - echo " jfrconv \"$OUTPUT\" \"${BASE}-cpu.collapsed\"" + echo " jfrconv --cpu \"$OUTPUT\" \"${BASE}-cpu.collapsed\"" echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"${BASE}-cpu.collapsed\"" else From cd1c4d25c21402c6433cd3cda6ca5158aab92e26 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 17:55:37 +0200 Subject: [PATCH 21/30] fix(async-profiler): tighten troubleshooting and cleanup Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/setup.md | 6 ++++-- skills/async-profiler/scripts/collect.sh | 5 ++++- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/skills/async-profiler/references/setup.md b/skills/async-profiler/references/setup.md index c3353430b..69e0da273 100644 --- a/skills/async-profiler/references/setup.md +++ b/skills/async-profiler/references/setup.md @@ 
-167,8 +167,10 @@ section below. ## Troubleshooting common issues **"Could not attach to <pid>"** -- The JVM may need `-XX:+PerfDataSaveToFile` or you may lack permissions. Run as - the same user that owns the JVM process, or use `sudo`. +- Run as the same user that owns the JVM process and check whether the JVM was + started with `-XX:+DisableAttachMechanism`. In containers, ptrace / seccomp + restrictions can also block dynamic attach; if attach is disabled entirely, + use the Java agent mode described below instead. **"Failed to open perf_events"** - Run the sysctl commands in Step 2, or use `-e itimer` to force the itimer engine. diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 2cd7680b1..35d6f06ca 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -358,7 +358,10 @@ cmd_start() { exit 1 fi - "$asprof" start --all "$TARGET" + if ! "$asprof" start --all "$TARGET"; then + rm -f "$sentinel" + exit 1 + fi # Save session state (jfr_path, asprof binary, sentinel path) if [[ -L "$sess" ]]; then From 8fe901b60d8731f2c784f4136b2d4f722c9ea987 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 18:07:46 +0200 Subject: [PATCH 22/30] fix(async-profiler): tighten path and stderr handling Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/install.sh | 4 ++++ skills/async-profiler/scripts/run_profile.sh | 6 +++--- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh index dc77a2663..9341cf61e 100644 --- a/skills/async-profiler/scripts/install.sh +++ b/skills/async-profiler/scripts/install.sh @@ -37,6 +37,10 @@ while [[ $# -gt 0 ]]; do exit 1 fi INSTALL_PARENT="$1" + case "$INSTALL_PARENT" in + "~") INSTALL_PARENT="$HOME" ;; + "~/"*) INSTALL_PARENT="${HOME}/${INSTALL_PARENT#~/}" ;; + esac INSTALL_PARENT_SET=true
shift ;; diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index f1c08564a..fb159c093 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -131,9 +131,9 @@ while [[ $# -gt 0 ]]; do done if [[ -z "$TARGET" ]]; then - echo "❌ No target specified. Provide a PID or app name." - echo " Usage: $0 [options] " - echo " List Java processes: jps -l" + echo "❌ No target specified. Provide a PID or app name." >&2 + echo " Usage: $0 [options] " >&2 + echo " List Java processes: jps -l" >&2 exit 1 fi From d790d48809d4025ecc5990f940aa1e4d8cf97db9 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 21:43:31 +0200 Subject: [PATCH 23/30] fix(async-profiler): handle invalid inputs more safely Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../scripts/analyze_collapsed.py | 17 +++++++++---- skills/async-profiler/scripts/collect.sh | 25 +++++++++++++------ 2 files changed, 29 insertions(+), 13 deletions(-) diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py index 3e1117606..17f024331 100644 --- a/skills/async-profiler/scripts/analyze_collapsed.py +++ b/skills/async-profiler/scripts/analyze_collapsed.py @@ -28,10 +28,10 @@ from typing import Optional, Pattern -def parse_collapsed(path: str) -> list[tuple[list[str], int]]: +def parse_collapsed(path: Path) -> list[tuple[list[str], int]]: """Parse a collapsed stack file into (frames, count) tuples.""" stacks = [] - with open(path, "r", encoding="utf-8", errors="replace") as f: + with path.open("r", encoding="utf-8", errors="replace") as f: for line in f: line = line.strip() if not line or line.startswith("#"): @@ -183,15 +183,22 @@ def main(): grep_re = compile_pattern("grep", args.grep) exclude_re = compile_pattern("exclude", args.exclude) - path = args.file - if not Path(path).exists(): + path = Path(args.file) + 
if not path.exists(): print(f"❌ File not found: {path}", file=sys.stderr) sys.exit(1) + if not path.is_file(): + print(f"❌ Expected a regular file: {path}", file=sys.stderr) + sys.exit(1) print("\n📊 async-profiler collapsed stack analysis") print(f" File: {path}\n") - stacks = parse_collapsed(path) + try: + stacks = parse_collapsed(path) + except OSError as exc: + print(f"❌ Failed to read {path}: {exc}", file=sys.stderr) + sys.exit(1) if not stacks: print("❌ No stack data found. Is this a valid .collapsed file?") sys.exit(1) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 35d6f06ca..e6eed3804 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -417,17 +417,26 @@ cmd_stop() { local -a search_roots=() local -a jfr_matches=() local jfr_candidate - shopt -s nullglob - search_roots=(/var/folders/*/*/T) - shopt -u nullglob - if [[ ${#search_roots[@]} -eq 0 ]]; then + if [[ "$TARGET" =~ ^[0-9]+$ ]]; then search_roots=(/var/folders) search_maxdepth=8 - search_hint="find /var/folders -maxdepth 8 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + search_hint="find /var/folders -maxdepth 8 -path '*/T/*_${TARGET}/*.jfr' -newer '$sentinel' 2>/dev/null" + while IFS= read -r -d '' jfr_candidate; do + jfr_matches+=("$jfr_candidate") + done < <(find "${search_roots[@]}" -maxdepth "$search_maxdepth" -path "*/T/*_${TARGET}/*.jfr" -newer "$sentinel" -print0 2>/dev/null) + else + shopt -s nullglob + search_roots=(/var/folders/*/*/T) + shopt -u nullglob + if [[ ${#search_roots[@]} -eq 0 ]]; then + search_roots=(/var/folders) + search_maxdepth=8 + search_hint="find /var/folders -maxdepth 8 -name '*.jfr' -newer '$sentinel' 2>/dev/null" + fi + while IFS= read -r -d '' jfr_candidate; do + jfr_matches+=("$jfr_candidate") + done < <(find "${search_roots[@]}" -maxdepth "$search_maxdepth" -name "*.jfr" -newer "$sentinel" -print0 2>/dev/null) fi - while IFS= read -r -d '' jfr_candidate; do - 
jfr_matches+=("$jfr_candidate") - done < <(find "${search_roots[@]}" -maxdepth "$search_maxdepth" -name "*.jfr" -newer "$sentinel" -print0 2>/dev/null) if [[ ${#jfr_matches[@]} -gt 0 ]]; then found_jfr="$(newest_by_mtime "${jfr_matches[@]}")" From 2526245ec6894ae851b231357f441cd7756f6333 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 22:07:49 +0200 Subject: [PATCH 24/30] fix(async-profiler): align wrapper CLI with asprof Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/README.skills.md | 2 +- skills/async-profiler/SKILL.md | 2 + skills/async-profiler/scripts/_asprof_lib.sh | 86 +++++++++++++++++ skills/async-profiler/scripts/collect.sh | 83 +---------------- skills/async-profiler/scripts/run_profile.sh | 98 ++++---------------- 5 files changed, 110 insertions(+), 161 deletions(-) create mode 100644 skills/async-profiler/scripts/_asprof_lib.sh diff --git a/docs/README.skills.md b/docs/README.skills.md index ec6a6af4c..e81673a6f 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -53,7 +53,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to | [arize-trace](../skills/arize-trace/SKILL.md)<br>`gh skills install github/awesome-copilot arize-trace` | INVOKE THIS SKILL when downloading or exporting Arize traces and spans. Covers exporting traces by ID, sessions by ID, and debugging LLM application issues using the ax CLI. | `references/ax-profiles.md`<br>`references/ax-setup.md` |
| [aspire](../skills/aspire/SKILL.md)<br>`gh skills install github/awesome-copilot aspire` | Aspire skill covering the Aspire CLI, AppHost orchestration, service discovery, integrations, MCP server, VS Code extension, Dev Containers, GitHub Codespaces, templates, dashboard, and deployment. Use when the user asks to create, run, debug, configure, deploy, or troubleshoot an Aspire distributed application. | `references/architecture.md`<br>`references/cli-reference.md`<br>`references/dashboard.md`<br>`references/deployment.md`<br>`references/integrations-catalog.md`<br>`references/mcp-server.md`<br>`references/polyglot-apis.md`<br>`references/testing.md`<br>`references/troubleshooting.md` |
| [aspnet-minimal-api-openapi](../skills/aspnet-minimal-api-openapi/SKILL.md)<br>`gh skills install github/awesome-copilot aspnet-minimal-api-openapi` | Create ASP.NET Minimal API endpoints with proper OpenAPI documentation | None |
-| [async-profiler](../skills/async-profiler/SKILL.md)<br>`gh skills install github/awesome-copilot async-profiler` | Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what's causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance. | `README.md`
`references/analyze.md`
`references/profile.md`
`references/setup.md`
`scripts/analyze_collapsed.py`
`scripts/collect.sh`
`scripts/install.sh`
`scripts/run_profile.sh` | +| [async-profiler](../skills/async-profiler/SKILL.md)
`gh skills install github/awesome-copilot async-profiler` | Install, run, and analyze async-profiler for Java — low-overhead sampling profiler producing flamegraphs, JFR recordings, and allocation profiles. Use for: "install async-profiler", "set up Java profiling", "Failed to open perf_events", "what JVM flags for profiling", "capture a flamegraph", "profile CPU/memory/allocations/lock contention", "profile my Spring Boot app", "generate a JFR recording", "heap keeps growing", "what does this flamegraph mean", "how do I read a flamegraph", "interpret profiling results", "open a .jfr file", "what's causing my CPU hotspot", "wide frame in my profile", "I see a lot of GC / Hibernate / park in my profile". Use this skill any time a Java developer mentions profiling, flamegraphs, async-profiler, JFR, or wants to understand JVM performance. | `README.md`
`references/analyze.md`
`references/profile.md`
`references/setup.md`
`scripts/_asprof_lib.sh`
`scripts/analyze_collapsed.py`
`scripts/collect.sh`
`scripts/install.sh`
`scripts/run_profile.sh` | | [audit-integrity](../skills/audit-integrity/SKILL.md)
`gh skills install github/awesome-copilot audit-integrity` | Shared audit integrity framework for all AppSec agents — enforces output quality, intellectual honesty, and continuous improvement through anti-rationalization guards, self-critique loops, retry protocols, non-negotiable behaviors, self-reflection quality gates (1-10 scoring, ≥8 threshold), and a self-learning system with lesson/memory governance for security analysis agents. | `references/anti-rationalization-guard.md`
`references/clarification-protocol.md`
`references/non-negotiable-behaviors.md`
`references/retry-protocol.md`
`references/self-critique-loop.md`
`references/self-learning-system.md`
`references/self-reflection-quality-gate.md` | | [automate-this](../skills/automate-this/SKILL.md)
`gh skills install github/awesome-copilot automate-this` | Analyze a screen recording of a manual process and produce targeted, working automation scripts. Extracts frames and audio narration from video files, reconstructs the step-by-step workflow, and proposes automation at multiple complexity levels using tools already installed on the user machine. | None | | [autoresearch](../skills/autoresearch/SKILL.md)
`gh skills install github/awesome-copilot autoresearch` | Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric. | None |
diff --git a/skills/async-profiler/SKILL.md b/skills/async-profiler/SKILL.md
index e20061ac6..72df64827 100644
--- a/skills/async-profiler/SKILL.md
+++ b/skills/async-profiler/SKILL.md
@@ -117,6 +117,8 @@ This skill includes four ready-to-run scripts in `scripts/`:
 
 Always offer to run these scripts on the user's behalf when relevant.
 
+`scripts/_asprof_lib.sh` is an internal helper sourced by both profiling wrappers; it keeps async-profiler binary discovery and versioned-install lookup consistent between `run_profile.sh` and `collect.sh`.
+ ## How to use this skill This skill keeps detailed guidance in `references/` so the root `SKILL.md` diff --git a/skills/async-profiler/scripts/_asprof_lib.sh b/skills/async-profiler/scripts/_asprof_lib.sh new file mode 100644 index 000000000..946aa254d --- /dev/null +++ b/skills/async-profiler/scripts/_asprof_lib.sh @@ -0,0 +1,86 @@ +#!/usr/bin/env bash + +ASPROF_LIB_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +asprof_stat_mtime() { + if [[ "$(uname)" == "Darwin" ]]; then + stat -f '%m' "$1" 2>/dev/null || echo 0 + else + stat -c '%Y' "$1" 2>/dev/null || echo 0 + fi +} + +newest_by_mtime() { + local newest="" newest_mtime=0 candidate mtime + for candidate in "$@"; do + [[ -n "$candidate" ]] || continue + mtime="$(asprof_stat_mtime "$candidate")" + if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then + newest="$candidate" + newest_mtime="$mtime" + fi + done + printf '%s\n' "$newest" +} + +default_installed_asprof() { + local install_script candidate newest_versioned="" + local -a versioned_candidates=() + install_script="${ASPROF_LIB_DIR}/install.sh" + if [[ -f "$install_script" ]]; then + for candidate in \ + "$(bash "$install_script" --path-only 2>/dev/null || true)" \ + "$(bash "$install_script" /opt --path-only 2>/dev/null || true)" + do + if [[ -x "$candidate" ]]; then + printf '%s\n' "$candidate" + return 0 + fi + done + fi + + shopt -s nullglob + versioned_candidates=("$HOME"/async-profiler-*/bin/asprof /opt/async-profiler-*/bin/asprof) + shopt -u nullglob + if [[ ${#versioned_candidates[@]} -gt 0 ]]; then + newest_versioned="$(newest_by_mtime "${versioned_candidates[@]}")" + if [[ -x "$newest_versioned" ]]; then + printf '%s\n' "$newest_versioned" + return 0 + fi + fi + + return 0 +} + +locate_asprof_binary() { + local asprof_arg="${1:-}" + local asprof="" candidate installed_asprof="" + if [[ -n "$asprof_arg" ]]; then + asprof="$asprof_arg" + if [[ ! -f "$asprof" || ! 
-x "$asprof" ]]; then + echo "❌ --asprof must point to an executable asprof binary: $asprof" >&2 + return 1 + fi + elif command -v asprof &>/dev/null; then + asprof="$(command -v asprof)" + else + installed_asprof="$(default_installed_asprof)" + for candidate in \ + "$installed_asprof" \ + "$HOME/async-profiler/bin/asprof" \ + "/opt/async-profiler/bin/asprof" \ + "/usr/local/bin/asprof" + do + if [[ -x "$candidate" ]]; then + asprof="$candidate" + break + fi + done + fi + if [[ -z "$asprof" ]]; then + echo "❌ asprof not found. Install with: bash scripts/install.sh" >&2 + return 1 + fi + printf '%s\n' "$asprof" +} diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index e6eed3804..bc5bf644d 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -40,6 +40,9 @@ set -euo pipefail +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "$SCRIPT_DIR/_asprof_lib.sh" + # ── Parse subcommand ────────────────────────────────────────────────────────── if [[ $# -eq 0 ]]; then sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' @@ -89,66 +92,9 @@ if [[ -z "$TARGET" && "$SUBCMD" != "help" ]]; then fi # ── Helpers ─────────────────────────────────────────────────────────────────── -default_installed_asprof() { - local script_dir install_script candidate newest_versioned="" - local -a versioned_candidates=() - script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" - install_script="${script_dir}/install.sh" - if [[ -f "$install_script" ]]; then - for candidate in \ - "$(bash "$install_script" --path-only 2>/dev/null || true)" \ - "$(bash "$install_script" /opt --path-only 2>/dev/null || true)" - do - if [[ -x "$candidate" ]]; then - echo "$candidate" - return 0 - fi - done - fi - - shopt -s nullglob - versioned_candidates=("$HOME"/async-profiler-*/bin/asprof /opt/async-profiler-*/bin/asprof) - shopt -u nullglob - if [[ ${#versioned_candidates[@]} -gt 0 ]]; then - 
newest_versioned="$(newest_by_mtime "${versioned_candidates[@]}")" - if [[ -x "$newest_versioned" ]]; then - echo "$newest_versioned" - return 0 - fi - fi - - return 0 -} - locate_asprof() { local asprof="" - if [[ -n "$ASPROF_ARG" ]]; then - asprof="$ASPROF_ARG" - if [[ ! -f "$asprof" || ! -x "$asprof" ]]; then - echo "❌ --asprof must point to an executable asprof binary: $asprof" >&2 - exit 1 - fi - elif command -v asprof &>/dev/null; then - asprof="$(command -v asprof)" - else - local installed_asprof="" - installed_asprof="$(default_installed_asprof)" - for candidate in \ - "$installed_asprof" \ - "$HOME/async-profiler/bin/asprof" \ - "/opt/async-profiler/bin/asprof" \ - "/usr/local/bin/asprof" - do - if [[ -x "$candidate" ]]; then - asprof="$candidate" - break - fi - done - fi - if [[ -z "$asprof" ]]; then - echo "❌ asprof not found. Install with: bash scripts/install.sh" >&2 - exit 1 - fi + asprof="$(locate_asprof_binary "$ASPROF_ARG")" || exit 1 echo "$asprof" } @@ -163,24 +109,6 @@ locate_jfrconv() { fi } -newest_by_mtime() { - local newest="" - local newest_mtime=0 - local candidate mtime - for candidate in "$@"; do - if [[ "$(uname)" == "Darwin" ]]; then - mtime="$(stat -f '%m' "$candidate" 2>/dev/null || echo 0)" - else - mtime="$(stat -c '%Y' "$candidate" 2>/dev/null || echo 0)" - fi - if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then - newest="$candidate" - newest_mtime="$mtime" - fi - done - echo "$newest" -} - stat_uid() { if [[ "$(uname)" == "Darwin" ]]; then stat -f '%u' "$1" @@ -318,12 +246,11 @@ split_jfr() { open "$cpu_html" "$alloc_html" "$wall_html" "$lock_html" fi - local script_dir; script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "" echo "💡 Next step: analyze results." 
echo " For collapsed stack analysis (CPU):" echo " jfrconv --cpu \"$jfr_path\" \"${base}-cpu.collapsed\"" - echo " python3 \"${script_dir}/analyze_collapsed.py\" \"${base}-cpu.collapsed\"" + echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"${base}-cpu.collapsed\"" } # ── start ───────────────────────────────────────────────────────────────────── diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index fb159c093..c3b7600ca 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -7,8 +7,8 @@ # Options: # -e, --event cpu|alloc|wall|lock Single event (default: cpu) # -d, --duration N Seconds to profile (default: 30) -# -f, --format html|svg|jfr|collapsed|txt Output format for single-event (default: html) -# -o, --output FILE Output path (default: auto-named) +# -F, --format html|svg|jfr|collapsed|txt Output format for single-event (default: html) +# -o, --output FILE Output path (default: auto-named; appends format extension when needed) # -t, --threads Profile threads separately # --all Capture all events to a JFR file # --comprehensive Capture all events AND split into per-event @@ -21,11 +21,14 @@ # bash scripts/run_profile.sh 12345 # 30s CPU flamegraph # bash scripts/run_profile.sh --comprehensive 12345 # all events, split into flamegraphs # bash scripts/run_profile.sh -e alloc -d 60 MyApp # 60s allocation flamegraph -# bash scripts/run_profile.sh -e wall -f jfr 12345 # wall-clock JFR recording +# bash scripts/run_profile.sh -e wall -F jfr 12345 # wall-clock JFR recording # bash scripts/run_profile.sh --all -d 120 12345 # all events, single JFR file set -euo pipefail +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "$SCRIPT_DIR/_asprof_lib.sh" + # ── Defaults ───────────────────────────────────────────────────────────────── EVENT="cpu" DURATION=30 @@ -47,56 +50,10 @@ detect_format_from_output() { esac } -stat_mtime() { - if [[ 
"$(uname)" == "Darwin" ]]; then - stat -f '%m' "$1" 2>/dev/null || echo 0 - else - stat -c '%Y' "$1" 2>/dev/null || echo 0 - fi -} - -newest_by_mtime() { - local newest="" newest_mtime=0 candidate mtime - for candidate in "$@"; do - [[ -n "$candidate" ]] || continue - mtime="$(stat_mtime "$candidate")" - if [[ -z "$newest" || "$mtime" -gt "$newest_mtime" ]]; then - newest="$candidate" - newest_mtime="$mtime" - fi - done - echo "$newest" -} - -default_installed_asprof() { - local script_dir install_script candidate newest_versioned="" - local -a versioned_candidates=() - script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" - install_script="${script_dir}/install.sh" - if [[ -f "$install_script" ]]; then - for candidate in \ - "$(bash "$install_script" --path-only 2>/dev/null || true)" \ - "$(bash "$install_script" /opt --path-only 2>/dev/null || true)" - do - if [[ -x "$candidate" ]]; then - echo "$candidate" - return 0 - fi - done - fi - - shopt -s nullglob - versioned_candidates=("$HOME"/async-profiler-*/bin/asprof /opt/async-profiler-*/bin/asprof) - shopt -u nullglob - if [[ ${#versioned_candidates[@]} -gt 0 ]]; then - newest_versioned="$(newest_by_mtime "${versioned_candidates[@]}")" - if [[ -x "$newest_versioned" ]]; then - echo "$newest_versioned" - return 0 - fi - fi - - return 0 +append_format_extension() { + local output_path="$1" + local format="$2" + printf '%s.%s\n' "${output_path%.}" "$format" } # ── Parse arguments ─────────────────────────────────────────────────────────── @@ -104,7 +61,7 @@ while [[ $# -gt 0 ]]; do case "$1" in -e|--event) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; EVENT="$2"; shift 2 ;; -d|--duration) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; DURATION="$2"; shift 2 ;; - -f|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; FORMAT="$2"; REQUESTED_FORMAT="$2"; FORMAT_SET=true; shift 2 ;; + -F|--format) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; 
exit 1; }; FORMAT="$2"; REQUESTED_FORMAT="$2"; FORMAT_SET=true; shift 2 ;; -o|--output) [[ $# -ge 2 ]] || { echo "❌ Missing value for $1" >&2; exit 1; }; OUTPUT="$2"; shift 2 ;; -t|--threads) THREADS=true; shift ;; --all) ALL_EVENTS=true; FORMAT="jfr"; shift ;; @@ -147,30 +104,10 @@ if $ALL_EVENTS; then fi # ── Locate asprof ───────────────────────────────────────────────────────────── -if [[ -z "$ASPROF" ]]; then - if command -v asprof &>/dev/null; then - ASPROF="$(command -v asprof)" - else - INSTALLED_ASPROF="$(default_installed_asprof)" - for candidate in \ - "$INSTALLED_ASPROF" \ - "$HOME/async-profiler/bin/asprof" \ - "/opt/async-profiler/bin/asprof" \ - "/usr/local/bin/asprof" - do - if [[ -x "$candidate" ]]; then - ASPROF="$candidate" - break - fi - done - fi -fi - -if [[ -z "$ASPROF" ]]; then - echo "❌ asprof not found. Install with: bash scripts/install.sh" - echo " Or specify path: --asprof /path/to/asprof" +ASPROF="$(locate_asprof_binary "$ASPROF")" || { + echo " Or specify path: --asprof /path/to/asprof" >&2 exit 1 -fi +} if [[ ! -f "$ASPROF" || ! -x "$ASPROF" ]]; then echo "❌ --asprof must point to an executable asprof binary: $ASPROF" >&2 @@ -191,9 +128,8 @@ fi OUTPUT_FORMAT="$(detect_format_from_output "$OUTPUT")" if [[ -z "$OUTPUT_FORMAT" ]]; then - echo "❌ Unsupported output extension in '$OUTPUT'." 
>&2 - echo " Use one of: .html, .svg, .jfr, .collapsed, .txt" >&2 - exit 1 + OUTPUT="$(append_format_extension "$OUTPUT" "$FORMAT")" + OUTPUT_FORMAT="$FORMAT" fi OUTPUT_DIR="$(dirname "$OUTPUT")" @@ -335,7 +271,6 @@ if $COMPREHENSIVE; then echo " to focus: $CPU_HTML, $ALLOC_HTML, $WALL_HTML, $LOCK_HTML'" echo "" echo " Or for collapsed stack analysis:" - SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo " jfrconv --cpu \"$OUTPUT\" \"${BASE}-cpu.collapsed\"" echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"${BASE}-cpu.collapsed\"" @@ -369,7 +304,6 @@ else echo " 'I have a JFR recording at $OUTPUT — help me interpret it.'" ;; collapsed) - SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" echo "Analyze with:" echo " python3 \"${SCRIPT_DIR}/analyze_collapsed.py\" \"$OUTPUT\"" echo "" From 237df2a3147fa7f74bf4605bdd5e0661b64a0741 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 22:24:25 +0200 Subject: [PATCH 25/30] docs(async-profiler): fix attach troubleshooting example Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/references/setup.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/skills/async-profiler/references/setup.md b/skills/async-profiler/references/setup.md index 69e0da273..72e0f64f3 100644 --- a/skills/async-profiler/references/setup.md +++ b/skills/async-profiler/references/setup.md @@ -187,7 +187,8 @@ section below. 
## Using async-profiler as a Java agent
 
 If you can't attach dynamically (e.g., the JVM was started with
-`-XX:-UseDynamicCodeDeoptimization`), use the Java agent mode:
+`-XX:+DisableAttachMechanism`, or the container blocks ptrace via seccomp), use
+the Java agent mode:
 
 ```bash
 java -agentpath:/path/to/libasyncProfiler.so=start,event=cpu,file=profile.html \

From bb2e8caeb574f330ba8a3d54ebc7a22ebbc54801 Mon Sep 17 00:00:00 2001
From: Vetle Leinonen-Roeim
Date: Sat, 2 May 2026 22:46:30 +0200
Subject: [PATCH 26/30] fix(async-profiler): validate wrapper inputs earlier

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 .../scripts/analyze_collapsed.py              |  2 +-
 skills/async-profiler/scripts/install.sh      |  7 +++++
 skills/async-profiler/scripts/run_profile.sh  | 29 +++++++++++++++++++
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/skills/async-profiler/scripts/analyze_collapsed.py b/skills/async-profiler/scripts/analyze_collapsed.py
index 17f024331..61d66538e 100644
--- a/skills/async-profiler/scripts/analyze_collapsed.py
+++ b/skills/async-profiler/scripts/analyze_collapsed.py
@@ -122,7 +122,7 @@ def top_packages(stacks, n=20, grep_re=None, exclude_re=None):
 
 def print_table(rows, total, header_left, header_right="Samples", csv_mode=False):
     if csv_mode:
-        writer = csv.writer(sys.stdout)
+        writer = csv.writer(sys.stdout, lineterminator="\n")
         writer.writerow([header_left, header_right, "Pct"])
         for name, count in rows:
             pct = 100.0 * count / total if total else 0
diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh
index 9341cf61e..55e8621be 100644
--- a/skills/async-profiler/scripts/install.sh
+++ b/skills/async-profiler/scripts/install.sh
@@ -166,6 +166,13 @@ else
   tar xf "$ARCHIVE"
 fi
 
+if [[ ! -d "$EXTRACTED_DIR" ]]; then
+  echo "❌ Extracted archive did not contain the expected directory."
>&2 + echo " Archive : $ARCHIVE" >&2 + echo " Expected directory: $EXTRACTED_DIR/" >&2 + exit 1 +fi + # Move into place mkdir -p "$INSTALL_PARENT" mv "$EXTRACTED_DIR" "$INSTALL_DIR" diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index c3b7600ca..f3ad3de5b 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -56,6 +56,30 @@ append_format_extension() { printf '%s.%s\n' "${output_path%.}" "$format" } +validate_event() { + case "$1" in + cpu|alloc|wall|lock) ;; + *) + echo "❌ Unsupported --event: $1" >&2 + echo " Allowed values: cpu, alloc, wall, lock" >&2 + echo " Run '$0 --help' for usage." >&2 + exit 1 + ;; + esac +} + +validate_format() { + case "$1" in + html|svg|jfr|collapsed|txt) ;; + *) + echo "❌ Unsupported --format: $1" >&2 + echo " Allowed values: html, svg, jfr, collapsed, txt" >&2 + echo " Run '$0 --help' for usage." >&2 + exit 1 + ;; + esac +} + # ── Parse arguments ─────────────────────────────────────────────────────────── while [[ $# -gt 0 ]]; do case "$1" in @@ -103,6 +127,11 @@ if $ALL_EVENTS; then FORMAT="jfr" fi +if ! 
$ALL_EVENTS; then + validate_event "$EVENT" +fi +validate_format "$FORMAT" + # ── Locate asprof ───────────────────────────────────────────────────────────── ASPROF="$(locate_asprof_binary "$ASPROF")" || { echo " Or specify path: --asprof /path/to/asprof" >&2 From f761acc90f00d7ea2d8ec686abb772147aac0c2f Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sat, 2 May 2026 23:28:08 +0200 Subject: [PATCH 27/30] fix(async-profiler): harden collect session state Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 23 +++++++++++++++++++++++ skills/async-profiler/scripts/install.sh | 1 + 2 files changed, 24 insertions(+) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index bc5bf644d..81cfe0c80 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -325,6 +325,29 @@ cmd_stop() { if [[ -n "$ASPROF_ARG" ]]; then asprof="$(locate_asprof)" fi + if [[ -z "$jfr_path" ]]; then + echo "❌ Session state is missing the JFR output path: $sess" >&2 + echo " Re-run: bash scripts/collect.sh start $TARGET" >&2 + exit 1 + fi + local jfr_dir; jfr_dir="$(dirname "$jfr_path")" + if [[ ! -d "$jfr_dir" ]]; then + echo "❌ Session state points to a missing JFR directory: $jfr_dir" >&2 + echo " Session state: $sess" >&2 + echo " Re-run: bash scripts/collect.sh start $TARGET" >&2 + exit 1 + fi + if [[ -z "$asprof" || ! -x "$asprof" ]]; then + echo "❌ Session state contains an invalid asprof path: ${asprof:-}" >&2 + echo " Session state: $sess" >&2 + echo " Re-run: bash scripts/collect.sh start $TARGET" >&2 + exit 1 + fi + if [[ -z "$sentinel" ]]; then + echo "❌ Session state is missing the sentinel path: $sess" >&2 + echo " Re-run: bash scripts/collect.sh start $TARGET" >&2 + exit 1 + fi echo "⏹ Stopping profiler on target: $TARGET" # Note: on macOS, -f is silently ignored by asprof stop — handled below. 
diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh index 55e8621be..fd3a38a61 100644 --- a/skills/async-profiler/scripts/install.sh +++ b/skills/async-profiler/scripts/install.sh @@ -83,6 +83,7 @@ esac # macOS ships as a single universal binary (covers both x64 and arm64) if [[ "$PLATFORM" == "macos" ]]; then + ARCH_LABEL="universal" ARCHIVE="async-profiler-${VERSION}-macos.zip" EXTRACTED_DIR="async-profiler-${VERSION}-macos" EXPECTED_SHA256="$MACOS_SHA256" From 8425417f925c7b0d29f73a2e86d5e92a0bf1e6ec Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sun, 3 May 2026 10:15:59 +0200 Subject: [PATCH 28/30] fix(async-profiler): tighten stop fallback and command echo Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 10 +++++----- skills/async-profiler/scripts/run_profile.sh | 14 +++++++++++++- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 81cfe0c80..2825dc971 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -355,12 +355,12 @@ cmd_stop() { # Session file is removed only after the JFR is confirmed written (see end of block). # ── macOS JFR path workaround ──────────────────────────────────────────── - # On macOS, asprof stop ignores -f and writes the JFR to: - # /var/folders//T/_/.jfr - # Use the sentinel (created at 'start') to find the file via find -newer. - if [[ "$(uname)" == "Darwin" ]] && [[ -n "$sentinel" ]] && [[ -f "$sentinel" ]]; then + # On macOS, some async-profiler versions ignore -f on stop and write the + # JFR under /var/folders instead. Only fall back to that search if the + # requested output path is still missing or empty after stop returns. + if [[ ! 
-s "$jfr_path" ]] && [[ "$(uname)" == "Darwin" ]] && [[ -n "$sentinel" ]] && [[ -f "$sentinel" ]]; then echo "" - echo "⚠️ macOS: -f is ignored by asprof stop — locating JFR in /var/folders..." + echo "⚠️ macOS: expected JFR output is missing — locating JFR in /var/folders..." local found_jfr="" local search_maxdepth=2 local search_hint="find /var/folders/*/*/T -maxdepth 2 -name '*.jfr' -newer '$sentinel' 2>/dev/null" diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index f3ad3de5b..226b66dfb 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -56,6 +56,18 @@ append_format_extension() { printf '%s.%s\n' "${output_path%.}" "$format" } +format_shell_command() { + local formatted="" quoted_arg raw_arg + for raw_arg in "$@"; do + printf -v quoted_arg '%q' "$raw_arg" + if [[ -n "$formatted" ]]; then + formatted+=" " + fi + formatted+="$quoted_arg" + done + printf '%s\n' "$formatted" +} + validate_event() { case "$1" in cpu|alloc|wall|lock) ;; @@ -202,7 +214,7 @@ echo " Duration: ${DURATION}s" echo " Output : $OUTPUT" $THREADS && echo " Threads : separate" echo "" -echo "▶ ${CMD[*]}" +echo "▶ $(format_shell_command "${CMD[@]}")" echo "Press Ctrl+C to stop early (partial results will be saved)." 
echo "" From 682a3d8724fd6e37c8c11f5dd523149ac768fe73 Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sun, 3 May 2026 11:31:53 +0200 Subject: [PATCH 29/30] fix(async-profiler): validate collect subcommands earlier Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/collect.sh | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/skills/async-profiler/scripts/collect.sh b/skills/async-profiler/scripts/collect.sh index 2825dc971..7bf4a4010 100755 --- a/skills/async-profiler/scripts/collect.sh +++ b/skills/async-profiler/scripts/collect.sh @@ -56,6 +56,14 @@ case "$SUBCMD" in sed -n '2,/^[^#]/p' "$0" | grep '^#' | sed 's/^# \?//' exit 0 ;; + start|stop|timed) + ;; + *) + echo "❌ Unknown subcommand: '$SUBCMD'" >&2 + echo " Supported subcommands: start, stop, timed" >&2 + echo " Run '$0 --help' for usage." >&2 + exit 1 + ;; esac # ── Parse options ───────────────────────────────────────────────────────────── @@ -85,7 +93,7 @@ while [[ $# -gt 0 ]]; do esac done -if [[ -z "$TARGET" && "$SUBCMD" != "help" ]]; then +if [[ -z "$TARGET" ]]; then echo "❌ No target specified. Provide a PID or app name." 
>&2 echo " List Java processes: jps -l" >&2 exit 1 From 4616081f2923f6ffbe09a1c1c36aceed55486f2f Mon Sep 17 00:00:00 2001 From: Vetle Leinonen-Roeim Date: Sun, 3 May 2026 12:46:36 +0200 Subject: [PATCH 30/30] fix(async-profiler): improve post-run guidance Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- skills/async-profiler/scripts/install.sh | 2 +- skills/async-profiler/scripts/run_profile.sh | 19 ++++++++++++++----- 2 files changed, 15 insertions(+), 6 deletions(-) diff --git a/skills/async-profiler/scripts/install.sh b/skills/async-profiler/scripts/install.sh index fd3a38a61..5aa4f69fc 100644 --- a/skills/async-profiler/scripts/install.sh +++ b/skills/async-profiler/scripts/install.sh @@ -212,4 +212,4 @@ if [[ "$PLATFORM" == "macos" ]]; then fi echo "Quick test (requires a running JVM — find PID with: jps -l):" -echo " asprof -d 5 " +printf ' %q -d 5 \n' "$ASPROF" diff --git a/skills/async-profiler/scripts/run_profile.sh b/skills/async-profiler/scripts/run_profile.sh index 226b66dfb..ba5f84e3a 100644 --- a/skills/async-profiler/scripts/run_profile.sh +++ b/skills/async-profiler/scripts/run_profile.sh @@ -68,6 +68,14 @@ format_shell_command() { printf '%s\n' "$formatted" } +try_open_outputs() { + local label="$1" + shift + if ! open "$@"; then + echo "⚠️ Could not open ${label} automatically." + fi +} + validate_event() { case "$1" in cpu|alloc|wall|lock) ;; @@ -294,12 +302,12 @@ if $COMPREHENSIVE; then echo " Combined JFR : $OUTPUT (open in IntelliJ or JDK Mission Control)" echo "" - # Open all flamegraphs at once if on macOS + echo "Open flamegraphs with:" if [[ "$(uname)" == "Darwin" ]]; then - echo "Opening all flamegraphs in browser..." - open "$CPU_HTML" "$ALLOC_HTML" "$WALL_HTML" "$LOCK_HTML" + echo " $(format_shell_command open "$CPU_HTML" "$ALLOC_HTML" "$WALL_HTML" "$LOCK_HTML")" + echo "Trying to open all flamegraphs in browser..." 
+ try_open_outputs "flamegraphs" "$CPU_HTML" "$ALLOC_HTML" "$WALL_HTML" "$LOCK_HTML" else - echo "Open flamegraphs with:" echo " xdg-open \"$CPU_HTML\"" echo " xdg-open \"$ALLOC_HTML\"" echo " xdg-open \"$WALL_HTML\"" @@ -321,7 +329,8 @@ else html|svg) echo "Open in browser:" if [[ "$(uname)" == "Darwin" ]]; then - open "$OUTPUT" + echo " $(format_shell_command open "$OUTPUT")" + try_open_outputs "profile output" "$OUTPUT" else echo " xdg-open \"$OUTPUT\"" fi
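The quoting fix in PATCH 28 is easiest to see in isolation. Below is a standalone sketch of the `format_shell_command` helper (lifted from that patch; the sample file names are invented for illustration), showing why the `%q` join is safer than the plain `"${CMD[*]}"` echo it replaces:

```shell
#!/usr/bin/env bash
# Standalone sketch of format_shell_command from PATCH 28: %q-quotes every
# argument so the echoed command line can be copy-pasted back into a shell
# even when paths contain spaces or shell metacharacters like '$'.
format_shell_command() {
  local formatted="" quoted_arg raw_arg
  for raw_arg in "$@"; do
    printf -v quoted_arg '%q' "$raw_arg"   # bash-quote this one argument
    if [[ -n "$formatted" ]]; then
      formatted+=" "
    fi
    formatted+="$quoted_arg"
  done
  printf '%s\n' "$formatted"
}

# Hypothetical output names with a space and a dollar sign — the naive
# "${CMD[*]}" join would print a line that breaks when re-run.
format_shell_command open "My Profile.html" 'cpu$graph.html'
# → open My\ Profile.html cpu\$graph.html
```

Re-pasting the printed line into a shell re-runs the exact command, which is why `run_profile.sh` now echoes the quoted form before executing `"${CMD[@]}"`.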