Minor change to how docker cleans up resources #184
Conversation
Pull request overview
Adjusts the test node Docker teardown behavior to reduce cleanup failures during --init, addressing containers restarting during docker compose down.
Changes:
- Update the `docker compose down` invocation to remove orphans and use a shorter, explicit shutdown timeout.
- Force-remove leftover containers and volumes during init cleanup.
- Change the `restart` policy for `blockscout` and `postgres` from `always` to `unless-stopped` to prevent restarts during teardown.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `test-node.bash` | Makes init cleanup more aggressive (remove-orphans/timeout + force removal) to avoid teardown races. |
| `docker-compose.yaml` | Adjusts restart policies for key services to reduce restart-induced teardown failures. |
```diff
 if $force_init; then
   echo == Removing old data..
-  docker compose down
+  docker compose down --remove-orphans --timeout 10 2>/dev/null || true
```
This line suppresses all stderr from docker compose down and forces a success exit (|| true). With set -e at the top of the script, this makes real failures (e.g., compose file parse errors, Docker daemon unavailable, wrong context/project) silent and harder to debug, while the script continues into container/volume deletion steps.
Consider keeping the non-fatal behavior but surfacing the failure (e.g., capture the exit code and print a warning to stderr, and avoid redirecting stderr to /dev/null unless you’re specifically filtering a known-noisy error).
```diff
-docker compose down --remove-orphans --timeout 10 2>/dev/null || true
+compose_down_status=0
+docker compose down --remove-orphans --timeout 10 || compose_down_status=$?
+if [ "$compose_down_status" -ne 0 ]; then
+  echo "Warning: 'docker compose down --remove-orphans --timeout 10' failed with exit code $compose_down_status; continuing with manual cleanup." >&2
+fi
```
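The pattern in this suggestion can be exercised without Docker by swapping in a failing stand-in command. In the sketch below, `simulate_down` is a hypothetical helper standing in for `docker compose down`; everything else follows the suggested structure.

```shell
set -euo pipefail

# Stand-in for `docker compose down` so the control flow runs without Docker;
# `simulate_down` is hypothetical, the real script calls the compose command.
simulate_down() { return 1; }   # pretend teardown failed

compose_down_status=0
simulate_down || compose_down_status=$?   # non-fatal even under `set -e`
if [ "$compose_down_status" -ne 0 ]; then
  # Surface the failure on stderr instead of silencing it with 2>/dev/null.
  echo "Warning: teardown exited with code $compose_down_status; continuing with manual cleanup." >&2
fi
echo "manual cleanup continues"
```

The `cmd || var=$?` form is what keeps a `set -e` script alive: the failing command sits on the left of `||`, so the shell does not treat its non-zero status as fatal, while the exit code is still captured for the warning.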
I had claude do some analysis on our docker usage and here is what it found:
Now I have a clear picture. Here's my analysis and fix:
Analysis
The error occurs during --init cleanup at lines 462-467 of test-node.bash. The root cause is a race condition: the `restart: always` policy on blockscout and postgres makes Docker restart them immediately after `docker compose down` stops them, so they are running again by the time the script tries to remove the containers and the network.
The fix has two parts:
1. Use `docker compose down --remove-orphans --timeout 10` to give containers time to stop and to clean up orphans.
2. Change the leftover container cleanup to force-remove (`docker rm -f`), since any container still alive at this point is one that `down` couldn't stop (due to its restart policy) and needs to be killed.
⏺ Update(test-node.bash)
⎿ Added 3 lines, removed 3 lines
464 - docker compose down
464 + docker compose down --remove-orphans --timeout 10 2>/dev/null || true
⏺ Now change restart: always to restart: unless-stopped for blockscout and postgres. This prevents them from restarting during a docker compose down while still auto-restarting on crashes
during normal operation.
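As a sketch of the resulting policy (service and image names are taken from the diffs below; other fields are abbreviated), the relevant compose fragments would read:

```yaml
services:
  blockscout:
    image: blockscout-testnode
    restart: unless-stopped   # was `always`; stays down once stopped by `down`
  postgres:
    image: postgres:13.6
    restart: unless-stopped
```

With `unless-stopped`, Docker still restarts a service after a crash, but a container that was deliberately stopped (by `docker compose down` or `docker stop`) stays stopped, which removes the teardown race.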
⏺ Update(docker-compose.yaml)
⎿ Added 1 line, removed 1 line
6 image: blockscout-testnode
7 - restart: always
7 + restart: unless-stopped
⏺ Update(docker-compose.yaml)
⎿ Added 1 line, removed 1 line
30
31 postgres:
32 image: postgres:13.6
33 - restart: always
33 + restart: unless-stopped
⏺ Summary of changes
Root cause: restart: always on blockscout and postgres causes Docker to restart them immediately after docker compose down sends SIGTERM, so the containers are running again when the
script tries to remove them and the network.
Changes made:
- Changed restart: always to restart: unless-stopped for blockscout and postgres, which allows docker compose down / docker stop to stop them permanently without a restart
- Added --remove-orphans --timeout 10 to docker compose down — cleans up orphaned containers and gives services 10 seconds for graceful shutdown
- Added 2>/dev/null || true so a partial failure in down doesn't abort the script (the subsequent force-remove handles stragglers)
- Changed docker rm to docker rm -f — force-kills any leftover containers that survived down (e.g., from a previous interrupted run)
- Changed docker volume rm to docker volume rm -f — same defensive approach for volumes still referenced by dying containers
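Taken together, the hardened --init cleanup follows the shape below. The helpers are hypothetical stand-ins so the flow can run without Docker (the real script calls `docker compose down`, `docker rm -f`, and `docker volume rm -f`), and the container and volume names are illustrative only.

```shell
set -euo pipefail

# Hypothetical stand-ins for the real Docker commands.
compose_down() { return 1; }                       # simulate a partial `down` failure
force_remove_container() { echo "removed container $1"; }
force_remove_volume() { echo "removed volume $1"; }

status=0
compose_down || status=$?                          # never aborts the script
if [ "$status" -ne 0 ]; then
  echo "Warning: compose down exited with $status; falling back to force removal." >&2
fi

# Force-remove anything `down` could not stop (e.g. restart-policy races),
# then force-remove volumes still referenced by dying containers.
for c in blockscout postgres; do
  force_remove_container "$c"
done
force_remove_volume data
```

The key property is that the `compose down` step is best-effort while the force-removal steps are the guarantee: whatever survives the graceful path is killed and deleted afterwards.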