
Fix memory leaks in compress middleware#2915

Open
spaumx wants to merge 1 commit into labstack:v5 from spaumx:fix/compress-memory-leaks

Conversation


@spaumx spaumx commented Mar 8, 2026

This commit fixes several critical memory leaks in the gzip compression middleware that could lead to significant memory accumulation under specific usage patterns.

Changes:

  1. Fixed WebSocket/Hijack resource leak (compress.go:213-219)

    • Close gzip writer before hijacking connection
    • Prevents ~32KB leak per WebSocket connection
    • Critical for long-lived WebSocket connections
  2. Fixed Flush() buffer accumulation (compress.go:200-204)

    • Clear buffer after successful write during Flush()
    • Prevents unbounded memory growth in SSE scenarios
    • Important for Server-Sent Events and streaming responses
  3. Improved pool management (compress.go:138-149)

    • Check writer state before returning to pool
    • Prevent corrupted writers from being reused
    • Eliminates potential data corruption issues

Impact:

  • WebSocket connections: no longer leak gzip writers (~32KB each)
  • SSE/streaming: prevents linear buffer growth
  • Pool safety: eliminates race conditions from reused writers

Fixes potential memory leaks affecting:

  • WebSocket applications using compression middleware
  • Server-Sent Events endpoints
  • Long-lived streaming connections
  • High-concurrency scenarios

Contributor

aldas commented Mar 8, 2026

Please provide a proper description of the problem. It takes a little too much effort to understand the real-world situation, and you already have an LLM-created bullet list here, so it should not be a problem.

P.S. Tests or a PoC would also be helpful.


@epaes90 epaes90 left a comment


Argus Code Review

Score: 100/100

Reviewed 1 file with no blocking issues found. Highlights: changes are focused and well-scoped; no security, performance, or quality issues detected. 1 minor observation noted. Score: 100/100.

No inline findings to report.


@epaes90 epaes90 left a comment


Argus Code Review

Score: 100/100

Reviewed 1 file with no blocking issues found. Highlights: changes are focused and well-scoped; no security, performance, or quality issues detected. 2 minor observations noted. Score: 100/100.

No inline findings to report.

