fix(h2): TypeError: Cannot read properties of null (reading 'push') in RequestHandler.onData under high H2 concurrency #4880

@hxinhan

Description

Bug Description

Under high HTTP/2 concurrency, in-flight requests crash with:

TypeError: Cannot read properties of null (reading 'push')

Reproducible By

Dispatcher config

new Agent({
  keepAliveTimeout: 20_000,
  keepAliveMaxTimeout: 60_000,
  bodyTimeout: 10_000,
  headersTimeout: 10_000,
  connections: 500,
  pipelining: 100,
  allowH2: true,
  maxConcurrentStreams: 100,
}).compose(interceptors.responseError())

Expected Behavior

onData should not throw when a data event fires after a stream has been torn down. Chunks arriving after stream abort should be silently dropped, and all requests should either succeed or fail gracefully without crashing.

Logs & Screenshots

"Cannot read properties of null (reading 'push')"
First error stack:
TypeError: Cannot read properties of null (reading 'push')
    at RequestHandler.onData (undici/lib/api/api-request.js:148:21)
    at WrapHandler.onResponseData (undici/lib/handler/wrap-handler.js:74:31)
    at ResponseErrorHandler.onResponseData (undici/lib/handler/decorator-handler.js:47:42)
    at ResponseErrorHandler.onResponseData (undici/lib/interceptor/response-error.js:48:20)
    at UnwrapHandler.onData (undici/lib/handler/unwrap-handler.js:81:35)
    at Request.onData (undici/lib/core/request.js:271:29)
    at ClientHttp2Stream.<anonymous> (undici/lib/dispatcher/client-h2.js:706:17)

Environment

  • OS Linux
  • Node.js v20.19.0
  • undici v7.21.0
  • allowH2: true

Additional context

Two changes work together to close the race:

client-h2.js — move data listener inside response handler

Previously the data listener was registered at stream creation time, meaning data events could fire before onHeaders had run and RequestHandler.res was set. The fix moves it inside the response callback so it's only registered after res is guaranteed to exist:

// Before
stream.once('response', headers => {
  // ...
  request.onHeaders(...)
})

stream.on('data', (chunk) => {          // ← registered too early
  if (request.onData(chunk) === false) {
    stream.pause()
  }
})

// After
stream.once('response', headers => {
  // ...
  request.onHeaders(...)

  // Only register after onHeaders has run and res is set
  stream.on('data', (chunk) => {
    if (request.onData(chunk) === false) {
      stream.pause()
    }
  })
})

This eliminates the window where a data event can arrive before res is initialized.

api-request.js — null guard as a safety net

Even with the listener moved, the abort path (stream.removeAllListeners('data')) can still race with an already-queued data event. The null guard in onData and onComplete handles that residual case:

onData (chunk) {
  return this.res?.push(chunk)   // was: this.res.push(chunk)
}

onComplete (trailers) {
  util.parseHeaders(trailers, this.trailers)
  this.res?.push(null)           // was: this.res.push(null)
}

The client-h2.js change reduces the race window; the api-request.js null guards are the hard stop that prevents the crash if a stale event still slips through.
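A minimal sketch with a hypothetical Handler class (not undici's actual RequestHandler) shows how the optional-chaining guard degrades: once the abort path nulls out res, late chunks become silent no-ops instead of crashes.

```javascript
// Hypothetical handler mirroring the lifecycle described above: res starts
// null, onHeaders sets it, the abort/error path clears it, and the guarded
// onData tolerates chunks that arrive in either null state.
class Handler {
  constructor () { this.res = null }        // res starts null, as in undici
  onHeaders () { this.res = [] }            // stand-in for the response body stream
  onError () { this.res = null }            // abort path clears res
  onData (chunk) { return this.res?.push(chunk) } // the fixed guard
}

const h = new Handler()
h.onHeaders()
h.onData('chunk-1')    // pushed normally
h.onError()
h.onData('late chunk') // no-op: returns undefined instead of throwing
```

Note that `this.res?.push(chunk)` returns undefined when res is null, and the caller in client-h2.js pauses the stream only on a strict `=== false`, so the guard never triggers a spurious pause.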

'use strict'

// Regression test for: "Cannot read properties of null (reading 'push')"
//
// Stack trace:
//   TypeError: Cannot read properties of null (reading 'push')
//   at RequestHandler.onData (lib/api/api-request.js:148:21)
//   at ClientHttp2Stream.<anonymous> (lib/dispatcher/client-h2.js:706:17)
//
// Root cause:
//   The 'data' listener was registered unconditionally on the stream, outside
//   the 'response' handler. This meant 'data' events could fire before 'response',
//   i.e. before onHeaders() ran and set RequestHandler.res. With res still null
//   (its initial value), onData() -> this.res.push(chunk) crashed.
//
//   Fix: register the 'data' listener only inside the 'response' handler, after
//   onHeaders() has run and res is guaranteed to be set. A closure-local
//   dataHandlerActive flag additionally guards against already-queued 'data'
//   events that arrive after abort() tears down the stream.
//
// Reproduced by: TLS H2 server with slow responses (2.5s), 20 connections x 100
// concurrent streams = 2000 max concurrent, 10k total requests queued. Under this
// backpressure undici dispatches data events before response headers are processed.

const { tspl } = require('@matteo.collina/tspl')
const { test, after } = require('node:test')
const { Worker, isMainThread, parentPort } = require('node:worker_threads')
const { Agent, interceptors } = require('..')
const { RequestHandler } = require('../lib/api/api-request')

// ── Server (runs in worker thread) ──────────────────────────────────────────
if (!isMainThread) {
  const http2 = require('node:http2')
  const pem = require('@metcoder95/https-pem')

  pem.generate({ opts: { keySize: 2048 } }).then((cert) => {
    const body = JSON.stringify({ ok: true })
    const server = http2.createSecureServer({ key: cert.key, cert: cert.cert, allowHTTP1: false })

    server.on('stream', (stream, headers) => {
      const path = headers[':path'] ?? ''
      if (path.startsWith('/slow')) {
        const url = new URL(path, 'https://localhost')
        const delayMs = parseInt(url.searchParams.get('delayMs') ?? '2500')
        setTimeout(() => {
          stream.respond({ ':status': 200, 'content-type': 'application/json' })
          stream.end(body)
        }, delayMs)
      } else {
        stream.respond({ ':status': 200, 'content-type': 'application/json' })
        stream.end(body)
      }
    })

    server.listen(0, '127.0.0.1', () => {
      parentPort.postMessage({ port: server.address().port })
    })
  })

  return
}

// ── Client / tests (runs in main thread) ────────────────────────────────────

function startServer () {
  return new Promise((resolve) => {
    const worker = new Worker(__filename)
    worker.once('message', ({ port }) => resolve({ worker, port }))
  })
}

function makeDispatcher (connections, maxConcurrentStreams) {
  return new Agent({
    keepAliveTimeout: 20_000,
    keepAliveMaxTimeout: 60_000,
    bodyTimeout: 20_000,
    headersTimeout: 20_000,
    allowH2: true,
    connections,
    pipelining: maxConcurrentStreams,
    maxConcurrentStreams,
    connect: { rejectUnauthorized: false }
  }).compose(interceptors.responseError())
}

// Unit test: directly reproduce the race by calling onData after onError has
// already nulled out this.res. This is the exact sequence that happens under
// high concurrency when a queued 'data' event fires after stream teardown.
test('RequestHandler.onData must not crash when res is null after onError', (t) => {
  t = tspl(t, { plan: 1 })

  const handler = new RequestHandler(
    { method: 'GET', path: '/', opaque: null },
    (_err, _data) => {}
  )

  // Simulate onError having run: it sets this.res = null synchronously
  handler.res = null

  // This is the exact crash: a queued 'data' event fires after res is nulled
  t.doesNotThrow(
    () => handler.onData(Buffer.from('late chunk')),
    'onData must not throw "Cannot read properties of null (reading \'push\')" when res is null'
  )
})

// Integration test: TLS H2 server in a worker thread, slow responses (2.5s),
// 20 connections x 100 streams = 2000 max concurrent, 10k total requests
// queued, responseError interceptor active.
test('h2: no crash when data arrives after stream abort under high concurrency', async (t) => {
  t = tspl(t, { plan: 2 })

  const { worker, port } = await startServer()
  after(() => worker.terminate())

  const connections = 20
  const maxConcurrentStreams = 100
  const dispatcher = makeDispatcher(connections, maxConcurrentStreams)
  after(() => dispatcher.close())

  const origin = `https://127.0.0.1:${port}`
  const count = 10_000

  const requests = Array.from({ length: count }, () =>
    dispatcher.request({
      origin,
      path: '/slow?delayMs=2500',
      method: 'GET'
    }).then((res) => res.body.dump())
  )

  const results = await Promise.allSettled(requests)

  const errors = results.filter((r) => r.status === 'rejected')
  const successCount = results.length - errors.length

  if (errors.length > 0) {
    const groups = new Map()
    for (const e of errors) {
      const msg = e.reason?.message ?? String(e.reason)
      groups.set(msg, (groups.get(msg) ?? 0) + 1)
    }
    console.log('Error breakdown:', Object.fromEntries(groups))
    console.log('First error stack:\n', errors[0].reason?.stack)
  }

  // If the bug is present, some requests crash with:
  // "Cannot read properties of null (reading 'push')"
  t.ok(successCount + errors.length === count, `all ${count} requests settled`)
  t.ok(successCount === count, `all ${count} requests succeeded (got ${errors.length} errors)`)

  await t.completed
})
