Automated code review of src/subgraph/ by Gemini for potential bugs#9726

Merged
copybara-service[bot] merged 1 commit into master from test_885703148
Mar 26, 2026
Conversation


copybara-service Bot (Contributor) commented Mar 18, 2026

Automated code review of src/subgraph/ by Gemini for potential bugs

The fixes fall into a few basic categories:

  • Lack of validation of input tensor rank before using the shape. This is tricky because the tensor shape and rank are not required to be defined yet, but we sometimes use these shapes anyway before a reshape happens.
  • We mix up inputs `input1` and `input2` in many places, in ways that expose invalid memory accesses.
  • We mix up `filter_id` and `bias_id` often. Sometimes only for logging/error reporting, but sometimes not...
  • Sometimes we check that something is valid, but only after using it.
  • Ignoring the return value of validation functions.
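
The first category can be sketched in miniature. This is a hypothetical stand-in, not the actual XNNPACK code: a helper that validates the tensor's rank before indexing into its shape, instead of reading `dim[]` unconditionally.

```c
#include <assert.h>
#include <stddef.h>

#define XNN_MAX_TENSOR_DIMS 6

// Hypothetical stand-in for an XNNPACK tensor value.
struct value {
  size_t num_dims;
  size_t dim[XNN_MAX_TENSOR_DIMS];
};

// Returns the size of axis `axis`, checking the rank first.
// Returns 0 for an axis beyond the tensor's rank instead of
// reading uninitialized (or out-of-bounds) shape entries.
static size_t checked_axis_size(const struct value* v, size_t axis) {
  if (axis >= v->num_dims) {
    return 0;
  }
  return v->dim[axis];
}
```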

It makes one questionable change: when a loop has an unsigned extent, it uses an unsigned index. There are reasonable arguments both ways on this one.
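
A minimal sketch of that change (hypothetical function, not from the PR): matching the index type to a `size_t` extent avoids an implicit signed/unsigned conversion and the associated `-Wsign-compare` warning.

```c
#include <assert.h>
#include <stddef.h>

// Unsigned index for an unsigned extent: no implicit conversion in `i < n`.
static int sum_first_n(const int* data, size_t n) {
  int total = 0;
  for (size_t i = 0; i < n; i++) {
    total += data[i];
  }
  return total;
}
```

The counter-argument is that unsigned indices make reverse iteration easy to get wrong, since `i >= 0` is always true for a `size_t`.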

copybara-service Bot force-pushed the test_885703148 branch 4 times, most recently from 81c9b68 to 34bb300 on March 24, 2026 19:11
mohammadmseet-hue added a commit to mohammadmseet-hue/XNNPACK that referenced this pull request Mar 26, 2026
Per review feedback: in the optimizer path
(optimize_common_subgraphs_static_reshapes), when combined dims exceed
XNN_MAX_TENSOR_DIMS, skip the optimization and return xnn_status_success
rather than xnn_status_invalid_parameter. The runtime reshape functions
in copy.c already have the authoritative bounds checks — the optimizer
should gracefully bail out, consistent with the pattern in google#9726.
copybara-service Bot pushed a commit that referenced this pull request Mar 26, 2026
--
2101890 by mohammadmseet-hue <mohammadmseet@gmail.com>:

Fix missing bounds checks and error handling in tensor APIs

1. Add num_dims > XNN_MAX_TENSOR_DIMS check to xnn_reshape_external_value()
   in runtime.c. This was the only public API entry point that did not
   validate num_dims, unlike all xnn_define_*() functions in tensor.c and
   all operator reshape functions.

2. Add missing return statement after error log in
   xnn_define_blockwise_quantized_tensor_value_v2() when block_size == 0.
   Without this, execution falls through to a division by block_size,
   causing a division-by-zero crash (SIGFPE).

3. Add bounds checks in subgraph optimization passes that merge
   static_reshape and static_expand_dims nodes. The combined num_dims
   could exceed XNN_MAX_TENSOR_DIMS, causing out-of-bounds writes to
   stack-allocated xnn_shape.dim[6] arrays.
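
The three fixes above reduce to two simple shapes, sketched here with hypothetical stand-ins (the real code lives in runtime.c, tensor.c, and the subgraph passes): a rank bound checked before writing into a fixed-size `dim[]` array, and an early return after the error log so a zero `block_size` never reaches the division.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define XNN_MAX_TENSOR_DIMS 6  // matches the xnn_shape.dim[6] arrays above

// Sketch of fixes 1 and 3: validate the rank before any dims are copied
// into a fixed-size array. Returns 0 on success, nonzero on error
// (a stand-in for xnn_status).
static int checked_set_shape(size_t* dim /* [XNN_MAX_TENSOR_DIMS] */,
                             const size_t* dims, size_t num_dims) {
  if (num_dims > XNN_MAX_TENSOR_DIMS) {
    return 1;  // previously unchecked: the copy would overflow dim[6]
  }
  for (size_t i = 0; i < num_dims; i++) {
    dim[i] = dims[i];
  }
  return 0;
}

// Sketch of fix 2: without the early return, a zero block_size falls
// through to the division and the process dies with SIGFPE.
static int num_blocks(size_t channels, size_t block_size) {
  if (block_size == 0) {
    fprintf(stderr, "error: block_size must be non-zero\n");
    return -1;  // the missing return added by the fix
  }
  return (int)(channels / block_size);
}
```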

--
fab78ca by mohammadmseet-hue <mohammadmseet@gmail.com>:

Fix subgraph optimizer bounds check to return success instead of error

Per review feedback: in the optimizer path
(optimize_common_subgraphs_static_reshapes), when combined dims exceed
XNN_MAX_TENSOR_DIMS, skip the optimization and return xnn_status_success
rather than xnn_status_invalid_parameter. The runtime reshape functions
in copy.c already have the authoritative bounds checks — the optimizer
should gracefully bail out, consistent with the pattern in #9726.
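
The pattern that feedback asks for, in a minimal sketch (hypothetical names; the real pass is optimize_common_subgraphs_static_reshapes): when the merged rank would not fit, leave the graph unchanged and report success, because skipping an optimization is not an error and the runtime reshape in copy.c still bounds-checks the real shapes.

```c
#include <assert.h>
#include <stddef.h>

#define XNN_MAX_TENSOR_DIMS 6

enum status { status_success = 0 };  // hypothetical stand-in for xnn_status

// If the combined rank would not fit, bail out gracefully: set *merged
// to 0 and still return success, rather than an invalid-parameter error.
static enum status try_merge_reshapes(size_t combined_dims, int* merged) {
  if (combined_dims > XNN_MAX_TENSOR_DIMS) {
    *merged = 0;            // skip the optimization
    return status_success;  // not an error condition
  }
  *merged = 1;  // the actual node rewriting is elided in this sketch
  return status_success;
}
```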

FUTURE_COPYBARA_INTEGRATE_REVIEW=#9778 from mohammadmseet-hue:fix/missing-bounds-checks fab78ca
PiperOrigin-RevId: 889687406

PiperOrigin-RevId: 890058240
copybara-service Bot merged commit d45f452 into master Mar 26, 2026
copybara-service Bot deleted the test_885703148 branch March 26, 2026 21:47