
Pin tensor_rt digest for PyTorch sentiment Dataflow benchmarks #38374

Open
aIbrahiim wants to merge 1 commit into apache:master from aIbrahiim:fix-pytorch-sentiment-gpu-container

Conversation

Contributor

@aIbrahiim aIbrahiim commented May 5, 2026

Fixes: #30644

This PR stabilizes the Python Dataflow sentiment inference benchmark by explicitly using a GPU SDK container and by hardening the pipeline against the runtime/library mismatches I was seeing on workers.

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@aIbrahiim aIbrahiim force-pushed the fix-pytorch-sentiment-gpu-container branch from aee8cfb to 25ac4cc on May 7, 2026 at 13:46
@codecov

codecov Bot commented May 7, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 58.04%. Comparing base (e530809) to head (25ac4cc).
⚠️ Report is 64 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff              @@
##             master   #38374      +/-   ##
============================================
- Coverage     58.06%   58.04%   -0.03%     
  Complexity    13024    13024              
============================================
  Files          2509     2509              
  Lines        262066   261923     -143     
  Branches      10612    10612              
============================================
- Hits         152166   152029     -137     
+ Misses       104235   104229       -6     
  Partials       5665     5665              
Flag   | Coverage Δ
python | 79.82% <ø> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Pin tensor_rt digest for PyTorch sentiment Dataflow benchmarks
@aIbrahiim aIbrahiim force-pushed the fix-pytorch-sentiment-gpu-container branch from 25ac4cc to 67b0655 on May 7, 2026 at 16:18
@aIbrahiim aIbrahiim marked this pull request as ready for review May 7, 2026 16:18
@github-actions github-actions Bot removed the build label May 7, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the PyTorch sentiment analysis benchmark in Apache Beam to improve compatibility and reliability. By moving tokenization to a per-worker DoFn and introducing compatibility layers for transformer configurations, the changes mitigate issues related to environment drift and cross-version discrepancies between the launcher and worker nodes. Additionally, the pipeline cleanup process has been adjusted to ensure a more stable shutdown sequence.

Highlights

  • Worker-side Tokenization: Refactored tokenization into a TokenizeTextDoFn to initialize the tokenizer per worker, improving scalability and resource management.
  • Transformers Compatibility: Introduced DistilBertForSequenceClassificationCompat and a configuration compatibility helper to prevent runtime errors caused by version mismatches between the launcher and worker environments.
  • Pipeline Stability: Added a wait period after pipeline cancellation to ensure resources settle correctly during benchmark execution.
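The configuration-compatibility idea in the second highlight can be sketched as follows. This is a hypothetical stand-in, not the code from the PR: `compat_config_kwargs` and `OldModel` are illustrative names (the actual change wraps `DistilBertForSequenceClassification`), and the sketch only shows the general technique of dropping config fields the worker-side class does not accept.

```python
import inspect


def compat_config_kwargs(model_cls, config_dict):
    """Drop config keys the installed model class does not accept.

    A newer launcher environment may serialize config fields that an
    older worker-side class has never heard of; filtering against the
    constructor signature avoids a TypeError at model-load time.
    """
    params = inspect.signature(model_cls.__init__).parameters
    return {k: v for k, v in config_dict.items() if k in params}


class OldModel:
    """Stands in for an older worker-side model class."""
    def __init__(self, hidden_size=768, num_labels=2):
        self.hidden_size = hidden_size
        self.num_labels = num_labels


# A config produced by a newer library version carries an extra field
# that OldModel's constructor would reject if passed directly.
config = {"hidden_size": 768, "num_labels": 2, "new_fancy_option": True}
model = OldModel(**compat_config_kwargs(OldModel, config))
```

The same filter-by-signature approach tolerates drift in either direction: unknown fields are silently ignored rather than crashing the worker.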

Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Pytorch_Sentiment_Batch_DistilBert_Base_Uncased.txt
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Pytorch_Sentiment_Streaming_DistilBert_Base_Uncased.txt

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the PyTorch sentiment analysis example to improve worker-side initialization and cross-version compatibility for the transformers library. Key changes include the introduction of TokenizeTextDoFn for per-worker tokenizer setup and a compatibility wrapper for DistilBertForSequenceClassification to handle configuration drift across different environments. Feedback suggests using the public pad_token API for better reliability and specifying the dimension in torch.squeeze to prevent potential issues with sequence lengths of one.

Comment on lines +71 to +72
if not hasattr(self.tokenizer, '_pad_token'):
self.tokenizer._pad_token = '[PAD]'


high

The check hasattr(self.tokenizer, '_pad_token') is likely to always return True because _pad_token is an internal attribute defined in the base class of transformers tokenizers (initialized to None in __init__), even if no padding token has been set. This makes the conditional assignment ineffective. It is recommended to check the public pad_token property and use the public API to set it if it is missing.

Suggested change
- if not hasattr(self.tokenizer, '_pad_token'):
-     self.tokenizer._pad_token = '[PAD]'
+ if self.tokenizer.pad_token is None:
+     self.tokenizer.pad_token = '[PAD]'
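The reviewer's point can be seen with a minimal stand-in: `DummyTokenizer` below is hypothetical and only mimics the relevant base-class behavior (an `_pad_token` attribute that exists from `__init__` onward, initialized to `None`), so the `hasattr` guard never fires while the public-property check does.

```python
class DummyTokenizer:
    """Mimics the transformers base-class behavior described above:
    _pad_token is always defined, just possibly None."""

    def __init__(self):
        self._pad_token = None

    @property
    def pad_token(self):
        return self._pad_token

    @pad_token.setter
    def pad_token(self, value):
        self._pad_token = value


tok = DummyTokenizer()

# The original guard is a no-op: the attribute exists, it is merely None.
assert hasattr(tok, '_pad_token')

# The suggested check against the public property does the right thing.
if tok.pad_token is None:
    tok.pad_token = '[PAD]'
```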

truncation=True,
max_length=128,
return_tensors="pt")
yield text, {k: torch.squeeze(v) for k, v in tokenized.items()}


medium

Using torch.squeeze(v) without specifying a dimension can be risky as it removes all dimensions of size 1. If the sequence length happens to be 1, the tensor would be reduced to a scalar, which might cause issues during batching in RunInference. Since the tokenizer with return_tensors="pt" adds a batch dimension at index 0, it is safer to use torch.squeeze(v, 0) to specifically remove only that dimension.

Suggested change
- yield text, {k: torch.squeeze(v) for k, v in tokenized.items()}
+ yield text, {k: torch.squeeze(v, 0) for k, v in tokenized.items()}
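A quick illustration of the pitfall, using NumPy as a stand-in for torch (`np.squeeze` and `torch.squeeze` share the same semantics for removing size-1 dimensions); the `(1, seq_len)` shapes are the assumed output of a tokenizer called with `return_tensors="pt"`:

```python
import numpy as np

# Typical case: batch dim of 1 plus a sequence of 128 tokens.
ids = np.zeros((1, 128), dtype=np.int64)
assert np.squeeze(ids).shape == (128,)     # fine when seq_len > 1

# Edge case: a one-token input has shape (1, 1).
short = np.zeros((1, 1), dtype=np.int64)
assert np.squeeze(short).shape == ()       # ALL size-1 dims removed: a scalar
assert np.squeeze(short, 0).shape == (1,)  # only the batch dim removed
```

Pinning the axis keeps the per-element tensors rank-1 regardless of sequence length, which is what downstream batching expects.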



Development

Successfully merging this pull request may close these issues.

The Inference Python Benchmarks Dataflow job is flaky

1 participant