Pin tensor_rt digest for PyTorch sentiment Dataflow benchmarks #38374

aIbrahiim wants to merge 1 commit into apache:master from
Codecov Report ✅ All modified and coverable lines are covered by tests. Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             master   #38374      +/-   ##
============================================
- Coverage     58.06%   58.04%    -0.03%
  Complexity    13024    13024
============================================
  Files          2509     2509
  Lines        262066   261923      -143
  Branches      10612    10612
============================================
- Hits         152166   152029      -137
+ Misses       104235   104229        -6
  Partials       5665     5665
```
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request updates the PyTorch sentiment analysis benchmark in Apache Beam to improve compatibility and reliability. By moving tokenization to a per-worker DoFn and introducing compatibility layers for transformer configurations, the changes mitigate issues related to environment drift and cross-version discrepancies between the launcher and worker nodes. Additionally, the pipeline cleanup process has been adjusted to ensure a more stable shutdown sequence.
Code Review
This pull request refactors the PyTorch sentiment analysis example to improve worker-side initialization and cross-version compatibility for the transformers library. Key changes include the introduction of TokenizeTextDoFn for per-worker tokenizer setup and a compatibility wrapper for DistilBertForSequenceClassification to handle configuration drift across different environments. Feedback suggests using the public pad_token API for better reliability and specifying the dimension in torch.squeeze to prevent potential issues with sequence lengths of one.
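The per-worker initialization pattern described above can be sketched as follows. This is a minimal stand-in that only mimics Beam's `DoFn` lifecycle (`setup()` runs once per worker after deserialization, `process()` runs per element); in the actual PR the class subclasses `apache_beam.DoFn` and `setup()` loads a `transformers` tokenizer, both of which are replaced here with plain-Python stand-ins (`FakeTokenizer` is hypothetical) so the sketch runs without Beam or transformers installed.

```python
class FakeTokenizer:
    """Hypothetical stand-in for a transformers tokenizer: splits on whitespace."""

    def __call__(self, text, max_length=128):
        return {'input_ids': text.split()[:max_length]}


class TokenizeTextDoFn:  # would subclass apache_beam.DoFn in the real pipeline
    def __init__(self, model_name):
        # Only a picklable string crosses the launcher/worker boundary,
        # sidestepping cross-version serialization of tokenizer objects.
        self._model_name = model_name
        self.tokenizer = None

    def setup(self):
        # Runs once per worker, so the tokenizer is constructed against the
        # worker's own installed library versions, not the launcher's.
        self.tokenizer = FakeTokenizer()

    def process(self, text):
        yield text, self.tokenizer(text)


fn = TokenizeTextDoFn('distilbert-base-uncased')
fn.setup()
results = list(fn.process('beam makes pipelines portable'))
```

The design point is that `__init__` captures only cheap, serializable configuration, while anything version-sensitive is deferred to `setup()` on the worker.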
```python
if not hasattr(self.tokenizer, '_pad_token'):
    self.tokenizer._pad_token = '[PAD]'
```
The check hasattr(self.tokenizer, '_pad_token') is likely to always return True because _pad_token is an internal attribute defined in the base class of transformers tokenizers (initialized to None in __init__), even if no padding token has been set. This makes the conditional assignment ineffective. It is recommended to check the public pad_token property and use the public API to set it if it is missing.
```diff
-if not hasattr(self.tokenizer, '_pad_token'):
-    self.tokenizer._pad_token = '[PAD]'
+if self.tokenizer.pad_token is None:
+    self.tokenizer.pad_token = '[PAD]'
```
```python
        truncation=True,
        max_length=128,
        return_tensors="pt")
    yield text, {k: torch.squeeze(v) for k, v in tokenized.items()}
```
Using torch.squeeze(v) without specifying a dimension can be risky as it removes all dimensions of size 1. If the sequence length happens to be 1, the tensor would be reduced to a scalar, which might cause issues during batching in RunInference. Since the tokenizer with return_tensors="pt" adds a batch dimension at index 0, it is safer to use torch.squeeze(v, 0) to specifically remove only that dimension.
```diff
-yield text, {k: torch.squeeze(v) for k, v in tokenized.items()}
+yield text, {k: torch.squeeze(v, 0) for k, v in tokenized.items()}
```
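The hazard is easy to demonstrate with shapes alone. The sketch below uses `numpy.squeeze`, which shares `torch.squeeze`'s semantics (no axis argument removes every size-1 dimension), so it runs without PyTorch installed; the shapes stand in for the tokenizer's `(batch=1, seq_len)` output.

```python
import numpy as np

# Tokenizer output with return_tensors="pt" has shape (1, seq_len).
long_seq = np.zeros((1, 5))   # batch dim + 5 tokens
one_token = np.zeros((1, 1))  # batch dim + 1 token

# Squeezing without an axis works as intended for long sequences...
assert np.squeeze(long_seq).shape == (5,)

# ...but collapses a one-token sequence to a 0-d scalar: the hazard.
assert np.squeeze(one_token).shape == ()

# Pinning the axis removes only the batch dimension, regardless of seq_len.
assert np.squeeze(one_token, axis=0).shape == (1,)
assert np.squeeze(long_seq, axis=0).shape == (5,)
```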
Fixes: #30644
This PR stabilizes the Python Dataflow sentiment inference benchmark by explicitly using a GPU SDK container and hardening the pipeline against the runtime/library mismatches I was seeing on workers.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.