
[TRTLLM-11551][feat] Support EPLB with various MoE backends for nemotron-h models#12280

Merged
Wanli-Jiang merged 11 commits into NVIDIA:main from Wanli-Jiang:user/williamj/support-wideep-nemotronh
Apr 1, 2026

Conversation

@Wanli-Jiang
Collaborator

@Wanli-Jiang Wanli-Jiang commented Mar 17, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for NemotronHForCausalLM model architecture in mixture-of-experts (MoE) processing.
  • Improvements

    • Enhanced MoE module configuration to better support activation type parameter handling in expert parallel implementations.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@Wanli-Jiang Wanli-Jiang requested a review from a team as a code owner March 17, 2026 09:33
@Wanli-Jiang Wanli-Jiang requested a review from xxi-nv March 17, 2026 09:33
@coderabbitai
Contributor

coderabbitai Bot commented Mar 17, 2026

📝 Walkthrough

Walkthrough

This PR adds support for configurable activation functions in Wide EP MoE and extends MoE model architecture support to include NemotronHForCausalLM. The WideEPMoE class now accepts an activation_type parameter, which is propagated through the constructor, and the load balancer's supported model architectures list is extended.

Changes

Cohort / File(s) Summary
Activation Type Configuration
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py, tensorrt_llm/_torch/modules/fused_moe/create_moe.py
Added ActivationType import and new activation_type parameter to WideEPMoE.__init__ with default value ActivationType.Swiglu. The parameter is now passed during WideEPMoE instantiation in the backend factory.
MoE Model Architecture Support
tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py
Extended moe_model_arch_list to include 'NemotronHForCausalLM' as a supported model architecture for MoE load balancer creation.
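The two changes above can be pictured with a minimal sketch. This is illustrative only: the real `ActivationType` and `WideEPMoE` live under `tensorrt_llm._torch.modules.fused_moe`, and everything below except the names `ActivationType.Swiglu`, `WideEPMoE`, and `NemotronHForCausalLM` (which come from the PR summary) is a hypothetical stand-in, including the second enum member and the factory function.

```python
# Hypothetical stand-ins for the real TensorRT-LLM classes, showing how a
# new keyword argument with a Swiglu default keeps existing call sites
# unchanged while letting the backend factory forward a per-model value.
from enum import Enum


class ActivationType(Enum):
    Swiglu = "swiglu"
    Relu2 = "relu2"  # hypothetical second variant, for illustration only


class WideEPMoE:
    def __init__(self, num_experts: int,
                 activation_type: ActivationType = ActivationType.Swiglu):
        # Defaulting to Swiglu means callers that never pass the new
        # parameter behave exactly as before this change.
        self.num_experts = num_experts
        self.activation_type = activation_type


def create_wide_ep_moe(model_arch: str, num_experts: int,
                       activation_type: ActivationType) -> WideEPMoE:
    """Sketch of the backend factory: gate on the supported-arch list,
    then propagate the model's activation type into WideEPMoE."""
    moe_model_arch_list = ["NemotronHForCausalLM"]  # extended by this PR
    if model_arch not in moe_model_arch_list:
        raise ValueError(f"{model_arch} not supported by MoE load balancer")
    return WideEPMoE(num_experts, activation_type=activation_type)


moe = create_wide_ep_moe("NemotronHForCausalLM", 64, ActivationType.Relu2)
print(moe.activation_type.name)
```

The design choice worth noting is the default value: adding `activation_type` as a defaulted keyword argument is backward compatible, so only architectures that need a non-Swiglu activation (such as nemotron-h here) have to opt in at the factory.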

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 50.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Description check ⚠️ Warning The PR description is largely a template with placeholder sections. It lacks substantive details about the changes, implementation rationale, test coverage, and specific modifications made. Fill in the Description and Test Coverage sections with specific details about the WideEP MoE changes, activation_type parameter, NemotronHForCausalLM support, and relevant test cases that validate these changes.
✅ Passed checks (1 passed)
Check name Status Explanation
Title check ✅ Passed The title mentions 'EPLB with various MoE backends for nemotron-h models', but the changes show focus on WideEPMoE backend and NemotronH support, which aligns with the core intent.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Wanli-Jiang Wanli-Jiang changed the title [None][feat] Support WideEP MoE backend for nemotron-h models [TRTLLM-11543][feat] Support WideEP MoE backend for nemotron-h models Mar 18, 2026
@Wanli-Jiang Wanli-Jiang changed the title [TRTLLM-11543][feat] Support WideEP MoE backend for nemotron-h models [TRTLLM-11551][feat] Support WideEP MoE backend for nemotron-h models Mar 18, 2026
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/support-wideep-nemotronh branch from 7b8ebae to 6296cbf Compare March 18, 2026 08:34
@Wanli-Jiang Wanli-Jiang requested review from a team as code owners March 18, 2026 08:34
@Wanli-Jiang Wanli-Jiang requested review from 2ez4bz and tomeras91 March 18, 2026 08:34
@Wanli-Jiang
Collaborator Author

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
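The command grammar documented above can be modeled for reference. The bot's actual parser is internal to NVIDIA's CI and is not shown in this PR; the argparse sketch below is a hypothetical reconstruction of just the documented subcommands and a few of the `run` flags.

```python
# Hypothetical model of the /bot command grammar from the help text above;
# the real Jenkins bot implementation is not public.
import argparse


def build_bot_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="/bot")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="Launch build/test pipelines")
    run.add_argument("--disable-fail-fast", action="store_true")
    run.add_argument("--skip-test", action="store_true")
    run.add_argument("--stage-list", type=str)
    run.add_argument("--gpu-type", type=str)
    run.add_argument("--only-multi-gpu-test", action="store_true")
    run.add_argument("--post-merge", action="store_true")

    sub.add_parser("kill", help="Kill all running builds for this PR")

    skip = sub.add_parser("skip", help="Skip testing for the latest commit")
    skip.add_argument("--comment", required=True)  # reason is mandatory

    sub.add_parser("reuse-pipeline", help="Reuse a previous pipeline")
    return parser


# Parse the same invocation used later in this PR's conversation.
args = build_bot_parser().parse_args(
    ["run", "--stage-list", "DGX_B200-8_GPUs-PyTorch-1",
     "--disable-fail-fast"])
print(args.command, args.stage_list)
```

Note how the model captures the one hard constraint stated in the help: `skip` refuses to run without `--comment`, mirroring the bot's requirement that a reason be given before skipping tests.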

@Wanli-Jiang
Collaborator Author

/bot run --only-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #39427 [ run ] triggered by Bot. Commit: f20a797 Link to invocation

@Wanli-Jiang
Collaborator Author

/bot run --only-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #39429 [ run ] triggered by Bot. Commit: f20a797 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39429 [ run ] completed with state SUCCESS. Commit: f20a797
/LLM/main/L0_MergeRequest_PR pipeline #30656 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@Wanli-Jiang Wanli-Jiang requested a review from a team as a code owner March 20, 2026 09:38
@Wanli-Jiang Wanli-Jiang requested review from chang-l and kaiyux March 20, 2026 09:38
@Wanli-Jiang Wanli-Jiang changed the title [TRTLLM-11551][feat] Support WideEP MoE backend for nemotron-h models [TRTLLM-11551][feat] Support EPLB with various MoE backends for nemotron-h models Mar 23, 2026
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/support-wideep-nemotronh branch 3 times, most recently from 3b6c74a to d56daed Compare March 24, 2026 02:25
@Wanli-Jiang
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #40019 [ run ] triggered by Bot. Commit: d56daed Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40818 [ run ] triggered by Bot. Commit: b4e874a Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40818 [ run ] completed with state SUCCESS. Commit: b4e874a
/LLM/main/L0_MergeRequest_PR pipeline #31831 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/support-wideep-nemotronh branch from b4e874a to 4aff7a4 Compare March 31, 2026 05:59
@Wanli-Jiang
Collaborator Author

/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #40872 [ run ] triggered by Bot. Commit: 4aff7a4 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40872 [ run ] completed with state SUCCESS. Commit: 4aff7a4
/LLM/main/L0_MergeRequest_PR pipeline #31878 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@Wanli-Jiang
Collaborator Author

/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41069 [ run ] triggered by Bot. Commit: 4aff7a4 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41069 [ run ] completed with state FAILURE. Commit: 4aff7a4
/LLM/main/L0_MergeRequest_PR pipeline #32045 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
@Wanli-Jiang Wanli-Jiang force-pushed the user/williamj/support-wideep-nemotronh branch from 4aff7a4 to 09b65aa Compare April 1, 2026 05:10
@Wanli-Jiang
Collaborator Author

/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41140 [ run ] triggered by Bot. Commit: 09b65aa Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41140 [ run ] completed with state SUCCESS. Commit: 09b65aa
/LLM/main/L0_MergeRequest_PR pipeline #32109 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

@Wanli-Jiang
Collaborator Author

/bot skip --comment "all tests are passed in different runs"

@Wanli-Jiang Wanli-Jiang enabled auto-merge (squash) April 1, 2026 11:55
@tensorrt-cicd
Collaborator

PR_Github #41195 [ skip ] triggered by Bot. Commit: 09b65aa Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41195 [ skip ] completed with state SUCCESS. Commit: 09b65aa
Skipping testing for commit 09b65aa

Link to invocation

@Wanli-Jiang Wanli-Jiang merged commit 3a71062 into NVIDIA:main Apr 1, 2026
5 checks passed
karen-sy pushed a commit to karen-sy/TensorRT-LLM that referenced this pull request Apr 7, 2026
…ron-h models (NVIDIA#12280)

Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>


8 participants