feat: Add dedicated lora mode to Megatron backend #635
Draft
vivekkalyan wants to merge 1 commit into main from
Conversation
Summary
This adds a dedicated `lora` mode to the Megatron backend so inference and training can run on separate GPUs in parallel. In dedicated mode, ART now keeps a dedicated vLLM server on the inference GPU and updates LoRA adapters in place after each training step. That makes the Megatron flow match the dedicated-serving model we already use elsewhere, instead of treating training as a blocking operation.
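For context, one way an in-place adapter swap like this can work is vLLM's runtime LoRA endpoints. The sketch below is illustrative only, not the mechanism used in this PR: it assumes a vLLM OpenAI-compatible server started with `VLLM_ALLOW_RUNTIME_LORA_UPDATING=True`, and the server URL, adapter name, and checkpoint path are placeholders.

```python
import requests

# Hypothetical values: adjust to your deployment. The two endpoints are vLLM's
# runtime LoRA update API (requires VLLM_ALLOW_RUNTIME_LORA_UPDATING=True).
VLLM_URL = "http://localhost:8000"
ADAPTER_NAME = "my-model"                      # name the adapter is served under
NEW_CHECKPOINT = "/checkpoints/my-model/0002"  # LoRA weights from the latest train step

# Drop the previously loaded adapter (if any)...
requests.post(f"{VLLM_URL}/v1/unload_lora_adapter", json={"lora_name": ADAPTER_NAME})

# ...and load the freshly trained weights under the same name, so inference
# requests that target ADAPTER_NAME pick up the new step without a server restart.
requests.post(
    f"{VLLM_URL}/v1/load_lora_adapter",
    json={"lora_name": ADAPTER_NAME, "lora_path": NEW_CHECKPOINT},
)
```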
What this enables
Validation
- Unit tests: `tests/unit/test_megatron_dedicated.py`
- `Qwen/Qwen3-30B-A3B-Instruct-2507` in `lora` mode completed two real train steps and advanced the served model from `@0` to `@2`.
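A lightweight way to spot-check which adapter version is live, outside the test suite, is to list the models the vLLM server currently exposes; adapters loaded at runtime appear alongside the base model. The URL below and the `@N`-suffixed adapter naming are assumptions for illustration, not taken from the PR.

```python
import requests

# Hypothetical address: point this at the dedicated inference GPU's vLLM server.
VLLM_URL = "http://localhost:8000"

# vLLM's OpenAI-compatible /v1/models endpoint lists the base model plus any
# LoRA adapters currently loaded, so the newest entry reveals the live step.
resp = requests.get(f"{VLLM_URL}/v1/models")
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # e.g. the base model and the currently served adapter
```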