Refactor spec modification/introspection to make references to Submodules typed #2834
nschank wants to merge 8 commits into NVIDIA:main from …
Conversation
@chtruong814 @ko3n1g Mostly changes in tests; could you help take a look? Thank you!
Updated to fix conflicts. Anyone mind taking a look? This should be pretty uncontroversial (except maybe the format of the …)
@NVIDIA/mcore-oncall
Fixed merge conflicts.
I added a nice little helper inspired by @Skylion007 to make the gpt_layer_spec methods keep all their many arguments during type checking. It's optional, though; I can drop it if there are concerns about it!
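For context, a minimal sketch of the kind of signature-preserving wrapper this refers to (not necessarily the exact helper in this PR): a `ParamSpec`-typed pass-through lets type checkers keep seeing the wrapped spec builder's full argument list. The names `keep_signature` and `build_gpt_layer_spec` are illustrative, not from the Megatron codebase, and this assumes Python 3.10+ for `ParamSpec`.

```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def keep_signature(fn: Callable[P, R]) -> Callable[P, R]:
    """Runtime no-op: forwards every argument unchanged, but because the wrapper
    is annotated with the same ParamSpec, type checkers still see the wrapped
    function's complete argument list."""
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return fn(*args, **kwargs)
    return wrapper

@keep_signature
def build_gpt_layer_spec(num_experts=None, qk_layernorm=False, multi_latent_attention=False):
    # Hypothetical spec builder standing in for the real gpt_layer_specs functions.
    return {
        "num_experts": num_experts,
        "qk_layernorm": qk_layernorm,
        "multi_latent_attention": multi_latent_attention,
    }
```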
Just noting the other places I found where …
```python
if self.sequence_parallel_lm or self.context_parallel_lm > 1:
    if not language_model_type.startswith('nemotron5-hybrid'):
        attn_module = language_transformer_layer_spec.submodules.self_attention
        assert isinstance(
```
@trintamaki @parthmannan do you have any concerns with these asserts here? Can we expect the language model to always use mcore specs, or do you also use HF models directly, like for the vision encoder?
If there are concerns, I'm happy to switch back to cast instead (which only affects type checking and won't do anything at runtime); just let me know.
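For readers unfamiliar with the distinction, a minimal sketch of the two options; the dataclass below is an illustrative stand-in, not the actual mcore class. `typing.cast` only informs the type checker and has no runtime effect, while the `assert isinstance(...)` form also fails loudly at runtime if a non-mcore spec (e.g. an HF model spec) is passed in.

```python
from dataclasses import dataclass
from typing import cast

@dataclass
class SelfAttentionSubmodules:
    # Illustrative stand-in for the real mcore submodules dataclass.
    linear_qkv: object = None
    core_attention: object = None

def via_cast(submodules: object) -> SelfAttentionSubmodules:
    # Purely a hint to the type checker; nothing changes at runtime.
    return cast(SelfAttentionSubmodules, submodules)

def via_assert(submodules: object) -> SelfAttentionSubmodules:
    # Also checked at runtime: raises AssertionError for any other spec type.
    assert isinstance(submodules, SelfAttentionSubmodules)
    return submodules
```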
This has gotten no traction, so I'm going to split it up to make it easier to review. #3255 is the most interesting subset; I'll follow with a few one-liners and finally do the tests as a last step.
What does this PR do?
In order to safely refactor `Submodules` classes, I want to make sure I can easily find everywhere those classes are being referenced. This updates every instance I could find where `ModuleSpec` submodules are being inspected or modified, and either uses `cast` or a typed helper method to ensure that searching for references/usages of a field will consistently find them.

Relevant design doc: https://docs.google.com/document/d/1shyv0iKEzRdevLOlouF_NktbdJazvWifqxUwPXFigQE/edit?tab=t.0#heading=h.uwes2zo47yg6
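To make the "typed helper" idea concrete, here is a minimal sketch of the pattern under the assumption that it wraps the isinstance check discussed above; `get_submodules_as` is a hypothetical name, not necessarily the helper added in this PR, and only `ModuleSpec` is taken from the existing codebase.

```python
from typing import Type, TypeVar

from megatron.core.transformer.spec_utils import ModuleSpec

T = TypeVar("T")

def get_submodules_as(spec: ModuleSpec, submodules_cls: Type[T]) -> T:
    """Hypothetical typed accessor: callers reach spec.submodules through this
    helper instead of directly, so "find usages" on a Submodules class (or on
    this function) reliably surfaces every introspection/modification site."""
    submodules = spec.submodules
    assert isinstance(submodules, submodules_cls), (
        f"Expected {submodules_cls.__name__}, got {type(submodules).__name__}"
    )
    return submodules
```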
Contribution process
```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks
… Core 0.8)

Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
Feel free to message or comment @megatron-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
(Step 1): Add PR label
Add the `Expert Review` label when your PR is ready for review.

(Step 2): Collect the expert reviewers' reviews
Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either `[email protected]` or `[email protected]`.

Merging your PR
Any member of core-adlr and core-nemo will be able to merge your PR.