
Qwen3.5 MoE Support #2377

Draft

dsikka wants to merge 2 commits into main from qwen3_5_support

Conversation

@dsikka (Collaborator) commented Feb 17, 2026

SUMMARY:
"please provide a brief summary"

TEST PLAN:
"please outline how the changes were tested"

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @dsikka, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request extends the LLM compressor's capabilities by integrating support for the Qwen3.5 MoE model. It provides a new example for quantizing this model using FP8 dynamic quantization and implements the necessary calibration logic for its sparse mixture-of-experts architecture. Additionally, a core utility file was adjusted to manage PyTorch initialization functions more robustly.

Highlights

  • Qwen3.5 MoE Quantization Example: A new example script demonstrates FP8 dynamic quantization for the Qwen3.5 MoE model, including specific ignore patterns for various layers (a minimal API sketch follows this list).
  • Qwen3.5 MoE Calibration Support: Introduced a dedicated calibration module for Qwen3.5 MoE sparse blocks, allowing all tokens to be processed by all experts during calibration.
  • Utility Function Refinement: The dev.py utility file was updated to redefine TORCH_INIT_FUNCTIONS locally, ensuring a consistent set of PyTorch initialization methods are available.
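For orientation, here is a minimal sketch of such an FP8-dynamic run using llm-compressor's oneshot and QuantizationModifier, both visible in this PR's diff. The model ID, scheme arguments, and ignore list below follow llm-compressor's standard FP8 example and are placeholders, not the exact values from the new script:

from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Placeholder: any Hugging Face ID or local snapshot path works here.
MODEL_ID = "Qwen/Qwen3.5-397B-A17B"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic quantization needs no calibration data: weights are quantized
# ahead of time and activation scales are computed at runtime. MoE routers
# and the lm_head are typically excluded; the exact ignore patterns for
# Qwen3.5 live in the example script added by this PR.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head"],
)

oneshot(model=model, recipe=recipe)

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-DYNAMIC"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)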


Changelog
  • examples/quantization_w8a8_fp8/qwen3_5_moe.py
    • Added a new example script for quantizing the Qwen3.5 MoE model.
  • src/llmcompressor/modeling/__init__.py
    • Imported the new CalibrateQwen3_5MoeTextSparseMoeBlock for Qwen3.5 MoE.
  • src/llmcompressor/modeling/qwen3_5_vl_moe.py
    • Added a new module qwen3_5_vl_moe.py to implement calibration for Qwen3.5 MoE sparse blocks.
    • Defined CalibrateQwen3_5MoeTextSparseMoeBlock to handle expert calibration.
    • Included SequentialQwen3VLMoeTextExperts for sequential processing of experts during calibration (an illustrative sketch follows this changelog).
  • src/llmcompressor/utils/dev.py
    • Commented out the original import of TORCH_INIT_FUNCTIONS from transformers.modeling_utils.
    • Redefined TORCH_INIT_FUNCTIONS locally to include a comprehensive list of PyTorch initialization methods.
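For context on the sequential-experts entry above, here is an illustrative sketch of the pattern: a fused experts module is replaced by a plain ModuleList of per-expert MLPs, so each expert's Linear layers exist as discrete modules that quantization hooks can target. The class name mirrors the changelog, but the body and config fields (mlp_cls, num_experts, moe_intermediate_size) are assumptions, not this PR's exact code:

import torch
from torch import nn


class SequentialExperts(nn.ModuleList):
    """Illustrative stand-in for SequentialQwen3VLMoeTextExperts: one MLP
    per expert, so every expert's Linear layers are separately hookable."""

    def __init__(self, config, mlp_cls):
        # Hypothetical config fields; the real names live in the Qwen3.5 config.
        super().__init__(
            mlp_cls(config, intermediate_size=config.moe_intermediate_size)
            for _ in range(config.num_experts)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Run experts one at a time; callers combine the stacked outputs with
        # routing weights (see the calibration forward sketched later in this thread).
        return torch.stack([expert(hidden_states) for expert in self], dim=0)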
Activity
  • The pull request was created by dsikka with a placeholder summary and test plan, indicating initial submission without detailed human review comments yet.

@mergify mergify bot added the documentation Improvements or additions to documentation label Feb 17, 2026
@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for Qwen3.5 MoE models, including a new example script for quantization and corresponding calibration logic. The changes involve adding a new calibration module for Qwen3.5 MoE and updating the __init__.py to include it. Additionally, the src/llmcompressor/utils/dev.py file has been modified to redefine TORCH_INIT_FUNCTIONS locally. While the core functionality seems to be in place, there are a couple of areas that could be improved for better maintainability and portability.

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "/raid/engine/dsikka/models--Qwen--Qwen3.5-397B-A17B/snapshots/7cad2bae11cb49ca79f7d6a0954de2e2756f4e27"
gemini-code-assist bot commented (severity: medium):

The MODEL_ID is currently hardcoded to a specific local path. This reduces the portability and reusability of the example script. Consider making this path configurable, perhaps through command-line arguments or environment variables, to allow users to easily specify their model location.

Suggested change
MODEL_ID = "/raid/engine/dsikka/models--Qwen--Qwen3.5-397B-A17B/snapshots/7cad2bae11cb49ca79f7d6a0954de2e2756f4e27"
MODEL_ID = "/path/to/your/model" # TODO: Make this configurable, e.g., via argparse

oneshot(model=model, recipe=recipe)

# Save to disk in compressed-tensors format.
SAVE_DIR = "/raid/engine/dsikka/" + "Qwen3.5-397B-A17B" + "-FP8-DYNAMIC"
gemini-code-assist bot commented (severity: medium):

Similar to MODEL_ID, the SAVE_DIR is hardcoded to a local path, which can break when the script runs in a different environment or under another user. It would be beneficial to make this directory configurable so users can choose where the quantized model is saved (a combined argparse sketch follows the suggestion below).

Suggested change
SAVE_DIR = "/raid/engine/dsikka/" + "Qwen3.5-397B-A17B" + "-FP8-DYNAMIC"
SAVE_DIR = "/path/to/save/quantized_model" # TODO: Make this configurable
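Both review comments point at the same fix, so here is one hedged sketch covering both, using argparse; the flag names and defaults are hypothetical, not part of this PR:

import argparse

parser = argparse.ArgumentParser(description="FP8-dynamic quantization example")
parser.add_argument(
    "--model-id",
    default="Qwen/Qwen3.5-397B-A17B",  # hypothetical default; any HF ID or local path
    help="Hugging Face model ID or local snapshot path",
)
parser.add_argument(
    "--save-dir",
    default=None,
    help="output directory; defaults to <model name>-FP8-DYNAMIC",
)
args = parser.parse_args()

MODEL_ID = args.model_id
SAVE_DIR = args.save_dir or MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-DYNAMIC"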

Comment on lines +26 to +43
from torch import nn

TORCH_INIT_FUNCTIONS = {
"uniform_": nn.init.uniform_,
"normal_": nn.init.normal_,
"trunc_normal_": nn.init.trunc_normal_,
"constant_": nn.init.constant_,
"xavier_uniform_": nn.init.xavier_uniform_,
"xavier_normal_": nn.init.xavier_normal_,
"kaiming_uniform_": nn.init.kaiming_uniform_,
"kaiming_normal_": nn.init.kaiming_normal_,
"uniform": nn.init.uniform,
"normal": nn.init.normal,
"xavier_uniform": nn.init.xavier_uniform,
"xavier_normal": nn.init.xavier_normal,
"kaiming_uniform": nn.init.kaiming_uniform,
"kaiming_normal": nn.init.kaiming_normal,
}
gemini-code-assist bot commented (severity: medium):

The TORCH_INIT_FUNCTIONS dictionary is redefined locally after commenting out the import from transformers.modeling_utils. This introduces a potential maintenance burden, as any future updates or changes in transformers regarding these initialization functions might not be reflected here, leading to subtle bugs or unexpected behavior. Please add a comment explaining the specific reason for this local redefinition and why the upstream version could not be used. This will help future developers understand the context and potential implications.
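One conventional way to address this concern, sketched here as a suggestion rather than what the PR currently does: try the upstream import first so transformers remains the source of truth, and fall back to a local copy only when the symbol is gone.

try:
    # Preferred: keep transformers as the source of truth.
    from transformers.modeling_utils import TORCH_INIT_FUNCTIONS
except ImportError:
    # Fallback for transformers versions that no longer export the mapping.
    # The deprecated non-underscore aliases from the PR's dict are omitted
    # here; add them back if older torch versions must be supported.
    from torch import nn

    TORCH_INIT_FUNCTIONS = {
        "uniform_": nn.init.uniform_,
        "normal_": nn.init.normal_,
        "trunc_normal_": nn.init.trunc_normal_,
        "constant_": nn.init.constant_,
        "xavier_uniform_": nn.init.xavier_uniform_,
        "xavier_normal_": nn.init.xavier_normal_,
        "kaiming_uniform_": nn.init.kaiming_uniform_,
        "kaiming_normal_": nn.init.kaiming_normal_,
    }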

@mergify
Contributor

mergify bot commented Feb 17, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@mergify mergify bot removed the quality-failed label Feb 18, 2026
@mergify
Contributor

mergify bot commented Feb 18, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@Sehyo left a comment:

See my PR for implementation.

super().__init__(
    [
        Qwen3_5MoeMLP(
            config, intermediate_size=config.shared_expert_intermediate_size

Should be config.moe_intermediate_size; as written, this will create incorrectly sized linear layers.

self,
original: "Qwen3_5MoeSparseMoeBlock",
config: "Qwen3_5MoeConfig",
calibrate_all_experts: bool,

calibrate_all_experts is accepted here, but since no forward() is implemented, it is never used; the calibration module will not work at runtime.
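For reference, the calibrate-all-experts pattern used for other MoE models in llm-compressor looks roughly like the sketch below. The attribute names (gate, experts, top_k, num_experts) are assumptions about the Qwen3.5 block's structure, not verified against this PR:

import torch
from torch import nn


class CalibrateAllExpertsBlock(nn.Module):
    """Illustrative calibration wrapper around a sparse MoE block."""

    def __init__(self, original: nn.Module, calibrate_all_experts: bool):
        super().__init__()
        self.gate = original.gate
        self.experts = original.experts
        self.top_k = original.top_k
        self.num_experts = original.num_experts
        self.calibrate_all_experts = calibrate_all_experts

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        batch, seq_len, hidden_dim = hidden_states.shape
        hidden_states = hidden_states.reshape(-1, hidden_dim)

        # Standard top-k routing.
        router_logits = self.gate(hidden_states)
        routing_weights = torch.softmax(router_logits, dim=-1, dtype=torch.float)
        routing_weights, selected_experts = torch.topk(
            routing_weights, self.top_k, dim=-1
        )
        routing_weights /= routing_weights.sum(dim=-1, keepdim=True)
        routing_weights = routing_weights.to(hidden_states.dtype)

        final = torch.zeros_like(hidden_states)
        expert_mask = nn.functional.one_hot(
            selected_experts, num_classes=self.num_experts
        ).permute(2, 1, 0)

        for expert_idx in range(self.num_experts):
            top_x, token_idx = torch.where(expert_mask[expert_idx])
            if self.calibrate_all_experts:
                # Every token passes through every expert, so observer hooks
                # see full activation statistics during calibration...
                expert_out = self.experts[expert_idx](hidden_states)[token_idx]
            else:
                expert_out = self.experts[expert_idx](hidden_states[token_idx])
            # ...but only routed tokens contribute to the output, so the
            # block's numerics match the original model.
            if token_idx.numel() > 0:
                final.index_add_(
                    0,
                    token_idx,
                    expert_out * routing_weights[token_idx, top_x].unsqueeze(-1),
                )

        return final.reshape(batch, seq_len, hidden_dim)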

 from transformers import AutoModelForCausalLM, PreTrainedModel
-from transformers.modeling_utils import TORCH_INIT_FUNCTIONS
+# from transformers.modeling_utils import TORCH_INIT_FUNCTIONS

This change is unrelated to Qwen3.5; it is a transformers compatibility workaround.


Labels: documentation (Improvements or additions to documentation), quality-failed
