
[WIP][Examples] model_free_ptq of nvidia/DeepSeek-R1-NVFP4 #2228

Draft
brian-dellabetta wants to merge 17 commits into main from bdellabe/example-dsr1-nvfp4-fp8block

Conversation

@brian-dellabetta (Collaborator) commented Jan 13, 2026

SUMMARY:

This PR adds an example that extends the nvidia/DeepSeek-R1-NVFP4 checkpoint to:

  • quantize all compatible linear self_attn layers to FP8_BLOCK
  • invert modelopt's input_scale to create CT's input_global_scale (a minimal sketch of this inversion follows the list)
  • invert modelopt's weight_scale_2 to create CT's weight_global_scale
  • convert the packing order of modelopt NVFP4 tensors to the compressed-tensors convention
  • merge the hf_quant_config.json modelopt config into the compressed-tensors config under "quantization_config" in config.json
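
A minimal sketch of the scale inversion described above; the reciprocal relationship comes from the bullets, while the function name, tensor-name handling, and float32 upcast are illustrative assumptions, not the PR's code:

import torch


def invert_modelopt_global_scales(tensors: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Rename modelopt global scales to the CT names and take their reciprocal."""
    converted = {}
    for name, value in tensors.items():
        if name.endswith("input_scale"):
            # modelopt input_scale -> compressed-tensors input_global_scale
            converted[name.replace("input_scale", "input_global_scale")] = 1.0 / value.to(torch.float32)
        elif name.endswith("weight_scale_2"):
            # modelopt weight_scale_2 -> compressed-tensors weight_global_scale
            converted[name.replace("weight_scale_2", "weight_global_scale")] = 1.0 / value.to(torch.float32)
        else:
            # weights, per-group weight_scale tensors, etc. pass through unchanged
            converted[name] = value
    return converted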

Changes to src:

  • removes the constraint that targets must be "Linear" from model_free_ptq, as it is no longer an issue in vLLM

Results in new config.json:

{
    "architectures": [
        "DeepseekV3ForCausalLM"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "auto_map": {
        "AutoConfig": "configuration_deepseek.DeepseekV3Config",
        "AutoModel": "modeling_deepseek.DeepseekV3Model",
        "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
    },
    "aux_loss_alpha": 0.001,
    "bos_token_id": 0,
    "eos_token_id": 1,
    "ep_size": 1,
    "first_k_dense_replace": 3,
    "hidden_act": "silu",
    "hidden_size": 7168,
    "initializer_range": 0.02,
    "intermediate_size": 18432,
    "kv_lora_rank": 512,
    "max_position_embeddings": 163840,
    "model_type": "deepseek_v3",
    "moe_intermediate_size": 2048,
    "moe_layer_freq": 1,
    "n_group": 8,
    "n_routed_experts": 256,
    "n_shared_experts": 1,
    "norm_topk_prob": true,
    "num_attention_heads": 128,
    "num_experts_per_tok": 8,
    "num_hidden_layers": 61,
    "num_key_value_heads": 128,
    "num_nextn_predict_layers": 1,
    "pretraining_tp": 1,
    "q_lora_rank": 1536,
    "qk_nope_head_dim": 128,
    "qk_rope_head_dim": 64,
    "quantization_config": {
        "config_groups": {
            "config_group_0": {
                "targets": [
                    "re:.*self_attn.(o_proj|q_a_proj|q_b_proj).*"
                ],
                "weights": {
                    "num_bits": 8,
                    "type": "float",
                    "symmetric": true,
                    "group_size": null,
                    "strategy": "block",
                    "block_structure": [
                        128,
                        128
                    ],
                    "dynamic": false,
                    "actorder": null,
                    "scale_dtype": null,
                    "zp_dtype": null,
                    "observer": "static_minmax",
                    "observer_kwargs": {}
                },
                "input_activations": {
                    "num_bits": 8,
                    "type": "float",
                    "symmetric": true,
                    "group_size": 128,
                    "strategy": "group",
                    "block_structure": null,
                    "dynamic": true,
                    "actorder": null,
                    "scale_dtype": null,
                    "zp_dtype": null,
                    "observer": null,
                    "observer_kwargs": {}
                },
                "output_activations": null,
                "format": "float-quantized"
            },
            "config_group_1": {
                "targets": [
                    "re:.*mlp.*\\.(gate|up|down)_proj$"
                ],
                "weights": {
                    "num_bits": 4,
                    "type": "float",
                    "symmetric": true,
                    "group_size": 16,
                    "strategy": "tensor_group",
                    "block_structure": null,
                    "dynamic": false,
                    "actorder": null,
                    "scale_dtype": "torch.float8_e4m3fn",
                    "zp_dtype": null,
                    "observer": "static_minmax",
                    "observer_kwargs": {}
                },
                "input_activations": {
                    "num_bits": 4,
                    "type": "float",
                    "symmetric": true,
                    "group_size": 16,
                    "strategy": "tensor_group",
                    "block_structure": null,
                    "dynamic": "local",
                    "actorder": null,
                    "scale_dtype": "torch.float8_e4m3fn",
                    "zp_dtype": null,
                    "observer": "static_minmax",
                    "observer_kwargs": {}
                },
                "output_activations": null,
                "format": null
            }
        },
        "quant_method": "compressed-tensors",
        "kv_cache_scheme": null,
        "format": "mixed-precision",
        "quantization_status": "compressed",
        "global_compression_ratio": null,
        "ignore": []
    },
    "rms_norm_eps": 1e-06,
    "rope_scaling": {
        "beta_fast": 32,
        "beta_slow": 1,
        "factor": 40,
        "mscale": 1.0,
        "mscale_all_dim": 1.0,
        "original_max_position_embeddings": 4096,
        "type": "yarn"
    },
    "rope_theta": 10000,
    "routed_scaling_factor": 2.5,
    "scoring_func": "sigmoid",
    "seq_aux": true,
    "tie_word_embeddings": false,
    "topk_group": 4,
    "topk_method": "noaux_tc",
    "torch_dtype": "bfloat16",
    "transformers_version": "4.46.3",
    "use_cache": true,
    "v_head_dim": 128,
    "vocab_size": 129280
}

TEST PLAN:
"please outline how the changes were tested"

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @brian-dellabetta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new example script for applying model-free post-training quantization to the nvidia/DeepSeek-R1-NVFP4 model. The script specifically targets certain self_attn layers for FP8 block quantization, aiming to demonstrate the use of llmcompressor and compressed-tensors for model compression. The PR is currently a work in progress, with further integration steps planned.

Highlights

  • New Example Script: A new example script, dsr1_nvfp4_fp8_block.py, has been added to demonstrate model-free post-training quantization (PTQ) for the nvidia/DeepSeek-R1-NVFP4 model.
  • FP8 Block Quantization: The script applies FP8 BLOCK quantization to the weights and FP8 GROUP quantization to the input activations of specific self_attn linear layers (kv_b_proj, o_proj, q_a_proj, q_b_proj) within the DeepSeek-R1-NVFP4 model (a sketch of such a call follows this list).
  • Quantization Configuration: The quantization scheme uses a block structure of [128, 128] for weights and a group size of 128 for input activations, with both being symmetric and float types.
  • Work in Progress: This pull request is marked as Work In Progress, with pending tasks including converting NVFP4 tensor packing order and merging quantization configurations into the config.json.
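
For orientation, here is a minimal sketch of what the call described above might look like, assembled from the scheme arguments and the model_free_ptq signature quoted later in this review; the import paths, constant names, and ignore list are assumptions rather than the PR's final code, and the target regex follows config_group_0 in the config above:

from compressed_tensors.quantization import (
    QuantizationArgs,
    QuantizationScheme,
    QuantizationStrategy,
    QuantizationType,
)

from llmcompressor import model_free_ptq  # import path assumed

MODEL_ID = "nvidia/DeepSeek-R1-NVFP4"
SAVE_DIR = "DeepSeek-R1-NVFP4-FP8-BLOCK"

model_free_ptq(
    model_stub=MODEL_ID,
    save_directory=SAVE_DIR,
    scheme=QuantizationScheme(
        # dense self_attn projections only; the NVFP4 mlp experts keep their
        # original modelopt quantization (see config_group_1 in config.json)
        targets=["re:.*self_attn.(o_proj|q_a_proj|q_b_proj).*"],
        weights=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.BLOCK,
            symmetric=True,
            dynamic=False,
            block_structure=[128, 128],
        ),
        input_activations=QuantizationArgs(
            num_bits=8,
            type=QuantizationType.FLOAT,
            strategy=QuantizationStrategy.GROUP,
            symmetric=True,
            dynamic=True,
            observer=None,
            group_size=128,
        ),
    ),
    ignore=["lm_head"],  # suggested during review; exact list is an assumption
)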


gemini-code-assist bot left a comment

Code Review

This PR adds a new example script for model-free PTQ on the nvidia/DeepSeek-R1-NVFP4 model. The script correctly sets up the quantization scheme to apply FP8-Block quantization to specific self-attention layers. My main feedback is focused on improving the clarity and readability of the layer selection logic. The current implementation uses a complex regex in the ignore list, which is not intuitive. I've suggested either using the more idiomatic targets list or, if that's not possible, significantly improving the comments to make the current approach easier to understand. As this is an example script, clarity is paramount.

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
model_free_ptq(
    model_stub=MODEL_ID,
    save_directory=SAVE_DIR,
    scheme=QuantizationScheme(
Collaborator

Just use the pre-set scheme

Collaborator Author

we need to set targets

Collaborator

you can set targets outside of the scheme?

Collaborator Author

This is the model_free_ptq API. I don't think so

Collaborator Author

Cleaned this up a bit to use **FP8_BLOCK

mergify bot commented Jan 14, 2026

Documentation update

@mergify mergify bot added the documentation label (Improvements or additions to documentation) Jan 14, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
mergify bot commented Jan 15, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need the dev
optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 15, 2026
mergify bot commented Jan 15, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need the dev
optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 15, 2026
mergify bot commented Jan 15, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need the dev
optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Comment on lines 21 to 36
weights=QuantizationArgs(
    num_bits=8,
    type=QuantizationType.FLOAT,
    strategy=QuantizationStrategy.BLOCK,
    symmetric=True,
    dynamic=False,
    block_structure=[128, 128],
),
input_activations=QuantizationArgs(
    num_bits=8,
    type=QuantizationType.FLOAT,
    strategy=QuantizationStrategy.GROUP,
    symmetric=True,
    dynamic=True,
    observer=None,
    group_size=128,
Collaborator

Is this not just "FP8_BLOCK"?

Collaborator Author

yes but i need to set targets, I can do this more cleanly with **FP8_BLOCK

Collaborator Author

updated

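For reference, a minimal sketch of the cleanup mentioned here, assuming FP8_BLOCK is the preset weights/input_activations dict shipped with compressed-tensors; the import path is an assumption, and the targets regex mirrors config_group_0 above:

from compressed_tensors.quantization import QuantizationScheme
from compressed_tensors.quantization.quant_scheme import FP8_BLOCK  # assumed import path

scheme = QuantizationScheme(
    targets=["re:.*self_attn.(o_proj|q_a_proj|q_b_proj).*"],
    **FP8_BLOCK,  # unpacks the preset weights / input_activations arguments
)
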
)


def merge_configs():
Collaborator

It might be simpler to just build the config from scratch. Since this flow is very specialized to deepseek anyways.

Collaborator Author

yeah, this is more to just show what i'm doing, i don't expect this pr to ever land like this. we can cherry-pick the changes in another PR
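
For context, a hypothetical sketch of the merge described in the PR summary: fold the modelopt scheme from hf_quant_config.json into the compressed-tensors "quantization_config" that model_free_ptq already wrote to config.json. The file names come from the summary; the function body and the abridged NVFP4 group it writes are illustrative assumptions:

import json
from pathlib import Path


def merge_configs(save_directory: str) -> None:
    save_dir = Path(save_directory)
    config_path = save_dir / "config.json"
    config = json.loads(config_path.read_text())

    # the original modelopt checkpoint describes its NVFP4 scheme here; in the
    # real flow the group below would be derived from this file
    _modelopt_cfg = json.loads((save_dir / "hf_quant_config.json").read_text())

    # abridged compressed-tensors description of the NVFP4 mlp experts,
    # mirroring config_group_1 in the resulting config.json shown above
    nvfp4_group = {
        "targets": ["re:.*mlp.*\\.(gate|up|down)_proj$"],
        "weights": {"num_bits": 4, "type": "float", "strategy": "tensor_group",
                    "group_size": 16, "symmetric": True},
        "input_activations": {"num_bits": 4, "type": "float", "strategy": "tensor_group",
                              "group_size": 16, "symmetric": True, "dynamic": "local"},
    }
    config["quantization_config"]["config_groups"]["config_group_1"] = nvfp4_group
    config["quantization_config"]["format"] = "mixed-precision"
    config_path.write_text(json.dumps(config, indent=4) + "\n")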

# validate arguments
model_files = get_checkpoint_files(model_stub)
scheme_name, scheme = validate_scheme(scheme)
ignore = ignore or []
Collaborator

We can just make this (or an empty tuple) the default value, since the ignore list is never mutated.

Collaborator

Maybe should also consider lm_head

Collaborator Author

updated

# - model.layers.3.mlp.experts.0.down_proj.weight
# - model.layers.3.mlp.experts.0.gate_proj.weight
# - model.layers.3.mlp.experts.0.up_proj.weight
if _match_name(module_name, "re:.*mlp.*\.(gate|up|down)_proj$"):
Collaborator

Hopefully we don't plan to leave this in the source code.

Collaborator Author

when i'm done i plan to ask team how frequently we expect requests like this to see if we should look into abstractions to help with this. i don't plan to land this PR as is

Collaborator

I agree with Kyle. We can create a dedicated model_opt conversion tool in ct

for name in list(tensors.keys()):
    module_name, param_name = name.rsplit(".", 1)
    is_linear_weight = param_name == "weight" and not module_name.endswith("norm")
    is_targeted = (is_linear_weight and "Linear" in scheme.targets) or any(
Collaborator

Isn't it more robust and explicit to just make ["Linear"] the default target list?

Collaborator Author

it's just hard to do with the API, user might use preset schemes which don't set targets

# ct moe layer has a hard coded check for "Linear"
scheme.targets = ["Linear"]
if len(scheme.targets) == 0:
    scheme.targets.append("Linear")
Collaborator

I think we should force users to be explicit about targeting linear layers.

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 16, 2026
mergify bot commented Jan 16, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need the dev
optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 16, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
mergify bot commented Jan 18, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need the dev
optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Jan 22, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>