
[pyTorch] Replace the make_empty implementation to use C++ implementation #2666

Open

ptrendx wants to merge 9 commits into NVIDIA:main from ptrendx:pr_unify_make_empty

Conversation

@ptrendx
Member

@ptrendx ptrendx commented Feb 10, 2026

Description

This PR unifies the implementation of QuantizedTensor creation by routing it through the C++ implementation of create_tensor.
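
The reviewer summary and sequence diagram later in this thread suggest the base-class method collapses to a thin wrapper over a single extension call. A minimal sketch of that shape, where the extension import name and the exact keyword signature are assumptions:

import torch
import transformer_engine_torch as tex  # assumed import name of the C++ extension

def make_empty(self, shape, *, dtype=torch.float32, device=None,
               pin_memory=False, requires_grad=False):
    """Construct a quantized tensor with uninitialized data (sketch)."""
    if isinstance(device, str):
        device = torch.device(device)  # normalize str -> torch.device
    # single C++ call replaces the per-class Python implementations
    result = tex.create_empty_quantized_tensor(
        self, shape, dtype, device, pin_memory
    )
    if requires_grad:
        result.requires_grad_(True)
    return result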

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Replaced the Python implementations of make_empty with calls to the C++ create_tensor

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@ptrendx ptrendx requested a review from negvet February 10, 2026 00:17
@ptrendx
Member Author

ptrendx commented Feb 10, 2026

/te-ci L1 pytorch

1 similar comment

"""Construct quantized tensor with uninitialized data"""
raise NotImplementedError(
f"{self.__class__.__name__} class does not implement make_empty function, "
"required for construction of unintialized quantized tensor"
Collaborator

This clear NotImplementedError is beneficial for custom quantizers that do not override make_empty().
Now, if a custom quantizer does not implement make_empty(), it will fail at the C++ convert_quantizer, because there is no registered C++ converter. The C++ failure with NVTE_ERROR("Unexpected type for quantizer") is not as clear as a NotImplementedError.

What about making the C++ error clearer or, even better, adding a check in the base Quantizer.make_empty:

def make_empty(...):
    if getattr(self, "custom", False):
        raise NotImplementedError(
            f"{self.__class__.__name__} does not implement make_empty"
        )
    # ... existing C++ path ...

@ptrendx ptrendx force-pushed the pr_unify_make_empty branch from 98f9681 to 6be430a on February 19, 2026 01:27
Signed-off-by: Przemek Tredak <[email protected]>
@ptrendx ptrendx marked this pull request as ready for review February 19, 2026 22:41
@ptrendx
Member Author

ptrendx commented Feb 19, 2026

/te-ci pytorch L1

@greptile-apps
Contributor

greptile-apps bot commented Feb 19, 2026

Greptile Summary

This PR unifies QuantizedTensor creation by replacing per-class Python make_empty implementations with a single C++ path through a new create_empty_quantized_tensor pybind11 binding. The device and pin_memory parameters are threaded through all Quantizer::create_tensor virtual overrides, and the base Quantizer.make_empty in Python now delegates directly to C++.

Key changes:

  • New create_empty_quantized_tensor C++ function added in cast.cpp and exposed via pybind.cpp
  • create_tensor virtual interface in common.h updated with device and pin_memory defaulting to torch::kCUDA / false
  • All quantizer create_tensor implementations (NoneQuantizer, Float8Quantizer, Float8CurrentScalingQuantizer, Float8BlockQuantizer, MXFP8Quantizer, NVFP4Quantizer) updated in quantizer.cpp
  • ~250 lines of duplicated Python removed from the four tensor-type files

Issue found:

  • In Float8CurrentScalingQuantizer::create_tensor (quantizer.cpp ~line 593), the new device function parameter is shadowed by the pre-existing local variable at::Device device. This will trigger -Wshadow compiler warnings and, if the build uses -Werror=shadow, a compile error. Additionally, in the edge case where neither with_data nor with_transpose is true, the shadowing local resolves to c10::cuda::current_device() rather than the requested device argument, causing the Python tensor object to report the wrong device.

Confidence Score: 2/5

  • Not safe to merge until the device parameter shadowing in Float8CurrentScalingQuantizer::create_tensor is resolved.
  • The overall refactoring is clean and well-structured, but quantizer.cpp introduces a variable shadowing bug in Float8CurrentScalingQuantizer::create_tensor where the new device parameter is shadowed by an existing local variable of the same name. This can cause a compile failure under strict warning settings and a device-mismatch bug in rare call paths.
  • transformer_engine/pytorch/csrc/quantizer.cpp — specifically Float8CurrentScalingQuantizer::create_tensor around line 593.

Sequence Diagram

sequenceDiagram
    participant PY as Python caller
    participant QBase as Quantizer.make_empty (Python base)
    participant TEX as tex.create_empty_quantized_tensor (C++)
    participant QCPP as QuantizerCPP::create_tensor (C++)
    participant AT as at::empty / PyTorch allocator

    PY->>QBase: make_empty(shape, dtype, device, pin_memory)
    QBase->>QBase: normalize device (str → torch.device)
    QBase->>TEX: create_empty_quantized_tensor(quantizer, shape, dtype, device, pin_memory)
    TEX->>QCPP: quantizer_cpp->create_tensor(shape, te_dtype, device, pin_memory)
    QCPP->>AT: at::empty(shape, opts.device(device).pinned_memory(pin_memory))
    AT-->>QCPP: allocated tensor(s)
    QCPP-->>TEX: (TensorWrapper, py::object)
    TEX-->>QBase: py::object (QuantizedTensor)
    QBase->>QBase: result.requires_grad_(True) if requires_grad
    QBase-->>PY: QuantizedTensor
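
To make the flow above concrete, a hedged usage sketch follows; the Float8Quantizer constructor arguments and import paths are assumptions and may differ across Transformer Engine versions:

import torch
import transformer_engine_torch as tex  # assumed import name of the C++ extension
from transformer_engine.pytorch.tensor.float8_tensor import Float8Quantizer  # assumed path

# Hypothetical end-to-end call: any quantizer now reaches the same
# C++ create_tensor through the base-class make_empty.
quantizer = Float8Quantizer(
    scale=torch.ones(1, device="cuda"),
    amax=torch.zeros(1, device="cuda"),
    fp8_dtype=tex.DType.kFloat8E4M3,
)
empty = quantizer.make_empty(
    (1024, 1024),          # shape of the uninitialized tensor
    dtype=torch.bfloat16,  # high-precision dtype the data dequantizes to
    device="cuda",         # str is normalized to torch.device in the wrapper
    pin_memory=False,
)
assert empty.device.type == "cuda"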

Last reviewed commit: 4abf5a8

Contributor

@greptile-apps greptile-apps bot left a comment

10 files reviewed, no comments


    pin_memory,
)
if requires_grad:
    result.requires_grad_(True)
Collaborator

Doing this in C++ itself might be faster, since we are going to call the QuantizedTensor.__new__ method with the requires_grad argument anyway. Calling this from Python for a custom quantized tensor has severe Python overheads.

Collaborator

But I see it can get quite complicated since we might have to change the create_tensor API to accept the requires_grad argument.
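
For illustration, the Python side of that change might look like the sketch below; this is hypothetical, since per the comment above the current create_tensor API does not accept a requires_grad argument:

# Hypothetical: if the C++ path accepted requires_grad directly, the
# wrapper could become a single extension call and drop the extra
# requires_grad_() round-trip from Python (sketch, not the current API):
def make_empty(self, shape, *, dtype=torch.float32, device=None,
               pin_memory=False, requires_grad=False):
    if isinstance(device, str):
        device = torch.device(device)
    return tex.create_empty_quantized_tensor(
        self, shape, dtype, device, pin_memory, requires_grad
    )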

@ptrendx
Member Author

ptrendx commented Mar 4, 2026

/te-ci pytorch

ptrendx and others added 2 commits March 4, 2026 16:43
@ptrendx
Member Author

ptrendx commented Mar 5, 2026

/te-ci pytorch

@greptile-apps
Contributor

greptile-apps bot commented Mar 5, 2026

Additional Comments (1)

transformer_engine/pytorch/csrc/quantizer.cpp, line 596
device parameter shadowed by pre-existing local variable

The new device function parameter (line 562) is shadowed by the local variable at::Device device declared here. In C++, this is a scoping issue: the local declaration shadows the parameter, so kwargs["device"] on line 632 uses the local variable instead of the caller's argument.

This creates two problems:

  1. If the build uses -Wshadow -Werror, this will be a compile error.
  2. More critically, in the edge case where both with_data == false and with_transpose == false, the device will resolve to c10::cuda::current_device() rather than the device parameter passed by the caller. This causes the Python Float8Tensor object to report the wrong device.

Fix: Remove the shadowing local variable and use the parameter directly:

  // Construct Python FP8 tensor
  py::object out_py;
  py::object scale_inv_py = py::cast(scale_inv_tensor);
  py::object data_py = with_data ? py::cast(data_tensor) : py::none();
  py::object transpose_py = with_transpose ? py::cast(transpose_tensor) : py::none();
  if (internal) {
    // ...
    kwargs["quantizer"] = this->quantizer;

    py::tuple args(0);
    // ...
  } else {
    // ...
    kwargs["quantizer"] = this->quantizer;
    kwargs["device"] = py::cast(device);

The local at::Device device declaration (lines 593-596) should be removed entirely, as the device parameter already holds the requested device.

