Description
In prepare_pt2e, there is a call to _fuse_linear_bn_ (see quantize_pt2e.py) that unconditionally fuses BatchNorm statistics into the Linear weight tensor. The problem is that this fusion silently produces an invalid model in some cases.
More context
The fusion is correct only when both layers operate on the same dimension. A Linear layer acts strictly on the last dimension of its input. BatchNorm, on the other hand, acts on the channels dimension (the second one in most cases), except for the single case of BatchNorm1d applied to an input of shape (N, C) (see the official PyTorch documentation for more info). Only in that one case do BatchNorm and Linear operate on the same dimension and are correctly fusable. When we use an incompatible Linear+BatchNorm pair whose last dimension size accidentally matches the number of channels, the fusion silently produces a model with an incorrect Linear weight tensor.
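For reference, a minimal sketch of the one valid case: BatchNorm1d after a Linear layer on a 2D (N, C) input. It folds the BatchNorm stats into the Linear parameters using the fusing equation quoted in the example below (not the exact helper used in quantize_pt2e.py) and checks that the fused layer reproduces BatchNorm(Linear(x)):

import torch
import torch.nn as nn

torch.manual_seed(0)

lin = nn.Linear(in_features=3, out_features=5)
bn = nn.BatchNorm1d(num_features=5)   # matches lin.out_features

# Give BatchNorm non-trivial running stats so the check is meaningful.
bn.running_mean = torch.randn(5)
bn.running_var = torch.rand(5) + 0.5
bn.eval()

# Fold the BatchNorm stats into the Linear parameters, per output feature:
# w_fused = w * (gamma / sqrt(var + eps))
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
w_fused = lin.weight * scale[:, None]
b_fused = (lin.bias - bn.running_mean) * scale + bn.bias

x = torch.randn(4, 3)                 # (N, C) input -> Linear output is (N, 5)
ref = bn(lin(x))                      # BatchNorm1d normalizes the dim Linear produced
fused = torch.nn.functional.linear(x, w_fused, b_fused)
print(torch.allclose(ref, fused, atol=1e-6))   # True: fusion is valid here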
Example
In the typical scenario, the last dimension won't match the number of channels and the fusion will fail with an error:
import torch
import torch.nn as nn

inp = torch.randn((2, 2, 3))                    # (N, C, L)
lin = nn.Linear(in_features=3, out_features=5)  # Weight of shape (5, 3)
bn = nn.BatchNorm1d(num_features=2)             # Running var of shape (2,)

# With the following fusing equation:
#   linear_w_fused = linear_w * (gamma / sqrt(var + eps))
# and the following shapes:
#   linear_w.shape: (5, 3); var.shape: (2,)
# the multiplication cannot be performed, so the problem is visible.

But when the number of channels accidentally matches the last dimension (e.g. inp = torch.randn((2, 5, 3)) with a BatchNorm over 5 channels), the quantization passes and silently produces incorrect Linear weights.
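To make that silent-failure mode concrete, here is a small sketch (my own construction, not taken verbatim from the repro above) where the channel count of a 3D activation happens to equal lin.out_features. The fusing equation goes through shape-wise, but the fused Linear no longer matches BatchNorm(Linear(x)) because the stats are folded into the wrong dimension:

import torch
import torch.nn as nn

torch.manual_seed(0)

lin = nn.Linear(in_features=3, out_features=5)
bn = nn.BatchNorm1d(num_features=5)   # 5 channels, accidentally equal to lin.out_features
bn.running_mean = torch.randn(5)
bn.running_var = torch.rand(5) + 0.5
bn.eval()

x = torch.randn(2, 5, 3)              # (N, C, L) with C == lin.out_features
ref = bn(lin(x))                      # BatchNorm normalizes dim 1, Linear produced dim -1

# The fusing equation is shape-compatible here, so it runs without error ...
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
w_fused = lin.weight * scale[:, None]
b_fused = (lin.bias - bn.running_mean) * scale + bn.bias
fused = torch.nn.functional.linear(x, w_fused, b_fused)

# ... but the result is wrong: normalization was applied along the last dim instead of the channel dim.
print(torch.allclose(ref, fused, atol=1e-6))   # False
print((ref - fused).abs().max())               # noticeably non-zero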