ConvTranspose1d

class torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)[source]

Applies a 1D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation, see ConvTranspose1d.
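The shape semantics match the floating-point ConvTranspose1d: the output length is `L_out = (L_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1`. A quick sanity check of that formula against the float module:

```python
import torch
import torch.nn as nn

# Output-length formula for 1D transposed convolution, as documented
# for the floating-point ConvTranspose1d:
#   L_out = (L_in - 1)*stride - 2*padding
#           + dilation*(kernel_size - 1) + output_padding + 1
def convtranspose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                            output_padding=0, dilation=1):
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

m = nn.ConvTranspose1d(16, 33, 3, stride=2)
x = torch.randn(20, 16, 50)
assert m(x).shape[-1] == convtranspose1d_out_len(50, 3, stride=2)  # 101
```

The quantized module follows the same shape rules, so this helper predicts its output length as well.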

Note

Currently only the QNNPACK engine is implemented. Please set torch.backends.quantized.engine = 'qnnpack'.

For special notes, please see Conv1d.

Variables
  • weight (Tensor) – packed tensor derived from the learnable weight parameter.

  • scale (Tensor) – scalar for the output scale

  • zero_point (Tensor) – scalar for the output zero point

See ConvTranspose2d for other attributes.
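These attributes can be inspected directly on a freshly constructed module; a minimal sketch, assuming a build with QNNPACK available (the output scale and zero point keep their defaults of 1.0 and 0 until set from a calibrated float model):

```python
import torch
from torch.ao.nn import quantized as nnq

# QNNPACK is currently the only engine implementing quantized
# transposed convolutions.
torch.backends.quantized.engine = 'qnnpack'

m = nnq.ConvTranspose1d(16, 33, 3, stride=2)

# Output quantization parameters (defaults until calibration sets them).
print(m.scale, m.zero_point)

# weight() unpacks the packed weight into a quantized tensor of shape
# (in_channels, out_channels // groups, kernel_size).
print(m.weight().shape)
```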

Examples

>>> import torch
>>> torch.backends.quantized.engine = 'qnnpack'
>>> from torch.ao.nn import quantized as nnq
>>> # with a size-3 kernel and stride 2
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # a larger kernel with unit stride and padding
>>> m = nnq.ConvTranspose1d(16, 33, 5, stride=1, padding=2)
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
