Conv2dNormActivation
- class torchvision.ops.Conv2dNormActivation(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]] = 3, stride: Union[int, Tuple[int, int]] = 1, padding: Optional[Union[int, Tuple[int, int], str]] = None, groups: int = 1, norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, dilation: Union[int, Tuple[int, int]] = 1, inplace: Optional[bool] = True, bias: Optional[bool] = None)[source]
Configurable block used for Convolution2d-Normalization-Activation blocks.
- Parameters:
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the Convolution-Normalization-Activation block
kernel_size (int, optional) – Size of the convolving kernel. Default: 3
stride (int, optional) – Stride of the convolution. Default: 1
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: None, in which case it will be calculated as padding = (kernel_size - 1) // 2 * dilation
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
norm_layer (Callable[..., torch.nn.Module], optional) – Normalization layer that will be stacked on top of the convolution layer. If None, this layer won't be used. Default: torch.nn.BatchNorm2d
activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the convolution layer. If None, this layer won't be used. Default: torch.nn.ReLU
dilation (int) – Spacing between kernel elements. Default: 1
inplace (bool) – Parameter for the activation layer, which can optionally do the operation in-place. Default: True
bias (bool, optional) – Whether to use bias in the convolution layer. By default, bias is included if norm_layer is None.
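
A minimal usage sketch. The channel counts, kernel size, stride, and the SiLU activation below are illustrative choices, not defaults taken from the reference above:

```python
import torch
from torch import nn
from torchvision.ops import Conv2dNormActivation

# Default block: Conv2d(3x3, padding computed as (kernel_size - 1) // 2 * dilation)
# followed by BatchNorm2d and ReLU. Because norm_layer is not None, the
# convolution is created without a bias.
block = Conv2dNormActivation(in_channels=3, out_channels=16)

# Customized block: 5x5 kernel, stride 2, and a SiLU activation layer.
custom = Conv2dNormActivation(
    in_channels=16,
    out_channels=32,
    kernel_size=5,
    stride=2,
    norm_layer=nn.BatchNorm2d,
    activation_layer=nn.SiLU,
)

x = torch.randn(1, 3, 64, 64)
y = custom(block(x))
print(y.shape)  # torch.Size([1, 32, 32, 32]) -- stride 2 halves the spatial size
```

Since the automatic padding keeps the spatial size unchanged for stride 1, the only downsampling in this sketch comes from the stride-2 block.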