TenCrop
- class torchvision.transforms.TenCrop(size, vertical_flip=False)[source]
Crop the given image into its four corners and the central crop, plus the flipped version of each (horizontal flipping is used by default). If the image is a torch Tensor, it is expected to have shape […, H, W], where … means an arbitrary number of leading dimensions.
Note
This transform returns a tuple of images, which may result in a mismatch between the number of inputs and targets your Dataset returns. See the example below for how to handle this.
- Parameters:
  - size (sequence or int) – Desired output size of the crop. If size is an int instead of a sequence like (h, w), a square crop (size, size) is made. If a sequence of length 1 is provided, it is interpreted as (size[0], size[0]).
  - vertical_flip (bool) – Use vertical flipping instead of horizontal.
Example
>>> transform = Compose([
>>>     TenCrop(size),  # this is a tuple of PIL Images
>>>     Lambda(lambda crops: torch.stack([PILToTensor()(crop) for crop in crops]))  # returns a 4D tensor
>>> ])
>>> # In your test loop you can do the following:
>>> input, target = batch  # input is a 5d tensor, target is 2d
>>> bs, ncrops, c, h, w = input.size()
>>> result = model(input.view(-1, c, h, w))  # fuse batch size and ncrops
>>> result_avg = result.view(bs, ncrops, -1).mean(1)  # avg over crops
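As a sanity check, here is a minimal, self-contained sketch of the same pipeline (the 64×64 dummy image, 24-pixel crop size, batch of 8, and toy linear model are illustrative assumptions, not part of the API) showing that the transform yields a stack of ten crops and that the reshaping above recovers one averaged prediction per image:

>>> import torch
>>> import torch.nn as nn
>>> from PIL import Image
>>> from torchvision.transforms import Compose, TenCrop, Lambda, PILToTensor
>>> transform = Compose([
>>>     TenCrop(24),  # tuple of 10 PIL crops: 4 corners, the center, and their flips
>>>     Lambda(lambda crops: torch.stack([PILToTensor()(c) for c in crops])),  # (10, C, 24, 24)
>>> ])
>>> crops = transform(Image.new("RGB", (64, 64)))  # dummy 64x64 RGB image
>>> crops.shape
torch.Size([10, 3, 24, 24])
>>> input = torch.stack([crops] * 8).float()  # simulate a batch of 8 images -> (8, 10, 3, 24, 24)
>>> bs, ncrops, c, h, w = input.size()
>>> model = nn.Sequential(nn.Flatten(), nn.Linear(c * h * w, 5))  # toy 5-class model
>>> result = model(input.view(-1, c, h, w))  # (80, 5): batch size and ncrops fused
>>> result_avg = result.view(bs, ncrops, -1).mean(1)  # (8, 5): average over the 10 crops
>>> result_avg.shape
torch.Size([8, 5])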