Once-for-All

Get Supernet

You can quickly load a supernet as follows:

import torch
super_net_name = "ofa_supernet_mbv3_w10" 
# other options: 
#    ofa_supernet_resnet50 / 
#    ofa_supernet_mbv3_w12 / 
#    ofa_supernet_proxyless

super_net = torch.hub.load('mit-han-lab/once-for-all', super_net_name, pretrained=True).eval()
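
The returned supernet is an ordinary torch.nn.Module, so standard inspection works on it. A minimal sanity-check sketch (the parameter count printed here is illustrative, not a documented figure):

# Count the parameters of the full supernet as a quick sanity check
n_params = sum(p.numel() for p in super_net.parameters())
print(f"{super_net_name}: {n_params / 1e6:.1f}M parameters in the supernet")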
| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | Kernel Size |
|---|---|---|---|---|---|---|
| ofa_resnet50 | ResNet50D | 128 – 224 | 0.65, 0.8, 1.0 | 0, 1, 2 | 0.2, 0.25, 0.35 | 3 |
| ofa_mbv3_d234_e346_k357_w1.0 | MobileNetV3 | 128 – 224 | 1.0 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |
| ofa_mbv3_d234_e346_k357_w1.2 | MobileNetV3 | 160 – 224 | 1.2 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |
| ofa_proxyless_d234_e346_k357_w1.3 | ProxylessNAS | 128 – 224 | 1.3 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |

Below is the usage for sampling / selecting a sub-network from the supernet:

# Randomly sample sub-networks from OFA network
super_net.sample_active_subnet()
random_subnet = super_net.get_active_subnet(preserve_weight=True)
    
# Manually set the sub-network
super_net.set_active_subnet(ks=7, e=6, d=4)
manual_subnet = super_net.get_active_subnet(preserve_weight=True)
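
The scalar arguments above apply one setting to every block. In the mit-han-lab/once-for-all repo, set_active_subnet also accepts per-block / per-stage lists, which lets you pick values directly from the design-space table above. A sketch, under the assumption that the w1.0 MobileNetV3 supernet has 5 stages with up to 4 blocks each (20 blocks total):

import random

# Per-block kernel sizes and expand ratios, and per-stage depths, drawn
# from the MobileNetV3 design space (ks in {3,5,7}, e in {3,4,6}, d in {2,3,4})
ks = [random.choice([3, 5, 7]) for _ in range(20)]
e = [random.choice([3, 4, 6]) for _ in range(20)]
d = [random.choice([2, 3, 4]) for _ in range(5)]
super_net.set_active_subnet(ks=ks, e=e, d=d)
custom_subnet = super_net.get_active_subnet(preserve_weight=True)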

Get Specialized Architecture

import torch

# Load an architecture specialized for a certain platform
net_config = "resnet50D_MAC_4_1B"

specialized_net, image_size = torch.hub.load('mit-han-lab/once-for-all', net_config, pretrained=True)
specialized_net.eval()

More models and configurations can be found in once-for-all/model-zoo and can be obtained through the following script:

ofa_specialized_get = torch.hub.load('mit-han-lab/once-for-all', "ofa_specialized_get")
model, image_size = ofa_specialized_get("flops@595M_top1@80.0_finetune@75", pretrained=True)
model.eval()
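
Both hub calls above also return the input resolution the specialized network was trained with, and feeding images at that resolution is what the reported accuracies assume. A minimal sketch of building matching preprocessing (the 256/224 resize-to-crop ratio mirrors the standard ImageNet pipeline and is an assumption here):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(int(image_size * 256 / 224)),  # keep the usual resize/crop ratio
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])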

The model's prediction can be evaluated as follows:

# Download an example image from pytorch website
import urllib.request
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)


# sample execution (requires torchvision)
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model

# move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)
# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
print(probabilities)
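
To map these probabilities to human-readable labels, you can fetch the ImageNet class list shipped with the pytorch/hub repository and take the top-5 predictions:

import urllib.request

# Download the ImageNet class labels and read them into a list
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt",
    "imagenet_classes.txt")
with open("imagenet_classes.txt") as f:
    categories = [line.strip() for line in f]

# Print the top-5 categories with their probabilities
top5_prob, top5_catid = torch.topk(probabilities, 5)
for prob, catid in zip(top5_prob, top5_catid):
    print(categories[catid], prob.item())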

Model Description

Once-for-All is from Once for All: Train One Network and Specialize it for Efficient Deployment. Conventional approaches either design specialized neural networks manually or use neural architecture search (NAS) to find them, then train each one from scratch for every deployment scenario, which is computationally prohibitive (producing CO2 emission as much as 5 cars' lifetime) and therefore does not scale. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search. Across diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy while being 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet in measured latency), while reducing GPU hours and CO2 emission by many orders of magnitude. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting (<600M MACs).
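
Because the supernet is trained only once, specializing it for a new deployment target is purely a search over sub-networks, with no retraining. As a toy illustration of that decoupling (the paper searches with an accuracy predictor and measured latency; the parameter-count proxy and budget below are illustrative stand-ins):

# Naive random search for the largest sub-network under a parameter budget.
# Assumes sample_active_subnet() returns the sampled configuration dict,
# as in the mit-han-lab/once-for-all repo.
budget = 6_000_000  # illustrative parameter budget
best_cfg, best_params = None, 0
for _ in range(20):
    cfg = super_net.sample_active_subnet()
    candidate = super_net.get_active_subnet(preserve_weight=True)
    n = sum(p.numel() for p in candidate.parameters())
    if best_params < n <= budget:
        best_cfg, best_params = cfg, n
print(best_cfg, best_params)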

References

@inproceedings{
  cai2020once,
  title={Once for All: Train One Network and Specialize it for Efficient Deployment},
  author={Han Cai and Chuang Gan and Tianzhe Wang and Zhekai Zhang and Song Han},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://arxiv.org/pdf/1908.09791.pdf}
}

Once-for-all (OFA) decouples training and search, and achieves efficient inference across diverse edge devices and resource constraints.

Model type: Scriptable | Vision
Submitted by: MIT Han Lab