Task-specific policy in multi-task environments¶
This tutorial details how to use multi-task policies and batched environments.
At the end of this tutorial, you will be capable of writing policies that can compute actions in diverse settings using a distinct set of weights. You will also be able to execute diverse environments in parallel.
from tensordict import LazyStackedTensorDict
from tensordict.nn import TensorDictModule, TensorDictSequential
from torch import nn
from torchrl.envs import CatTensors, Compose, DoubleToFloat, ParallelEnv, TransformedEnv
from torchrl.envs.libs.dm_control import DMControlEnv
from torchrl.modules import MLP
We design two environments: one humanoid that must complete the stand task, and another that must learn to walk.
env1 = DMControlEnv("humanoid", "stand")
env1_obs_keys = list(env1.observation_spec.keys())
env1 = TransformedEnv(
env1,
Compose(
CatTensors(env1_obs_keys, "observation_stand", del_keys=False),
CatTensors(env1_obs_keys, "observation"),
DoubleToFloat(
in_keys=["observation_stand", "observation"],
in_keys_inv=["action"],
),
),
)
env2 = DMControlEnv("humanoid", "walk")
env2_obs_keys = list(env2.observation_spec.keys())
env2 = TransformedEnv(
env2,
Compose(
CatTensors(env2_obs_keys, "observation_walk", del_keys=False),
CatTensors(env2_obs_keys, "observation"),
DoubleToFloat(
in_keys=["observation_walk", "observation"],
in_keys_inv=["action"],
),
),
)
tdreset1 = env1.reset()
tdreset2 = env2.reset()
# With LazyStackedTensorDict, stacking is done in a lazy manner: the original tensordicts
# can still be recovered by indexing the main tensordict
tdreset = LazyStackedTensorDict.lazy_stack([tdreset1, tdreset2], 0)
assert tdreset[0] is tdreset1
print(tdreset[0])
Policy¶
We will design a policy where a backbone reads the "observation" key. Then specific sub-components will read the "observation_stand" and "observation_walk" keys of the stacked tensordicts, if they are present, and pass them through their dedicated sub-networks.
action_dim = env1.action_spec.shape[-1]
policy_common = TensorDictModule(
nn.Linear(67, 64), in_keys=["observation"], out_keys=["hidden"]
)
policy_stand = TensorDictModule(
MLP(67 + 64, action_dim, depth=2),
in_keys=["observation_stand", "hidden"],
out_keys=["action"],
)
policy_walk = TensorDictModule(
MLP(67 + 64, action_dim, depth=2),
in_keys=["observation_walk", "hidden"],
out_keys=["action"],
)
seq = TensorDictSequential(
policy_common, policy_stand, policy_walk, partial_tolerant=True
)
Let us check that our sequence outputs actions for a single env (stand).
seq(env1.reset())
Let us check that our sequence outputs actions for a single env (walk).
seq(env2.reset())
This also works with the stack: now the stand and walk keys have disappeared, because they are not shared by all tensordicts. But TensorDictSequential still performed the operations. Note that the backbone was executed in a vectorized way, not in a loop, which is more efficient.
seq(tdreset)
Executing diverse tasks in parallel¶
We can parallelize the operations if the common key-value pairs share the same specs (in particular their shape and dtype must match: you can't do the following if the observation shapes are different but point to the same key).
If ParallelEnv receives a single env-making function, it will assume that a single task has to be executed. If a list of functions is provided, then it will assume that we are in a multi-task setting.
def env1_maker():
return TransformedEnv(
DMControlEnv("humanoid", "stand"),
Compose(
CatTensors(env1_obs_keys, "observation_stand", del_keys=False),
CatTensors(env1_obs_keys, "observation"),
DoubleToFloat(
in_keys=["observation_stand", "observation"],
in_keys_inv=["action"],
),
),
)
def env2_maker():
return TransformedEnv(
DMControlEnv("humanoid", "walk"),
Compose(
CatTensors(env2_obs_keys, "observation_walk", del_keys=False),
CatTensors(env2_obs_keys, "observation"),
DoubleToFloat(
in_keys=["observation_walk", "observation"],
in_keys_inv=["action"],
),
),
)
env = ParallelEnv(2, [env1_maker, env2_maker])
assert not env._single_task
tdreset = env.reset()
print(tdreset)
print(tdreset[0])
print(tdreset[1]) # should be different
Let's pass the output through our network.
tdreset = seq(tdreset)
print(tdreset)
print(tdreset[0])
print(tdreset[1]) # should be different but all have an "action" key
env.step(tdreset) # executes the env steps in parallel, using the actions computed above
print(tdreset)
print(tdreset[0])
print(tdreset[1]) # the "next" entries have now been written
Rollout¶
td_rollout = env.rollout(100, policy=seq, return_contiguous=False)
td_rollout[:, 0] # tensordict of the first step: only the common keys are shown
td_rollout[0] # tensordict of the first env: the stand obs is present
env.close()
del env