EndOfLifeTransform
- class torchrl.envs.transforms.EndOfLifeTransform(eol_key: NestedKey = 'end-of-life', lives_key: NestedKey = 'lives', done_key: NestedKey = 'done', eol_attribute='unwrapped.ale.lives')[source]
Registers the end-of-life signal from a Gym environment that has a lives method.
Proposed by DeepMind for DQN and co. It helps value estimation.
- Parameters:
  - eol_key (NestedKey, optional) – the key where the end-of-life signal should be written. Defaults to "end-of-life".
  - lives_key (NestedKey, optional) – the key where the lives count should be written. Defaults to "lives".
  - done_key (NestedKey, optional) – a "done" key in the parent environment's done_spec, from which the done value can be retrieved. This key must be unique and its shape must match the shape of the end-of-life entry. Defaults to "done".
  - eol_attribute (str, optional) – the location of the "lives" count in the Gym environment. Defaults to "unwrapped.ale.lives". Supported attribute types are integer/array-like objects or callables that return such values.
Note
This transform should be used with Gym environments that expose env.unwrapped.ale.lives.
Examples:
>>> from torchrl.envs.libs.gym import GymEnv
>>> from torchrl.envs.transforms.transforms import TransformedEnv
>>> from torchrl.envs.transforms import EndOfLifeTransform
>>> env = GymEnv("ALE/Breakout-v5")
>>> env.rollout(100)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([100, 4]), device=cpu, dtype=torch.int64, is_shared=False),
        done: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([100, 210, 160, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([100]),
            device=cpu,
            is_shared=False),
        pixels: Tensor(shape=torch.Size([100, 210, 160, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([100]),
    device=cpu,
    is_shared=False)
>>> eol_transform = EndOfLifeTransform()
>>> env = TransformedEnv(env, eol_transform)
>>> env.rollout(100)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([100, 4]), device=cpu, dtype=torch.int64, is_shared=False),
        done: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        eol: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        lives: Tensor(shape=torch.Size([100]), device=cpu, dtype=torch.int64, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                end-of-life: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                lives: Tensor(shape=torch.Size([100]), device=cpu, dtype=torch.int64, is_shared=False),
                pixels: Tensor(shape=torch.Size([100, 210, 160, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([100]),
            device=cpu,
            is_shared=False),
        pixels: Tensor(shape=torch.Size([100, 210, 160, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([100, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([100]),
    device=cpu,
    is_shared=False)
The typical usage of this transform is to replace the "done" state with "end-of-life" within the loss module. The end-of-life signal is not registered within the done_spec because it should not instruct the environment to reset.
Examples:
>>> import torch
>>> from torchrl.objectives import DQNLoss
>>> module = torch.nn.Identity()  # used as a placeholder
>>> loss = DQNLoss(module, action_space="categorical")
>>> loss.set_keys(done="end-of-life", terminated="end-of-life")
>>> # equivalently
>>> eol_transform.register_keys(loss)
- register_keys(loss_or_advantage: LossModule)[source]
Registers the end-of-life key at the appropriate places within the loss.
- Parameters:
loss_or_advantage (torchrl.objectives.LossModule or torchrl.objectives.value.ValueEstimatorBase) – a module to instruct what the end-of-life key is.
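As a quick illustration, a minimal sketch mirroring the example above (the Identity module is only a placeholder for a real Q-value network, and the tensor_keys inspection at the end is an assumption about how the registered key can be checked): register_keys is equivalent to calling set_keys(done=eol_key, terminated=eol_key) on the loss.
>>> import torch
>>> from torchrl.objectives import DQNLoss
>>> from torchrl.envs.transforms import EndOfLifeTransform
>>> eol_transform = EndOfLifeTransform()
>>> loss = DQNLoss(torch.nn.Identity(), action_space="categorical")  # placeholder network
>>> eol_transform.register_keys(loss)
>>> done_key = loss.tensor_keys.done  # expected to be "end-of-life" after registration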
- transform_observation_spec(observation_spec)[source]
Transforms the observation spec such that the resulting spec matches the transform mapping.
- Parameters:
observation_spec (TensorSpec) – spec before the transform
- Returns:
expected spec after the transform
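For illustration, a minimal sketch of the expected effect (reusing the ALE/Breakout-v5 example above; the exact spec types are not asserted): after wrapping, the environment's observation spec gains entries for the eol_key and lives_key.
>>> from torchrl.envs.libs.gym import GymEnv
>>> from torchrl.envs.transforms import EndOfLifeTransform, TransformedEnv
>>> env = TransformedEnv(GymEnv("ALE/Breakout-v5"), EndOfLifeTransform())
>>> # transform_observation_spec is applied when the wrapped env builds its specs
>>> eol_spec = env.observation_spec["end-of-life"]  # shaped like the parent "done" entry
>>> lives_spec = env.observation_spec["lives"]      # integer spec for the remaining lives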