
ModelBasedEnvBase

class torchrl.envs.ModelBasedEnvBase(*args, **kwargs)[source]

Base environment for Model-Based RL state-of-the-art (SOTA) implementations.

A wrapper around the model of an MBRL algorithm. It is meant to give an environment framework to a world model (including, but not limited to, observation, reward, done state and safety constraint models) and to make it behave like a classical environment.

This is a base class for other environments: it should not be used directly.

Examples

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.data import Composite, Unbounded
>>> from torchrl.envs import ModelBasedEnvBase
>>> class MyMBEnv(ModelBasedEnvBase):
...     def __init__(self, world_model, device="cpu", dtype=None, batch_size=None):
...         super().__init__(world_model, device=device, dtype=dtype, batch_size=batch_size)
...         self.observation_spec = Composite(
...             hidden_observation=Unbounded((4,))
...         )
...         self.state_spec = Composite(
...             hidden_observation=Unbounded((4,)),
...         )
...         self.action_spec = Unbounded((1,))
...         self.reward_spec = Unbounded((1,))
...
...     def _reset(self, tensordict: TensorDict) -> TensorDict:
...         tensordict = TensorDict(
...             batch_size=self.batch_size,
...             device=self.device,
...         )
...         tensordict = tensordict.update(self.state_spec.rand())
...         tensordict = tensordict.update(self.observation_spec.rand())
...         return tensordict
>>> # This environment is used as follows:
>>> import torch.nn as nn
>>> from torchrl.modules import MLP, WorldModelWrapper
>>> world_model = WorldModelWrapper(
...     TensorDictModule(
...         MLP(out_features=4, activation_class=nn.ReLU, activate_last_layer=True, depth=0),
...         in_keys=["hidden_observation", "action"],
...         out_keys=["hidden_observation"],
...     ),
...     TensorDictModule(
...         nn.Linear(4, 1),
...         in_keys=["hidden_observation"],
...         out_keys=["reward"],
...     ),
... )
>>> env = MyMBEnv(world_model)
>>> tensordict = env.rollout(max_steps=10)
>>> print(tensordict)
TensorDict(
    fields={
        action: Tensor(torch.Size([10, 1]), dtype=torch.float32),
        done: Tensor(torch.Size([10, 1]), dtype=torch.bool),
        hidden_observation: Tensor(torch.Size([10, 4]), dtype=torch.float32),
        next: LazyStackedTensorDict(
            fields={
                hidden_observation: Tensor(torch.Size([10, 4]), dtype=torch.float32)},
            batch_size=torch.Size([10]),
            device=cpu,
            is_shared=False),
        reward: Tensor(torch.Size([10, 1]), dtype=torch.float32)},
    batch_size=torch.Size([10]),
    device=cpu,
    is_shared=False)
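
The rollout stacks the transitions along the first dimension, so trajectory fields can be read directly off the result by key; a minimal sketch using the rollout above:

>>> tensordict["action"].shape  # 10 steps of 1-dimensional actions
torch.Size([10, 1])
>>> tensordict["next", "hidden_observation"].shape  # nested keys are addressed with tuples
torch.Size([10, 4])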
Attributes

• observation_spec (Composite): sampling spec of the observations;

• action_spec (TensorSpec): sampling spec of the actions;

• reward_spec (TensorSpec): sampling spec of the rewards;

• input_spec (Composite): sampling spec of the inputs;

• batch_size (torch.Size): batch size to be used by the environment. If not set, the environment accepts tensordicts of all batch sizes.

• device (torch.device): device where the environment inputs and outputs are expected to live
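
The specs can be queried and sampled directly, which is a convenient way to check that an environment is wired correctly; a minimal sketch, reusing the env constructed in the example above:

>>> env.action_spec.rand().shape  # draw a random action from the spec
torch.Size([1])
>>> env.batch_size  # no batch size was set here, so it is empty
torch.Size([])
>>> env.device
device(type='cpu')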

Parameters:
  • world_model (nn.Module) – model that generates world states and their corresponding rewards;

  • params (List[torch.Tensor], optional) – list of parameters of the world model;

  • buffers (List[torch.Tensor], optional) – list of buffers of the world model;

  • device (torch.device, optional) – device where the environment inputs and outputs are expected to live

  • dtype (torch.dtype, optional) – dtype of the environment inputs and outputs

  • batch_size (torch.Size, optional) – number of environments contained in the instance

  • run_type_check (bool, optional) – whether to run type checks on the step of the environment
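
The world_model passed to the constructor is expected to read the current state and action from the input tensordict and write back the updated state and the reward. A minimal sketch of that contract, calling the world_model from the example above outside the environment:

>>> td = TensorDict({"hidden_observation": torch.randn(4), "action": torch.randn(1)}, batch_size=[])
>>> td = world_model(td)  # writes the updated hidden_observation and the reward
>>> sorted(td.keys())
['action', 'hidden_observation', 'reward']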

torchrl.envs.step(TensorDict -> TensorDict)

Runs a step in the environment

torchrl.envs.reset(TensorDict, optional -> TensorDict)

Resets the environment

torchrl.envs.set_seed(int -> int)

Sets the seed of the environment

torchrl.envs.rand_step(TensorDict, optional -> TensorDict)

Runs a random step given the action spec

torchrl.envs.rollout(Callable, ... -> TensorDict)

Executes a rollout in the environment with the given policy (random steps if no policy is provided)
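
Taken together, these methods follow the usual EnvBase workflow; a minimal usage sketch, assuming the MyMBEnv and world_model defined in the example above:

>>> env = MyMBEnv(world_model)
>>> td = env.reset()  # calls the custom _reset to sample an initial state
>>> td = env.rand_step(td)  # draws a random action from action_spec and steps
>>> td = env.rollout(max_steps=5)  # no policy is given, so random steps are used
>>> td["next", "hidden_observation"].shape
torch.Size([5, 4])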
