
DMControlWrapper

torchrl.envs.DMControlWrapper(*args, **kwargs)[source]

DeepMind Control environment wrapper.

The DeepMind Control library can be found here: https://github.com/deepmind/dm_control

Paper: https://arxiv.org/abs/2006.12983

Args:

env (dm_control.suite env) – the task environment instance.

Keyword Args:
  • from_pixels (bool, optional) – if True, an attempt will be made to return pixel observations from the environment. By default, these observations are written under the "pixels" entry. Defaults to False.

  • pixels_only (bool, optional) – if True, only the pixel observations will be returned (by default, under the "pixels" entry in the output tensordict). If False, observations (e.g., states) and pixels will be returned whenever from_pixels=True. Defaults to True.

  • frame_skip (int, optional) – if provided, indicates the number of steps for which the same action is to be repeated. The observation returned will be the last observation of the sequence, whereas the reward will be the sum of the rewards across these steps.

  • device (torch.device, optional) – if provided, the device on which the data will be cast. Defaults to torch.device("cpu").

  • batch_size (torch.Size, optional) – the batch size of the environment. Should match the leading dimensions of all observations, done states, rewards, actions and infos. Defaults to torch.Size([]).

  • allow_done_after_reset (bool, optional) – if True, it is tolerated for the environment to be done just after reset() is called. Defaults to False.
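The frame_skip semantics described above (repeat the action, sum the rewards, keep only the last observation) can be illustrated with a minimal, self-contained sketch. This is not TorchRL code: `ToyEnv` and `frame_skip_step` are hypothetical stand-ins for the wrapped environment and the wrapper's internal loop.

```python
class ToyEnv:
    """Hypothetical environment: the observation counts steps, reward is 1 per step."""
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        obs, reward, done = self.t, 1.0, False
        return obs, reward, done


def frame_skip_step(env, action, frame_skip=4):
    """Repeat `action` for `frame_skip` steps, as the frame_skip argument does."""
    total_reward = 0.0
    for _ in range(frame_skip):
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:  # stop early if the episode ends mid-skip
            break
    # Only the last observation is returned; the reward is the sum over all steps.
    return obs, total_reward, done


env = ToyEnv()
obs, reward, done = frame_skip_step(env, action=None, frame_skip=4)
# obs == 4 (last observation), reward == 4.0 (summed over the 4 repeated steps)
```

With frame_skip=4, a single call to the wrapped step consumes four steps of the underlying environment, which is why episode lengths reported by the wrapper are shorter than those of the raw dm_control task.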

Variables:

available_envs (list) – a list of Tuple[str, List[str]] representing the available environment/task pairs.

Examples

>>> from dm_control import suite
>>> from torchrl.envs import DMControlWrapper
>>> env = suite.load("cheetah", "run")
>>> env = DMControlWrapper(env,
...    from_pixels=True, frame_skip=4)
>>> td = env.rand_step()
>>> print(td)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([6]), device=cpu, dtype=torch.float64, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([240, 320, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                position: Tensor(shape=torch.Size([8]), device=cpu, dtype=torch.float64, is_shared=False),
                reward: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.float64, is_shared=False),
                terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                velocity: Tensor(shape=torch.Size([9]), device=cpu, dtype=torch.float64, is_shared=False)},
            batch_size=torch.Size([]),
            device=cpu,
            is_shared=False)},
    batch_size=torch.Size([]),
    device=cpu,
    is_shared=False)
>>> print(env.available_envs)
[('acrobot', ['swingup', 'swingup_sparse']), ('ball_in_cup', ['catch']), ('cartpole', ['balance', 'balance_sparse', 'swingup', 'swingup_sparse', 'three_poles', 'two_poles']), ('cheetah', ['run']), ('finger', ['spin', 'turn_easy', 'turn_hard']), ('fish', ['upright', 'swim']), ('hopper', ['stand', 'hop']), ('humanoid', ['stand', 'walk', 'run', 'run_pure_state']), ('manipulator', ['bring_ball', 'bring_peg', 'insert_ball', 'insert_peg']), ('pendulum', ['swingup']), ('point_mass', ['easy', 'hard']), ('reacher', ['easy', 'hard']), ('swimmer', ['swimmer6', 'swimmer15']), ('walker', ['stand', 'walk', 'run']), ('dog', ['fetch', 'run', 'stand', 'trot', 'walk']), ('humanoid_CMU', ['run', 'stand', 'walk']), ('lqr', ['lqr_2_1', 'lqr_6_2']), ('quadruped', ['escape', 'fetch', 'run', 'walk']), ('stacker', ['stack_2', 'stack_4'])]
