JumanjiEnv
- torchrl.envs.JumanjiEnv(*args, **kwargs)
A Jumanji environment wrapper built from an environment name.
Jumanji provides a vectorized, Jax-based simulation framework. TorchRL's wrapper incurs some overhead for the Jax-to-Torch conversion, but computational graphs can still be built on top of the simulated trajectories, allowing backpropagation through rollouts.
GitHub: https://github.com/instadeepai/jumanji
Docs: https://instadeepai.github.io/jumanji/
Paper: https://arxiv.org/abs/2306.09884
- Parameters:
  env_name (str) – the name of the environment to wrap. Must be part of available_envs.
  categorical_action_encoding (bool, optional) – if True, categorical specs will be converted to the TorchRL equivalent (torchrl.data.Categorical); otherwise a one-hot encoding will be used (torchrl.data.OneHot). Defaults to False.
- Keyword Arguments:
  from_pixels (bool, optional) – not yet supported.
  frame_skip (int, optional) – if provided, indicates for how many steps the same action is to be repeated. The observation returned is the last observation of the sequence, whereas the reward is the sum of rewards across all steps.
  device (torch.device, optional) – if provided, the device on which the data is to be cast. Defaults to torch.device("cpu").
  batch_size (torch.Size, optional) – the batch size of the environment. With jumanji, this indicates the number of vectorized environments. Defaults to torch.Size([]).
  allow_done_after_reset (bool, optional) – if True, the environment is allowed to be done immediately after reset() is called. Defaults to False.
- Variables:
  available_envs – the environments available to build
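The frame_skip semantics documented above (repeat the action, return only the last observation, sum the rewards) can be sketched in plain Python. The helper and the toy `step_fn` below are hypothetical, for illustration only; they are not part of the TorchRL API:

```python
def frame_skip_step(step_fn, state, action, frame_skip):
    """Repeat `action` for `frame_skip` steps: return the final state,
    the last observation, and the sum of rewards collected along the way."""
    total_reward = 0.0
    obs = None
    for _ in range(frame_skip):
        state, obs, reward = step_fn(state, action)
        total_reward += reward
    return state, obs, total_reward

# Toy environment: the state is a step counter, the observation mirrors it,
# and every step yields a reward of 1.0.
def toy_step(state, action):
    return state + 1, state + 1, 1.0

state, obs, reward = frame_skip_step(toy_step, 0, action=0, frame_skip=4)
print(obs, reward)  # last observation (4) and summed reward (4.0)
```

Intermediate observations and rewards are discarded except through the reward sum, which matches the keyword-argument description above.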
Examples
>>> from torchrl.envs import JumanjiEnv
>>> env = JumanjiEnv("Snake-v1")
>>> env.set_seed(0)
>>> td = env.reset()
>>> td["action"] = env.action_spec.rand()
>>> td = env.step(td)
>>> print(td)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
        action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
        done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
        grid: Tensor(shape=torch.Size([12, 12, 5]), device=cpu, dtype=torch.float32, is_shared=False),
        next: TensorDict(
            fields={
                action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                grid: Tensor(shape=torch.Size([12, 12, 5]), device=cpu, dtype=torch.float32, is_shared=False),
                reward: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.float32, is_shared=False),
                state: TensorDict(
                    fields={
                        action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                        body: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False),
                        body_state: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.int32, is_shared=False),
                        fruit_position: TensorDict(
                            fields={
                                col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                                row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                            batch_size=torch.Size([]), device=cpu, is_shared=False),
                        head_position: TensorDict(
                            fields={
                                col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                                row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                            batch_size=torch.Size([]), device=cpu, is_shared=False),
                        key: Tensor(shape=torch.Size([2]), device=cpu, dtype=torch.int32, is_shared=False),
                        length: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        tail: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False)},
                    batch_size=torch.Size([]), device=cpu, is_shared=False),
                step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([]), device=cpu, is_shared=False),
        state: TensorDict(
            fields={
                action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                body: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False),
                body_state: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.int32, is_shared=False),
                fruit_position: TensorDict(
                    fields={
                        col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                    batch_size=torch.Size([]), device=cpu, is_shared=False),
                head_position: TensorDict(
                    fields={
                        col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                    batch_size=torch.Size([]), device=cpu, is_shared=False),
                key: Tensor(shape=torch.Size([2]), device=cpu, dtype=torch.int32, is_shared=False),
                length: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                tail: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([]), device=cpu, is_shared=False),
        step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
        terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([]), device=cpu, is_shared=False)
>>> print(env.available_envs)
['Game2048-v1', 'Maze-v0', 'Cleaner-v0', 'CVRP-v1', 'MultiCVRP-v0', 'Minesweeper-v0', 'RubiksCube-v0', 'Knapsack-v1', 'Sudoku-v0', 'Snake-v1', 'TSP-v1', 'Connector-v2', 'MMST-v0', 'GraphColoring-v0', 'RubiksCube-partly-scrambled-v0', 'RobotWarehouse-v0', 'Tetris-v0', 'BinPack-v2', 'Sudoku-very-easy-v0', 'JobShop-v0']
To take advantage of Jumanji, one usually executes multiple environments at the same time.
>>> from torchrl.envs import JumanjiEnv
>>> env = JumanjiEnv("Snake-v1", batch_size=[10])
>>> env.set_seed(0)
>>> td = env.reset()
>>> td["action"] = env.action_spec.rand()
>>> td = env.step(td)
In the following example, we iteratively test different batch sizes and report the execution time for a short rollout:
Examples
>>> from torch.utils.benchmark import Timer
>>> for batch_size in [4, 16, 128]:
...     timer = Timer(
...         '''
...         env.rollout(100)
...         ''',
...         setup=f'''
...         from torchrl.envs import JumanjiEnv
...         env = JumanjiEnv('Snake-v1', batch_size=[{batch_size}])
...         env.set_seed(0)
...         env.rollout(2)
...         ''')
...     print(batch_size, timer.timeit(number=10))
4 <torch.utils.benchmark.utils.common.Measurement object at 0x1fca91910>
env.rollout(100)
setup: [...]
  Median: 122.40 ms
  2 measurements, 1 runs per measurement, 1 thread
16 <torch.utils.benchmark.utils.common.Measurement object at 0x1ff9baee0>
env.rollout(100)
setup: [...]
  Median: 134.39 ms
  2 measurements, 1 runs per measurement, 1 thread
128 <torch.utils.benchmark.utils.common.Measurement object at 0x1ff9ba7c0>
env.rollout(100)
setup: [...]
  Median: 172.31 ms
  2 measurements, 1 runs per measurement, 1 thread
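Note that the median runtimes above grow far more slowly than the batch size: 32x more environments (4 to 128) costs only about 1.4x in wall-clock time. A quick back-of-the-envelope conversion of the reported medians into environment steps per second makes the benefit of vectorization explicit:

```python
# batch_size -> median wall-clock time (ms) of env.rollout(100), from the
# benchmark output above
medians_ms = {4: 122.40, 16: 134.39, 128: 172.31}

throughput = {}
for bs, ms in medians_ms.items():
    # each rollout performs 100 steps in each of the `bs` vectorized envs
    throughput[bs] = bs * 100 / (ms / 1000.0)
    print(f"batch_size={bs}: ~{throughput[bs]:,.0f} env-steps/s")
```

On these numbers, throughput goes from roughly 3.3k env-steps/s at batch_size=4 to roughly 74k env-steps/s at batch_size=128.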