ActionDiscretizer
- class torchrl.envs.transforms.ActionDiscretizer(num_intervals: int | torch.Tensor, action_key: NestedKey = 'action', out_action_key: NestedKey = None, sampling=None, categorical: bool = True)
A transform to discretize a continuous action space.
This transform makes it possible to use algorithms designed for discrete action spaces, such as DQN, in environments with a continuous action space.
- Parameters:
  - num_intervals (int or torch.Tensor) – the number of discrete values for each element of the action space. If a single integer is provided, all action items are sliced into the same number of elements. If a tensor is provided, it must have the same number of elements as the action space (i.e., the length of the num_intervals tensor must match the last dimension of the action space).
  - action_key (NestedKey, optional) – the action key to use. Points to the action of the parent environment (the floating-point action). Defaults to "action".
  - out_action_key (NestedKey, optional) – the key where the discrete action should be written. If None is provided, it defaults to the value of action_key. If the two keys do not match, the continuous action spec is moved from the full_action_spec environment attribute to the full_state_spec container, since only the discrete action should be sampled for an action to be taken. Providing out_action_key ensures that the floating-point action remains available for logging.
  - sampling (ActionDiscretizer.SamplingStrategy, optional) – an element of the ActionDiscretizer.SamplingStrategy IntEnum (MEDIAN, LOW, HIGH or RANDOM). Indicates how the continuous action should be sampled within the provided interval; see the sketch after this list.
  - categorical (bool, optional) – if False, one-hot encoding is used. Defaults to True.
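The mapping from a discrete index back to a continuous value can be pictured as follows. This is a minimal sketch for a single action dimension, not TorchRL's implementation; the bounds, interval count, and helper name are illustrative only:

import torch

# Illustrative only: split one action dimension bounded in [low, high]
# into `num_intervals` equal intervals, then recover a continuous action
# from a discrete index under each SamplingStrategy.
low, high, num_intervals = -1.0, 1.0, 5
edges = torch.linspace(low, high, num_intervals + 1)  # interval boundaries

def index_to_action(index: int, strategy: str = "median") -> torch.Tensor:
    left, right = edges[index], edges[index + 1]
    if strategy == "low":     # SamplingStrategy.LOW: left edge of the interval
        return left
    if strategy == "high":    # SamplingStrategy.HIGH: right edge of the interval
        return right
    if strategy == "random":  # SamplingStrategy.RANDOM: uniform within the interval
        return left + (right - left) * torch.rand(())
    return (left + right) / 2  # SamplingStrategy.MEDIAN: interval midpoint

print(index_to_action(2))  # tensor(0.) -- midpoint of the middle interval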
Examples
>>> from torchrl.envs import GymEnv, check_env_specs
>>> from torchrl.envs.transforms import ActionDiscretizer
>>> import torch
>>> base_env = GymEnv("HalfCheetah-v4")
>>> num_intervals = torch.arange(5, 11)
>>> categorical = True
>>> sampling = ActionDiscretizer.SamplingStrategy.MEDIAN
>>> t = ActionDiscretizer(
...     num_intervals=num_intervals,
...     categorical=categorical,
...     sampling=sampling,
...     out_action_key="action_disc",
... )
>>> env = base_env.append_transform(t)
>>> print(env)
TransformedEnv(
    env=GymEnv(env=HalfCheetah-v4, batch_size=torch.Size([]), device=cpu),
    transform=ActionDiscretizer(
        num_intervals=tensor([ 5,  6,  7,  8,  9, 10]),
        action_key=action,
        out_action_key=action_disc,
        sampling=0,
        categorical=True))
>>> check_env_specs(env)
>>> # Produce a rollout
>>> r = env.rollout(4)
>>> print(r)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([4, 6]), device=cpu, dtype=torch.float32, is_shared=False),
        action_disc: Tensor(shape=torch.Size([4, 6]), device=cpu, dtype=torch.int64, is_shared=False),
        done: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([4, 17]), device=cpu, dtype=torch.float64, is_shared=False),
                reward: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([4]),
            device=cpu,
            is_shared=False),
        observation: Tensor(shape=torch.Size([4, 17]), device=cpu, dtype=torch.float64, is_shared=False),
        terminated: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([4, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([4]),
    device=cpu,
    is_shared=False)
>>> assert r["action"].dtype == torch.float
>>> assert r["action_disc"].dtype == torch.int64
>>> assert (r["action"] < base_env.action_spec.high).all()
>>> assert (r["action"] > base_env.action_spec.low).all()
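With categorical=False the discrete action is expected as a one-hot vector rather than an integer index. A minimal sketch of this variant, assuming the Pendulum-v1 Gym environment is available (the environment choice and interval count are illustrative only):

from torchrl.envs import GymEnv
from torchrl.envs.transforms import ActionDiscretizer

# One-hot variant: with the default out_action_key, the "action" entry
# carries the one-hot discrete action sampled from the transformed spec.
t = ActionDiscretizer(num_intervals=10, categorical=False)
env = GymEnv("Pendulum-v1").append_transform(t)
td = env.reset()
td["action"] = env.action_spec.rand()  # random one-hot sample
td = env.step(td)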
- transform_input_spec(input_spec)
Transforms the input spec such that the resulting spec matches the transform mapping.
- Parameters:
  input_spec (TensorSpec) – the spec before the transform
- Returns:
  the expected spec after the transform
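In practice this method is invoked by the transformed environment when its specs are queried; one way to observe its effect is to compare the action spec before and after appending the transform. A minimal sketch, assuming Pendulum-v1 is available:

from torchrl.envs import GymEnv
from torchrl.envs.transforms import ActionDiscretizer

base_env = GymEnv("Pendulum-v1")
print(base_env.action_spec)  # continuous, bounded action spec

env = base_env.append_transform(ActionDiscretizer(num_intervals=10))
print(env.action_spec)       # discrete spec produced by transform_input_spec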