SelectTransform
- class torchrl.envs.transforms.SelectTransform(*selected_keys: NestedKey, keep_rewards: bool = True, keep_dones: bool = True)[source]
Select keys from the input tensordict.
- In general, ExcludeTransform should be preferred: this transform also selects the "action" (or other keys from the input_spec), "done" and "reward" keys, but other keys may be necessary (see the ExcludeTransform sketch after the examples below).
- Parameters:
*selected_keys (iterable of NestedKey) – the names of the keys to select. If a key is absent, it is ignored.
- Keyword Arguments:
keep_rewards (bool, optional) – if False, the reward keys must be provided if they should be kept (see the sketch after the examples below). Defaults to True.
keep_dones (bool, optional) – if False, the done keys must be provided if they should be kept. Defaults to True.
Examples
>>> import gymnasium
>>> from torchrl.envs import GymWrapper, TransformedEnv, SelectTransform
>>> env = TransformedEnv(
...     GymWrapper(gymnasium.make("Pendulum-v1")),
...     SelectTransform("observation", "reward", "done", keep_dones=False),  # we leave done behind
... )
>>> env.rollout(3)  # the truncated key is now absent
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False),
        done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False),
                reward: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
            batch_size=torch.Size([3]),
            device=cpu,
            is_shared=False),
        observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=cpu,
    is_shared=False)
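The keyword arguments only control the keys that are kept implicitly: a reward (or done) key that is listed in selected_keys is retained even when the corresponding flag is False. A minimal sketch of the keep_rewards case, assuming the same Pendulum-v1 setup as above:

>>> import gymnasium
>>> from torchrl.envs import GymWrapper, TransformedEnv, SelectTransform
>>> # "reward" is listed explicitly, so it survives keep_rewards=False;
>>> # the done keys are kept because keep_dones defaults to True
>>> env = TransformedEnv(
...     GymWrapper(gymnasium.make("Pendulum-v1")),
...     SelectTransform("observation", "reward", keep_rewards=False),
... )
>>> rollout = env.rollout(3)
>>> assert "reward" in rollout["next"].keys()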
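Regarding the note above: a similar result can often be obtained with ExcludeTransform, which drops only the listed keys and leaves everything else (including "action", "done" and "reward") untouched. A sketch under the same assumptions, removing only the "truncated" entry:

>>> import gymnasium
>>> from torchrl.envs import GymWrapper, TransformedEnv, ExcludeTransform
>>> env = TransformedEnv(
...     GymWrapper(gymnasium.make("Pendulum-v1")),
...     ExcludeTransform("truncated"),  # keep every other key as-is
... )
>>> rollout = env.rollout(3)
>>> assert "truncated" not in rollout["next"].keys()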
- forward(tensordict: TensorDictBase) → TensorDictBase
Reads the input tensordict and, for the selected keys, applies the transform.
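As an illustration only, and assuming SelectTransform supports standalone application to a TensorDict through forward/__call__ (as many stateless TorchRL transforms do): outside of an environment there is no parent to supply reward/done keys, so both flags are disabled here and the call reduces to a plain key selection.

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.envs import SelectTransform
>>> # assumed standalone usage; inside a TransformedEnv the selection is
>>> # applied automatically to each step's output instead
>>> t = SelectTransform("observation", keep_rewards=False, keep_dones=False)
>>> td = TensorDict(
...     {
...         "observation": torch.randn(3),
...         "reward": torch.zeros(1),
...         "extra": torch.randn(2),
...     },
...     batch_size=[],
... )
>>> out = t(td)  # only the selected keys should remain
>>> list(out.keys())
['observation']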