ActorCriticWrapper

class torchrl.modules.tensordict_module.ActorCriticWrapper(*args, **kwargs)[source]

Actor-value operator without a common module.

This class wraps together an actor model and a value model that do not share a common observation embedding network.

[Diagram: the actor and value networks process the observation independently, without a shared embedding.]

To facilitate the workflow, this class comes with get_policy_operator() and get_value_operator() methods, which both return a standalone TDModule with the dedicated functionality.

Parameters:
  • policy_operator (TensorDictModule) – a policy operator that reads the hidden variables and returns an action

  • value_operator (TensorDictModule) – a value operator that reads the hidden variables and returns a value

Examples

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torch import nn
>>> from torchrl.modules import (
...      ActorCriticWrapper,
...      ProbabilisticActor,
...      NormalParamExtractor,
...      TanhNormal,
...      ValueOperator,
...  )
>>> action_module = TensorDictModule(
...        nn.Sequential(torch.nn.Linear(4, 8), NormalParamExtractor()),
...        in_keys=["observation"],
...        out_keys=["loc", "scale"],
...    )
>>> td_module_action = ProbabilisticActor(
...    module=action_module,
...    in_keys=["loc", "scale"],
...    distribution_class=TanhNormal,
...    return_log_prob=True,
...    )
>>> module_value = torch.nn.Linear(4, 1)
>>> td_module_value = ValueOperator(
...    module=module_value,
...    in_keys=["observation"],
...    )
>>> td_module = ActorCriticWrapper(td_module_action, td_module_value)
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3,])
>>> td_clone = td_module(td.clone())
>>> print(td_clone)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_policy_operator()(td.clone())
>>> print(td_clone)  # no value
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_value_operator()(td.clone())
>>> print(td_clone)  # no action
TensorDict(
    fields={
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
get_policy_head() → SafeSequential

Returns a standalone policy operator that maps an observation to an action.

get_policy_operator() → SafeSequential[source]

Returns a standalone policy operator that maps an observation to an action.

get_value_head() → SafeSequential

Returns a standalone value network operator that maps an observation to a value estimate.

get_value_operator() → SafeSequential[source]

Returns a standalone value network operator that maps an observation to a value estimate.
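The standalone value operator returned by get_value_operator() can likewise serve as the value network of an advantage estimator. Below is a minimal sketch assuming the td_module built in the example above and a rollout tensordict that already carries the next-step observation, reward and done entries the estimator expects:

>>> from torchrl.objectives.value import GAE
>>> advantage_module = GAE(
...     gamma=0.99,
...     lmbda=0.95,
...     value_network=td_module.get_value_operator(),
... )
>>> # calling advantage_module(rollout_td) adds "advantage" and "value_target" entries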
