
ActorCriticOperator

class torchrl.modules.tensordict_module.ActorCriticOperator(*args, **kwargs)[source]

Actor-critic operator.

This class wraps together an actor and a value model that share a common observation embedding network.


Note

For a similar class that returns an action and a state value \(V(s)\), see ActorValueOperator.

To facilitate the workflow, this class comes with a get_policy_operator() method, which returns a standalone TDModule with the dedicated functionality. get_critic_operator() will return the parent object, since the value is computed based on the output of the policy.

Parameters
  • common_operator (TensorDictModule) – a common operator that reads observations and produces a hidden variable

  • policy_operator (TensorDictModule) – a policy operator that reads the hidden variable and returns an action

  • value_operator (TensorDictModule) – a value operator that reads the hidden variable and returns a value

Examples

>>> import torch
>>> from torch import nn
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.modules import ProbabilisticActor, SafeModule
>>> from torchrl.modules import ValueOperator, TanhNormal, ActorCriticOperator, NormalParamExtractor, MLP
>>> module_hidden = torch.nn.Linear(4, 4)
>>> td_module_hidden = SafeModule(
...    module=module_hidden,
...    in_keys=["observation"],
...    out_keys=["hidden"],
...    )
>>> module_action = nn.Sequential(torch.nn.Linear(4, 8), NormalParamExtractor())
>>> module_action = TensorDictModule(module_action, in_keys=["hidden"], out_keys=["loc", "scale"])
>>> td_module_action = ProbabilisticActor(
...    module=module_action,
...    in_keys=["loc", "scale"],
...    out_keys=["action"],
...    distribution_class=TanhNormal,
...    return_log_prob=True,
...    )
>>> module_value = MLP(in_features=8, out_features=1, num_cells=[])
>>> td_module_value = ValueOperator(
...    module=module_value,
...    in_keys=["hidden", "action"],
...    out_keys=["state_action_value"],
...    )
>>> td_module = ActorCriticOperator(td_module_hidden, td_module_action, td_module_value)
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3,])
>>> td_clone = td_module(td.clone())
>>> print(td_clone)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_action_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_policy_operator()(td.clone())
>>> print(td_clone)  # no value
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_critic_operator()(td.clone())
>>> print(td_clone)  # includes the action: the critic operator returns the parent object
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_action_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
get_critic_operator() → TensorDictModuleWrapper[source]

Returns a standalone critic network operator that maps a state-action pair to a critic estimate.

get_policy_head() → SafeSequential[source]

Returns the policy head.

get_value_head() → SafeSequential[source]

Returns the value head.

get_value_operator() → TensorDictModuleWrapper[source]

Returns a standalone value network operator that maps an observation to a value estimate.
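
A minimal usage sketch of these accessors, continuing from the Examples above; it assumes the heads operate on the same keys as the sub-modules they wrap, and the intermediate variable names are purely illustrative:

>>> policy_head = td_module.get_policy_head()   # reads "hidden", writes "loc"/"scale"/"action"
>>> value_head = td_module.get_value_head()     # reads "hidden" and "action", writes "state_action_value"
>>> critic = td_module.get_critic_operator()    # the parent object: observation -> state_action_value
>>> td_hidden = td_module_hidden(td.clone())    # compute the shared embedding first
>>> td_action = policy_head(td_hidden)          # hidden -> action
>>> td_value = value_head(td_action)            # (hidden, action) -> state_action_value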

