AtariDQNExperienceReplay

class torchrl.data.datasets.AtariDQNExperienceReplay(dataset_id: str, batch_size: int | None = None, *, root: str | Path | None = None, download: bool | str = True, sampler=None, writer=None, transform: 'Transform' | None = None, num_procs: int = 0, num_slices: int | None = None, slice_len: int | None = None, strict_length: bool = True, replacement: bool = True, mp_start_method: str = 'fork', **kwargs)[source]

Atari DQN Experience replay class.

The Atari DQN dataset (https://offline-rl.github.io/) is a collection of 5 training iterations of DQN over each of the Atari 2600 games, for a total of 200 million frames. The sub-sampling rate (frame-skip) is equal to 4, meaning that each game dataset has 50 million steps in total.

The data format follows the TED convention. Since the dataset is quite heavy, the data formatting is done on-line at sampling time.

To make training more modular, the dataset is split along Atari games and training rounds. Consequently, each dataset is presented as a Storage of length 50x10^6 elements. Under the hood, this dataset is split in 50 memory-mapped tensordicts of length 1 million each.

Parameters:
  • dataset_id (str) – The dataset to be downloaded. Must be part of AtariDQNExperienceReplay.available_datasets.

  • batch_size (int) – Batch-size used during sampling. Can be overridden by data.sample(batch_size) if necessary.

Keyword Arguments:
  • root (Path or str, optional) – The AtariDQN dataset root directory. The actual dataset memory-mapped files will be saved under <root>/<dataset_id>. If none is provided, it defaults to ~/.cache/torchrl/atari.

  • num_procs (int, optional) – number of processes to launch for preprocessing. Has no effect whenever the data is already downloaded. Defaults to 0 (no multiprocessing used).

  • download (bool or str, optional) – Whether the dataset should be downloaded if not found. Defaults to True. Download can also be passed as "force", in which case the downloaded data will be overwritten.

  • sampler (Sampler, optional) – the sampler to be used. If none is provided, a default RandomSampler() will be used.

  • writer (Writer, optional) – the writer to be used. If none is provided, a default ImmutableDatasetWriter will be used.

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s)/outputs. Used when using batched loading from a map-style dataset.

  • pin_memory (bool) – whether pin_memory() should be called on the rb samples.

  • prefetch (int, optional) – number of next batches to be prefetched using multithreading.

  • transform (Transform, optional) – Transform to be executed when sample() is called. To chain transforms, use the Compose class.

  • num_slices (int, optional) – the number of slices to be sampled. The batch-size must be greater than or equal to the num_slices argument. Exclusive with slice_len. Defaults to None (no slice sampling). The sampler arg will override this value.

  • slice_len (int, optional) – the length of the slices to be sampled. The batch-size must be greater than or equal to the slice_len argument and divisible by it. Exclusive with num_slices. Defaults to None (no slice sampling). The sampler arg will override this value.

  • strict_length (bool, optional) – if False, trajectories of length shorter than slice_len (or batch_size // num_slices) are allowed to appear in the batch. Be mindful that this can result in an effective batch_size shorter than the one asked for! Trajectories can be split using torchrl.collectors.split_trajectories(). Defaults to True. The sampler arg will override this value.

  • replacement (bool, optional) – if False, sampling will occur without replacement. The sampler arg will override this value.

  • mp_start_method (str, optional) – the start method for multiprocessed download. Defaults to "fork".

Variables:
  • available_datasets – a list of available datasets, formatted as <game_name>/<run>. Examples: "Pong/5", "Krull/2", etc.

  • dataset_id (str) – the name of the dataset.

  • episodes (torch.Tensor) – a 1d tensor indicating to which run each of the 1M frames belongs. To be used with SliceSampler to sample episodes cheaply (see the sketch below).
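
A minimal sketch of how the episodes attribute can be inspected; the counting logic below is illustrative and not part of the API:

>>> from torchrl.data.datasets import AtariDQNExperienceReplay
>>> dataset = AtariDQNExperienceReplay("Pong/5", batch_size=128)
>>> # each entry of `episodes` is the run index of the corresponding frame,
>>> # so the number of distinct runs can be recovered with torch.unique
>>> num_runs = dataset.episodes.unique().numel()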

Examples

>>> from torchrl.data.datasets import AtariDQNExperienceReplay
>>> dataset = AtariDQNExperienceReplay("Pong/5", batch_size=128)
>>> for data in dataset:
...     print(data)
...     break
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.int32, is_shared=False),
        done: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
        index: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.int64, is_shared=False),
        metadata: NonTensorData(
            data={'invalid_range': MemoryMappedTensor([999998, 999999,      0,      1,      2]), 'add_count': MemoryMappedTensor(999999), 'dataset_id': 'Pong/5'}},
            batch_size=torch.Size([128]),
            device=None,
            is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
                observation: Tensor(shape=torch.Size([128, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
                truncated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False)},
            batch_size=torch.Size([128]),
            device=None,
            is_shared=False),
        observation: Tensor(shape=torch.Size([128, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
        truncated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False)},
    batch_size=torch.Size([128]),
    device=None,
    is_shared=False)

Warning

Atari-DQN does not provide the next observation after a termination signal. In other words, there is no way to obtain the ("next", "observation") state when ("next", "done") is True. This value is filled with 0s but should not be used in practice. If TorchRL's value estimators (ValueEstimator) are used, this should not be an issue.
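
A minimal sketch of how such transitions could be masked out before use; the masking itself is illustrative and not part of the dataset API:

>>> data = dataset.sample()
>>> # drop transitions whose next observation is the zero-filled placeholder
>>> not_done = ~data["next", "done"].bool().reshape(-1)
>>> valid_next_obs = data["next", "observation"][not_done]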

Note

Because the construction of the sampler for episode sampling is slightly convoluted, we made it easy for users to pass the arguments of the SliceSampler directly to the AtariDQNExperienceReplay dataset: any of the num_slices or slice_len arguments will make the sampler an instance of SliceSampler. The strict_length argument can also be passed.

>>> from torchrl.data.datasets import AtariDQNExperienceReplay
>>> from torchrl.data.replay_buffers import SliceSampler
>>> dataset = AtariDQNExperienceReplay("Pong/5", batch_size=128, slice_len=64)
>>> for data in dataset:
...     print(data)
...     print(data.get("index"))  # indices are in 2 groups of consecutive values
...     break
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.int32, is_shared=False),
        done: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
        index: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.int64, is_shared=False),
        metadata: NonTensorData(
            data={'invalid_range': MemoryMappedTensor([999998, 999999,      0,      1,      2]), 'add_count': MemoryMappedTensor(999999), 'dataset_id': 'Pong/5'}},
            batch_size=torch.Size([128]),
            device=None,
            is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([128, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([128, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([128, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([128, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([128]),
            device=None,
            is_shared=False),
        observation: Tensor(shape=torch.Size([128, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False),
        truncated: Tensor(shape=torch.Size([128]), device=cpu, dtype=torch.uint8, is_shared=False)},
    batch_size=torch.Size([128]),
    device=None,
    is_shared=False)
tensor([2657628, 2657629, 2657630, 2657631, 2657632, 2657633, 2657634, 2657635,
        2657636, 2657637, 2657638, 2657639, 2657640, 2657641, 2657642, 2657643,
        2657644, 2657645, 2657646, 2657647, 2657648, 2657649, 2657650, 2657651,
        2657652, 2657653, 2657654, 2657655, 2657656, 2657657, 2657658, 2657659,
        2657660, 2657661, 2657662, 2657663, 2657664, 2657665, 2657666, 2657667,
        2657668, 2657669, 2657670, 2657671, 2657672, 2657673, 2657674, 2657675,
        2657676, 2657677, 2657678, 2657679, 2657680, 2657681, 2657682, 2657683,
        2657684, 2657685, 2657686, 2657687, 2657688, 2657689, 2657690, 2657691,
        1995687, 1995688, 1995689, 1995690, 1995691, 1995692, 1995693, 1995694,
        1995695, 1995696, 1995697, 1995698, 1995699, 1995700, 1995701, 1995702,
        1995703, 1995704, 1995705, 1995706, 1995707, 1995708, 1995709, 1995710,
        1995711, 1995712, 1995713, 1995714, 1995715, 1995716, 1995717, 1995718,
        1995719, 1995720, 1995721, 1995722, 1995723, 1995724, 1995725, 1995726,
        1995727, 1995728, 1995729, 1995730, 1995731, 1995732, 1995733, 1995734,
        1995735, 1995736, 1995737, 1995738, 1995739, 1995740, 1995741, 1995742,
        1995743, 1995744, 1995745, 1995746, 1995747, 1995748, 1995749, 1995750])

Note

In general, datasets should be composed using ReplayBufferEnsemble:

>>> from torchrl.data.datasets import AtariDQNExperienceReplay
>>> from torchrl.data.replay_buffers import ReplayBufferEnsemble
>>> # we change this parameter for quick experimentation, in practice it should be left untouched
>>> AtariDQNExperienceReplay._max_runs = 2
>>> dataset_asterix = AtariDQNExperienceReplay("Asterix/5", batch_size=128, slice_len=64, num_procs=4)
>>> dataset_pong = AtariDQNExperienceReplay("Pong/5", batch_size=128, slice_len=64, num_procs=4)
>>> dataset = ReplayBufferEnsemble(dataset_pong, dataset_asterix, batch_size=128, sample_from_all=True)
>>> sample = dataset.sample()
>>> print("first sample, Asterix", sample[0])
first sample, Asterix TensorDict(
    fields={
        action: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int32, is_shared=False),
        done: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False),
        index: TensorDict(
            fields={
                buffer_ids: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int64, is_shared=False),
                index: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int64, is_shared=False)},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        metadata: NonTensorData(
            data={'invalid_range': MemoryMappedTensor([999998, 999999,      0,      1,      2]), 'add_count': MemoryMappedTensor(999999), 'dataset_id': 'Pong/5'},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        observation: Tensor(shape=torch.Size([64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False),
        truncated: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False)},
    batch_size=torch.Size([64]),
    device=None,
    is_shared=False)
>>> print("second sample, Pong", sample[1])
second sample, Pong TensorDict(
    fields={
        action: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int32, is_shared=False),
        done: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False),
        index: TensorDict(
            fields={
                buffer_ids: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int64, is_shared=False),
                index: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int64, is_shared=False)},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        metadata: NonTensorData(
            data={'invalid_range': MemoryMappedTensor([999998, 999999,      0,      1,      2]), 'add_count': MemoryMappedTensor(999999), 'dataset_id': 'Asterix/5'},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([64, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([64]),
            device=None,
            is_shared=False),
        observation: Tensor(shape=torch.Size([64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False),
        truncated: Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.uint8, is_shared=False)},
    batch_size=torch.Size([64]),
    device=None,
    is_shared=False)
>>> print("Aggregate (metadata hidden)", sample)
Aggregate (metadata hidden) LazyStackedTensorDict(
    fields={
        action: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.int32, is_shared=False),
        done: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.uint8, is_shared=False),
        index: LazyStackedTensorDict(
            fields={
                buffer_ids: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.int64, is_shared=False),
                index: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.int64, is_shared=False)},
            exclusive_fields={
            },
            batch_size=torch.Size([2, 64]),
            device=None,
            is_shared=False,
            stack_dim=0),
        metadata: LazyStackedTensorDict(
            fields={
            },
            exclusive_fields={
            },
            batch_size=torch.Size([2, 64]),
            device=None,
            is_shared=False,
            stack_dim=0),
        next: LazyStackedTensorDict(
            fields={
                done: Tensor(shape=torch.Size([2, 64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([2, 64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([2, 64, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([2, 64, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            exclusive_fields={
            },
            batch_size=torch.Size([2, 64]),
            device=None,
            is_shared=False,
            stack_dim=0),
        observation: Tensor(shape=torch.Size([2, 64, 84, 84]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.uint8, is_shared=False),
        truncated: Tensor(shape=torch.Size([2, 64]), device=cpu, dtype=torch.uint8, is_shared=False)},
    exclusive_fields={
    },
    batch_size=torch.Size([2, 64]),
    device=None,
    is_shared=False,
    stack_dim=0)
add(data: TensorDictBase) → int

Add a single element to the replay buffer.

Parameters:

data (Any) – data to be added to the replay buffer.

Returns:

index where the data lives in the replay buffer.
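
A minimal sketch with a generic ReplayBuffer; the storage choice and data layout are illustrative assumptions:

>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import ReplayBuffer, LazyTensorStorage
>>> rb = ReplayBuffer(storage=LazyTensorStorage(100))
>>> # a single element carries an empty batch size
>>> td = TensorDict({"obs": torch.randn(3), "reward": torch.zeros(1)}, batch_size=[])
>>> index = rb.add(td)  # index of the element within the buffer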

append_transform(transform: Transform, *, invert: bool = False) → ReplayBuffer

Appends transform at the end.

Transforms are applied in order when sample is called.

Parameters:

transform (Transform) – The transform to be appended.

Keyword Arguments:

invert (bool, optional) – if True, the transform will be inverted (forward calls will be called during writing and inverse calls during reading). Defaults to False.

Examples

>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10), batch_size=4)
>>> data = TensorDict({"a": torch.zeros(10)}, [10])
>>> def t(data):
...     data += 1
...     return data
>>> rb.append_transform(t, invert=True)
>>> rb.extend(data)
>>> assert (data == 1).all()
abstract property data_path: Path

Path to the dataset, including the split.

abstract property data_path_root: Path

Path to the dataset root.

delete()

Deletes a dataset storage from disk.

dump(*args, **kwargs)

Alias for dumps().

dumps(path)

Saves the replay buffer on disk at the specified path.

Parameters:

path (Path or str) – path where to save the replay buffer.

Examples

>>> import tempfile
>>> import tqdm
>>> from torchrl.data import LazyMemmapStorage, TensorDictReplayBuffer
>>> from torchrl.data.replay_buffers.samplers import PrioritizedSampler, RandomSampler
>>> import torch
>>> from tensordict import TensorDict
>>> # Build and populate the replay buffer
>>> S = 1_000_000
>>> sampler = PrioritizedSampler(S, 1.1, 1.0)
>>> # sampler = RandomSampler()
>>> storage = LazyMemmapStorage(S)
>>> rb = TensorDictReplayBuffer(storage=storage, sampler=sampler)
>>>
>>> for _ in tqdm.tqdm(range(100)):
...     td = TensorDict({"obs": torch.randn(100, 3, 4), "next": {"obs": torch.randn(100, 3, 4)}, "td_error": torch.rand(100)}, [100])
...     rb.extend(td)
...     sample = rb.sample(32)
...     rb.update_tensordict_priority(sample)
>>> # save and load the buffer
>>> with tempfile.TemporaryDirectory() as tmpdir:
...     rb.dumps(tmpdir)
...
...     sampler = PrioritizedSampler(S, 1.1, 1.0)
...     # sampler = RandomSampler()
...     storage = LazyMemmapStorage(S)
...     rb_load = TensorDictReplayBuffer(storage=storage, sampler=sampler)
...     rb_load.loads(tmpdir)
...     assert len(rb) == len(rb_load)
empty()

Empties the replay buffer and resets the cursor to 0.

extend(tensordicts: TensorDictBase) → Tensor

Extends the replay buffer with one or more elements contained in an iterable.

If present, the inverse transforms will be called.

Parameters:

data (iterable) – collection of data to be added to the replay buffer.

Returns:

Indices of the data added to the replay buffer.

Warning

extend() can have an ambiguous signature when dealing with lists of values, which should be interpreted either as a PyTree (in which case all elements in the list will be put in a slice of the stored PyTree in the storage) or as a list of values to add one at a time. To solve this, TorchRL makes a clear-cut distinction between list and tuple: a tuple will be viewed as a PyTree, whereas a list (at the root level) will be interpreted as a stack of values to add one at a time to the buffer. For ListStorage instances, only unbound elements can be provided (no PyTrees).
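
A minimal sketch of the list/tuple distinction with a tensor-based storage; the shapes and storage choice are illustrative assumptions:

>>> import torch
>>> from torchrl.data import ReplayBuffer, LazyTensorStorage
>>> # a tuple is interpreted as a PyTree: the leading dim indexes the elements to add
>>> rb_tree = ReplayBuffer(storage=LazyTensorStorage(10))
>>> rb_tree.extend((torch.zeros(2, 3), torch.ones(2)))  # adds 2 elements, each a pair of tensors
>>> # a list at the root level is a stack of values, added one at a time
>>> rb_stack = ReplayBuffer(storage=LazyTensorStorage(10))
>>> rb_stack.extend([torch.zeros(3), torch.zeros(3)])  # adds 2 elements, each a 1d tensor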

insert_transform(index: int, transform: Transform, *, invert: bool = False) → ReplayBuffer

Inserts a transform.

Transforms are executed in order when sample is called.

Parameters:
  • index (int) – Position where to insert the transform.

  • transform (Transform) – The transform to be added.

Keyword Arguments:

invert (bool, optional) – if True, the transform will be inverted (forward calls will be called during writing and inverse calls during reading). Defaults to False.
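
A minimal sketch, mirroring the append_transform example above and assuming, as there, that plain callables are accepted in place of Transform instances:

>>> rb = ReplayBuffer(storage=LazyMemmapStorage(10), batch_size=4)
>>> def shift(data):
...     return data + 1
>>> def scale(data):
...     return data * 2
>>> rb.append_transform(shift)
>>> rb.insert_transform(0, scale)  # `scale` now runs before `shift` when sampling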

load(*args, **kwargs)

Alias for loads().

loads(path)

Loads a replay buffer state at the given path.

The buffer should have matching components and be saved using dumps().

Parameters:

path (Path or str) – path where the replay buffer was saved.

See dumps() for more info.

preprocess(fn: Callable[[TensorDictBase], TensorDictBase], dim: int = 0, num_workers: int | None = None, *, chunksize: int | None = None, num_chunks: int | None = None, pool: mp.Pool | None = None, generator: torch.Generator | None = None, max_tasks_per_child: int | None = None, worker_threads: int = 1, index_with_generator: bool = False, pbar: bool = False, mp_start_method: str | None = None, dest: str | Path, num_frames: int | None = None)[source]

Preprocesses a dataset and returns a new storage with the formatted data.

The data transform must be unitary (work on a single sample of the dataset).

Args and Keyword Args are forwarded to map().

The dataset can subsequently be deleted using the delete() method.

Keyword Arguments:
  • dest (path or equivalent) – a path to the location of the new dataset.

  • num_frames (int, optional) – if provided, only the first num_frames will be transformed. This is useful to debug the transform at first.

Returns: A new storage to be used within a ReplayBuffer instance.

Examples

>>> from torchrl.data.datasets import MinariExperienceReplay
>>>
>>> data = MinariExperienceReplay(
...     list(MinariExperienceReplay.available_datasets)[0],
...     batch_size=32
...     )
>>> print(data)
MinariExperienceReplay(
    storages=TensorStorage(TensorDict(
        fields={
            action: MemoryMappedTensor(shape=torch.Size([1000000, 8]), device=cpu, dtype=torch.float32, is_shared=True),
            episode: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.int64, is_shared=True),
            info: TensorDict(
                fields={
                    distance_from_origin: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    forward_reward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    qpos: MemoryMappedTensor(shape=torch.Size([1000000, 15]), device=cpu, dtype=torch.float64, is_shared=True),
                    qvel: MemoryMappedTensor(shape=torch.Size([1000000, 14]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_ctrl: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_forward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    reward_survive: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    success: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.bool, is_shared=True),
                    x_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    x_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    y_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                    y_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False),
            next: TensorDict(
                fields={
                    done: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                    info: TensorDict(
                        fields={
                            distance_from_origin: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            forward_reward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            qpos: MemoryMappedTensor(shape=torch.Size([1000000, 15]), device=cpu, dtype=torch.float64, is_shared=True),
                            qvel: MemoryMappedTensor(shape=torch.Size([1000000, 14]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_ctrl: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_forward: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            reward_survive: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            success: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.bool, is_shared=True),
                            x_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            x_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            y_position: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True),
                            y_velocity: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.float64, is_shared=True)},
                        batch_size=torch.Size([1000000]),
                        device=cpu,
                        is_shared=False),
                    observation: TensorDict(
                        fields={
                            achieved_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            desired_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                            observation: MemoryMappedTensor(shape=torch.Size([1000000, 27]), device=cpu, dtype=torch.float64, is_shared=True)},
                        batch_size=torch.Size([1000000]),
                        device=cpu,
                        is_shared=False),
                    reward: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.float64, is_shared=True),
                    terminated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                    truncated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False),
            observation: TensorDict(
                fields={
                    achieved_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    desired_goal: MemoryMappedTensor(shape=torch.Size([1000000, 2]), device=cpu, dtype=torch.float64, is_shared=True),
                    observation: MemoryMappedTensor(shape=torch.Size([1000000, 27]), device=cpu, dtype=torch.float64, is_shared=True)},
                batch_size=torch.Size([1000000]),
                device=cpu,
                is_shared=False)},
        batch_size=torch.Size([1000000]),
        device=cpu,
        is_shared=False)),
    samplers=RandomSampler,
    writers=ImmutableDatasetWriter(),
batch_size=32,
transform=Compose(
),
collate_fn=<function _collate_id at 0x120e21dc0>)
>>> from torchrl.envs import CatTensors, Compose
>>> from tempfile import TemporaryDirectory
>>>
>>> cat_tensors = CatTensors(
...     in_keys=[("observation", "observation"), ("observation", "achieved_goal"),
...              ("observation", "desired_goal")],
...     out_key="obs"
...     )
>>> cat_next_tensors = CatTensors(
...     in_keys=[("next", "observation", "observation"),
...              ("next", "observation", "achieved_goal"),
...              ("next", "observation", "desired_goal")],
...     out_key=("next", "obs")
...     )
>>> t = Compose(cat_tensors, cat_next_tensors)
>>>
>>> def func(td):
...     td = td.select(
...         "action",
...         "episode",
...         ("next", "done"),
...         ("next", "observation"),
...         ("next", "reward"),
...         ("next", "terminated"),
...         ("next", "truncated"),
...         "observation"
...         )
...     td = t(td)
...     return td
>>> with TemporaryDirectory() as tmpdir:
...     new_storage = data.preprocess(func, num_workers=4, pbar=True, mp_start_method="fork", dest=tmpdir)
...     rb = ReplayBuffer(storage=new_storage)
...     print(rb)
ReplayBuffer(
    storage=TensorStorage(
        data=TensorDict(
            fields={
                action: MemoryMappedTensor(shape=torch.Size([1000000, 8]), device=cpu, dtype=torch.float32, is_shared=True),
                episode: MemoryMappedTensor(shape=torch.Size([1000000]), device=cpu, dtype=torch.int64, is_shared=True),
                next: TensorDict(
                    fields={
                        done: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                        obs: MemoryMappedTensor(shape=torch.Size([1000000, 31]), device=cpu, dtype=torch.float64, is_shared=True),
                        observation: TensorDict(
                            fields={
                            },
                            batch_size=torch.Size([1000000]),
                            device=cpu,
                            is_shared=False),
                        reward: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.float64, is_shared=True),
                        terminated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True),
                        truncated: MemoryMappedTensor(shape=torch.Size([1000000, 1]), device=cpu, dtype=torch.bool, is_shared=True)},
                    batch_size=torch.Size([1000000]),
                    device=cpu,
                    is_shared=False),
                obs: MemoryMappedTensor(shape=torch.Size([1000000, 31]), device=cpu, dtype=torch.float64, is_shared=True),
                observation: TensorDict(
                    fields={
                    },
                    batch_size=torch.Size([1000000]),
                    device=cpu,
                    is_shared=False)},
            batch_size=torch.Size([1000000]),
            device=cpu,
            is_shared=False),
        shape=torch.Size([1000000]),
        len=1000000,
        max_size=1000000),
    sampler=RandomSampler(),
    writer=RoundRobinWriter(cursor=0, full_storage=True),
    batch_size=None,
    collate_fn=<function _collate_id at 0x168406fc0>)
register_load_hook(hook: Callable[[Any], Any])

Registers a load hook for the storage.

Note

Hooks are currently not serialized when saving the replay buffer: they must be manually re-initialized every time the buffer is created.

register_save_hook(hook: Callable[[Any], Any])

Registers a save hook for the storage.

Note

Hooks are currently not serialized when saving the replay buffer: they must be manually re-initialized every time the buffer is created.

sample(batch_size: int | None = None, return_info: bool = False, include_info: bool = None) → TensorDictBase

Samples a batch of data from the replay buffer.

Uses Sampler to sample indices, and retrieves them from Storage.

Parameters:
  • batch_size (int, optional) – size of data to be collected. If none is provided, this method will sample a batch-size as indicated by the sampler.

  • return_info (bool) – whether to return info. If True, the result is a tuple (data, info). If False, the result is the data.

Returns:

A tensordict containing a batch of data selected in the replay buffer. A tuple containing this tensordict and info if the return_info flag is set to True.
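
A minimal sketch using the dataset from the examples above:

>>> data = dataset.sample(128)  # overrides the default batch_size
>>> data, info = dataset.sample(128, return_info=True)
>>> obs = data["observation"]  # uint8 frames of shape [128, 84, 84]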

property sampler

The sampler of the replay buffer.

The sampler must be an instance of Sampler.

save(*args, **kwargs)

Alias for dumps().

set_sampler(sampler: Sampler)

Sets a new sampler in the replay buffer and returns the previous sampler.

set_storage(storage: Storage, collate_fn: Callable | None = None)

Sets a new storage in the replay buffer and returns the previous storage.

Parameters:
  • storage (Storage) – the new storage for the buffer.

  • collate_fn (callable, optional) – if provided, the collate_fn is set to this value. Otherwise it is reset to a default value.

set_writer(writer: Writer)

Sets a new writer in the replay buffer and returns the previous writer.
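
A minimal sketch of swapping components on an existing ReplayBuffer rb; the component choices are illustrative:

>>> from torchrl.data.replay_buffers import RandomSampler, RoundRobinWriter
>>> previous_sampler = rb.set_sampler(RandomSampler())
>>> previous_writer = rb.set_writer(RoundRobinWriter())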

property storage

The storage of the replay buffer.

The storage must be an instance of Storage.

property write_count

The total number of items written so far in the buffer through add and extend.

property writer

The writer of the replay buffer.