IRs
PyTorch 2.0 offers two sets of IRs for backends to interface with: Core Aten IR and Prims IR.
Core Aten IR
Core aten ops is the core subset of aten operators that can be used to compose other operators. Core aten IR is fully functional, and there are no inplace or _out variants in this opset. In contrast to Prims IR, core aten ops reuse the existing aten ops in "native_functions.yaml", and are not further decomposed into explicit type promotion and broadcasting ops. This opset is designed to serve as the functional IR to interface with backends.
Warning

This opset is still under active development, and more operators will be added in the future.
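As a brief sketch of what this means in practice, a backend can obtain a Core Aten IR graph by exporting a model and running the default decompositions. The toy module `M` and its inputs below are invented for illustration, and `torch.export` is assumed to be available (PyTorch 2.1 or later):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # F.layer_norm lowers to the core op aten.native_layer_norm
        # during export/decomposition.
        return torch.nn.functional.layer_norm(x, [4]).relu()

# Export to an ATen-level graph, then decompose with the default
# (Core ATen) decomposition table.
ep = torch.export.export(M(), (torch.randn(2, 4),))
core = ep.run_decompositions()

# Every call_function node now targets an operator from the table below,
# e.g. aten.native_layer_norm.default and aten.relu.default.
print(core.graph_module.graph)
```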
| Operator | Schema |
|---|---|
| aten._adaptive_avg_pool2d | _adaptive_avg_pool2d(Tensor self, SymInt[2] output_size) -> Tensor |
| aten._adaptive_avg_pool2d_backward | _adaptive_avg_pool2d_backward(Tensor grad_output, Tensor self) -> Tensor |
| aten._adaptive_avg_pool3d | _adaptive_avg_pool3d(Tensor self, SymInt[3] output_size) -> Tensor |
| aten._cdist_forward | _cdist_forward(Tensor x1, Tensor x2, float p, int? compute_mode) -> Tensor |
| aten._embedding_bag | _embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? per_sample_weights=None, bool include_last_offset=False, int padding_idx=-1) -> (Tensor, Tensor, Tensor, Tensor) |
| aten._local_scalar_dense | _local_scalar_dense(Tensor self) -> Scalar |
| aten._log_softmax | _log_softmax(Tensor self, int dim, bool half_to_float) -> Tensor |
| aten._native_batch_norm_legit | _native_batch_norm_legit(Tensor input, Tensor? weight, Tensor? bias, Tensor(a!) running_mean, Tensor(b!) running_var, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor) |
| aten._native_batch_norm_legit.no_stats | _native_batch_norm_legit.no_stats(Tensor input, Tensor? weight, Tensor? bias, bool training, float momentum, float eps) -> (Tensor, Tensor, Tensor) |
| aten._native_batch_norm_legit_no_training | _native_batch_norm_legit_no_training(Tensor input, Tensor? weight, Tensor? bias, Tensor running_mean, Tensor running_var, float momentum, float eps) -> (Tensor, Tensor, Tensor) |
| aten._pdist_forward | _pdist_forward(Tensor self, float p=2) -> Tensor |
| aten._softmax | _softmax(Tensor self, int dim, bool half_to_float) -> Tensor |
| aten._to_copy | _to_copy(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, MemoryFormat? memory_format=None) -> Tensor |
| aten.abs | abs(Tensor self) -> Tensor |
| aten.acos | acos(Tensor self) -> Tensor |
| aten.acosh | acosh(Tensor self) -> Tensor |
| aten.adaptive_avg_pool1d | adaptive_avg_pool1d(Tensor self, int[1] output_size) -> Tensor |
| aten.add.Scalar | add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor |
| aten.add.Tensor | add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor |
| aten.addmm | addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> Tensor |
| aten.alias | alias(Tensor(a) self) -> Tensor(a) |
| aten.amax | amax(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor |
| aten.amin | amin(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor |
| aten.any | any(Tensor self) -> Tensor |
| aten.any.dim | any.dim(Tensor self, int dim, bool keepdim=False) -> Tensor |
| aten.any.dims | any.dims(Tensor self, int[]? dim=None, bool keepdim=False) -> Tensor |
| aten.arange.start_step | arange.start_step(Scalar start, Scalar end, Scalar step=1, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.argmax | argmax(Tensor self, int? dim=None, bool keepdim=False) -> Tensor |
| aten.argmin | argmin(Tensor self, int? dim=None, bool keepdim=False) -> Tensor |
| aten.as_strided | as_strided(Tensor(a) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a) |
| aten.asin | asin(Tensor self) -> Tensor |
| aten.asinh | asinh(Tensor self) -> Tensor |
| aten.atan | atan(Tensor self) -> Tensor |
| aten.atan2 | atan2(Tensor self, Tensor other) -> Tensor |
| aten.atan2.out | atan2.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!) |
| aten.atanh | atanh(Tensor self) -> Tensor |
| aten.avg_pool1d | avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=0, bool ceil_mode=False, bool count_include_pad=True) -> Tensor |
| aten.avg_pool2d | avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor |
| aten.avg_pool2d_backward | avg_pool2d_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, bool ceil_mode, bool count_include_pad, int? divisor_override) -> Tensor |
| aten.avg_pool3d | avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor |
| aten.bitwise_and.Scalar | bitwise_and.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.bitwise_and.Tensor | bitwise_and.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.bitwise_not | bitwise_not(Tensor self) -> Tensor |
| aten.bitwise_or.Scalar | bitwise_or.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.bitwise_or.Tensor | bitwise_or.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.bitwise_xor.Scalar | bitwise_xor.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.bitwise_xor.Tensor | bitwise_xor.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.bmm | bmm(Tensor self, Tensor mat2) -> Tensor |
| aten.cat | cat(Tensor[] tensors, int dim=0) -> Tensor |
| aten.ceil | ceil(Tensor self) -> Tensor |
| aten.clamp | clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> Tensor |
| aten.clamp.Tensor | clamp.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> Tensor |
| aten.clone | clone(Tensor self, *, MemoryFormat? memory_format=None) -> Tensor |
| aten.col2im | col2im(Tensor self, SymInt[2] output_size, int[2] kernel_size, int[2] dilation, int[2] padding, int[2] stride) -> Tensor |
| aten.constant_pad_nd | constant_pad_nd(Tensor self, SymInt[] pad, Scalar value=0) -> Tensor |
| aten.convolution | convolution(Tensor input, Tensor weight, Tensor? bias, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups) -> Tensor |
| aten.convolution_backward | convolution_backward(Tensor grad_output, Tensor input, Tensor weight, SymInt[]? bias_sizes, SymInt[] stride, SymInt[] padding, SymInt[] dilation, bool transposed, SymInt[] output_padding, SymInt groups, bool[3] output_mask) -> (Tensor, Tensor, Tensor) |
| aten.copy | copy(Tensor self, Tensor src, bool non_blocking=False) -> Tensor |
| aten.cos | cos(Tensor self) -> Tensor |
| aten.cosh | cosh(Tensor self) -> Tensor |
| aten.cumsum | cumsum(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor |
| aten.diagonal | diagonal(Tensor(a) self, int offset=0, int dim1=0, int dim2=1) -> Tensor(a) |
| aten.div.Scalar | div.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.div.Scalar_mode | div.Scalar_mode(Tensor self, Scalar other, *, str? rounding_mode) -> Tensor |
| aten.div.Tensor | div.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.div.Tensor_mode | div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> Tensor |
| aten.embedding | embedding(Tensor weight, Tensor indices, SymInt padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> Tensor |
| aten.embedding_dense_backward | embedding_dense_backward(Tensor grad_output, Tensor indices, SymInt num_weights, SymInt padding_idx, bool scale_grad_by_freq) -> Tensor |
| aten.empty.memory_format | empty.memory_format(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor |
| aten.empty_strided | empty_strided(SymInt[] size, SymInt[] stride, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.eq.Scalar | eq.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.eq.Tensor | eq.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.erf | erf(Tensor self) -> Tensor |
| aten.exp | exp(Tensor self) -> Tensor |
| aten.expand | expand(Tensor(a) self, SymInt[] size, *, bool implicit=False) -> Tensor(a) |
| aten.expm1 | expm1(Tensor self) -> Tensor |
| aten.fill.Scalar | fill.Scalar(Tensor self, Scalar value) -> Tensor |
| aten.flip | flip(Tensor self, int[] dims) -> Tensor |
| aten.floor | floor(Tensor self) -> Tensor |
| aten.fmod.Scalar | fmod.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.fmod.Tensor | fmod.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.full | full(SymInt[] size, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.gather | gather(Tensor self, int dim, Tensor index, *, bool sparse_grad=False) -> Tensor |
| aten.ge.Scalar | ge.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.ge.Tensor | ge.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.gelu | gelu(Tensor self, *, str approximate='none') -> Tensor |
| aten.grid_sampler_2d | grid_sampler_2d(Tensor input, Tensor grid, int interpolation_mode, int padding_mode, bool align_corners) -> Tensor |
| aten.gt.Scalar | gt.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.gt.Tensor | gt.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.hardtanh | hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> Tensor |
| aten.index.Tensor | index.Tensor(Tensor self, Tensor?[] indices) -> Tensor |
| aten.index_put | index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> Tensor |
| aten.index_select | index_select(Tensor self, int dim, Tensor index) -> Tensor |
| aten.isinf | isinf(Tensor self) -> Tensor |
| aten.isnan | isnan(Tensor self) -> Tensor |
| aten.le.Scalar | le.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.le.Tensor | le.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.leaky_relu | leaky_relu(Tensor self, Scalar negative_slope=0.01) -> Tensor |
| aten.log | log(Tensor self) -> Tensor |
| aten.log10 | log10(Tensor self) -> Tensor |
| aten.log1p | log1p(Tensor self) -> Tensor |
| aten.log2 | log2(Tensor self) -> Tensor |
| aten.logical_and | logical_and(Tensor self, Tensor other) -> Tensor |
| aten.logical_not | logical_not(Tensor self) -> Tensor |
| aten.logical_or | logical_or(Tensor self, Tensor other) -> Tensor |
| aten.logical_xor | logical_xor(Tensor self, Tensor other) -> Tensor |
| aten.lt.Scalar | lt.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.lt.Tensor | lt.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.max.dim | max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) |
| aten.max_pool2d_with_indices | max_pool2d_with_indices(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, int[2] dilation=1, bool ceil_mode=False) -> (Tensor, Tensor) |
| aten.max_pool2d_with_indices_backward | max_pool2d_with_indices_backward(Tensor grad_output, Tensor self, int[2] kernel_size, int[2] stride, int[2] padding, int[2] dilation, bool ceil_mode, Tensor indices) -> Tensor |
| aten.max_pool3d_with_indices | max_pool3d_with_indices(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, int[3] dilation=1, bool ceil_mode=False) -> (Tensor, Tensor) |
| aten.maximum | maximum(Tensor self, Tensor other) -> Tensor |
| aten.mean | mean(Tensor self, *, ScalarType? dtype=None) -> Tensor |
| aten.mean.dim | mean.dim(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor |
| aten.min.dim | min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices) |
| aten.minimum | minimum(Tensor self, Tensor other) -> Tensor |
| aten.mm | mm(Tensor self, Tensor mat2) -> Tensor |
| aten.mul.Scalar | mul.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.mul.Tensor | mul.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.native_dropout | native_dropout(Tensor input, float p, bool? train) -> (Tensor, Tensor) |
| aten.native_group_norm | native_group_norm(Tensor input, Tensor? weight, Tensor? bias, SymInt N, SymInt C, SymInt HxW, int group, float eps) -> (Tensor, Tensor, Tensor) |
| aten.native_group_norm_backward | native_group_norm_backward(Tensor grad_out, Tensor input, Tensor mean, Tensor rstd, Tensor? weight, SymInt N, SymInt C, SymInt HxW, int group, bool[3] output_mask) -> (Tensor, Tensor, Tensor) |
| aten.native_layer_norm | native_layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight, Tensor? bias, float eps) -> (Tensor, Tensor, Tensor) |
| aten.native_layer_norm_backward | native_layer_norm_backward(Tensor grad_out, Tensor input, SymInt[] normalized_shape, Tensor mean, Tensor rstd, Tensor? weight, Tensor? bias, bool[3] output_mask) -> (Tensor, Tensor, Tensor) |
| aten.ne.Scalar | ne.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.ne.Tensor | ne.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.neg | neg(Tensor self) -> Tensor |
| aten.nonzero | nonzero(Tensor self) -> Tensor |
| aten.permute | permute(Tensor(a) self, int[] dims) -> Tensor(a) |
| aten.pow.Scalar | pow.Scalar(Scalar self, Tensor exponent) -> Tensor |
| aten.pow.Tensor_Scalar | pow.Tensor_Scalar(Tensor self, Scalar exponent) -> Tensor |
| aten.pow.Tensor_Tensor | pow.Tensor_Tensor(Tensor self, Tensor exponent) -> Tensor |
| aten.prod | prod(Tensor self, *, ScalarType? dtype=None) -> Tensor |
| aten.prod.dim_int | prod.dim_int(Tensor self, int dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor |
| aten.rand | rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.randn | randn(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.randperm | randperm(SymInt n, *, ScalarType? dtype=long, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.reciprocal | reciprocal(Tensor self) -> Tensor |
| aten.reflection_pad1d | reflection_pad1d(Tensor self, SymInt[2] padding) -> Tensor |
| aten.reflection_pad2d | reflection_pad2d(Tensor self, SymInt[4] padding) -> Tensor |
| aten.reflection_pad3d | reflection_pad3d(Tensor self, SymInt[6] padding) -> Tensor |
| aten.relu | relu(Tensor self) -> Tensor |
| aten.remainder.Scalar | remainder.Scalar(Tensor self, Scalar other) -> Tensor |
| aten.remainder.Tensor | remainder.Tensor(Tensor self, Tensor other) -> Tensor |
| aten.repeat | repeat(Tensor self, SymInt[] repeats) -> Tensor |
| aten.replication_pad2d | replication_pad2d(Tensor self, SymInt[4] padding) -> Tensor |
| aten.replication_pad3d | replication_pad3d(Tensor self, SymInt[6] padding) -> Tensor |
| aten.resize_ | resize_(Tensor(a!) self, SymInt[] size, *, MemoryFormat? memory_format=None) -> Tensor(a!) |
| aten.round | round(Tensor self) -> Tensor |
| aten.rsqrt | rsqrt(Tensor self) -> Tensor |
| aten.scalar_tensor | scalar_tensor(Scalar s, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor |
| aten.scatter.src | scatter.src(Tensor self, int dim, Tensor index, Tensor src) -> Tensor |
| aten.scatter.value | scatter.value(Tensor self, int dim, Tensor index, Scalar value) -> Tensor |
| aten.scatter_add | scatter_add(Tensor self, int dim, Tensor index, Tensor src) -> Tensor |
| aten.scatter_reduce.two | scatter_reduce.two(Tensor self, int dim, Tensor index, Tensor src, str reduce, *, bool include_self=True) -> Tensor |
| aten.select.int | select.int(Tensor(a) self, int dim, SymInt index) -> Tensor(a) |
| aten.select_scatter | select_scatter(Tensor self, Tensor src, int dim, SymInt index) -> Tensor |
| aten.sigmoid | sigmoid(Tensor self) -> Tensor |
| aten.sign | sign(Tensor self) -> Tensor |
| aten.sin | sin(Tensor self) -> Tensor |
| aten.sinh | sinh(Tensor self) -> Tensor |
| aten.slice.Tensor | slice.Tensor(Tensor(a) self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a) |
| aten.slice_scatter | slice_scatter(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor |
| aten.sort | sort(Tensor self, int dim=-1, bool descending=False) -> (Tensor values, Tensor indices) |
| aten.split_with_sizes | split_with_sizes(Tensor(a -> *) self, SymInt[] split_sizes, int dim=0) -> Tensor(a)[] |
| aten.sqrt | sqrt(Tensor self) -> Tensor |
| aten.squeeze.dim | squeeze.dim(Tensor(a) self, int dim) -> Tensor(a) |
| aten.squeeze.dims | squeeze.dims(Tensor(a) self, int[] dim) -> Tensor(a) |
| aten.sub.Scalar | sub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor |
| aten.sub.Tensor | sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor |
| aten.sum.dim_IntList | sum.dim_IntList(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor |
| aten.sym_numel | sym_numel(Tensor self) -> SymInt |
| aten.sym_size.int | sym_size.int(Tensor self, int dim) -> SymInt |
| aten.sym_storage_offset | sym_storage_offset(Tensor self) -> SymInt |
| aten.sym_stride.int | sym_stride.int(Tensor self, int dim) -> SymInt |
| aten.tan | tan(Tensor self) -> Tensor |
| aten.tanh | tanh(Tensor self) -> Tensor |
| aten.topk | topk(Tensor self, SymInt k, int dim=-1, bool largest=True, bool sorted=True) -> (Tensor values, Tensor indices) |
| aten.trunc | trunc(Tensor self) -> Tensor |
| aten.unsqueeze | unsqueeze(Tensor(a) self, int dim) -> Tensor(a) |
| aten.upsample_bilinear2d.vec | upsample_bilinear2d.vec(Tensor input, SymInt[]? output_size, bool align_corners, float[]? scale_factors) -> Tensor |
| aten.upsample_nearest2d.vec | upsample_nearest2d.vec(Tensor input, SymInt[]? output_size, float[]? scale_factors) -> Tensor |
| aten.var.correction | var.correction(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False) -> Tensor |
| aten.var.dim | var.dim(Tensor self, int[1]? dim, bool unbiased=True, bool keepdim=False) -> Tensor |
| aten.view | view(Tensor(a) self, SymInt[] size) -> Tensor(a) |
| aten.where.self | where.self(Tensor condition, Tensor self, Tensor other) -> Tensor |
Prims IR
Prims IR is a set of primitive operators that can be used to compose other operators. Prims IR is a lower-level opset than core aten IR, and it further decomposes ops into explicit type promotion and broadcasting ops: prims.convert_element_type and prims.broadcast_in_dim. This opset is designed to interface with compiler backends.
Warning

This opset is still under active development, and more operators will be added in the future.
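As an illustrative sketch only: the snippet below traces a mixed-dtype, broadcasting add through the reference decompositions so that the promotion and broadcast become explicit prims. `TorchRefsMode` and `make_fx` are internal, unstable APIs (`torch._prims.context`, `torch.fx.experimental.proxy_tensor`), so the exact spelling may differ across releases:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch._prims.context import TorchRefsMode

def fn(a, b):
    return a + b  # fp16 + fp32, shapes (3, 1) and (1, 4)

# Tracing under TorchRefsMode routes aten-level calls through torch._refs,
# which decompose into prims with explicit promotion and broadcasting.
with TorchRefsMode():
    gm = make_fx(fn)(
        torch.randn(3, 1, dtype=torch.float16),
        torch.randn(1, 4),
    )

# The traced graph contains prims.convert_element_type (float16 -> float32)
# and prims.broadcast_in_dim ((3, 1), (1, 4) -> (3, 4)) feeding prims.add.
print(gm.graph)
```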
| Operator | Schema |
|---|---|
| prims.abs | (Tensor self) -> Tensor |
| prims.acos | (Tensor self) -> Tensor |
| prims.acosh | (Tensor self) -> Tensor |
| prims.asin | (Tensor self) -> Tensor |
| prims.asinh | (Tensor self) -> Tensor |
| prims.atan | (Tensor self) -> Tensor |
| prims.atanh | (Tensor self) -> Tensor |
| prims.cos | (Tensor self) -> Tensor |
| prims.cosh | (Tensor self) -> Tensor |
| prims.bessel_i0 | (Tensor self) -> Tensor |
| prims.bessel_i0e | (Tensor self) -> Tensor |
| prims.bessel_i1 | (Tensor self) -> Tensor |
| prims.bessel_i1e | (Tensor self) -> Tensor |
| prims.bessel_j0 | (Tensor self) -> Tensor |
| prims.bessel_j1 | (Tensor self) -> Tensor |
| prims.bitwise_not | (Tensor self) -> Tensor |
| prims.cbrt | (Tensor self) -> Tensor |
| prims.ceil | (Tensor self) -> Tensor |
| prims.conj_physical | (Tensor self) -> Tensor |
| prims.digamma | (Tensor self) -> Tensor |
| prims.erf | (Tensor self) -> Tensor |
| prims.erf_inv | (Tensor self) -> Tensor |
| prims.erfc | (Tensor self) -> Tensor |
| prims.erfcx | (Tensor self) -> Tensor |
| prims.exp | (Tensor self) -> Tensor |
| prims.expm1 | (Tensor self) -> Tensor |
| prims.exp2 | (Tensor self) -> Tensor |
| prims.fill | (Tensor self, Scalar value) -> Tensor |
| prims.floor | (Tensor self) -> Tensor |
| prims.imag | (Tensor(a) self) -> Tensor(a) |
| prims.isfinite | (Tensor self) -> Tensor |
| prims.lgamma | (Tensor self) -> Tensor |
| prims.log | (Tensor self) -> Tensor |
| prims.log1p | (Tensor self) -> Tensor |
| prims.log2 | (Tensor self) -> Tensor |
| prims.log10 | (Tensor self) -> Tensor |
| prims.ndtri | (Tensor self) -> Tensor |
| prims.neg | (Tensor self) -> Tensor |
| prims.real | (Tensor(a) self) -> Tensor(a) |
| prims.reciprocal | (Tensor self) -> Tensor |
| prims.round | (Tensor self) -> Tensor |
| prims.sign | (Tensor self) -> Tensor |
| prims.signbit | (Tensor self) -> Tensor |
| prims.sin | (Tensor self) -> Tensor |
| prims.sinh | (Tensor self) -> Tensor |
| prims.spherical_bessel_j0 | (Tensor self) -> Tensor |
| prims.sqrt | (Tensor self) -> Tensor |
| prims.tan | (Tensor self) -> Tensor |
| prims.tanh | (Tensor self) -> Tensor |
| prims.trunc | (Tensor self) -> Tensor |
| prims.add | (Tensor self, Tensor other) -> Tensor |
| prims.atan2 | (Tensor self, Tensor other) -> Tensor |
| prims.bitwise_and | (Tensor self, Tensor other) -> Tensor |
| prims.bitwise_or | (Tensor self, Tensor other) -> Tensor |
| prims.bitwise_xor | (Tensor self, Tensor other) -> Tensor |
| prims.div | (Tensor self, Tensor other) -> Tensor |
| prims.eq | (Tensor self, Tensor other) -> Tensor |
| prims.fmax | (Tensor self, Tensor other) -> Tensor |
| prims.fmin | (Tensor self, Tensor other) -> Tensor |
| prims.fmod | (Tensor self, Tensor other) -> Tensor |
| prims.frexp | (Tensor self) -> (Tensor mantissa, Tensor exponent) |
| prims.gcd | (Tensor self, Tensor other) -> Tensor |
| prims.ge | (Tensor self, Tensor other) -> Tensor |
| prims.gt | (Tensor self, Tensor other) -> Tensor |
| prims.hypot | (Tensor self, Tensor other) -> Tensor |
| prims.igamma | (Tensor self, Tensor other) -> Tensor |
| prims.igammac | (Tensor self, Tensor other) -> Tensor |
| prims.le | (Tensor self, Tensor other) -> Tensor |
| prims.lt | (Tensor self, Tensor other) -> Tensor |
| prims.maximum | (Tensor self, Tensor other) -> Tensor |
| prims.minimum | (Tensor self, Tensor other) -> Tensor |
| prims.mul | (Tensor self, Tensor other) -> Tensor |
| prims.ne | (Tensor self, Tensor other) -> Tensor |
| prims.nextafter | (Tensor self, Tensor other) -> Tensor |
| prims.pow | (Tensor self, Tensor other) -> Tensor |
| prims.remainder | (Tensor self, Tensor other) -> Tensor |
| prims.rsqrt | (Tensor self) -> Tensor |
| prims.shift_left | (Tensor self, Tensor other) -> Tensor |
| prims.shift_right_arithmetic | (Tensor self, Tensor other) -> Tensor |
| prims.sub | (Tensor self, Tensor other) -> Tensor |
| prims.zeta | (Tensor self, Tensor other) -> Tensor |
| prims.as_strided | (Tensor(a!) a, SymInt[] size, SymInt[] stride, SymInt storage_offset) -> Tensor(a!) |
| prims.broadcast_in_dim | (Tensor(a) a, SymInt[] shape, int[] broadcast_dimensions) -> Tensor(a) |
| prims.collapse_view | (Tensor(a) a, int start, int end) -> Tensor(a) |
| prims.conj | (Tensor(a) a) -> Tensor(a) |
| prims.slice | (Tensor(a) a, SymInt[] start_indices, SymInt[] limit_indices, SymInt[]? strides=None) -> Tensor(a) |
| prims.slice_in_dim | (Tensor(a) a, SymInt start_index, SymInt limit_index, int stride=1, int axis=0) -> Tensor(a) |
| prims.split_dim | (Tensor(a) a, int dim, SymInt outer_length) -> Tensor(a) |
| prims.squeeze | (Tensor(a) a, int[] dimensions) -> Tensor(a) |
| prims.transpose | (Tensor(a) a, int[] permutation) -> Tensor(a) |
| prims.view_of | (Tensor(a) a) -> Tensor(a) |
| prims.view_element_type | (Tensor(a) a, ScalarType dtype) -> Tensor(a) |
| prims.as_strided_scatter | (Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt storage_offset) -> Tensor |
| prims.collapse | (Tensor a, int start, int end) -> Tensor |
| prims.cat | (Tensor[] tensors, int dim) -> Tensor |
| prims.reshape | (Tensor a, SymInt[] shape) -> Tensor |
| prims.rev | (Tensor a, int[] dims) -> Tensor |
| prims.where | (Tensor pred, Tensor a, Tensor b) -> Tensor |
| prims.clone | (Tensor self, *, MemoryFormat? memory_format=None) -> Tensor |
| prims.convert_element_type | (Tensor a, ScalarType dtype) -> Tensor |
| prims.device_put | (Tensor a, Device device) -> Tensor |
| prims.item | (Tensor a) -> Scalar |
| prims.maximum_value | (ScalarType dtype) -> Scalar |
| prims.minimum_value | (ScalarType dtype) -> Scalar |
| prims.copy_strided | (Tensor a, SymInt[] stride) -> Tensor |
| prims.copy_to | (Tensor(a!) a, Tensor b) -> Tensor(a!) |
| prims.resize | (Tensor(a!) a, SymInt[] shape) -> Tensor(a!) |
| prims.amax | (Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor |
| prims.amin | (Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor |
| prims.prod | (Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor |
| prims.sum | (Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor |
| prims.xor_sum | (Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor |
| prims.var | (Tensor inp, int[]? dims, float? correction=1, *, ScalarType? output_dtype=None) -> Tensor |
| prims.empty_strided | (SymInt[] shape, SymInt[] strides, *, ScalarType dtype, Device device, bool requires_grad) -> Tensor |
| prims.empty_permuted | (SymInt[] shape, int[] physical_layout, *, ScalarType dtype, Device device, bool requires_grad) -> Tensor |
| prims.scalar_tensor | (Scalar s, *, ScalarType? dtype=None, Device? device=None) -> Tensor |
| prims.iota | (SymInt length, *, SymInt start, SymInt step, ScalarType dtype, Device device, bool requires_grad) -> Tensor |
| prims.svd | (Tensor A, *, bool full_matrices) -> (Tensor U, Tensor S, Tensor Vh) |
| prims.normal | (SymInt[] shape, *, Scalar mean, Scalar std, ScalarType dtype, Device device, bool requires_grad, Generator? generator=None) -> Tensor |
| prims.uniform | (SymInt[] shape, *, Scalar low, Scalar high, ScalarType dtype, Device device, Generator? generator=None) -> Tensor |
| prims.fft_r2c | (Tensor self, *, int[] dim, bool onesided) -> Tensor |
| prims.fft_c2c | (Tensor self, *, int[] dim, bool forward) -> Tensor |
| prims.fft_c2r | (Tensor self, *, int[] dim, SymInt last_dim_size) -> Tensor |
| prims._make_token | () -> Tensor |
| prims._sink_tokens | (Tensor[] tokens) -> () |