torch.gradient
- torch.gradient(input, *, spacing=1, dim=None, edge_order=1) → List of Tensors
Estimates the gradient of a function g : R^n → R in one or more dimensions using the second-order accurate central differences method, and either first- or second-order estimates at the boundaries.

The gradient of g is estimated using samples. By default, when spacing is not specified, the samples are entirely described by input, and the mapping of input coordinates to an output is the same as the tensor's mapping of indices to values. For example, for a three-dimensional input the function described is g : R^3 → R, and g(1, 2, 3) == input[1, 2, 3]. When spacing is specified, it modifies the relationship between input and input coordinates. This is detailed in the "Keyword Arguments" section below.

The gradient is estimated by estimating each partial derivative of g independently. This estimation is accurate if g is in C^3 (it has at least 3 continuous derivatives), and the estimation can be improved by providing closer samples. Mathematically, the value of each partial derivative at an interior point is estimated using Taylor's theorem with remainder. Letting x be an interior point, with x - h_l and x + h_r its neighboring points to the left and right respectively, f(x + h_r) and f(x - h_l) can be estimated using:

    f(x + h_r) = f(x) + h_r f'(x) + (h_r^2 / 2) f''(x) + (h_r^3 / 6) f'''(ξ_1),  ξ_1 ∈ (x, x + h_r)
    f(x - h_l) = f(x) - h_l f'(x) + (h_l^2 / 2) f''(x) - (h_l^3 / 6) f'''(ξ_2),  ξ_2 ∈ (x - h_l, x)

Using the fact that f ∈ C^3 and solving the linear system, we derive:

    f'(x) ≈ (h_l^2 f(x + h_r) - h_r^2 f(x - h_l) + (h_r^2 - h_l^2) f(x)) / (h_r h_l^2 + h_r^2 h_l)
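As a sanity check, the interior formula above can be compared against what torch.gradient returns for unevenly spaced samples (a minimal sketch; the sample function f(x) = x³ and the coordinates below are arbitrary choices, not part of the API):

```python
import torch

# Unevenly spaced coordinates and samples of f(x) = x**3.
coords = torch.tensor([0.0, 1.0, 1.5, 3.0, 3.5])
values = coords ** 3

# spacing given as a tuple of 1-D coordinate tensors, one per dimension.
(est,) = torch.gradient(values, spacing=(coords,))

# Re-derive the interior estimate at index i directly from the formula above.
def interior(f, x, i):
    hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
    return (hl**2 * f[i + 1] - hr**2 * f[i - 1]
            + (hr**2 - hl**2) * f[i]) / (hr * hl**2 + hr**2 * hl)

manual = torch.stack([interior(values, coords, i) for i in range(1, 4)])
print(torch.allclose(est[1:4], manual))  # interior points agree
```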
Note

We estimate the gradient of functions in the complex domain g : C^n → C in the same way.
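For instance (a small sketch; the complex linear function below is an arbitrary choice), a complex-valued input yields a complex-valued gradient estimated with the same differences:

```python
import torch

# Samples of g(x) = (1 + 2j) * x at x = 0, 1, 2, 3; the exact
# derivative is 1 + 2j everywhere, and central differences (plus the
# first-order edge estimates) are exact for a linear function.
z = (1 + 2j) * torch.tensor([0., 1., 2., 3.], dtype=torch.complex64)
(grad,) = torch.gradient(z)
print(grad)  # every entry is approximately 1 + 2j
```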
The value of each partial derivative at the boundary points is computed differently. See edge_order below.

Parameters
- input (Tensor) – the tensor that represents the values of the function

Keyword Arguments
- spacing (scalar, list of scalar, list of Tensor, optional) – spacing can be used to modify how the input tensor's indices relate to sample coordinates. If spacing is a scalar then the indices are multiplied by the scalar to produce the coordinates. For example, if spacing=2 the indices (1, 2, 3) become coordinates (2, 4, 6). If spacing is a list of scalars then the corresponding indices are multiplied. For example, if spacing=(2, -1, 3) the indices (1, 2, 3) become coordinates (2, -2, 9). Finally, if spacing is a list of one-dimensional tensors then each tensor specifies the coordinates for the corresponding dimension. For example, if the indices are (1, 2, 3) and the tensors are (t0, t1, t2), then the coordinates are (t0[1], t1[2], t2[3]).
- dim (int, list of int, optional) – the dimension or dimensions to approximate the gradient over. By default the partial gradient in every dimension is computed. Note that when dim is specified the elements of the spacing argument must correspond with the specified dims.
- edge_order (int, optional) – 1 or 2, for first-order or second-order estimation of the boundary ("edge") values, respectively.
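To illustrate edge_order and the dim/spacing correspondence (a minimal sketch; the quadratic samples are an arbitrary choice): for f(x) = x² sampled at unit spacing the exact derivative is 2x, which second-order boundary estimates recover exactly, while the default first-order estimates are off at the two edges:

```python
import torch

# f(x) = x**2 at x = 0..4; the exact derivative is [0, 2, 4, 6, 8].
values = torch.tensor([0., 1., 4., 9., 16.])

(first,) = torch.gradient(values, edge_order=1)
(second,) = torch.gradient(values, edge_order=2)

print(first)   # boundary values use one-sided first differences: [1, 2, 4, 6, 7]
print(second)  # exact for a quadratic: [0, 2, 4, 6, 8]

# When dim is given, spacing must line up with the listed dims:
# here spacing 3. applies to dimension 1 only.
t = torch.tensor([[1., 2., 4., 8.], [10., 20., 40., 80.]])
(d1,) = torch.gradient(t, spacing=(3.,), dim=(1,))
```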
Example:
```python
>>> # Estimates the gradient of f(x)=x^2 at points [-2, -1, 1, 4]
>>> coordinates = (torch.tensor([-2., -1., 1., 4.]),)
>>> values = torch.tensor([4., 1., 1., 16.], )
>>> torch.gradient(values, spacing = coordinates)
(tensor([-3., -2., 2., 5.]),)

>>> # Estimates the gradient of the R^2 -> R function whose samples are
>>> # described by the tensor t. Implicit coordinates are [0, 1] for the outermost
>>> # dimension and [0, 1, 2, 3] for the innermost dimension, and function estimates
>>> # partial derivative for both dimensions.
>>> t = torch.tensor([[1, 2, 4, 8], [10, 20, 40, 80]])
>>> torch.gradient(t)
(tensor([[ 9., 18., 36., 72.],
         [ 9., 18., 36., 72.]]),
 tensor([[ 1.0000,  1.5000,  3.0000,  4.0000],
         [10.0000, 15.0000, 30.0000, 40.0000]]))

>>> # A scalar value for spacing modifies the relationship between tensor indices
>>> # and input coordinates by multiplying the indices to find the
>>> # coordinates. For example, below the indices of the innermost
>>> # 0, 1, 2, 3 translate to coordinates of [0, 2, 4, 6], and the indices of
>>> # the outermost dimension 0, 1 translate to coordinates of [0, 2].
>>> torch.gradient(t, spacing = 2.0) # dim = None (implicitly [0, 1])
(tensor([[ 4.5000,  9.0000, 18.0000, 36.0000],
         [ 4.5000,  9.0000, 18.0000, 36.0000]]),
 tensor([[ 0.5000,  0.7500,  1.5000,  2.0000],
         [ 5.0000,  7.5000, 15.0000, 20.0000]]))
>>> # doubling the spacing between samples halves the estimated partial gradients.

>>> # Estimates only the partial derivative for dimension 1
>>> torch.gradient(t, dim = 1) # spacing = None (implicitly 1.)
(tensor([[ 1.0000,  1.5000,  3.0000,  4.0000],
         [10.0000, 15.0000, 30.0000, 40.0000]]),)

>>> # When spacing is a list of scalars, the relationship between the tensor
>>> # indices and input coordinates changes based on dimension.
>>> # For example, below, the indices of the innermost dimension 0, 1, 2, 3 translate
>>> # to coordinates of [0, 3, 6, 9], and the indices of the outermost dimension
>>> # 0, 1 translate to coordinates of [0, 2].
>>> torch.gradient(t, spacing = [3., 2.])
(tensor([[ 4.5000,  9.0000, 18.0000, 36.0000],
         [ 4.5000,  9.0000, 18.0000, 36.0000]]),
 tensor([[ 0.3333,  0.5000,  1.0000,  1.3333],
         [ 3.3333,  5.0000, 10.0000, 13.3333]]))

>>> # The following example is a replication of the previous one with explicit
>>> # coordinates.
>>> coords = (torch.tensor([0, 2]), torch.tensor([0, 3, 6, 9]))
>>> torch.gradient(t, spacing = coords)
(tensor([[ 4.5000,  9.0000, 18.0000, 36.0000],
         [ 4.5000,  9.0000, 18.0000, 36.0000]]),
 tensor([[ 0.3333,  0.5000,  1.0000,  1.3333],
         [ 3.3333,  5.0000, 10.0000, 13.3333]]))
```