
python - Resizing a PyTorch tensor with gradients to a smaller size


I am trying to shrink a tensor from (3, 3) down to (1, 1), but I want to keep the original tensor:

import torch

a = torch.rand(3, 3)
a_copy = a.clone()
a_copy.resize_(1, 1)
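
This works when no gradients are involved; a quick sanity check (my own addition, not part of the original post) confirms that resizing the clone leaves the original untouched:

print(a.shape)       # torch.Size([3, 3]) -- the original is preserved
print(a_copy.shape)  # torch.Size([1, 1])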

My initial tensor needs requires_grad=True, but PyTorch forbids me from resizing the copy:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone()
a_copy.resize_(1, 1)

This throws the error:

Traceback (most recent call last):
  File "pytorch_test.py", line 7, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad

Clone and detach

I also tried .clone().detach():

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone().detach()

with torch.no_grad():
    a_copy.resize_(1, 1)

This gives the following error:

Traceback (most recent call last):
  File "pytorch_test.py", line 14, in <module>
    a_copy.resize_(1, 1)
RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.
For example, change:
    x.data.set_(y)
to:
    with torch.no_grad():
        x.set_(y)
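
As an aside (a minimal sketch of my own, not from the original post), the pattern the error message recommends does work on plain tensors that don't require grad: set_() makes one tensor adopt another tensor's storage and shape:

import torch

x = torch.rand(3, 3)
y = torch.rand(1, 1)
with torch.no_grad():
    x.set_(y)   # x now shares y's storage and reports shape (1, 1)
print(x.shape)  # torch.Size([1, 1])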

This behavior is documented in the docs and in #15070.

Using no_grad()

So, following what the error message says, I removed .detach() and used no_grad() instead:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone()

with torch.no_grad():
    a_copy.resize_(1, 1)

But it still gives me an error about grad:

Traceback (most recent call last):
  File "pytorch_test.py", line 21, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad

Similar questions

I have looked at Resize PyTorch Tensor, but the tensor in that example keeps all of its original values. I have also looked at Pytorch preferred way to copy a tensor, which is the method I used to copy the tensor above.

I am using PyTorch version 1.4.0.

Best answer

There is a narrow() function:

import torch

def samestorage(x, y):
    if x.storage().data_ptr() == y.storage().data_ptr():
        print("same storage")
    else:
        print("different storage")

def contiguous(y):
    if y.is_contiguous():
        print("contiguous")
    else:
        print("non contiguous")

# narrow => same storage, contiguous tensors
x = torch.randn(3, 3, requires_grad=True)
y = x.narrow(0, 1, 2)  # dim, start, length
print(x)
print(y)
contiguous(y)
samestorage(x, y)

Output:

tensor([[ 1.1383, -1.2937,  0.8451],
        [ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], requires_grad=True)
tensor([[ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], grad_fn=<SliceBackward>)
contiguous
same storage
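
Applied to the original (3, 3) → (1, 1) problem, here is a minimal sketch (my addition, assuming a (1, 1) view that stays in the autograd graph is acceptable): chain narrow() once per dimension. The original tensor keeps its shape, and gradients flow back to it:

import torch

a = torch.rand(3, 3, requires_grad=True)

# narrow(dim, start, length) returns a view; `a` itself keeps its (3, 3) shape.
a_small = a.narrow(0, 0, 1).narrow(1, 0, 1)  # shape (1, 1)
print(a_small.shape)  # torch.Size([1, 1])

a_small.sum().backward()
print(a.grad)  # 1.0 at position [0, 0], zeros elsewhere

Because narrow() shares storage with the original rather than mutating it, this sidesteps the resize_() restriction entirely: nothing about a changes, so autograd has nothing to object to.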

Regarding python - resizing a PyTorch tensor with gradients to a smaller size, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60664524/
