Pytorch: convert FloatTensor to DoubleTensor

I have 2 numpy arrays that I convert to tensors to use the TensorDataset object.

import numpy as np
import torch
import torch.utils.data as data_utils

X = np.zeros((100,30))
Y = np.zeros((100,30))

train = data_utils.TensorDataset(torch.from_numpy(X).double(), torch.from_numpy(Y))
train_loader = data_utils.DataLoader(train, batch_size=50, shuffle=True)

When I do this:

for batch_idx, (data, target) in enumerate(train_loader):
    data, target = Variable(data), Variable(target)
    optimizer.zero_grad()
    output = model(data)               # error occurs here

I get an error:

TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of: [...]
 * (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2) didn't match because some of the arguments have invalid types: (int, int, torch.DoubleTensor, torch.FloatTensor)
 * (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2) didn't match because some of the arguments have invalid types: (int, int, torch.DoubleTensor, torch.FloatTensor)

The last error arises from:

output.addmm_(0, 1, input, weight.t())

As you can see, I convert the data by calling .double(), but that did not help. Why is one argument a FloatTensor and the other a DoubleTensor? How can I fix this?


Your numpy arrays are 64-bit floating point, so they get converted to torch.DoubleTensor by default. When you feed them to your model, you therefore need to make sure that your model parameters are also Double. Alternatively, cast your numpy arrays to Float, since model parameters are Float by default.

Hence, either do:

data_utils.TensorDataset(torch.from_numpy(X).float(), torch.from_numpy(Y).float())

or do:

model.double()

depending on whether you want to cast your model parameters, inputs and targets as Float or as Double.
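
For illustration, here is a minimal sketch of both options (not from the original answer; the nn.Linear model is a hypothetical stand-in for whatever model you are using):

import numpy as np
import torch
import torch.nn as nn
import torch.utils.data as data_utils

X = np.zeros((100, 30))   # numpy defaults to float64
Y = np.zeros((100, 30))

# Option 1: cast the data to Float so it matches the default FloatTensor parameters
train = data_utils.TensorDataset(torch.from_numpy(X).float(), torch.from_numpy(Y).float())

# Option 2: keep the data as Double and cast the model parameters instead
model = nn.Linear(30, 30)   # hypothetical stand-in for the real model
model.double()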


This happens because in PyTorch you cannot do operations between tensors of different types. Your data is a DoubleTensor, while the model parameters are FloatTensors, hence the error message. As @mexmex said, convert your data to a FloatTensor so that it matches the type of the model parameters.
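
As a quick check (a diagnostic sketch, assuming the model and the X array from the question), you can print the tensor types to see the mismatch directly:

print(torch.from_numpy(X).type())             # torch.DoubleTensor
print(next(model.parameters()).data.type())   # torch.FloatTensor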

Thanks! I am new to PyTorch, so I did not realize that. That fixed it.

