Adding L1 / L2 Regularization in PyTorch?

Is there any way to add simple L1 / L2 regularization in PyTorch? We can probably compute a regularized loss by simply adding data_loss to reg_loss, but is there an explicit way, any support in the PyTorch library, to make this easier without doing it manually?

+20
pytorch
4 answers

This is covered in the PyTorch documentation. Take a look at http://pytorch.org/docs/optim.html#torch.optim.Adagrad . You can add an L2 penalty using the weight_decay parameter of the optimizer.
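For example, a minimal sketch with Adagrad (assuming `model` is an existing module; the lr and weight_decay values here are illustrative choices, not from the answer):

    import torch

    # weight_decay folds an L2 penalty on the parameters into the update rule
    optimizer = torch.optim.Adagrad(model.parameters(), lr=1e-2, weight_decay=1e-4)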

+16

The following should help for L2 regularization:

 optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5) 
+18

For L2 regularization, you can build the penalty term manually and add it to the loss:

    l2_lambda = torch.tensor(1.)  # regularization strength; note "lambda" itself is a reserved word in Python
    l2_reg = torch.tensor(0.)
    for param in model.parameters():
        l2_reg = l2_reg + torch.norm(param)  # note: the conventional L2 penalty uses the squared norm
    loss = loss + l2_lambda * l2_reg
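The question also asks about L1; a minimal sketch of the same manual pattern for an L1 penalty (the coefficient 1e-4 and the names are illustrative, and `model` is again assumed to be an existing module):

    l1_lambda = 1e-4  # illustrative regularization strength
    l1_reg = torch.tensor(0.)
    for param in model.parameters():
        l1_reg = l1_reg + param.abs().sum()  # L1 norm of each parameter tensor
    loss = loss + l1_lambda * l1_reg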


+10

Interestingly, torch.norm is slower than the direct approach on the CPU, but faster on the GPU:

    import torch

    x = torch.randn(1024, 100)
    y = torch.randn(1024, 100)

    %timeit torch.sqrt((x - y).pow(2).sum(1))
    %timeit torch.norm(x - y, 2, 1)

Out:

    1000 loops, best of 3: 910 µs per loop
    1000 loops, best of 3: 1.76 ms per loop

On the other hand:

    import torch

    x = torch.randn(1024, 100).cuda()
    y = torch.randn(1024, 100).cuda()

    %timeit torch.sqrt((x - y).pow(2).sum(1))
    %timeit torch.norm(x - y, 2, 1)

Out:

    10000 loops, best of 3: 50 µs per loop
    10000 loops, best of 3: 26 µs per loop
+2
