I'm doing neural network work with nolearn, a Theano-based library that uses Lasagne.
I do not understand how to define my own cost function.
The output layer has only 3 neurons [0, 1, 2], and I want the network to be fairly confident when it outputs 1 or 2; if it is really not sure about 1 or 2, it should simply return 0.
So I came up with a cost function (some setup is required) in which mistakes on classes 1 and 2 cost twice as much as mistakes on class 0, but I can't figure out how to plug it into training.
This is the code I want to update, but how can I tell SGD to use my cost function instead of the default one?
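For concreteness, here is the kind of weighting I have in mind, sketched in plain NumPy (`weighted_crossentropy` and `CLASS_WEIGHTS` are names I made up for this question; the network itself would need the same formula written with Theano tensors):

```python
import numpy as np

# Hypothetical per-class weights: mistakes on classes 1 and 2
# cost twice as much as mistakes on class 0.
CLASS_WEIGHTS = np.array([1.0, 2.0, 2.0])

def weighted_crossentropy(predictions, targets):
    """Cross-entropy where each sample is scaled by the weight of its true class.

    predictions: (n_samples, n_classes) array, rows summing to 1
    targets:     (n_samples,) integer class labels
    """
    eps = 1e-7  # avoid log(0)
    # Probability the model assigned to each sample's true class.
    p_true = np.clip(predictions[np.arange(len(targets)), targets], eps, 1.0)
    # Scale each sample's negative log-likelihood by its class weight.
    return np.mean(CLASS_WEIGHTS[targets] * -np.log(p_true))
```

If I understand nolearn's API correctly, a Theano version of this could be handed to `NeuralNet` via its `objective_loss_function` argument, but I'm not sure that is the right hook.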
EDIT: Full network code:
```python
def nn_loss(data, x_period, columns, num_epochs, batchsize, l_rate=0.02):
    net1 = NeuralNet(
        layers=[
            ('input', layers.InputLayer),
            ('hidden1', layers.DenseLayer),
            ('output', layers.DenseLayer),
        ],
```
EDIT: Error when using regression=True
```
Got 99960 testing datasets.
# Neural Network with 18403 learnable parameters

## Layer information

  #  name     size
---  -------  ------
  0  input    180
  1  hidden1  100
  2  output   3

Traceback (most recent call last):
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/theano/compile/function_module.py", line 607, in __call__
    outputs = self.fn()
ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[1] == 1, but the output size on that axis is 3.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_nolearn_simple.py", line 272, in <module>
    main(**kwargs)
  File "train_nolearn_simple.py", line 239, in main
    nn_loss_fit = nn_loss(data, x_period, columns, num_epochs, batchsize)
  File "train_nolearn_simple.py", line 217, in nn_loss
    net1.fit(data['X_train'], data['y_train'])
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/nolearn/lasagne/base.py", line 416, in fit
    self.train_loop(X, y)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/nolearn/lasagne/base.py", line 462, in train_loop
    self.train_iter_, Xb, yb)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/nolearn/lasagne/base.py", line 516, in apply_batch_func
    return func(Xb) if yb is None else func(Xb, yb)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/theano/compile/function_module.py", line 618, in __call__
    storage_map=self.fn.storage_map)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/theano/gof/link.py", line 297, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "/Users/morgado/anaconda/lib/python3.4/site-packages/theano/compile/function_module.py", line 607, in __call__
    outputs = self.fn()
ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[1] == 1, but the output size on that axis is 3.

Apply node that caused the error: GpuElemwise{Sub}[(0, 1)](GpuElemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0, GpuFromHost.0)
Toposort index: 22
Inputs types: [CudaNdarrayType(float32, matrix), CudaNdarrayType(float32, matrix)]
Inputs shapes: [(200, 3), (200, 1)]
Inputs strides: [(3, 1), (1, 0)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuCAReduce{pre=sqr,red=add}{1,1}(GpuElemwise{Sub}[(0, 1)].0), GpuElemwise{Mul}[(0, 0)](GpuElemwise{Sub}[(0, 1)].0, GpuElemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0, GpuElemwise{sub,no_inplace}.0), GpuElemwise{mul,no_inplace}(CudaNdarrayConstant{[[ 2.]]}, GpuElemwise{Composite{(inv(i0) / i1)},no_inplace}.0, GpuElemwise{Sub}[(0, 1)].0, GpuElemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0, GpuElemwise{sub,no_inplace}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
```
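As far as I can tell, the mismatch is between the network's (200, 3) output and my (200, 1) target column. Here is a small NumPy sketch of what I suspect the target would have to look like for the shapes to line up (one-hot rows instead of a single label column; this is my guess, not something from the nolearn docs):

```python
import numpy as np

# Integer labels as I currently store them: one column per sample.
y = np.array([0, 2, 1, 0])

# One-hot encoding: one row of length 3 per sample, matching the
# (n_samples, 3) shape of the network's output layer.
y_onehot = np.eye(3, dtype=np.float32)[y]

print(y_onehot.shape)  # (4, 3)
```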