Locally inverting a neural network

I have a neural network with N input nodes and N output nodes, and possibly several hidden layers and recurrent connections, but let's forget about those for now. The goal of the network is to learn an N-dimensional variable Y*, given an N-dimensional input X. Say the output of the network is Y, which should be close to Y* after training. My question is: is it possible to run the network in reverse, starting from Y*? That is, how do I find a value X* that yields Y* (or something close to it) when fed into the network?

The main problem is that N is very large, typically on the order of 10,000 or 100,000. But if someone knows how to solve this for small networks with no recurrence or hidden layers, that might already be useful. Thanks.

+4
4 answers

If you can choose the neural network so that the number of nodes in each layer is the same, each weight matrix is nonsingular, and the transfer function is invertible (e.g., leaky ReLU), then the whole function is invertible.

Such a neural network is simply a composition of matrix multiplications, bias additions, and transfer functions. To invert it, apply the inverse of each operation in reverse order. That is, take the output, apply the inverse transfer function, subtract the last bias, multiply by the inverse of the last weight matrix, apply the inverse transfer function again, subtract the second-to-last bias, multiply by the inverse of the second-to-last weight matrix, and so on.
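A minimal sketch of this layer-by-layer inversion in NumPy, assuming square, nonsingular weight matrices and leaky ReLU activations (all names, shapes, and parameter values here are illustrative, not from the question):

    import numpy as np

    def leaky_relu(x, alpha=0.1):
        return np.where(x > 0, x, alpha * x)

    def leaky_relu_inv(y, alpha=0.1):
        # Leaky ReLU is bijective for alpha > 0, so it inverts exactly.
        return np.where(y > 0, y, y / alpha)

    def forward(x, weights, biases):
        # Each layer: x -> leaky_relu(W @ x + b)
        for W, b in zip(weights, biases):
            x = leaky_relu(W @ x + b)
        return x

    def inverse(y, weights, biases):
        # Undo each layer in reverse order: invert the activation,
        # subtract the bias, then solve against the weight matrix.
        for W, b in zip(reversed(weights), reversed(biases)):
            y = np.linalg.solve(W, leaky_relu_inv(y) - b)
        return y

    rng = np.random.default_rng(0)
    n = 4
    # Random square Gaussian matrices are nonsingular with probability 1.
    weights = [rng.standard_normal((n, n)) for _ in range(3)]
    biases = [rng.standard_normal(n) for _ in range(3)]

    x = rng.standard_normal(n)
    y = forward(x, weights, biases)
    x_rec = inverse(y, weights, biases)
    print(np.allclose(x, x_rec))  # True, up to floating-point error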

+2

This is a problem that can be approached with autoencoders. You might also be interested in generative models such as Restricted Boltzmann Machines (RBMs), which can be stacked to form Deep Belief Networks (DBNs). An RBM builds an internal model of the data: a hidden representation h of the visible data v, which can be used to reconstruct v. In a DBN, the h of the first level becomes the v of the second level, and so on.
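A rough sketch of the h-to-v reconstruction step of a binary RBM (the weights here are random and untrained, purely to illustrate the mechanics; in practice they would be learned, e.g., with contrastive divergence):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 3
    W = rng.standard_normal((n_hidden, n_visible)) * 0.1  # illustrative weights
    b_v = np.zeros(n_visible)  # visible bias
    b_h = np.zeros(n_hidden)   # hidden bias

    v = rng.integers(0, 2, n_visible).astype(float)  # a visible vector
    h = sigmoid(W @ v + b_h)        # hidden representation of v
    v_rec = sigmoid(W.T @ h + b_v)  # reconstruction of v from h
    print(v, v_rec)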

+1

Zenna is right. If you use bijective (invertible) activation functions, you can invert layer by layer: invert the activation, subtract the bias, and take the pseudoinverse of the weight matrix (if you have the same number of neurons per layer, this is also the exact inverse under some mild regularity conditions). To repeat the conditions: dim(X) == dim(Y) == dim(layer_i), and det(W_i) != 0.

Example:

    Y = tanh(W2 * tanh(W1 * X + b1) + b2)
    X = W1p * (tanh^-1(W2p * (tanh^-1(Y) - b2)) - b1)

where W2p and W1p are the pseudoinverses of W2 and W1, respectively.
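A quick numerical check of this formula in NumPy, assuming square weight matrices so the pseudoinverse coincides with the exact inverse (the dimensions and random values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    W1, W2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

    x = rng.standard_normal(n) * 0.1  # keep pre-activations away from tanh saturation
    y = np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

    # Invert layer by layer with pseudoinverses, as in the formula above.
    W1p, W2p = np.linalg.pinv(W1), np.linalg.pinv(W2)
    x_rec = W1p @ (np.arctanh(W2p @ (np.arctanh(y) - b2)) - b1)
    print(np.allclose(x, x_rec))  # True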

0

The following document gives an example of inverting a function learned by a neural network. It is an example from industry and looks like a good starting point for understanding how to approach the problem.

0
