Simple, self-contained convolutional neural network code

I am interested in convolutional neural networks (CNNs) as an example of a computationally intensive application that is suitable for acceleration using reconfigurable hardware (i.e. FPGAs).

To do this, I need to study a simple CNN code that I can use to understand how CNNs are implemented, how the calculations in each layer are performed, and how the output of each layer is fed into the input of the next. I am familiar with the theoretical part ( http://cs231n.imtqy.com/convolutional-networks/ ).
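To make concrete what I mean by bare bones, the following is the kind of code I want to study: a minimal, hypothetical sketch of a single convolution layer (one input channel, one output channel, stride 1, no padding, ReLU activation), written purely for illustration and not taken from any library:

    #include <cstdio>
    #include <vector>

    // "Valid" 2D convolution (cross-correlation, as in most CNN frameworks):
    // slide a k_size x k_size kernel over the input, accumulate, add bias, ReLU.
    std::vector<float> conv2d(const std::vector<float>& in, int in_h, int in_w,
                              const std::vector<float>& k, int k_size, float bias) {
        const int out_h = in_h - k_size + 1;
        const int out_w = in_w - k_size + 1;
        std::vector<float> out(out_h * out_w);
        for (int y = 0; y < out_h; ++y)
            for (int x = 0; x < out_w; ++x) {
                float acc = bias;  // start the accumulation from the bias value
                for (int ky = 0; ky < k_size; ++ky)
                    for (int kx = 0; kx < k_size; ++kx)
                        acc += in[(y + ky) * in_w + (x + kx)] * k[ky * k_size + kx];
                out[y * out_w + x] = acc < 0 ? 0 : acc;  // ReLU activation
            }
        return out;  // this would be fed to the next layer
    }

    int main() {
        // 4x4 input, 3x3 kernel -> 2x2 output
        std::vector<float> in  = {1, 2, 3, 4,  5, 6, 7, 8,
                                  9, 10, 11, 12,  13, 14, 15, 16};
        std::vector<float> k   = {0, 0, 0,  0, 1, 0,  0, 0, 0};  // identity kernel
        std::vector<float> out = conv2d(in, 4, 4, k, 3, 0.0f);
        for (int y = 0; y < 2; ++y) {
            for (int x = 0; x < 2; ++x) printf("%6.1f", out[y * 2 + x]);
            printf("\n");  // prints 6 7 / 10 11: the centers picked by the kernel
        }
        return 0;
    }

A real CNN chains several such layers (convolution, pooling, fully connected), passing each layer's output buffer as the next layer's input.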

However, I am not interested in training the CNN; I want complete, self-contained CNN code that is pre-trained, with all weight and bias values known.

I know that there are many CNN libraries, e.g. Caffe, but the problem is that there is no non-trivial sample code that is self-contained. Even for the simplest Caffe example, "cpp_classification", numerous libraries are invoked, the CNN architecture is expressed as a .prototxt file, and other input files such as .caffemodel and .binaryproto are involved. The OpenCV2 libraries are called as well. There are layers and layers of abstraction and multiple libraries working together to produce a classification result.

I know that these abstractions are necessary to create a "usable" CNN implementation, but for a hardware person who needs bare-bones code to study, this is too much "unrelated work".

My question is: can someone point me to a simple and self-contained CNN implementation that I can start with?

Tags: deep-learning, caffe

2 Answers

I can recommend tiny-cnn. It is simple, lightweight (e.g. header-only) and CPU-only, while providing several layers that are frequently used in the literature (e.g. pooling layers, dropout layers, or local response normalization layers). This means that you can easily examine an efficient C++ implementation of these layers without requiring CUDA knowledge and without digging through the I/O and framework code that a framework such as Caffe requires. The implementation lacks some comments, but the code is still easy to read and understand.
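For orientation, constructing a small network in tiny-cnn looked roughly like this at the time, judging from its README. This is a sketch from memory, so the exact template and class names are assumptions that may differ between versions; check the repository for the current API:

    #include "tiny_cnn/tiny_cnn.h"
    using namespace tiny_cnn;
    using namespace tiny_cnn::activation;

    int main() {
        // loss function: mean squared error, optimizer: adagrad (assumed names)
        network<mse, adagrad> nn;

        // a LeNet-like stack: convolution -> pooling -> two fully connected layers
        nn << convolutional_layer<tan_h>(32, 32, 5, 1, 6)   // in 32x32x1, 5x5 kernels, out 28x28x6
           << average_pooling_layer<tan_h>(28, 28, 6, 2)    // 2x2 pooling, out 14x14x6
           << fully_connected_layer<tan_h>(14 * 14 * 6, 120)
           << fully_connected_layer<identity>(120, 10);     // 10 output classes
        return 0;
    }

The operator<< chaining makes the layer-to-layer data flow explicit, which is exactly what you want to trace for a hardware implementation.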

The provided MNIST example is fairly easy to use (I tried it a while ago) and trains effectively. After training and testing, the weights are written to a file. That gives you a simple pre-trained model to start with; see the provided examples/mnist/test.cpp and examples/mnist/train.cpp. The trained model can easily be loaded for testing (or digit recognition), so you can step through the code while running the trained model.
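If I recall correctly, tiny-cnn serializes the weights with plain stream operators, so reloading a pre-trained model takes only a couple of lines. Treat the exact calls below as assumptions from memory and check examples/mnist/test.cpp for the authoritative version:

    #include <fstream>
    #include "tiny_cnn/tiny_cnn.h"
    using namespace tiny_cnn;

    // `nn` must be built with the same architecture that was trained
    void save_weights(network<mse, adagrad>& nn) {
        std::ofstream ofs("LeNet-weights");
        ofs << nn;                         // serialize all weights and biases
    }

    void classify(network<mse, adagrad>& nn, const vec_t& image) {
        std::ifstream ifs("LeNet-weights");
        ifs >> nn;                         // restore the pre-trained parameters
        vec_t scores = nn.predict(image);  // one score per output class
        (void)scores;                      // e.g. pick the argmax as the digit
    }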

If you want to experiment with a more complex network, have a look at the CIFAR-10 example.


This is the simplest implementation I have seen: McCaffrey's DNN.

In addition, the source code of Karpathy's implementation looks quite simple.

