If I understand your ultimate goal correctly, Caffe's convolution layer can already perform multiple input/output convolutions with common/shared filters, for example:
layer {
  name: "conv"
  type: "Convolution"
  bottom: "in1"
  bottom: "in2"
  bottom: "in3"
  top: "out1"
  top: "out2"
  top: "out3"
  convolution_param {
    num_output: 10   # the same 10 filters for all 3 inputs
    kernel_size: 3
  }
}
This assumes you already have all the input streams separated (the Slice layer can do this), and at the end you can combine them again if you wish, using a Concat or Eltwise layer.
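For instance, here is a minimal sketch of that Slice/Concat plumbing, assuming the blob names from the convolution example above; the layer names, the "data" blob, and the slice points are placeholders rather than anything specified in the question:

layer {
  name: "slice_data"          # placeholder name
  type: "Slice"
  bottom: "data"              # assumed combined input blob
  top: "in1"
  top: "in2"
  top: "in3"
  slice_param {
    axis: 1                   # slice along the channel dimension
    slice_point: 3            # channels [0,3) -> in1
    slice_point: 6            # channels [3,6) -> in2, the rest -> in3
  }
}
layer {
  name: "concat_out"          # placeholder name
  type: "Concat"
  bottom: "out1"
  bottom: "out2"
  bottom: "out3"
  top: "out_all"
  concat_param { axis: 1 }    # stack the three outputs back along channels
}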
This avoids having to reshape the blob, convolve, and then reshape it back, which could introduce interference between channels near the boundaries.