Keras: Feeding In Part Of Previous Layer To Next Layer, In CNN
Solution 1:
After posting this and this question on StackOverflow, and some personal exploration, I came up with a solution. One can do this with Lambda layers: call a Lambda layer to extract a sub-part of the previous layer. For example, if the Lambda function is defined as,
def layer_slice(x, i):
    # extract the i-th channel, keeping the channel dimension
    return x[:, :, :, i:i+1]
and then, called as,
import numpy as np
from keras.layers import Conv2D, Lambda, Activation, concatenate

k = 5
x = Conv2D(k, (3, 3), data_format='channels_last', padding='same', name='block1_conv1')(inputs)
y = np.empty(k, dtype=object)
for i in range(k):
    # slice out the i-th feature map and feed it to its own Conv2D
    y[i] = Lambda(layer_slice, arguments={'i': i})(x)
    y[i] = Conv2D(1, (3, 3), data_format='channels_last', padding='same')(y[i])
y = concatenate([y[i] for i in range(k)], axis=3, name='block1_conv1_loc')
out = Activation('relu')(y)
print('Output shape is ' + str(out.get_shape()))
it should effectively feed the individual kernel outputs into new Conv2D layers. The layer shapes and the corresponding numbers of trainable parameters reported by model.summary() match the expectation. Thanks to Daniel for pointing out that Lambda layers cannot have trainable weights.
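For completeness, here is a minimal sketch of how one might check those shapes and parameter counts with model.summary(). The input shape (32, 32, 1) is an assumption for illustration, not part of the original answer; layer_slice is the function defined above.

from keras.models import Model
from keras.layers import Input, Conv2D, Lambda, Activation, concatenate

inputs = Input(shape=(32, 32, 1))  # assumed single-channel input, for illustration
k = 5
x = Conv2D(k, (3, 3), padding='same', name='block1_conv1')(inputs)
slices = [Lambda(layer_slice, arguments={'i': i})(x) for i in range(k)]
branches = [Conv2D(1, (3, 3), padding='same')(s) for s in slices]
out = Activation('relu')(concatenate(branches, axis=3))

model = Model(inputs=inputs, outputs=out)
model.summary()
# block1_conv1: k * (3*3*1 + 1) = 50 params; each single-filter branch: 3*3*1 + 1 = 10

The Lambda layers should show zero trainable parameters in the summary, consistent with Daniel's point above.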
Solution 2:
Prabaha, I know you've solved your problem, but now that I see your answer: you can do this without the Lambda layer too; just split the first Conv2D into many. One layer with k filters is equivalent to k layers with one filter each:
y = [None] * k
for i in range(k):
    # one single-filter Conv2D per branch, applied directly to the inputs
    y[i] = Conv2D(1, (3, 3), ..., name='block1_conv' + str(i))(inputs)
    y[i] = Conv2D(1, (3, 3), ...)(y[i])
y = Concatenate()([y[i] for i in range(k)])
out = Activation('relu')(y)
You can count the total parameters in your answer and in this one to compare; a quick check is sketched below.
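As a back-of-the-envelope check, assuming 3x3 kernels, a single-channel input, and k = 5 (illustrative numbers, not from the original post), both layouts come out to the same count:

k = 5
kernel_params = 3 * 3 * 1  # weights of one 3x3 filter on a 1-channel input
# Solution 1: one k-filter Conv2D, then k single-filter Conv2Ds on the slices
sol1 = k * (kernel_params + 1) + k * (kernel_params + 1)
# Solution 2: k single-filter Conv2Ds on the input, then k more on the branches
sol2 = k * (kernel_params + 1) + k * (kernel_params + 1)
print(sol1, sol2)  # 100 100 -- identical, as expected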