ValueError When Loading a Previously Saved Retrained VGG16 Model Using Keras
Solution 1:
The problem is with the line model.layers.pop(). When you pop a layer directly from the list model.layers, the topology of the model is not updated accordingly, so every subsequent operation works against an incorrect model definition.
Specifically, when you add a layer with model.add(layer), the list model.outputs is updated to hold the output tensor of that layer. You can find the following lines in the source code of Sequential.add():
output_tensor = layer(self.outputs[0])
# ... skipping irrelevant lines
self.outputs = [output_tensor]
When you call model.layers.pop(), however, model.outputs is not updated accordingly. As a result, the next layer you add will be called with the wrong input tensor (because self.outputs[0] is still the output tensor of the removed layer).
This can be demonstrated by the following lines:
from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Sequential
vgg16_model = VGG16()
model = Sequential()
for layer in vgg16_model.layers:
    model.add(layer)
model.layers.pop()
model.add(Dense(3, activation='softmax'))

print(model.layers[-1].input)
# => Tensor("predictions_1/Softmax:0", shape=(?, 1000), dtype=float32)
# the new layer is called on a wrong input tensor
print(model.layers[-1].kernel)
# => <tf.Variable 'dense_1/kernel:0' shape=(1000, 3) dtype=float32_ref>
# the kernel shape is also wrong
The incorrect kernel shape is why you're seeing an error about incompatible shapes [4096,3] versus [1000,3]: the new Dense layer should sit on top of the 4096-unit fc2 layer (giving a kernel of shape (4096, 3)), but it was instead built against the 1000-unit output of the removed predictions layer, so its weights were saved with shape (1000, 3).
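For reference, a quick check (assuming vgg16_model is the stock keras.applications.vgg16.VGG16()) shows where the new layer should attach:

print(vgg16_model.layers[-2].output)
# => a tensor of shape (?, 4096) coming from the fc2 layer;
# a Dense(3) connected here gets a kernel of shape (4096, 3)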
To solve the problem, simply don't add the last layer to the Sequential model:
model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)
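From here you can attach the new classification head exactly as before and verify that the kernel now has the expected shape. A minimal end-to-end sketch, assuming the stock ImageNet weights and the same 3-class Dense head as in the question:

from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Sequential

vgg16_model = VGG16()  # stock ImageNet weights

model = Sequential()
for layer in vgg16_model.layers[:-1]:  # drop the 1000-way predictions layer
    model.add(layer)
model.add(Dense(3, activation='softmax'))

print(model.layers[-1].kernel)
# => kernel of shape (4096, 3), connected to fc2 as intended

A model built this way can be saved and reloaded without the shape mismatch.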