
ValueError: Input 0 of layer sequential_16 is incompatible with the layer: expected ndim=5, found ndim=4. Full shape received: [None, 224, 224, 3]

I am using transfer learning with MobileNet and then sending the extracted features to an LSTM for video data classification. Images are resized to (224, 224) when I set the train, te…

Solution 1:

Your model is perfectly fine. It's the way you are feeding the data that is the problem.

Your model code:

import tensorflow as tf
import keras
from keras.layers import GlobalMaxPool2D, TimeDistributed, Dense, Dropout, LSTM
from keras.applications import MobileNetV2
from keras.models import Sequential
import numpy as np
from keras.preprocessing.sequence import pad_sequences

TARGETX = 224
TARGETY = 224
CLASSES = 3
SIZE = (TARGETX,TARGETY)
INPUT_SHAPE = (TARGETX, TARGETY, 3)
CHANNELS = 3
NBFRAME = 5
INSHAPE = (NBFRAME, TARGETX, TARGETY, 3)

def build_mobilenet(shape=INPUT_SHAPE, nbout=CLASSES):
    # INPUT_SHAPE = (224, 224, 3)
    # CLASSES = 3
    model = MobileNetV2(
        include_top=False,
        input_shape=shape,
        weights='imagenet')
    model.trainable = True
    output = GlobalMaxPool2D()
    return Sequential([model, output])

def action_model(shape=INSHAPE, nbout=3):
    # INSHAPE = (5, 224, 224, 3)
    convnet = build_mobilenet(shape[1:])
    
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(64))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(nbout, activation='softmax'))
    return model    
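
The "expected ndim=5" in the error comes from the TimeDistributed wrapper: it applies the frame-level MobileNet to every time step, so the model expects input of shape (batch, frames, height, width, channels), whereas image_dataset_from_directory hands it single images of shape (None, 224, 224, 3). A quick sanity check of the two pieces (the random frames below are just for illustration):

# The frame-level extractor on its own takes a 4-D batch of single images...
convnet = build_mobilenet()
frames = np.random.randn(8, TARGETX, TARGETY, 3).astype('float32')
print(convnet(frames).shape)   # (8, 1280): one feature vector per frame

# ...but the full model wraps it in TimeDistributed, so it needs a 5-D input of
# (batch, frames, height, width, channels) -- hence "expected ndim=5, found ndim=4".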

Let's try out this model with some dummy data now.

So your model accepts a sequence of images (i.e. the frames of a video) and classifies them (the video) into one of the 3 classes.

Let's create dummy data with 4 videos of 10 frames each, i.e. batch size = 4 and time steps = 10:

model = action_model()   # build the model before calling it

X = np.random.randn(4, 10, TARGETX, TARGETY, 3)
y = model(X)
print(y.shape)

Output:

(4,3)

As expected, the output shape is (4, 3).
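
If you want to go one step further than a forward pass and actually fit on dummy data, a minimal sketch would look like the following; the optimizer, loss and one-hot labels are assumptions for illustration, not part of the original setup, and the clips here use NBFRAME frames so they match INSHAPE exactly:

# Dummy clips matching INSHAPE = (5, 224, 224, 3) and hypothetical one-hot labels.
X_train = np.random.randn(4, NBFRAME, TARGETX, TARGETY, 3).astype('float32')
y_train = tf.keras.utils.to_categorical([0, 1, 2, 0], num_classes=CLASSES)

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1, batch_size=2)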

Now the problem you will face when using image_dataset_from_directory is how to batch variable-length videos, since the number of frames in each video will/might vary. The way to handle it is with pad_sequences.

For example, if the first video has 10 frames, the second has 9, and so on, you can do something like below:

X = [np.random.randn(10, TARGETX, TARGETY, 3),
     np.random.randn(9, TARGETX, TARGETY, 3),
     np.random.randn(8, TARGETX, TARGETY, 3),
     np.random.randn(7, TARGETX, TARGETY, 3)]

X = pad_sequences(X, dtype='float32')  # the default dtype='int32' would truncate the float frames
y = model(X)
print(y.shape)

Output:

(4,3)
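
For reference, pad_sequences zero-pads the shorter videos (at the start, with its default padding='pre') up to the longest clip in the batch, so the padded batch is again a single 5-D tensor:

# X is now one 5-D array; the 9-, 8- and 7-frame clips were zero-padded to 10 frames.
print(X.shape)   # (4, 10, 224, 224, 3)

# padding='post' and maxlen let you pad at the end or clip to a fixed length instead,
# e.g. pad_sequences(clips, padding='post', maxlen=NBFRAME, dtype='float32')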

So once you read the images using image_dataset_from_directory, you will have to pad the variable-length frame sequences into a batch.
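
Putting it together for real data: image_dataset_from_directory yields individual images, so for videos you typically have to group the frames per clip yourself before padding. Below is a minimal sketch, assuming each video's frames are stored as sorted image files in their own subfolder; load_video, make_batch and the folder layout are hypothetical helpers, not Keras API:

import os
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.preprocessing.sequence import pad_sequences

def load_video(video_dir, size=SIZE):
    # Load every frame of one video (sorted by filename) into a (frames, 224, 224, 3) array.
    frame_files = sorted(os.listdir(video_dir))
    frames = [img_to_array(load_img(os.path.join(video_dir, f), target_size=size))
              for f in frame_files]
    return np.stack(frames)

def make_batch(video_dirs):
    # Read a list of variable-length videos and pad them into a single 5-D batch.
    clips = [load_video(d) for d in video_dirs]
    return pad_sequences(clips, dtype='float32')

# Hypothetical usage:
# batch = make_batch(['videos/class_a/vid_001', 'videos/class_b/vid_002'])
# preds = model(batch)   # shape (2, 3)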
