
Show Original Image Pixels Instead Of Mask In Python

I have a deep learning model that returns an array which, when plotted like this, shows the segmentation mask:

res = deeplab_model.predict(np.expand_dims(resized2, 0))
labels = np.argmax(res.squeeze(), -1)

How can I show the original image pixels instead of the mask?
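For reference, a tiny self-contained illustration of what that argmax step produces; the shapes below are made up and no model is needed to run it:

import numpy as np

# fake network output: batch of 1, a 4x4 image, 3 classes per pixel
res = np.random.rand(1, 4, 4, 3)
labels = np.argmax(res.squeeze(), -1)   # per-pixel class id, shape (4, 4)
print(labels)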

Solution 1:

It's not entirely clear how the labels array works here. Assuming that it contains values greater than zero where the cat and dog are, you can create the masked image with something like:

mask = labels > 0                  # True wherever the model assigned a non-background label
newimage = np.zeros_like(image)    # black image with the same shape and dtype as the original
newimage[mask] = image[mask]       # copy the original pixels only where the mask is True

where I've created a zero image based on the original and set the original pixels wherever the labels are greater than zero.
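As a quick sanity check, here is a self-contained version of that approach with synthetic data; the image size and the labelled region are made-up placeholders, not values from the question:

import numpy as np

# synthetic stand-ins for the real image and label map
image = np.random.randint(0, 256, (384, 512, 3), dtype=np.uint8)
labels = np.zeros((384, 512), dtype=np.int64)
labels[100:200, 150:300] = 1        # pretend this region was segmented as an object

mask = labels > 0                   # boolean mask of the segmented pixels
newimage = np.zeros_like(image)     # black canvas, same shape and dtype as the image
newimage[mask] = image[mask]        # keep the original pixels only under the mask

The result shows the original pixels wherever the model detected something and black everywhere else.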

Solution 2:

I was able to reverse this and achieve what I wanted:

mask = labels[:-pad_x] == 0                   # crop off the padding; True on background pixels
resizedOrig = cv2.resize(frame, (512, 384))   # resize the original frame to the mask's size
resizedOrig[mask] = 0                         # black out everything that isn't part of the object
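Here is a runnable sketch of the same idea with placeholder data; frame, pad_x and the 512x384 size come from the snippet above, while the concrete shapes and values are assumptions:

import numpy as np
import cv2

# placeholder frame and label map; the label map carries pad_x extra padded rows
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
pad_x = 128
labels = np.zeros((384 + pad_x, 512), dtype=np.int64)
labels[100:300, 150:400] = 1                  # pretend segmentation result

mask = labels[:-pad_x] == 0                   # crop off the padding; True on background pixels
resizedOrig = cv2.resize(frame, (512, 384))   # resize the frame to match the mask
resizedOrig[mask] = 0                         # zero out the background, keep the object pixels

Note that cv2.resize takes the target size as (width, height), which is why (512, 384) yields an array that matches the (384, 512) mask.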
