Show Original Image Pixels Instead Of Mask In Python
I have a deep learning model which returns an array that I turn into a per-pixel label map like this:

res = deeplab_model.predict(np.expand_dims(resized2, 0))
labels = np.argmax(res.squeeze(), -1)

When I plot labels I get the segmentation mask, but I would like to show the original image pixels inside the masked region instead of the mask itself.
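For context, here is a self-contained sketch of that pipeline with a toy stand-in for the question's deeplab_model and resized2; the fake model, image size, and class count are assumptions for illustration only:

import numpy as np

# Hypothetical stand-ins: `deeplab_model` is assumed to be a Keras-style model
# and `resized2` a preprocessed H x W x 3 image array.
H, W, N_CLASSES = 512, 512, 21
resized2 = np.random.rand(H, W, 3).astype(np.float32)

class FakeDeepLab:
    # Toy model that returns random per-class scores, just to show the shapes.
    def predict(self, batch):
        return np.random.rand(batch.shape[0], H, W, N_CLASSES)

deeplab_model = FakeDeepLab()

# Add a batch dimension, predict, then take the argmax over the class axis
# to get one integer class id per pixel.
res = deeplab_model.predict(np.expand_dims(resized2, 0))
labels = np.argmax(res.squeeze(), -1)   # shape (H, W)

The resulting labels array holds one class id per pixel, which is what the answer below indexes into.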
Solution 1:
It's not entirely clear how the labels array works here. Assuming that it contains values greater than zero where the cat and dog are, you can create the masked image with something like,
import numpy as np

mask = labels > 0                 # True wherever a non-background class was predicted
newimage = np.zeros_like(image)   # blank image with the same shape and dtype as the original
newimage[mask] = image[mask]      # copy the original pixels inside the mask
where I've created a zero image based on the original and copied in the original pixels wherever the labels are greater than zero.
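Putting it together, here is a runnable sketch of that approach with a synthetic image and label map standing in for the real model output; the gradient image, the circular foreground region, and the matplotlib display are assumptions for demonstration only:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins: a gradient "photo" and a label map that marks a circular
# foreground region with class 1 on a background of zeros.
H, W = 256, 256
image = np.dstack([np.tile(np.linspace(0, 255, W), (H, 1))] * 3).astype(np.uint8)
yy, xx = np.mgrid[:H, :W]
labels = ((yy - H // 2) ** 2 + (xx - W // 2) ** 2 < 80 ** 2).astype(np.int64)

# Keep the original pixels wherever a class was predicted, zero elsewhere.
mask = labels > 0
newimage = np.zeros_like(image)
newimage[mask] = image[mask]

plt.imshow(newimage)
plt.show()

An equivalent one-liner, if you prefer to avoid the intermediate zero image, is newimage = np.where(mask[..., None], image, 0).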