
How to use the example given using Unet3p on the Oxford Pet dataset with a dataset containing gray images #61

Open
hubsilon opened this issue Sep 7, 2022 · 1 comment

Comments


hubsilon commented Sep 7, 2022

Dear all,

I hope you're doing well.

I would like to use the given example of Unet3p on the Oxford Pet dataset, but with a dataset containing gray images as input (the masks have 3 labels, namely 1, 2, 3).

I first tried without changing anything, treating the gray images as if they were RGB images. Here is the error:
ValueError: Dimensions must be equal, but are 1048576 and 1572864 for '{{node hybrid_loss/mul}} = Mul[T=DT_FLOAT](hybrid_loss/Reshape, hybrid_loss/Reshape_1)' with input shapes: [1048576], [1572864].

The error lies in this line:
# train on batch
loss_ = unet3plus.train_on_batch([train_input,],
                                 [train_target, train_target, train_target, train_target, train_target,])

Then I tried changing the input_size from (128, 128, 3) to (128, 128, 1), and also changing channel=3 to channel=1, but that did not work either: same error, same line.

I would appreciate your help with this, and/or advice on the best way to use this script on gray images.

Best regards

@hubsilon hubsilon changed the title Possible bug in the exemple "UNET 3+ with deep supervision, classification-guided module, and hybrid loss" // n_labels vs input_tensor How to use the exemple given using Unet3p on Oxford Pet dataset with a dataset containing gray images Sep 8, 2022
@murdav

murdav commented Nov 19, 2022

Try having a look at this example (lines 8-10):
https://github.com/yingkaisha/keras-vision-transformer/blob/main/examples/Swin_UNET_oxford_iiit.ipynb

I think you used the same n_labels for both the input and output layers. If so, you need one-hot-encoded masks.
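For reference, the integer masks can be one-hot encoded before training so that the target shape matches the model's 3-class softmax output. A minimal NumPy sketch, assuming a hypothetical batch of 16 grayscale 128x128 images and masks whose pixels carry the labels 1, 2, 3 (the Keras equivalent would be to_categorical(raw_masks - 1, num_classes=3)):

```python
import numpy as np

# Hypothetical grayscale batch: 16 images of 128x128 with 1 channel,
# and integer masks whose pixels carry the labels 1, 2, 3.
train_input = np.random.rand(16, 128, 128, 1).astype('float32')
raw_masks = np.random.randint(1, 4, size=(16, 128, 128))

# Shift labels 1..3 down to 0..2, then one-hot encode by indexing an
# identity matrix; the resulting target shape (16, 128, 128, 3)
# matches a 3-class softmax output layer.
train_target = np.eye(3, dtype='float32')[raw_masks - 1]
print(train_target.shape)  # (16, 128, 128, 3)
```

With deep supervision enabled, the same one-hot train_target would then be passed once per supervised output head, as in the train_on_batch call quoted above.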
