
Can not reproduce the segmentation mIoU on ADE20k #3

Open
shiyutang opened this issue Feb 9, 2023 · 7 comments
@shiyutang

shiyutang commented Feb 9, 2023

I tried to reproduce the base model, but I only got mIoU = 40.0 rather than the 41.2 reported in the codebase.
[screenshot]

Here are the changes I made to the original code:

  1. I trained the model on 4 GPUs with 4 images per GPU.
    [screenshot]

  2. I loaded the pre-trained classification model through the config:
    [screenshot]

And the script I used to train is: `export CUDA_VISIBLE_DEVICES=3,5,6,7; sh tools/dist_train.sh local_configs/seaformer/seaformer_base_512x512_160k_2x8_ade20k.py 4 --work-dir output`

Could you point out where this might be going wrong? Thank you very much :)

@shiyutang shiyutang changed the title Can not reproduce the segmentation mIoU on Can not reproduce the segmentation mIoU on ADE20k Feb 9, 2023
@wwqq
Collaborator

wwqq commented Feb 9, 2023

We reported mIoU = 40.2 in the paper, so you have essentially reproduced that result. The model released in this repo is a better one, obtained after multiple training runs.

@shiyutang
Author

Thank you very much. I wonder: did you tune anything across those multiple training runs, or did the ~1% gain in mIoU come purely from training fluctuation?

@wwqq
Collaborator

wwqq commented Feb 9, 2023

We didn't tune anything. We used the same settings as in this repo.

@shiyutang
Author

Thanks a lot.

@nizhenliang

Hello, I ran into the same problem. I reproduced SeaFormer-Base on ADE20K but only got 39.96 mIoU. I noticed the lr in the paper is 0.0005, while in this repo it is 0.00025. Which one is your actual config?

@wwqq
Collaborator

wwqq commented Mar 3, 2023

@nizhenliang Hello, the lr is 0.0005 for batch size 32 and 0.00025 for batch size 16.
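This is the standard linear learning-rate scaling rule. A minimal sketch of it (the helper function and its defaults are illustrative only; the base values 0.00025 at batch size 16 are taken from this thread, not from the repo's code):

```python
def scaled_lr(total_batch_size, base_lr=0.00025, base_batch_size=16):
    """Linear LR scaling: lr grows proportionally with the total batch size.

    Defaults (lr = 0.00025 at total batch size 16) come from this thread;
    the function itself is an illustration, not part of the SeaFormer repo.
    """
    return base_lr * total_batch_size / base_batch_size

print(scaled_lr(16))  # 0.00025 (4 GPUs x 4 images, as in the original post)
print(scaled_lr(32))  # 0.0005  (the paper's setting, 2x8)
```

So with 4 GPUs and 4 images per GPU (total batch size 16), the repo's lr of 0.00025 is the matching setting, and the paper's 0.0005 corresponds to a total batch size of 32.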

@1787648106

> We didn't tune anything. We used the same settings as in this repo.

Hello, I wonder what value of the random seed was used in this experiment. Thanks.
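On the seed question: repos built on mmsegmentation usually expose `--seed` (and `--deterministic`) flags on `tools/train.py`, which `tools/dist_train.sh` forwards to it. A sketch of a seeded run, assuming this repo follows that standard mmseg interface (the flag names are an assumption, not confirmed by the maintainers):

```shell
# Fix the random seed and enable deterministic cuDNN ops.
# Assumes the standard mmseg tools/train.py CLI; check this repo's
# tools/train.py argparse definitions to confirm the flags exist.
export CUDA_VISIBLE_DEVICES=3,5,6,7
sh tools/dist_train.sh \
    local_configs/seaformer/seaformer_base_512x512_160k_2x8_ade20k.py 4 \
    --work-dir output \
    --seed 0 --deterministic
```

Note that even with a fixed seed, some CUDA ops are nondeterministic unless `--deterministic` is set, and run-to-run mIoU fluctuations of a few tenths of a point are common on ADE20K.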
