Hi authors,
Thanks a lot for your excellent work and code.
Currently I'm facing a problem with this project. I built the environment on an RTX 3090 GPU and trained a model on the Waymo segmentation dataset. Everything works on the RTX 3090, but the same built-up environment fails on an A100: inference with the model trained on the RTX 3090 does not work, and training the same model on the A100 does not work either.
Do you have any ideas how to fix it?
Thank you!
Hi, may I confirm that you directly loaded the environment built on the 3090? If so, note that the A100 and the 3090 are different GPU architectures, so you should build a separate environment from scratch for each. If not, could you provide the error message?
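By "architectures" I mean CUDA compute capabilities: the RTX 3090 is sm_86 and the A100 is sm_80, so compiled CUDA extensions built only for one may fail on the other. A quick sketch of how to check what you have (the `compute_capability` helper and its name table are my own illustration, not part of this project; on a machine with PyTorch you can instead call `torch.cuda.get_device_capability()` directly):

```python
# Map a GPU product name to its CUDA compute capability, to show why a
# binary compiled only for the 3090 (sm_86) can fail on an A100 (sm_80).
# The table below is a small hand-picked subset, not an exhaustive list.
KNOWN_CAPABILITIES = {
    "RTX 3090": "8.6",  # Ampere consumer (GA102)
    "A100": "8.0",      # Ampere datacenter (GA100)
    "V100": "7.0",      # Volta
}

def compute_capability(gpu_name: str) -> str:
    """Return the compute capability for a GPU whose name contains a known model."""
    for model, cap in KNOWN_CAPABILITIES.items():
        if model in gpu_name:
            return cap
    raise KeyError(f"unknown GPU: {gpu_name}")

print(compute_capability("NVIDIA GeForce RTX 3090"))  # 8.6
print(compute_capability("NVIDIA A100-SXM4-40GB"))    # 8.0
```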
Yes, I directly loaded the environment built on the 3090. So is it recommended to rebuild the environment from scratch on the A100, or is it possible to rebuild/reinstall only some of the packages in the environment? Also, what do you mean by "architectures"? May I have some references for it? I am trying to find an easier way to make it work.
Yes, or you can build all CUDA libraries with a long CUDA arch list, e.g.:
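The example that followed appears to have been lost from the thread. A minimal sketch of the idea, assuming the project's CUDA extensions are built the PyTorch way (they honor the standard `TORCH_CUDA_ARCH_LIST` environment variable; the exact rebuild command for this project may differ):

```shell
# Build the CUDA extensions for both architectures at once:
# 8.6 covers the RTX 3090, 8.0 covers the A100.
export TORCH_CUDA_ARCH_LIST="8.0;8.6"
echo "$TORCH_CUDA_ARCH_LIST"
# Then rebuild the project's extensions, e.g.: pip install -v -e .
```

With both architectures in the list, the same built environment can run on either GPU, at the cost of longer compile times and larger binaries.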