Why does testing the program's inference speed on my machine give 50s instead of the 40ms mentioned in the paper? #92
Is your local machine a 4090?
A100.
Wow. I tested the inference latency with our training script, with the backward-pass code commented out.
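For reference, a minimal sketch of how forward-only latency is typically measured in PyTorch; the ResNet-50 placeholder backbone and the 512x512 input are assumptions, not this repo's actual model or resolution. The `torch.cuda.synchronize()` calls matter because GPU execution is asynchronous.

```python
import time
import torch
import torchvision

# Placeholder backbone standing in for the paper's model; the 512x512
# input size is likewise an assumption, not the repo's actual setting.
model = torchvision.models.resnet50(weights=None).eval().cuda()
frame = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    for _ in range(10):           # warm-up so kernel launch/caching
        model(frame)              # overhead does not skew the timing
    torch.cuda.synchronize()      # flush queued GPU work before timing
    start = time.time()
    for _ in range(100):
        model(frame)
    torch.cuda.synchronize()      # GPU ops are async; sync before stopping
    elapsed = time.time() - start

print(f"mean forward latency: {elapsed / 100 * 1000:.1f} ms")
```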
So may I ask whether the "44ms" under "Faster Speed" in the third part of Figure 1 of the paper refers to the time required for backbone inference, rather than the time for the model to infer one frame of data?
Actually, inferring each frame once is sufficient for the application. (Simply interpolating the prediction to the original resolution also performs well.) The complex testing process is just for getting a higher benchmark number.
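A sketch of the simple single-pass path described above, assuming a semantic-segmentation-style output; the 19-class count and the resolutions are illustrative only.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only: 19 classes, a quarter-resolution logit map
# (128x128) upsampled back to a 512x512 frame.
logits = torch.randn(1, 19, 128, 128)          # (batch, classes, h, w)
full_res = F.interpolate(logits, size=(512, 512),
                         mode="bilinear", align_corners=False)
prediction = full_res.argmax(dim=1)            # per-pixel class labels
print(prediction.shape)                        # torch.Size([1, 512, 512])
```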
May I ask how long it takes you to infer one frame of data during testing?
Do you mean the current default test script?
Yes.
I think it should be about the same for me as for you. Since I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for training to end.
OK, thank you for your answer.
If you want to save more time, you can reduce the number of augmentations used for testing.
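A sketch of what such multi-augmentation (test-time augmentation) evaluation typically looks like, and where the time goes: each extra scale or flip costs another forward pass. The function name, the scale list, and the assumption that `model` returns per-class logit maps are all illustrative, not this repo's actual test script.

```python
import torch
import torch.nn.functional as F

def tta_predict(model, frame, scales=(0.75, 1.0, 1.25), flip=True):
    """Multi-scale + flip test-time augmentation. Shrinking `scales`
    (e.g. to (1.0,)) and disabling `flip` trades accuracy for speed."""
    _, _, h, w = frame.shape
    logits_sum = 0.0
    with torch.no_grad():
        for s in scales:
            x = F.interpolate(frame, scale_factor=s,
                              mode="bilinear", align_corners=False)
            out = model(x)                       # one forward pass per scale
            logits_sum = logits_sum + F.interpolate(
                out, size=(h, w), mode="bilinear", align_corners=False)
            if flip:
                out_f = model(torch.flip(x, dims=[3]))   # horizontal flip
                out_f = torch.flip(out_f, dims=[3])      # un-flip the logits
                logits_sum = logits_sum + F.interpolate(
                    out_f, size=(h, w), mode="bilinear", align_corners=False)
    return logits_sum.argmax(dim=1)              # averaged per-pixel labels
```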
Yes, I found through testing that even with only one augmentation, the inference time can reach 200ms to 400ms.
Usually, that is good enough. But for benchmarking, we are usually willing to spend more testing time for even a 0.1% improvement.
OK.