
Why did I test the program's inference speed on the computer and achieve 50s instead of the 40ms mentioned in the article? #92

Open
amazingpanpanda opened this issue Sep 10, 2024 · 15 comments

Comments

@amazingpanpanda

(image attachment: 1725962698758)

@Gofinge
Member

Gofinge commented Sep 11, 2024

Your local machine is 4090?

@amazingpanpanda
Author

> Your local machine is 4090?

A100

@Gofinge
Member

Gofinge commented Sep 11, 2024

> Your local machine is 4090?

> A100

Wow. I measured the inference latency with our training script, with the backward-pass code commented out.

@Gofinge
Member

Gofinge commented Sep 11, 2024

> (image attachment: 1725962698758)

Oh, I see what you mean. Inference latency means the time for a single forward pass. The test script runs the forward pass multiple times and reports the best result.
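For reference, the measurement protocol described here (time only the forward pass, discard warmup iterations, take the best over repeated runs) can be sketched roughly like this. The function name and structure are illustrative, not the repository's actual test script, and on a GPU you would additionally synchronize the device before reading the clock:

```python
import time

def best_forward_latency(model, sample, warmup=10, runs=50):
    """Best (minimum) latency in milliseconds over repeated forward passes.

    Warmup iterations are discarded so one-time costs (allocator setup,
    kernel compilation) do not inflate the number.
    """
    for _ in range(warmup):
        model(sample)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        model(sample)
        best = min(best, (time.perf_counter() - start) * 1000.0)
    return best

# Toy stand-in for a model: any callable works.
latency_ms = best_forward_latency(lambda x: sorted(x), list(range(1000)))
```

Running the forward pass once with no warmup, as a plain call in a full training/testing script does, will include startup costs and data handling, which is one way to end up far above the reported 44ms.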

@amazingpanpanda
Author

> Your local machine is 4090?

> A100

> Wow. I measured the inference latency with our training script, with the backward-pass code commented out.

So may I ask whether the "44ms" under "Faster Speed" in the third part of Figure 1 of the paper refers to the time required for backbone inference, rather than the time required for the model to infer one frame of data?

@Gofinge
Member

Gofinge commented Sep 11, 2024

Actually, a single forward pass per frame is sufficient for applications. (Simply interpolating the prediction back to the original resolution also performs well.) The complex testing process is just for getting a higher number.
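The "interpolating the prediction back to the original resolution" step could look like the following minimal sketch for a subsampled point cloud: each original point takes the label of its nearest subsampled point. The function name is illustrative, not a helper from the repository:

```python
def propagate_labels(full_points, sub_points, sub_labels):
    """Give each original point the label of its nearest subsampled point.

    Brute-force nearest-neighbor search for clarity; a real pipeline
    would use a KD-tree or the inverse mapping kept from grid subsampling.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return [
        sub_labels[min(range(len(sub_points)),
                       key=lambda i: sq_dist(p, sub_points[i]))]
        for p in full_points
    ]

full = [(0.0, 0.0, 0.0), (0.9, 0.0, 0.0), (5.0, 0.0, 0.0)]
sub = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
print(propagate_labels(full, sub, ["wall", "floor"]))  # → ['wall', 'wall', 'floor']
```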

@amazingpanpanda
Author

> Actually, a single forward pass per frame is sufficient for applications. (Simply interpolating the prediction back to the original resolution also performs well.) The complex testing process is just for getting a higher number.

May I ask how long it takes you to infer one frame of data during testing?

@Gofinge
Member

Gofinge commented Sep 11, 2024

Do you mean the current default test script?

@amazingpanpanda
Author

> Do you mean the current default test script?

yes

@Gofinge
Member

Gofinge commented Sep 11, 2024

> Do you mean the current default test script?

> yes

I think it should be the same for me as for you. Since I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for training to end.

@amazingpanpanda
Author

> Do you mean the current default test script?

> yes

> I think it should be the same for me as for you. Since I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for training to end.

OK, thank you for your answer.

@Gofinge
Member

Gofinge commented Sep 11, 2024

> Do you mean the current default test script?

> yes

> I think it should be the same for me as for you. Since I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for training to end.

> OK, thank you for your answer.

If you want to save more time, you can reduce the number of augmentations used for testing.

@amazingpanpanda
Author

> Do you mean the current default test script?

> yes

> I think it should be the same for me as for you. Since I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for training to end.

> OK, thank you for your answer.

> If you want to save more time, you can reduce the number of augmentations used for testing.

Yes, I found through testing that even with only one augmentation, the inference time is still 200ms to 400ms.

@Gofinge
Member

Gofinge commented Sep 11, 2024

Usually, that is good enough. But you know, for benchmarking, we are usually willing to spend more testing time for even a 0.1% improvement.
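The multi-augmentation testing being discussed amounts to aggregating the model's per-class scores over several augmented views of the same sample, which multiplies the per-frame cost by the number of views. A minimal sketch, with illustrative names and a plain mean as the aggregation:

```python
def tta_predict(model, sample, augmentations):
    """Average per-class scores over several augmented views of one sample.

    Each extra augmentation costs one more forward pass, which is why
    reducing the augmentation count directly reduces testing time.
    """
    total = None
    for augment in augmentations:
        scores = model(augment(sample))
        total = scores if total is None else [t + s for t, s in zip(total, scores)]
    return [t / len(augmentations) for t in total]
```

With N augmentations this performs N forward passes per frame, consistent with a single-view latency of tens of milliseconds growing to hundreds once several views are aggregated.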

@amazingpanpanda
Author

> Usually, that is good enough. But you know, for benchmarking, we are usually willing to spend more testing time for even a 0.1% improvement.

OK
