[question]: Multiple inference for single frame #161
Comments
Yes, you can simply add multiple stages to your pipeline; each stage will have a …
Does this mean that for every object detected in the previous stage, I need to add stages dynamically? I can do this in `process.py` using a loop, but `process.py` doesn't allow me to run an ONNX model with CUDAExecutionProvider. So I wanted to know if this is possible using `process.json`.
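The per-object loop described above can be sketched roughly as follows. This is a minimal illustration, not this project's API: the detection format and the `run_model` callable are assumptions standing in for whatever single-input model call `process.py` would make.

```python
import numpy as np

def crop_detections(frame, detections):
    """Cut each detected bounding box out of the frame.

    `detections` is assumed to be a list of (x1, y1, x2, y2) pixel boxes.
    """
    return [frame[y1:y2, x1:x2] for (x1, y1, x2, y2) in detections]

def infer_per_object(frame, detections, run_model):
    """Run a single-input model once per detected object (no batching)."""
    return [run_model(crop) for crop in crop_detections(frame, detections)]

# Toy usage with a stand-in "model" that just reports the crop shape.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
detections = [(10, 20, 110, 220), (300, 100, 400, 300)]
outputs = infer_per_object(frame, detections, run_model=lambda c: c.shape)
print(outputs)  # [(200, 100, 3), (200, 100, 3)]
```

Each detection produces one independent forward pass, which is why a model without batch support still works here, at the cost of one inference call per object.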
Oh, I understand it now. That cannot be achieved with …
While running the ONNX model with the CUDA execution provider, it shows a segmentation fault and fails during initialization itself.
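When session creation crashes under CUDAExecutionProvider, a common first check is whether the installed onnxruntime build actually reports CUDA as available, and to fall back to CPU otherwise. A hedged sketch (the model path is a placeholder, and the import is guarded so the helper works even without onnxruntime installed):

```python
def pick_providers(available):
    """Prefer CUDA when the installed build reports it; otherwise CPU only.

    `available` is the list returned by onnxruntime.get_available_providers().
    """
    if "CUDAExecutionProvider" in available:
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

try:
    import onnxruntime as ort

    providers = pick_providers(ort.get_available_providers())
    # "model.onnx" is a placeholder path, not a file from this project:
    # session = ort.InferenceSession("model.onnx", providers=providers)
    print(providers)
except ImportError:
    # onnxruntime is not installed in this environment; the helper above
    # can still be used once it is.
    pass
```

If `get_available_providers()` does not list `CUDAExecutionProvider`, the build is CPU-only (or the CUDA/cuDNN runtime versions do not match it), which would explain a failure at initialization rather than at inference time.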
While using the `process.json` file, setting the inputs in the `inference_input` field for frame data works for a single inference per frame. I want to run multiple inferences, one for each detected object in a frame, as my model doesn't support batch processing. Can this be achieved using the `process.json` file? Does `inference_input` support a list of inputs for multiple inferences?