Replies: 2 comments
-
hi there - sorry for the late response, I just realised this question was asked. A few questions:
-
and also - the setup seems to fit what we prepared for video processing inside the inference server: https://inference.roboflow.com/workflows/video_processing/overview/ This is not yet fully stable, but it may fit your needs.
-
Hi!
My name is Erwin, I am on an AI team, and we are currently experimenting with InferencePipeline. We are trying to run multiple AI CCTV camera streams with InferencePipeline.
My basic system, for example:
camera1 has 3 AI modules, so 3 InferencePipeline instances are initialized.
Then, if we want to remove the camera, we delete those InferencePipeline instances/objects (a rough sketch of this lifecycle follows below).
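For context, a minimal sketch of that lifecycle, assuming the documented `InferencePipeline.init(...)` / `start()` / `terminate()` API from the `inference` package; the model IDs, camera URL, and prediction handler are placeholders, not our actual code:

```python
from inference import InferencePipeline

# Placeholder IDs - one per AI module attached to camera1.
MODEL_IDS = ["model-a/1", "model-b/1", "model-c/1"]
CAMERA_URL = "rtsp://camera1.local/stream"  # placeholder stream source

def on_prediction(predictions, video_frame):
    # Handle one frame's predictions (e.g. push to a queue / DB).
    pass

# One InferencePipeline per AI module, all reading the same stream.
pipelines = [
    InferencePipeline.init(
        model_id=model_id,
        video_reference=CAMERA_URL,
        on_prediction=on_prediction,
    )
    for model_id in MODEL_IDS
]
for p in pipelines:
    p.start()

# ... later, when camera1 is removed:
for p in pipelines:
    p.terminate()  # signal the pipeline threads to stop
    p.join()       # wait until they have finished
del pipelines      # drop the last Python references to the objects
```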
The problem:
For example, if I start camera1 (initial VRAM = 100 MB), VRAM increases to, say, 200 MB. Then when I stop camera1, VRAM decreases, but a little is left over (VRAM = 120 MB). So 20 MB stays stuck in our VRAM; see the cleanup sketch after this paragraph.
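A possible check after terminating the pipelines is whether the leftover is cached memory rather than a true leak. This is only a sketch, under the assumption that part of the stack is PyTorch/CUDA-backed (unconfirmed; the calls below only affect PyTorch's allocator):

```python
import gc
import torch

def release_gpu_memory():
    # Collect unreachable Python objects that may still hold CUDA tensors.
    gc.collect()
    # Return cached-but-unused CUDA memory to the driver. Note that the
    # CUDA context itself keeps a fixed per-process footprint that
    # empty_cache() cannot release - that alone can look like a small
    # permanent leftover after a pipeline is stopped.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```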
So here I tried to benchmark the GPU VRAM.
My scenario in this benchmark:
I turn on one camera, then turn it off; turn on two cameras, then turn off two; and so on (a sketch of how VRAM can be sampled at each step follows below). My observations:
"Stable" means that if a new camera is added while usage is in the stable range, the increase in VRAM isn't too high compared to the first and second times a camera was added.
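For reference, a sketch of how VRAM can be sampled at each step of such a loop, using the `pynvml` bindings (this illustrates the measurement idea, not my exact benchmark script):

```python
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo,
)

def used_vram_mb(gpu_index: int = 0) -> float:
    """Return the currently used VRAM on one GPU, in MB."""
    handle = nvmlDeviceGetHandleByIndex(gpu_index)
    info = nvmlDeviceGetMemoryInfo(handle)
    return info.used / 1024 ** 2

nvmlInit()
try:
    baseline = used_vram_mb()
    # ... start N cameras, run for a while, then stop them all ...
    leftover = used_vram_mb() - baseline
    print(f"VRAM not returned after stopping: {leftover:.1f} MB")
finally:
    nvmlShutdown()
```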
My system:
PyPI package:
inference-gpu: 0.23.0
Software & hardware:
OS: Ubuntu 22.04.5 LTS x86_64
GPU: Tesla P4 8GB (NVIDIA driver: 535.183.01)
CPU: Intel Xeon Silver 4216 (12) @ 2.095GHz