# YOLO

I believe YOLO should be great at picking up small objects and should run the analysis as fast as possible.

## Performance Tuning

```bash
$ yolo export model=yolo11n.pt format=engine            # creates 'yolo11n.engine' (FP32)
$ yolo export model=yolo11n.pt format=engine half=True  # FP16
$ yolo export model=yolo11n.pt format=engine int8=True  # INT8
```

Note 1: int8 quantization improves performance a lot, so it is crucial to export the model in a form that takes advantage of the target's hardware acceleration.
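
For reference, the same export can also be driven from the Ultralytics Python API. This is a minimal sketch, assuming the `ultralytics` package is installed and a calibration dataset YAML is available for INT8 (here `coco.yaml`, mirroring the CLI examples above):

```python
from ultralytics import YOLO

# Load the PyTorch checkpoint and export a TensorRT engine.
model = YOLO("yolo11n.pt")

# INT8 export needs calibration data; 'coco.yaml' mirrors the CLI examples above.
model.export(format="engine", int8=True, data="coco.yaml")
```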

Note 2: Make sure to maximize the Jetson Orin's performance first:

```bash
$ sudo nvpmodel -m 0   # switch to the maximum-performance power mode
$ sudo jetson_clocks   # lock clocks at their maximum frequencies
$ yolo export model=yolo11n.pt format="engine" batch=8 workspace=4 int8=True data="coco.yaml"
$ yolo export model=yolo11n.pt format="engine" batch=8 workspace=8 dynamic=True int8=True data="coco.yaml"
$ yolo export model=yolo11n.pt format="engine" batch=8 workspace=2.0 imgsz=320 dynamic=True int8=True data="coco.yaml"
$ yolo export model=yolov5nu.pt format="engine" batch=8 workspace=2.0 imgsz=320 dynamic=True int8=True data="coco.yaml"
$ yolo export model=yolov8n.pt format="engine" batch=8 workspace=2.0 imgsz=320 dynamic=True int8=True data="coco.yaml"
```

Note 1: imgsz=1920,1080 is NOT a good choice. Reasonable options are 640 (the default), 320 or 416 (real time with good accuracy), and 256 or 128 (embedded budgets, but accuracy suffers).

Note 2: dynamic=False improves speed, but the fixed input size will then differ from the image size the FPV feed requires, so additional code logic may be needed to cover the larger sensor data.

Note 3: batching improves real-time response, but it needs large resources; there is a balance between time delay and accuracy.
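
Once a TensorRT engine has been exported, it loads like any other Ultralytics model. Here is a minimal sketch of running it on one frame; the engine name follows the exports above, and `frame.jpg` is just a placeholder input:

```python
import cv2
from ultralytics import YOLO

# Load the exported TensorRT engine (produced by the 'yolo export ... format="engine"' commands above).
model = YOLO("yolo11n.engine")

# Run inference on a single frame at the export resolution.
frame = cv2.imread("frame.jpg")
results = model.predict(frame, imgsz=320, conf=0.15, iou=0.5)

# Results.plot() returns an annotated BGR image (the same helper used in utils/yolo.py).
annotated = results[0].plot()
cv2.imwrite("annotated.jpg", annotated)
```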

- Step 6: Using YOLO's plot function increases speed

jetson-fpv/utils/yolo.py, lines 351–354 (commit 3aeebbb):

```python
if len(results) > 0 and hasattr(results[0], 'plot'):
    annotated_frame = results[0].plot()
else:
    annotated_frame = cv2_frame.copy()
```

stride=3 means that the detector would only be run on every 3rd frame. The other two frames would be interpolated using the Kalman filter predictions.

jetson-fpv/utils/yolo.py, lines 110–157 (commit 68d2053):

```python
def process_frame(numFrames, start_frame, stride, model, cv2_frame, path, class_indices):
    result = None  # Initialize result
    # Check if interpolation is needed
    if numFrames >= start_frame and numFrames % stride != 0:
        # Interpolation mode
        results = interpolate(model, cv2_frame, path)
        # Check if boxes exist in the result
        if hasattr(results, 'boxes') and results.boxes is not None and len(results.boxes.data) > 0:
            boxes = results.boxes.data  # Access the raw tensor or ndarray containing box information
            for box in boxes:
                #print("Debug: Individual Box:", box)  # Debug each box's content
                x1, y1, x2, y2 = box[:4]  # Coordinates
                confidence = box[4]       # Confidence score
                class_id = int(box[5])    # Class ID
                # Draw the bounding box
                cv2.rectangle(cv2_frame, (int(x1), int(y1)), (int(x2), int(y2)), FONT_COLOR, BOX_THICKNESS)
                label = f"{model.names[class_id]} {confidence:.2f}"
                cv2.putText(cv2_frame, label, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, FONT_SCALE, FONT_COLOR, FONT_THICKNESS)
        #else:
        #    print("Debug: No bounding boxes found during interpolation.")
    else:
        # Normal tracking mode
        result = model.track(cv2_frame, persist=True, verbose=True, classes=class_indices, imgsz=[320, 320], iou=0.5, conf=0.15)[0]
        # Check if result contains bounding boxes
        if hasattr(result, 'boxes') and result.boxes is not None and len(result.boxes.data) > 0:
            boxes = result.boxes.data  # Access the raw tensor or ndarray containing box information
            for box in boxes:
                #print("Debug: Individual Box:", box)  # Debug each box's content
                x1, y1, x2, y2 = box[:4]  # Coordinates
                confidence = box[4]
                class_id = int(box[5])
                # Draw the bounding box
                cv2.rectangle(cv2_frame, (int(x1), int(y1)), (int(x2), int(y2)), FONT_COLOR, BOX_THICKNESS)
                label = f"{model.names[class_id]} {confidence:.2f}"
                cv2.putText(cv2_frame, label, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, FONT_SCALE, FONT_COLOR, FONT_THICKNESS)
        #else:
        #    print("Debug: No bounding boxes found in tracking mode.")
        # Update path if not already set
        if path == "":
            path = result.path
    return result, path
```

Note: This speeds up performance significantly.
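
The `interpolate()` helper itself is not shown in the excerpt above. As a rough illustration of the idea, here is a minimal, hypothetical constant-velocity Kalman predictor for a single box center; it is only a sketch of the interpolation concept, not the repo's actual implementation:

```python
import cv2
import numpy as np

class BoxCenterPredictor:
    """Constant-velocity Kalman filter over a box center (cx, cy).

    State: [cx, cy, vx, vy]; measurement: [cx, cy].
    """

    def __init__(self):
        self.kf = cv2.KalmanFilter(4, 2)
        self.kf.transitionMatrix = np.array(
            [[1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0],
             [0, 0, 0, 1]], dtype=np.float32)
        self.kf.measurementMatrix = np.array(
            [[1, 0, 0, 0],
             [0, 1, 0, 0]], dtype=np.float32)
        self.kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
        self.kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    def correct(self, cx, cy):
        # Called on detector frames (every `stride`-th frame) with a real detection.
        self.kf.correct(np.array([[cx], [cy]], dtype=np.float32))

    def predict(self):
        # Called on skipped frames to estimate where the box center has moved.
        state = self.kf.predict()
        return float(state[0]), float(state[1])
```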

## Ultralytics YOLO11 on NVIDIA Jetson using DeepStream SDK and TensorRT

First, clarify which DeepStream version supports the Jetson Orin Nano with JetPack 5.1.4 / L4T 35.6.0.

TBD.

## ByteTrack: Multi-Object Tracking by Associating Every Detection Box

TBD.