Merge pull request #32 from hawkeye217/0.16
updates
hawkeye217 authored Nov 1, 2024
2 parents e5ebf93 + 93bc14e commit 2949c30
Showing 25 changed files with 1,756 additions and 80 deletions.
7 changes: 6 additions & 1 deletion .cspell/frigate-dictionary.txt
@@ -2,6 +2,7 @@ aarch
absdiff
airockchip
Alloc
alpr
Amcrest
amdgpu
analyzeduration
@@ -60,6 +61,7 @@ dsize
dtype
ECONNRESET
edgetpu
facenet
fastapi
faststart
fflags
@@ -113,6 +115,8 @@ itemsize
Jellyfin
jetson
jetsons
jina
jinaai
joserfc
jsmpeg
jsonify
@@ -186,6 +190,7 @@ openai
opencv
openvino
OWASP
paddleocr
paho
passwordless
popleft
@@ -305,4 +310,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency
2 changes: 1 addition & 1 deletion Makefile
@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.15.0
VERSION = 0.16.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
5 changes: 5 additions & 0 deletions docker/main/requirements-wheels.txt
@@ -8,6 +8,8 @@ imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 2.1.*
python-multipart == 0.0.12
# General
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
@@ -43,3 +45,6 @@ openai == 1.51.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
2 changes: 2 additions & 0 deletions docker/main/rootfs/usr/local/nginx/conf/nginx.conf
@@ -246,6 +246,8 @@ http {
proxy_no_cache $should_not_cache;
add_header X-Cache-Status $upstream_cache_status;

client_max_body_size 10M;

location /api/vod/ {
include auth_request.conf;
proxy_pass http://frigate_api/vod/;
21 changes: 21 additions & 0 deletions docs/docs/configuration/face_recognition.md
@@ -0,0 +1,21 @@
---
id: face_recognition
title: Face Recognition
---

Face recognition allows people to be assigned names, and when a face is recognized, Frigate will assign the person's name as a sub label. This information is available in the UI, in filters, and in notifications.

Frigate supports FaceNet for creating face embeddings, which runs locally. The embeddings are then saved to Frigate's database.

## Minimum System Requirements

Face recognition works by running a large AI model locally on your system. Systems without a GPU will not run face recognition reliably, or at all.

## Configuration

Face recognition is disabled by default and requires Semantic Search to be enabled. Face recognition must be enabled in your config file before it can be used. Semantic Search and face recognition are global configuration settings.

```yaml
face_recognition:
enabled: true
```
45 changes: 45 additions & 0 deletions docs/docs/configuration/license_plate_recognition.md
@@ -0,0 +1,45 @@
---
id: license_plate_recognition
title: License Plate Recognition (LPR)
---

Frigate can recognize license plates on vehicles and automatically add the detected characters as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street with a dedicated LPR camera.

Users running a Frigate+ model should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.

LPR is most effective when the vehicle’s license plate is fully visible to the camera. For moving vehicles, Frigate will attempt to read the plate continuously, refining its detection and keeping the most confident result. LPR will not run on stationary vehicles.

## Minimum System Requirements

License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.

## Configuration

License plate recognition is disabled by default. Enable it in your config file:

```yaml
lpr:
enabled: true
```

## Advanced Configuration

Several options are available to fine-tune the LPR feature. For example, you can adjust the `min_area` setting, which defines the minimum size in pixels a license plate must be before LPR runs. The default is 500 pixels.

Additionally, you can define `known_plates` as strings or regular expressions, allowing Frigate to label tracked vehicles with custom sub_labels when a recognized plate is detected. This information is then accessible in the UI, filters, and notifications.

```yaml
lpr:
enabled: true
min_area: 500
known_plates:
Wife's Car:
- "ABC-1234"
- "ABC-I234"
Johnny:
- "J*N-*234" # Using wildcards for H/M and 1/I
Sally:
- "[S5]LL-1234" # Matches SLL-1234 and 5LL-1234
```

In this example, "Wife's Car" will appear as the sub label for any vehicle matching the plate "ABC-1234." The model might occasionally interpret the digit 1 as a capital I (e.g., "ABC-I234"), so both variations are listed. Similarly, multiple possible variations are specified for Johnny and Sally.
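
Since `known_plates` entries may be plain strings or regular expressions, the matching behavior can be shown with a small standalone sketch. This is not Frigate's actual implementation, just a rough illustration using Python's `re` module with patterns from the example above.

```python
import re

# Illustrative only: known_plates patterns taken from the config example above.
known_plates = {
    "Wife's Car": ["ABC-1234", "ABC-I234"],
    "Sally": ["[S5]LL-1234"],
}

def lookup_sub_label(detected_plate: str) -> str | None:
    """Return the first name whose plate patterns fully match the detected text."""
    for name, patterns in known_plates.items():
        for pattern in patterns:
            # Plain strings match literally; regex syntax like [S5] is honored.
            if re.fullmatch(pattern, detected_plate):
                return name
    return None

print(lookup_sub_label("5LL-1234"))  # -> "Sally"
print(lookup_sub_label("ABC-I234"))  # -> "Wife's Car"
```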
8 changes: 8 additions & 0 deletions docs/docs/configuration/reference.md
@@ -522,6 +522,8 @@ semantic_search:
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"

# Optional: Configuration for face recognition capability
face_recognition:
# Optional: Enable face recognition (default: shown below)
enabled: False
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"

# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
2 changes: 2 additions & 0 deletions docs/sidebars.ts
@@ -36,6 +36,8 @@ const sidebars: SidebarsConfig = {
'Semantic Search': [
'configuration/semantic_search',
'configuration/genai',
'configuration/face_recognition',
'configuration/license_plate_recognition',
],
Cameras: [
'configuration/cameras',
56 changes: 56 additions & 0 deletions frigate/api/classification.py
@@ -0,0 +1,56 @@
"""Object classification APIs."""

import logging

from fastapi import APIRouter, Request, UploadFile
from fastapi.responses import JSONResponse

from frigate.api.defs.tags import Tags
from frigate.embeddings import EmbeddingsContext

logger = logging.getLogger(__name__)

router = APIRouter(tags=[Tags.events])


@router.get("/faces")
def get_faces():
return JSONResponse(content={"message": "there are faces"})


@router.post("/faces/{name}")
async def register_face(request: Request, name: str, file: UploadFile):
# if not file.content_type.startswith("image"):
# return JSONResponse(
# status_code=400,
# content={
# "success": False,
# "message": "Only an image can be used to register a face.",
# },
# )

context: EmbeddingsContext = request.app.embeddings
context.register_face(name, await file.read())
return JSONResponse(
status_code=200,
content={"success": True, "message": "Successfully registered face."},
)


@router.delete("/faces")
def deregister_faces(request: Request, body: dict = None):
json: dict[str, any] = body or {}
list_of_ids = json.get("ids", "")

if not list_of_ids or len(list_of_ids) == 0:
return JSONResponse(
content=({"success": False, "message": "Not a valid list of ids"}),
status_code=404,
)

context: EmbeddingsContext = request.app.embeddings
context.delete_face_ids(list_of_ids)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
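
As a rough sketch of how these new face endpoints might be exercised once they are wired into the app, the following uses the `requests` library. The `/api` prefix, host, name, and file paths are assumptions for illustration, not part of this change.

```python
import requests

BASE = "http://localhost:5000/api"  # assumed Frigate API base URL

# List faces (currently returns a placeholder message).
print(requests.get(f"{BASE}/faces").json())

# Register a face image under the name "jane" (hypothetical name and file).
with open("jane.jpg", "rb") as f:
    resp = requests.post(f"{BASE}/faces/jane", files={"file": f})
print(resp.json())

# Delete previously registered faces by id (hypothetical ids).
resp = requests.delete(f"{BASE}/faces", json={"ids": ["face-id-1", "face-id-2"]})
print(resp.json())
```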
3 changes: 3 additions & 0 deletions frigate/api/defs/events_body.py
@@ -8,6 +8,9 @@ class EventsSubLabelBody(BaseModel):
subLabelScore: Optional[float] = Field(
title="Score for sub label", default=None, gt=0.0, le=1.0
)
camera: Optional[str] = Field(
title="Camera this object is detected on.", default=None
)


class EventsDescriptionBody(BaseModel):
1 change: 1 addition & 0 deletions frigate/api/defs/tags.py
@@ -10,4 +10,5 @@ class Tags(Enum):
review = "Review"
export = "Export"
events = "Events"
classification = "classification"
auth = "Auth"
55 changes: 38 additions & 17 deletions frigate/api/event.py
@@ -890,38 +890,59 @@ def set_sub_label(
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
if not body.camera:
return JSONResponse(
content=(
{
"success": False,
"message": "Event "
+ event_id
+ " not found and camera is not provided.",
}
),
status_code=404,
)

event = None

if request.app.detected_frames_processor:
tracked_obj: TrackedObject = (
request.app.detected_frames_processor.camera_states[
event.camera if event else body.camera
].tracked_objects.get(event_id)
)
else:
tracked_obj = None

if not event and not tracked_obj:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
content=(
{"success": False, "message": "Event " + event_id + " not found."}
),
status_code=404,
)

new_sub_label = body.subLabel
new_score = body.subLabelScore

if not event.end_time:
# update tracked object
tracked_obj: TrackedObject = (
request.app.detected_frames_processor.camera_states[
event.camera
].tracked_objects.get(event.id)
)

if tracked_obj:
tracked_obj.obj_data["sub_label"] = (new_sub_label, new_score)
if tracked_obj:
tracked_obj.obj_data["sub_label"] = (new_sub_label, new_score)

# update timeline items
Timeline.update(
data=Timeline.data.update({"sub_label": (new_sub_label, new_score)})
).where(Timeline.source_id == event_id).execute()

event.sub_label = new_sub_label
if event:
event.sub_label = new_sub_label

if new_score:
data = event.data
data["sub_label_score"] = new_score
event.data = data
if new_score:
data = event.data
data["sub_label_score"] = new_score
event.data = data

event.save()

event.save()
return JSONResponse(
content=(
{
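
With the new optional `camera` field, a sub label can now be set on an in-progress tracked object that has no database row yet, since the camera name lets the API find the object in that camera's tracked objects. A minimal sketch of such a request follows; the host, event id, and camera name are assumptions for illustration.

```python
import requests

BASE = "http://localhost:5000/api"  # assumed Frigate API base URL

payload = {
    "subLabel": "John",
    "subLabelScore": 0.92,
    # New optional field: needed when the event only exists as an
    # in-progress tracked object and is not yet in the database.
    "camera": "front_door",  # hypothetical camera name
}

# Hypothetical event id for an object still being tracked.
resp = requests.post(f"{BASE}/events/1730467200.123456-abc123/sub_label", json=payload)
print(resp.status_code, resp.json())
```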
12 changes: 11 additions & 1 deletion frigate/api/fastapi_app.py
@@ -11,7 +11,16 @@
from starlette_context.plugins import Plugin

from frigate.api import app as main_app
from frigate.api import auth, event, export, media, notification, preview, review
from frigate.api import (
auth,
classification,
event,
export,
media,
notification,
preview,
review,
)
from frigate.api.auth import get_jwt_secret, limiter
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
@@ -95,6 +104,7 @@ async def startup():
# Routes
# Order of include_router matters: https://fastapi.tiangolo.com/tutorial/path-params/#order-matters
app.include_router(auth.router)
app.include_router(classification.router)
app.include_router(review.router)
app.include_router(main_app.router)
app.include_router(preview.router)
3 changes: 2 additions & 1 deletion frigate/comms/embeddings_updater.py
@@ -12,6 +12,7 @@ class EmbeddingsRequestEnum(Enum):
embed_description = "embed_description"
embed_thumbnail = "embed_thumbnail"
generate_search = "generate_search"
register_face = "register_face"


class EmbeddingsResponder:
@@ -22,7 +23,7 @@ def __init__(self) -> None:

def check_for_request(self, process: Callable) -> None:
while True: # load all messages that are queued
has_message, _, _ = zmq.select([self.socket], [], [], 0.1)
has_message, _, _ = zmq.select([self.socket], [], [], 0.01)

if not has_message:
break
19 changes: 18 additions & 1 deletion frigate/config/camera/objects.py
@@ -1,6 +1,6 @@
from typing import Any, Optional, Union

from pydantic import Field, field_serializer
from pydantic import Field, PrivateAttr, field_serializer

from ..base import FrigateBaseModel

@@ -53,3 +53,20 @@ class ObjectConfig(FrigateBaseModel):
default_factory=dict, title="Object filters."
)
mask: Union[str, list[str]] = Field(default="", title="Object mask.")
_all_objects: list[str] = PrivateAttr()

@property
def all_objects(self) -> list[str]:
return self._all_objects

def parse_all_objects(self, cameras):
if "_all_objects" in self:
return

# get list of unique enabled labels for tracking
enabled_labels = set(self.track)

for camera in cameras.values():
enabled_labels.update(camera.objects.track)

self._all_objects = list(enabled_labels)
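
The aggregation `parse_all_objects` performs is plain set arithmetic over the global and per-camera track lists. A standalone sketch of the same idea, using plain dicts and made-up camera names in place of Frigate's config objects:

```python
# Sketch only: mimics the label aggregation in parse_all_objects with plain
# dicts instead of Frigate's config objects (camera names are made up).
global_track = ["person", "car"]
cameras = {
    "front_door": {"track": ["person", "dog"]},
    "driveway": {"track": ["car", "license_plate"]},
}

enabled_labels = set(global_track)  # start from the global track list
for camera in cameras.values():
    enabled_labels.update(camera["track"])  # add each camera's labels

all_objects = sorted(enabled_labels)
print(all_objects)  # ['car', 'dog', 'license_plate', 'person']
```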