DOCS-2509: Document create separate tflite module #3655

Merged
6 changes: 6 additions & 0 deletions docs/appendix/changelog.md
@@ -21,6 +21,12 @@ date: "2024-09-18"

<!-- If there is no concrete date for a change that makes sense, use the end of the month it was released in. -->

{{% changelog date="2024-11-05" color="added" title="TFLite model moved to module" %}}

The `TFLite CPU` ML model service is now supported as a [module](https://app.viam.com/module/viam/tflite_cpu).
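
A minimal sketch of the module-based configuration follows; the service name, module name, and pinned version are illustrative and will vary by machine:

```json
{
  "services": [
    {
      "name": "mlmodel-1",
      "namespace": "rdk",
      "type": "mlmodel",
      "model": "viam:mlmodel-tflite:tflite_cpu",
      "attributes": {}
    }
  ],
  "modules": [
    {
      "type": "registry",
      "name": "viam_tflite_cpu",
      "module_id": "viam:tflite_cpu",
      "version": "0.0.3"
    }
  ]
}
```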

{{% /changelog %}}

{{% changelog date="2024-10-28" color="added" title="Deprecate builtin pi model" %}}

The built-in `pi` sensor and board models have moved to the [`raspberry-pi` module](https://github.com/viam-modules/raspberry-pi).
6 changes: 3 additions & 3 deletions docs/how-tos/detect-people.md
@@ -114,15 +114,15 @@ For more detailed configuration information and troubleshooting, see the [`webca
In this guide you'll use a publicly available model to detect people.
You will deploy this model to your machine using the ML model service.

Click **+**, click **Service** and select the `ML model` type, then select the `TFLite CPU` model.
Create the service.
Click **+**, click **Service**, then search for and select the `ML model / TFLite CPU` model.
Click **Add module** and create the service.

In the resulting ML model service configuration pane, ensure that **Deploy model on machine** is selected for the **Deployment** field.

Then click **Select model**, switch to the **Registry** tab, and select the **people** model by **ml-models-scuttle** to deploy a model that has been trained to detect people.
This model is a TFLite model.

For more detailed information, including optional attribute configuration, see the [`tflite_cpu` docs](/services/ml/tflite_cpu/).
For more detailed information, including optional attribute configuration, see the [`tflite_cpu` docs](https://github.com/viam-modules/mlmodel-tflite).
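
Clicking **Add module** also adds the registry module to your machine configuration alongside the new service. A minimal sketch of what that entry may look like (the module name and pinned version are placeholders):

```json
{
  "modules": [
    {
      "type": "registry",
      "name": "viam_tflite_cpu",
      "module_id": "viam:tflite_cpu",
      "version": "0.0.3"
    }
  ]
}
```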

{{% /expand%}}
{{%expand "Step 5: Configure a vision service" %}}
2 changes: 1 addition & 1 deletion docs/how-tos/image-data.md
@@ -144,7 +144,7 @@ The following steps use the [`filtered_camera`](https://github.com/erh/filtered_
{{<imgproc src="/services/ml/train.svg" class="fill alignleft" style="width: 150px" declaredimensions=true alt="Train models">}}
**1. Add an ML model service to your machine**

Add an ML model service on your machine that is compatible with the ML model you want to use, for example [TFLite CPU](/services/ml/tflite_cpu/).
Add an ML model service on your machine that is compatible with the ML model you want to use, for example [TFLite CPU](https://github.com/viam-modules/mlmodel-tflite).
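
For example, a modular TFLite CPU ML model service appears in your machine's `services` array similar to the following sketch; the service name is a placeholder, and the corresponding registry module must also be listed under `modules`:

```json
{
  "name": "mlmodel-1",
  "namespace": "rdk",
  "type": "mlmodel",
  "model": "viam:mlmodel-tflite:tflite_cpu",
  "attributes": {}
}
```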

{{% /tablestep %}}
{{% tablestep link="/services/vision/"%}}
4 changes: 2 additions & 2 deletions docs/how-tos/train-deploy-ml.md
@@ -573,7 +573,7 @@ If you used a custom training script, you may need a different [ML model service
{{<imgproc src="/services/icons/vision.svg" class="fill alignleft" style="width: 150px" declaredimensions=true alt="Configure a service">}}
**2. Configure an <code>mlmodel</code> vision service**

The ML model service will deploy and run the model.
The ML model service deploys and runs the model.

The vision service works with the ML model service.
It uses the ML model and applies it to the stream of images from your camera.
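
The resulting vision service configuration references the ML model service by name. A minimal sketch, assuming an ML model service named `mlmodel-1` and the `mlmodel_name` attribute; your names may differ:

```json
{
  "name": "vision-1",
  "namespace": "rdk",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "mlmodel-1"
  }
}
```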
@@ -590,7 +590,7 @@ Then, from the **Select model** dropdown, select the name of the ML model servic

You can test your vision service by clicking on the **Test** area of its configuration panel or from the [**CONTROL** tab](/fleet/control/).

The camera stream will show when the vision service identifies something.
The camera stream shows when the vision service identifies something.
Try pointing the camera at a scene similar to your training data.

{{< imgproc src="/tutorials/data-management/blue-star.png" alt="Detected blue star" resize="x200" >}}
2 changes: 1 addition & 1 deletion docs/registry/ml-models.md
@@ -68,7 +68,7 @@ Viam currently supports the following frameworks:
<!-- prettier-ignore -->
| Model Framework | ML Model Service | Hardware Support | System Architecture | Description |
| --------------- | --------------- | ---------------- | ------------------- | ----------- |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | [`tflite_cpu`](/services/ml/tflite_cpu/) | Any CPU <br> Nvidia GPU | Linux, Raspbian, MacOS, Android | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the [model requirements](/services/ml/tflite_cpu/#model-requirements). |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | [`tflite_cpu`](https://github.com/viam-modules/mlmodel-tflite) | Any CPU <br> Nvidia GPU | Linux, Raspbian, MacOS | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the [model requirements](https://github.com/viam-modules/mlmodel-tflite). |
| [ONNX](https://onnx.ai/) | [`onnx_cpu`](https://github.com/viam-labs/onnx-cpu) | Any CPU <br> Nvidia GPU | Android, MacOS, Linux arm-64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. |
| [TensorFlow](https://www.tensorflow.org/) | [`triton`](https://github.com/viamrobotics/viam-mlmodelservice-triton) | Nvidia GPU | Linux (Jetson) | A full framework that is made for more production-ready systems. |
| [PyTorch](https://pytorch.org/) | [`triton`](https://github.com/viamrobotics/viam-mlmodelservice-triton) | Nvidia GPU | Linux (Jetson) | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (model doesn’t have to be predefined) but it is not as “production ready” as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. |
10 changes: 9 additions & 1 deletion docs/services/data/_index.md
@@ -440,9 +440,17 @@ This example configuration captures data from the `CaptureAllFromCamera` method
"name": "mlmodel-1",
"namespace": "rdk",
"type": "mlmodel",
"model": "tflite_cpu",
"model": "viam:mlmodel-tflite:tflite_cpu",
"attributes": {}
}
],
"modules": [
{
"type": "registry",
"name": "viam_tflite_cpu",
"module_id": "viam:tflite_cpu",
"version": "0.0.3"
}
]
}
```
174 changes: 0 additions & 174 deletions docs/services/ml/tflite_cpu.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/tutorials/projects/integrating-viam-with-openai.md
@@ -251,9 +251,9 @@ To configure an ML model service:
- Select the **CONFIGURE** tab.
- Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
- Select the `ML model` type, then select the `TFLite CPU` model.
- Enter the name `stuff_detector` for your service and click **Create**.
- Enter the name `stuff_detector` for your service, click **Add module**, and click **Create**.

Your robot will register this as a machine learning model and make it available for use.
Your robot registers this as a machine learning model and makes it available for use.

Select **Deploy model on machine** for the **Deployment** field.
Click **Select model**, then select the `viam-labs:EfficientDet-COCO` model from the modal that appears.
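
Once configured, the service and the TFLite CPU module appear in your machine's JSON configuration roughly as in the following sketch; the module name and pinned version are placeholders, and the model you select in the builder populates additional fields not shown here:

```json
{
  "services": [
    {
      "name": "stuff_detector",
      "namespace": "rdk",
      "type": "mlmodel",
      "model": "viam:mlmodel-tflite:tflite_cpu",
      "attributes": {}
    }
  ],
  "modules": [
    {
      "type": "registry",
      "name": "viam_tflite_cpu",
      "module_id": "viam:tflite_cpu",
      "version": "0.0.3"
    }
  ]
}
```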
4 changes: 2 additions & 2 deletions docs/tutorials/projects/send-security-photo.md
@@ -96,14 +96,14 @@ If you want to train your own model instead, follow the instructions in [train a
Click the **+** (Create) button next to your main part in the left-hand menu and select **Service**.
Start typing `ML model` and select **ML model / TFLite CPU** from the results.

Enter `people` as the name, then click **Create**.
Enter `people` as the name, click **Add module**, then click **Create**.

In the new ML Model service panel, configure your service.

![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/send-security-photo/app-service-ml-before.png)

Select **Deploy model on machine** for the **Deployment** field.
Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
Then select the `viam-labs:EfficientDet-COCO` model from the **Select model** dropdown.
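
In JSON, the ML model service entry looks roughly like the following sketch; the model you deploy populates additional fields not shown here, and the TFLite CPU registry module must also be listed under `modules`:

```json
{
  "name": "people",
  "namespace": "rdk",
  "type": "mlmodel",
  "model": "viam:mlmodel-tflite:tflite_cpu",
  "attributes": {}
}
```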

1. **Configure an mlmodel detector** [vision service](/services/vision/)
