Merge branch 'master' into patch-1
mdurrani808 authored Jun 1, 2024
2 parents 90797a8 + 5fa174f commit df259dd
Showing 33 changed files with 194 additions and 76 deletions.
8 changes: 7 additions & 1 deletion source/conf.py
@@ -17,7 +17,7 @@
# -- Project information -----------------------------------------------------

project = 'PhotonVision'
copyright = '2023, PhotonVision'
copyright = '2024, PhotonVision'
author = 'Banks Troutman, Matt Morley'

# -- General configuration ---------------------------------------------------
@@ -118,3 +118,9 @@ def setup(app):
suppress_warnings = ['epub.unknown_project_files']

sphinx_tabs_valid_builders = ['epub', 'linkcheck']

# Excluded links for linkcheck
# These should be periodically checked by hand to ensure that they are still functional
linkcheck_ignore = [
'https://www.raspberrypi.com/software/'
]
6 changes: 6 additions & 0 deletions source/docs/additional-resources/best-practices.rst
@@ -21,6 +21,12 @@ During the Competition
* Move the robot close, far, angled, and around the field to ensure no extra targets are found anywhere when looking for a target.
* Go to a practice match to ensure everything is working correctly.

* After field calibration, use the "Export Settings" button in the "Settings" page to create a backup.
* Do this for each coprocessor on your robot that runs PhotonVision, and name your exports with meaningful names.
* This will contain camera information/calibration, pipeline information, network settings, etc.
* In the event of software/hardware failures (e.g. a lost SD card or a broken device), you can then use the "Import Settings" button and select "All Settings" to restore your settings.
* This effectively works as a snapshot of your PhotonVision data that can be restored at any point.

* Before every match, check the ethernet connection going into your coprocessor and that it is seated fully.
* Ensure that exposure is as low as possible and that you don't have the dashboard up when you don't need it to reduce bandwidth.
* Stream at as low of a resolution as possible while still detecting targets to stay within bandwidth limits.
2 changes: 1 addition & 1 deletion source/docs/apriltag-pipelines/2D-tracking-tuning.rst
@@ -4,7 +4,7 @@
Tracking Apriltags
------------------

Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and swtich to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.
Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.

.. image:: images/apriltag.png
:align: center
6 changes: 3 additions & 3 deletions source/docs/apriltag-pipelines/3D-tracking.rst
@@ -8,8 +8,8 @@ Ambiguity

Translating from 2D to 3D using data from the calibration and the four tag corners can lead to "pose ambiguity", where it appears that the AprilTag pose is flipping between two different poses. You can read more about this issue `here <https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html#d-to-3d-ambiguity>`_. Ambiguity is calculated as the ratio of reprojection errors between the two pose solutions (if they exist), where reprojection error is the image distance between where the AprilTag's corners are detected and where we would expect to see them based on the tag's estimated camera-relative pose.

There a few steps you can take to resolve/mitigate this issue:
There are a few steps you can take to resolve/mitigate this issue:

1. Mount cameras at oblique angles so it is less likely that the tag will be seen straght on.
1. Mount cameras at oblique angles so it is less likely that the tag will be seen straight on.
2. Use the :ref:`MultiTag system <docs/apriltag-pipelines/multitag:MultiTag Localization>` in order to combine the corners from multiple tags to get a more accurate and unambiguous pose.
3. Reject all tag poses where the ambiguity ratio (availiable via PhotonLib) is greater than 0.2.
3. Reject all tag poses where the ambiguity ratio (available via PhotonLib) is greater than 0.2.
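
A minimal PhotonLib sketch of the rejection in step 3 (Java; the camera name and the 0.2 cutoff are placeholder values to tune for your setup):

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.PhotonTrackedTarget;

   import edu.wpi.first.math.geometry.Transform3d;

   PhotonCamera camera = new PhotonCamera("your_camera_name");

   for (PhotonTrackedTarget target : camera.getLatestResult().getTargets()) {
       // Skip tags whose two candidate pose solutions are too close to tell apart
       if (target.getPoseAmbiguity() > 0.2) continue;
       Transform3d cameraToTarget = target.getBestCameraToTarget();
       // ... feed cameraToTarget into your pose estimator
   }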
2 changes: 1 addition & 1 deletion source/docs/apriltag-pipelines/about-apriltags.rst
@@ -9,5 +9,5 @@ AprilTags are a common type of visual fiducial marker. Visual fiducial markers a

A more technical explanation can be found in the `WPILib documentation <https://docs.wpilib.org/en/latest/docs/software/vision-processing/apriltag/apriltag-intro.html>`_.

.. note:: You can get FIRST's `official PDF of the targets used in 2023 here <https://firstfrc.blob.core.windows.net/frc2023/FieldAssets/TeamVersions/AprilTags-UserGuideandImages.pdf>`_.
.. note:: You can get FIRST's `official PDF of the targets used in 2024 here <https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf>`_.

39 changes: 27 additions & 12 deletions source/docs/apriltag-pipelines/coordinate-systems.rst
@@ -11,24 +11,39 @@ You define the camera to robot transform in the robot coordinate frame.
Camera Coordinate Frame
-----------------------

The camera coordinate system is defined as follows, relative to the camera sensor itself, and when looking in the same direction as the sensor points:
OpenCV by default uses x-right/y-down/z-out for camera transforms. PhotonVision applies a base rotation to this transformation to bring robot-to-tag transforms more in line with the WPILib coordinate system. The x, y, and z axes are also shown in red, green, and blue in the 3D mini-map and targeting overlay in the UI.

* The origin is the center.
* The x-axis points to the left
* The y-axis points up.
* The z-axis points out toward the subject.
* The origin is the focal point of the camera lens
* The x-axis points out of the camera
* The y-axis points to the left
* The z-axis points upwards


.. image:: images/camera-coord.png
:scale: 45 %
:align: center

|
.. image:: images/multiple-tags.png
:scale: 45 %
:align: center

|
AprilTag Coordinate Frame
-------------------------

The AprilTag coordinate system is defined as follows, relative to the center of the AprilTag itself, and when viewing the tag as a robot would:
The AprilTag coordinate system is defined as follows, relative to the center of the AprilTag itself, and when viewing the tag as a robot would. Again, PhotonVision changes this coordinate system to be more in line with WPILib. This means that a robot facing a tag head-on would see a robot-to-tag transform with a translation only in x, and a rotation of 180 degrees about z. The tag coordinate system is also shown with x/y/z in red/green/blue in the UI target overlay and mini-map.

* The origin is the center of the tag
* The x-axis is normal to the plane the tag is printed on, pointing outward from the visible side of the tag.
* The y-axis points to the right
* The z-axis points upwards

* The origin is the center.
* The x-axis points to your right
* The y-axis points upwards.
* The z-axis is normal to the plane the tag is printed on, pointing outward from the visible side of the tag.

.. image:: images/apriltag-coords.png
:scale: 45 %
:align: center
:scale: 50%
:alt: AprilTag Coordinate System

|
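
As a concrete check of the convention described above, a robot facing a tag head-on from 2.0 meters away (an arbitrary example distance) should observe a robot-to-tag transform along these lines, sketched with WPILib's geometry classes:

.. code-block:: java

   import edu.wpi.first.math.geometry.Rotation3d;
   import edu.wpi.first.math.geometry.Transform3d;
   import edu.wpi.first.math.geometry.Translation3d;

   // Translation only along +x (forward); the tag's x-axis points back at the
   // robot, which appears as a 180 degree rotation about the z (up) axis.
   Transform3d robotToTag = new Transform3d(
       new Translation3d(2.0, 0.0, 0.0),   // x forward, y left, z up (meters)
       new Rotation3d(0.0, 0.0, Math.PI)); // roll, pitch, yaw (radians)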
Binary file modified source/docs/apriltag-pipelines/images/apriltag-coords.png
8 changes: 4 additions & 4 deletions source/docs/apriltag-pipelines/multitag.rst
@@ -1,14 +1,14 @@
MultiTag Localization
=====================

PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera wih information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
PhotonVision can combine detections of multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your roboRIO. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
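
For reference, the multi-target result can be read back with PhotonLib roughly as follows (a sketch against the 2024 Java API; the camera name is a placeholder, and field names should be checked against your PhotonLib version):

.. code-block:: java

   import org.photonvision.PhotonCamera;
   import org.photonvision.targeting.MultiTargetPNPResult;

   import edu.wpi.first.math.geometry.Transform3d;

   PhotonCamera camera = new PhotonCamera("your_camera_name");
   MultiTargetPNPResult multiTagResult = camera.getLatestResult().getMultiTagResult();
   if (multiTagResult.estimatedPose.isPresent) {
       // Field-to-camera transform computed on the coprocessor
       Transform3d fieldToCamera = multiTagResult.estimatedPose.best;
   }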

.. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and tag's physical location will drive error in the estimated pose output.
.. warning:: MultiTag requires an accurate field layout JSON to be uploaded! Differences between this layout and the tags' physical location will drive error in the estimated pose output.

Enabling MultiTag
^^^^^^^^^^^^^^^^^

Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag using the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.
Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag to use the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.

.. image:: images/multitag-ui.png
:width: 600
@@ -48,6 +48,6 @@ PhotonVision ships by default with the `2024 field layout JSON <https://github.c
:width: 600
:alt: The currently saved field layout in the Photon UI

An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "Apriltag Layout" type and choose a updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should update to reflect the new layout.
An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "AprilTag Layout" type and choose an updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should be updated to reflect the new layout.

.. note:: Currently, there is no way to update this layout using PhotonLib, although this feature is under consideration.
@@ -12,7 +12,7 @@ Installing Python Dependencies
------------------------------
You must install a set of Python dependencies in order to build the documentation. To do so, you can run the following command in the root project directory:

``pip install -r requirements.txt``
``python -m pip install -r requirements.txt``

Building the Documentation
--------------------------
4 changes: 2 additions & 2 deletions source/docs/contributing/photonvision/build-instructions.rst
@@ -23,7 +23,7 @@ Get the source code from git:
git clone https://github.com/PhotonVision/photonvision
or alternatively download to source code from github and extract the zip:
or alternatively download the source code from github and extract the zip:

.. image:: assets/git-download.png
:width: 600
@@ -96,7 +96,7 @@ Running the following command under the root directory will build the jar under
Build and Run PhotonVision on a Raspberry Pi Coprocessor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As a convinenece, the build has built in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.
As a convenience, the build has a built-in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.

An architecture override is required to specify the deploy target's architecture.

2 changes: 1 addition & 1 deletion source/docs/hardware/picamconfig.rst
@@ -52,4 +52,4 @@ Save the file, close the editor, and eject the drive. The boot configuration sho
Additional Information
----------------------

See `the libcamera documentation <https://github.com/raspberrypi/documentation/blob/develop/documentation/asciidoc/computers/camera/rpicam_apps_getting_started.adoc>`_ for more details on configuring cameras.
See `the libcamera documentation <https://github.com/raspberrypi/documentation/blob/679fab721855a3e8f17aa51819e5c2a7c447e98d/documentation/asciidoc/computers/camera/rpicam_configuration.adoc>`_ for more details on configuring cameras.
18 changes: 9 additions & 9 deletions source/docs/hardware/selecting-hardware.rst
@@ -10,7 +10,7 @@ Minimum System Requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Ubuntu 22.04 LTS or Windows 10/11
* We don't reccomend using Windows for anything except testing out the system on a local machine.
* We don't recommend using Windows for anything except testing out the system on a local machine.
* CPU: ARM Cortex-A53 (the CPU on Raspberry Pi 3) or better
* At least 8GB of storage
* 2GB of RAM
@@ -20,7 +20,7 @@ Minimum System Requirements
* Note that we only support using the Raspberry Pi's MIPI-CSI port, other MIPI-CSI ports from other coprocessors may not work.
* Ethernet port for networking

Coprocessor Reccomendations
Coprocessor Recommendations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. Below is a list of supported hardware, along with some notes on each.
@@ -30,7 +30,7 @@ When selecting a coprocessor, it is important to consider various factors, parti
* Raspberry Pi 4/5 ($55-$80)
* This is the recommended coprocessor for teams on a budget. It has a less powerful CPU than the Orange Pi 5, but is still capable of running PhotonVision at a reasonable FPS.
* Mini PCs (such as Beelink N5095)
* This coprcoessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
* This coprocessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
* Other coprocessors can be used but may require some extra work / command line usage in order to get them working properly.

Choosing a Camera
@@ -46,17 +46,17 @@ PhotonVision relies on `CSCore <https://github.com/wpilibsuite/allwpilib/tree/ma
.. note::
We do not currently support the usage of two of the same camera on the same coprocessor. You can only use two or more cameras if they are of different models or they are from Arducam, which has a `tool that allows for cameras to be renamed <https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/>`_.

Reccomended Cameras
Recommended Cameras
^^^^^^^^^^^^^^^^^^^
For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We reccomend the Pi Camera V1 or a high fps USB camera.
For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend the Pi Camera V1 or a high fps USB camera.

For driver camera, we reccomend a USB camera with a fisheye lens, so your driver can see more of the field.
For a driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field.

For AprilTag detection, we reccomend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.
For AprilTag detection, we recommend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.

* Reccomendations For AprilTag Detection
* Recommendations For AprilTag Detection
* Arducam USB OV9281
* This is the reccomended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
* This is the recommended camera for AprilTag detection as it is a high-FPS, global shutter USB camera with a ~70 degree FOV.
* Innomaker OV9281
* Spinel AR0144
* Pi Camera Module V1
2 changes: 1 addition & 1 deletion source/docs/installation/index.rst
@@ -7,7 +7,7 @@ This page will help you install PhotonVision on your coprocessor, wire it, and p
Step 1: Software Install
------------------------

This section will walk you through how to install PhotonVision on your coprcoessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).
This section will walk you through how to install PhotonVision on your coprocessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).

.. warning:: You only need to install PhotonVision on the coprocessor/device that is being used to detect targets, you do NOT need to install it on the device you use to view the webdashboard. All you need to view the webdashboard is for a device to be on the same network as your vision coprocessor and an internet browser.

4 changes: 2 additions & 2 deletions source/docs/installation/sw_install/gloworm.rst
@@ -10,15 +10,15 @@ Flashing the Gloworm Image
--------------------------
Plug a USB C cable from your computer into the USB C port on Gloworm labeled with a download icon.

Use `Balena Etcher <https://www.balena.io/etcher/>`_ to flash an image onto the coprocessor.
Use the 1.18.11 version of `Balena Etcher <https://github.com/balena-io/etcher/releases/tag/v1.18.11>`_ to flash an image onto the coprocessor.

Run BalenaEtcher as an administrator. Select the downloaded ``.zip`` file.

Select the compute module. If it doesn't show up after 30s try using another USB port, initialization may take a while. If prompted, install the recommended missing drivers.

Hit flash. Wait for flashing to complete, then disconnect your USB C cable.

.. warning:: Using an older version of Balena Etcher may cause bootlooping (the system will repeatedly boot and restart) when imaging your Gloworm. Updating to the latest Balena Etcher will fix this issue.
.. warning:: Using a version of Balena Etcher older than 1.18.11 may cause bootlooping (the system will repeatedly boot and restart) when imaging your Gloworm. Updating to 1.18.11 will fix this issue.

Final Steps
-----------
2 changes: 2 additions & 0 deletions source/docs/installation/sw_install/limelight.rst
@@ -18,6 +18,8 @@ Download the hardwareConfig.json file for the version of your Limelight:
- :download:`Limelight Version 2 <files/Limelight2/hardwareConfig.json>`.
- :download:`Limelight Version 2+ <files/Limelight2+/hardwareConfig.json>`.

.. note:: No hardware config is provided for the Limelight 3: AprilTags do not require LEDs (so nobody has reverse-engineered which I/O pins drive them), and the camera FOV is determined as part of calibration.

:ref:`Import the hardwareConfig.json file <docs/additional-resources/config:Importing and Exporting Settings>`. Again, this is **REQUIRED** or target measurements will be incorrect, and LEDs will not work.

After installation you should be able to `locate the camera <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_ at: ``http://photonvision.local:5800/`` (not ``gloworm.local``, as previously)