Adds data collection pipeline (#40)
# Description

Adds data collection pipeline. 

Fixes #7 #14 #36 

## Type of change

- New feature (non-breaking change which adds functionality)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./formatter.sh`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
pascal-roth authored Nov 19, 2024
1 parent 5fc8a3e commit 7d481c1
Showing 20 changed files with 1,680 additions and 41 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -92,10 +92,10 @@ Here an overview of the steps involved in training the policy.
For more detailed instructions, please refer to [TRAINING.md](TRAINING.md).

0. Training Data Generation <br>
Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using a custom Isaac Sim extension; the extensions are part of a new internal project (``isaac-nav-suite``) and will be open-sourced with that project. If you require earlier access, please contact us via mail.
Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using IsaacLab. For detailed instructions on how to install the extension and run the data collection script, please see [here](omniverse/README.md).

1. Build Cost-Map <br>
The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and therefore has to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a point cloud of the environment with semantic information (either from simulation or from real-world data).
The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and therefore has to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a point cloud of the environment with semantic information (either from simulation or from real-world data). The point cloud of a simulated environment can be generated with the [reconstruction-script](viplanner/depth_reconstruct.py), with its config [here](viplanner/config/costmap_cfg.py).
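To make the differentiability requirement concrete, a cost-map stored as a 2D grid can be queried at continuous waypoint coordinates with bilinear interpolation, so gradients of the accumulated cost flow back to the waypoints. A minimal PyTorch sketch of this idea (illustrative only; the toy grid and coordinates below are made up and this is not the output format of the cost-builder):

```python
import torch
import torch.nn.functional as F

# Toy cost-map: a 64 x 64 grid with high cost near the centre, low cost elsewhere.
H, W = 64, 64
yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
cost_map = torch.exp(-(xx**2 + yy**2) / 0.1).view(1, 1, H, W)

# Waypoints in normalized [-1, 1] map coordinates (x, y); gradients are needed w.r.t. them.
waypoints = torch.tensor([[0.5, 0.5], [0.2, -0.3]], requires_grad=True)

# grid_sample performs a differentiable bilinear lookup of the cost at each waypoint.
cost = F.grid_sample(cost_map, waypoints.view(1, 1, -1, 2), align_corners=True)
total_cost = cost.sum()

total_cost.backward()
print(waypoints.grad)  # gradient of the accumulated cost w.r.t. each waypoint
```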

2. Training <br>
Once the cost-map is constructed, the next step is to train the policy. The policy is a machine learning model that learns to make decisions based on the depth and semantic measurements. An example training script can be found [here](viplanner/train.py) with configs [here](viplanner/config/learning_cfg.py).
14 changes: 10 additions & 4 deletions TRAINING.md
@@ -2,6 +2,12 @@

Here, an overview of the steps involved in training the policy is provided.


## Data Generation

For data generation, please follow the instructions given [here](omniverse/README.md).


## Cost-Map Building

Cost-Map building is an essential step in guiding optimization and representing the environment.
@@ -28,13 +34,14 @@ If depth and semantic images of the simulation are available, then first 3D reco
├── xxxx.png # images saved with 4 digits, e.g. 0000.png
```
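The `camera_extrinsic*.txt` files in the layout above (and in the suffixed variant below) store one camera pose per line in the `x y z qx qy qz qw` format. As a small, hedged sketch (assuming SciPy is available; this is not the repository's reconstruction code), such a line can be turned into a 4x4 camera-to-world matrix like so:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(line: str) -> np.ndarray:
    """Convert an 'x y z qx qy qz qw' line into a 4x4 homogeneous transform."""
    x, y, z, qx, qy, qz, qw = map(float, line.split())
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()  # SciPy expects (x, y, z, w)
    T[:3, 3] = [x, y, z]
    return T

with open("camera_extrinsic.txt") as f:  # or the {depth_suffix}/{sem_suffix} variant of the file
    poses = np.stack([pose_to_matrix(line) for line in f if line.strip()])
print(poses.shape)  # (num_viewpoints, 4, 4)
```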

When both depth and semantic images are available, define `sem_suffix` and `depth_suffix` in `ReconstructionCfg` to differentiate between the two, with the following structure:
If the semantic and depth images have an offset in their position (as is typical on some robotic platforms),
define `sem_suffix` and `depth_suffix` in `ReconstructionCfg` to differentiate between the two, using the following structure:

``` graphql
env_name
├── camera_extrinsic{depth_suffix}.txt # format: x y z qx qy qz qw
├── camera_extrinsic{sem_suffix}.txt # format: x y z qx qy qz qw
├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images
├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images (depth first)
├── depth # either png and/or npy; if both, npy is used
| ├── xxxx{depth_suffix}.png # images saved with 4 digits, e.g. 0000.png
| ├── xxxx{depth_suffix}.npy # arrays saved with 4 digits, e.g. 0000.npy
@@ -49,7 +56,7 @@ If depth and semantic images of the simulation are available, then first 3D reco

3. **Cost-Building** <br>

Fully automated, either a geometric or a semantic cost map can be generated by running the following command:
Either a geometric or a semantic cost map can be generated by running the following command:

```
python viplanner/cost_builder.py
@@ -72,7 +79,6 @@
```
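As a toy illustration of what a *geometric* cost can look like (this is not the `cost_builder` implementation, just the underlying idea), a traversability cost per 2D grid cell can be derived from the height spread of the reconstructed point cloud:

```python
import numpy as np

def geometric_cost(points: np.ndarray, cell_size: float = 0.1) -> np.ndarray:
    """Toy geometric cost: height spread (max z - min z) per 2D grid cell."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ij -= ij.min(axis=0)                        # shift indices so they start at 0
    H, W = ij.max(axis=0) + 1
    zmin = np.full((H, W), np.inf)
    zmax = np.full((H, W), -np.inf)
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), points[:, 2])
    spread = np.where(np.isfinite(zmin), zmax - zmin, 0.0)
    return spread / max(spread.max(), 1e-6)     # normalize to [0, 1]

cloud = np.random.rand(10_000, 3)               # stand-in for a reconstructed point cloud
print(geometric_cost(cloud).shape)
```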



## Training

Configurations of the training are given in [TrainCfg](viplanner/config/learning_cfg.py). Training can be started using the example training script [train.py](viplanner/train.py).
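For orientation only, here is a heavily simplified sketch of the kind of loop such a training script runs: predict waypoints from depth + semantic input and penalize them with the differentiable cost-map plus a goal term. All modules, shapes, and names below are placeholders, not the classes defined in `train.py` or `learning_cfg.py`:

```python
import torch

# Hypothetical stand-ins; the real model and dataset live in viplanner/ and differ from this.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 64 * 64, 3 * 2))  # -> 3 waypoints (x, y)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def path_cost(waypoints, cost_map):
    """Differentiable cost of the predicted waypoints (bilinear lookup, as in the cost-map example above)."""
    grid = waypoints.view(1, 1, -1, 2).clamp(-1, 1)
    return torch.nn.functional.grid_sample(cost_map, grid, align_corners=True).sum()

cost_map = torch.rand(1, 1, 64, 64)              # placeholder for a built cost-map
for step in range(100):
    depth_sem = torch.rand(1, 2, 64, 64)         # placeholder depth + semantic input
    goal = torch.tensor([[0.8, 0.8]])
    waypoints = model(depth_sem).view(-1, 3, 2)
    loss = path_cost(waypoints, cost_map) + (waypoints[:, -1] - goal).norm()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```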
40 changes: 27 additions & 13 deletions omniverse/README.md
@@ -12,7 +12,7 @@
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
[![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)

The ViPlanner Omniverse Extension offers a testing environment for ViPlanner.
The ViPlanner Omniverse Extension offers a testing environment for ViPlanner and includes the data collection pipeline.
Built on NVIDIA Isaac Sim as a photorealistic simulator and on [IsaacLab](https://isaac-sim.github.io/IsaacLab/), this extension provides an assessment tool for ViPlanner's performance across diverse environments.


@@ -62,16 +62,9 @@ It is necessary to comply with PEP660 for the install. This requires the followi
./isaaclab.sh -p -m pip install --upgrade setuptools
```
## Usage
A demo script is provided to run the planner in three different environments: [Matterport](https://niessner.github.io/Matterport/), [Carla](https://carla.org//), and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/features/environment_setup/assets/usd_assets_environments.html#warehouse).
In each scenario, the goal is represented as a movable cube within the environment.
To run the demo, download the model: [[checkpoint](https://drive.google.com/file/d/1PY7XBkyIGESjdh1cMSiJgwwaIT0WaxIc/view?usp=sharing)] [[config](https://drive.google.com/file/d/1r1yhNQAJnjpn9-xpAQWGaQedwma5zokr/view?usp=sharing)] and the environment files. Then adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files.
## Download the Simulation Environments
### Matterport
[Config](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)
To download Matterport datasets, please refer to the [Matterport3D](https://niessner.github.io/Matterport/) website. The dataset should be converted to USD format using Isaac Sim by executing the following steps:
1. Run the `convert_mesh.py` script to convert the `.obj` file (located under `matterport_mesh`) to `USD`. With the recent update of the asset converter script, use the resulting `*_non_metric.usd` file.
@@ -92,6 +85,20 @@ To download Matterport datasets, please refer to the [Matterport3D](https://nies
top left corner, select `Show by Type -> Physics -> Colliders` and set the value to `All`). The colliders should be visible as pink lines. If no colliders are present, select the mesh in the stage,
go to the `Property` section and click `Add -> Physics -> Colliders Preset`. Then save the asset.
### Carla
We provide an already converted USD asset of Carla's `Town01`. It can be downloaded here: [Download USD Link](https://drive.google.com/file/d/1wZVKf2W0bSmP1Wm2w1XgftzSBx0UR1RK/view?usp=sharing)
## Planner Demo
A demo script is provided to run the planner in three different environments: [Matterport](https://niessner.github.io/Matterport/), [Carla](https://carla.org//), and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/features/environment_setup/assets/usd_assets_environments.html#warehouse).
In each scenario, the goal is represented as a movable cube within the environment.
To run the demo, download the model: [[checkpoint](https://drive.google.com/file/d/1PY7XBkyIGESjdh1cMSiJgwwaIT0WaxIc/view?usp=sharing)] [[config](https://drive.google.com/file/d/1r1yhNQAJnjpn9-xpAQWGaQedwma5zokr/view?usp=sharing)] and the environment files. Then adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files.
### Matterport
[Config](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)
The demo uses the **2n8kARJN3HM** scene from the Matterport dataset. A preview is available [here](https://aspis.cmpt.sfu.ca/scene-toolkit/scans/matterport3d/houses).
```
@@ -100,7 +107,7 @@ cd IsaacLab
```
### Carla
[Download USD Link](https://drive.google.com/file/d/1wZVKf2W0bSmP1Wm2w1XgftzSBx0UR1RK/view?usp=sharing) | [Config](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py)
[Config](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py)
```
cd IsaacLab
@@ -115,7 +122,7 @@
./isaaclab.sh -p <path-to-viplanner-repo>/omniverse/standalone/viplanner_demo.py --scene warehouse --model_dir <path-to-model-download-dir>
```
## Data Collection and Evaluation
## Data Collection
The training data is generated from different simulation environments. After the environments have been downloaded and converted to USD, adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files ([Carla](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py) and [Matterport](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)).
The rendered viewpoints are collected by executing:
```
cd IsaacLab
./isaaclab.sh -p <path-to-viplanner-repo>/omniverse/standalone/viplanner_demo.py --scene <matterport/carla/warehouse> --num_samples <how-many-viewpoints>
```
The data collection is currently included in a new internal project and will be released with this project in the future.
If you require the code, please contact us by mail.
To test that the data has been correctly extracted, please run the 3D reconstruction and check that the results match the simulated environment.
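Before running the reconstruction, a quick sanity check on the extracted data is to confirm that the pose file and the rendered frames are consistent with the layout expected by the reconstruction step (see TRAINING.md). A hedged sketch, assuming that layout (adjust names and suffixes to your setup):

```python
from pathlib import Path
import numpy as np

# Adjust the path and file names to your extraction output; suffixes such as
# {depth_suffix} / {sem_suffix} may apply depending on your ReconstructionCfg.
env = Path("path/to/collected/env_name")
poses = np.loadtxt(env / "camera_extrinsic.txt")           # one 'x y z qx qy qz qw' line per viewpoint
depth_stems = {p.stem for p in (env / "depth").iterdir()}   # .png and .npy share the same stem

assert poses.ndim == 2 and poses.shape[1] == 7, "each extrinsic line should hold 7 values"
assert len(poses) == len(depth_stems), "pose/frame count mismatch -- collection likely incomplete"
print(f"{len(depth_stems)} viewpoints extracted for {env.name}")
```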
@@ -62,6 +62,7 @@ def _initialize_impl(self):
# More Information: https://github.com/niessner/Matterport/blob/master/data_organization.md#house_segmentations
mapping = pd.read_csv(DATA_DIR + "/mappings/category_mapping.tsv", sep="\t")
self.mapping_mpcat40 = torch.tensor(mapping["mpcat40index"].to_numpy(), device=self._device, dtype=torch.long)
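# human-readable names of the mpcat40 classes, indexed by the mpcat40 index used above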
self.classes_mpcat40 = pd.read_csv(DATA_DIR + "/mappings/mpcat40.tsv", sep="\t")["mpcat40"].to_numpy()
self._color_mapping()

def _color_mapping(self):
@@ -5,6 +5,7 @@
# SPDX-License-Identifier: BSD-3-Clause

floor:
- SM_Floor0
- SM_Floor1
- SM_Floor2
- SM_Floor3
@@ -31,6 +32,7 @@ ceiling:

static:
- LampCeiling
- Section
- SM_FloorDecal
- SM_FireExtinguisher
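The lists above map semantic classes to keywords that appear in the mesh/prim names of the environment. As a purely hypothetical sketch of how such a file could be consumed (the extension's actual loader may work differently), a class can be assigned by substring matching:

```python
import yaml

with open("keyword_mapping.yml") as f:     # hypothetical file name for the mapping shown above
    class_to_keywords = yaml.safe_load(f)  # e.g. {"floor": ["SM_Floor0", ...], "static": [...]}

def classify(prim_name: str) -> str:
    """Return the semantic class whose keyword occurs in the given prim/mesh name."""
    for sem_class, keywords in class_to_keywords.items():
        if any(kw.lower() in prim_name.lower() for kw in keywords):
            return sem_class
    return "unknown"

print(classify("SM_Floor3_12"))           # -> floor
print(classify("SM_FireExtinguisher_2"))  # -> static
```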

@@ -0,0 +1,8 @@
# Copyright (c) 2023-2024, ETH Zurich (Robotics Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from .viewpoint_sampling import ViewpointSampling
from .viewpoint_sampling_cfg import ViewpointSamplingCfg