You need a point cloud with the three coordinates x, y, z and the verticality feature.
- Open it in CloudCompare and cut the point cloud so as to keep only the admissible centers (the ones that will be selected afterwards). To do so, use the tool called `Segment`.
- Save the admissible centers as binary in `data/centers` (an optional sanity-check sketch follows after this list).
- Go to the `city_statistics` notebook; you will need to change the absolute path so that the Jupyter kernel runs from the top folder of the repository.
- Change the `point_cloud_name` and run all cells. You can then add the new values obtained from the output cells to the init file. In total, the dictionaries `CENTERS`, `Z_GROUNDS` and `ROTATIONS` need to be updated (see the sketch after this list).
- You can check the visualization cells of the notebook to verify that everything went well.
- To use RangeNet++ afterwards, you need to update the dictionary `CITY_INFERANCE_FOLDER` in the init file (also shown in the sketch below).
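As an optional sanity check on the admissible centers, here is a minimal sketch. It assumes you additionally export the centers as ASCII from CloudCompare with the columns x, y, z, verticality; the file name below is hypothetical, and the binary file in `data/centers` remains the one the pipeline consumes.

```python
import numpy as np

# Hypothetical ASCII export of the admissible centers (x, y, z, verticality).
centers = np.loadtxt("data/centers/centers_ascii.txt")

# The cloud should have exactly the four expected columns.
assert centers.ndim == 2 and centers.shape[1] == 4, "expected x, y, z, verticality"
print(f"{len(centers)} admissible centers, "
      f"x in [{centers[:, 0].min():.1f}, {centers[:, 0].max():.1f}], "
      f"y in [{centers[:, 1].min():.1f}, {centers[:, 1].max():.1f}]")
```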
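For reference, a hedged sketch of what the init-file updates might look like; the city name, values, and value types below are made-up placeholders, so copy the actual numbers printed by the notebook.

```python
# Hypothetical init-file entries for a new point cloud called "my_city".
# All values are placeholders; use the ones printed by city_statistics.
CENTERS["my_city"] = (1200.0, 350.0)   # admissible center picked in CloudCompare
Z_GROUNDS["my_city"] = 42.5            # ground elevation (z) reported by the notebook
ROTATIONS["my_city"] = 0.0             # rotation angle reported by the notebook

# Needed only if you run RangeNet++ afterwards; the path is an assumption.
CITY_INFERANCE_FOLDER["my_city"] = "data/inference/my_city"
```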
You can now freely generate new samples from your point cloud. For that, you need to use the command line `create_dataset`. If by any chance the pipeline breaks, a notebook called `create_dataset` has been made to help with debugging.
You can open the Jupyter Notebook directly in Colab by clicking here. Consider restarting the runtime if a module is not found.
The steps are explained in the notebook. You need to upload your data to the Colab session; here, this is done using Google Drive, as shown in the sketch below.
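If you go the Google Drive route, mounting it in Colab looks like this; the data path below is an assumption, so adjust it to wherever you uploaded your files.

```python
# Mount your Google Drive inside the Colab session.
from google.colab import drive

drive.mount("/content/drive")

# Hypothetical location of the uploaded data; adapt to your own Drive layout.
DATA_DIR = "/content/drive/MyDrive/data"
```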
For further details, please check the RangeNet++ documentation.
The steps to follow are:
- First, generate the samples you would like to predict on by using the command line `create_dataset`.
- Then, generate the predictions with the weights that you trained in the previous section.
- Merge the predictions by using the command line `merge_labels`.
It has been coded here. Consider restarting the runtime if a module is not found.
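For intuition, here is a minimal sketch of what merging per-sample predictions into a single labeling can look like. This is not the actual `merge_labels` implementation, and the (point indices, labels) input format is an assumption; the idea is simply that overlapping samples vote per point.

```python
import numpy as np

def merge_predictions(num_points, per_sample_preds, num_classes):
    """Majority-vote merge of overlapping per-sample predictions.

    per_sample_preds: iterable of (indices, labels) integer-array pairs, one
    per generated sample, mapping sample points back to the full cloud.
    """
    votes = np.zeros((num_points, num_classes), dtype=np.int64)
    for indices, labels in per_sample_preds:
        np.add.at(votes, (indices, labels), 1)  # accumulate one vote per prediction
    merged = votes.argmax(axis=1)               # most-voted class per point
    merged[votes.sum(axis=1) == 0] = -1         # points covered by no sample stay unlabeled
    return merged
```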