
Point2SSM

Implementation of "Point2SSM: Learning Morphological Variations of Anatomies from Point Clouds," a spotlight presentation at ICLR 2024.

Please cite this paper if you use the code in work that leads to published research:

@inproceedings{adams2023point2ssm,
    title={{Point2SSM: Learning Morphological Variations of Anatomies from Point Clouds}},
    author={Jadie Adams and Shireen Elhabian},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=DqziS8DG4M}
}

This code includes the proposed Point2SSM model, as well as implementations of the comparison models evaluated in the paper:

Setup

To set up an Anaconda environment, run:

source setup.sh

Datasets

The paper uses aligned versions of the spleen and pancreas Medical Decathlon public datasets.

Original Medical Decathlon Data

The original, unaligned data is available here: http://medicaldecathlon.com/. The data is released under a permissive license (CC BY-SA 4.0), which allows it to be shared, distributed, and improved upon. All data has been labeled and verified by an expert human rater, with the best effort to mimic the accuracy required for clinical use. To cite this data, please refer to https://arxiv.org/abs/1902.09063.

Aligned Medical Decathlon Data

Alignment and pre-processing was performed using ShapeWorks mesh grooming tools. The aligned spleen dataset is available in this repo in the data/spleen/ folder. The aligned version of the pancreas data can be downloaded using download.py if you make a free account at https://www.shapeworks-cloud.org/#/.
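Grooming pipelines like the one above typically center each shape and scale it to a common size. A minimal NumPy sketch of that kind of normalization (an illustration only, not the ShapeWorks implementation) might look like:

```python
import numpy as np

def normalize_point_cloud(points):
    """Center a point cloud at the origin and scale it into the unit sphere.

    points: (N, 3) array of surface points. Illustrative sketch of the
    kind of normalization a grooming pipeline performs.
    """
    centered = points - points.mean(axis=0)          # translate centroid to origin
    scale = np.linalg.norm(centered, axis=1).max()   # distance of furthest point
    return centered / scale                          # cloud now fits the unit sphere

# Example: a random cloud becomes zero-centered with max radius 1.
cloud = np.random.default_rng(0).normal(size=(1024, 3))
normed = normalize_point_cloud(cloud)
```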

If you use either of these pre-processed datasets in work that leads to published research, we humbly ask that you cite ShapeWorks, and add the following to the 'Acknowledgments' section of your paper: "The National Institutes of Health supported this work under grant numbers NIBIB-U24EB029011, NIAMS-R01AR076120, NHLBI-R01HL135568, NIBIB-R01EB016701, and NIGMS-P41GM103545." and add the following 'disclaimer': "The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health." When referencing this dataset groomed with ShapeWorks, please include a bibliographical reference to the paper below, and, if possible, include a link to shapeworks.sci.utah.edu.

@incollection{cates2017shapeworks,
    title={ShapeWorks: particle-based shape correspondence and visualization software},
    author={Cates, Joshua and Elhabian, Shireen and Whitaker, Ross},
    booktitle={Statistical Shape and Deformation Analysis},
    pages={257--298},
    year={2017},
    publisher={Elsevier}
}

Model Training

To train a model, call train.py with the appropriate config yaml file. For example:

python train.py -c cfgs/point2ssm.yaml

Specific parameters are set in the config file, including dataset, noise level, learning rate, etc.
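As a rough sketch, a config file might look like the fragment below. The key names here are illustrative only, drawn from the parameters mentioned above, and are not the actual schema of cfgs/point2ssm.yaml:

```yaml
# Hypothetical config sketch -- key names are illustrative, not the repo's schema.
dataset: spleen          # which anatomy to train on
noise_level: 0.01        # noise added to input point clouds
learning_rate: 0.0001    # optimizer step size
```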

Model Testing

To test a model, call test.py with the appropriate config yaml file. For example:

python test.py -c experiments/spleen/point2ssm/point2ssm.yaml

Note that the output Chamfer distance is computed on the scaled data, whereas the values reported in the paper are computed on the unscaled data. When test.py is run, the predicted points are written to files.
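To see why the scaling matters, here is a minimal NumPy sketch of the symmetric (squared) Chamfer distance, not the implementation used in this repo: scaling both clouds by a factor s multiplies the squared Chamfer distance by s², so scaled and unscaled values are not directly comparable.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric squared Chamfer distance between point sets a (N, 3) and b (M, 3).

    Illustrative NumPy version; the repo uses an optimized implementation.
    """
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
base = chamfer(a, b)            # 1.0 for these two clouds
scaled = chamfer(2 * a, 2 * b)  # 4.0: scaling by s multiplies the result by s**2
```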

Acknowledgements

This code utilizes the following third-party PyTorch libraries:

This code includes the following models:
