For inference using a pre-trained model, follow these steps:
- Put all your images under a single folder (symlinks are allowed), for example `data`.
- Download the pre-trained model (see the available models below) and its respective config file.
- Run P2PaLA:
  ```bash
  python P2PaLA.py --config <path_to_config_file> --prev_model <path_to_model> --prod_data <pointer_to_your_images>
  ```
  Note: this command runs on the GPU by default; to use the CPU instead, add `--gpu -1`.
- Example: make sure you have an `input` folder in your main directory and that you have downloaded the config file and the pre-trained model, then run:
  ```bash
  python P2PaLA.py --config config_ALAR_min_model_17_12_18_inference.txt --prev_model ALAR_min_model_17_12_18.pth --prod_data ./input
  ```
  As above, add `--gpu -1` to run on the CPU.
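The image-folder setup in the first step can be sketched as a small shell loop that gathers symlinks into one directory for `--prod_data`. The `./my_pages` source directory and the `.jpg` extension are assumptions here; point them at wherever your page images actually live:

```shell
#!/bin/sh
# Sketch: collect page images into one folder of symlinks for --prod_data.
# SRC (./my_pages) and the .jpg extension are assumptions; adjust as needed.
SRC=${SRC:-./my_pages}
DEST=${DEST:-./input}
mkdir -p "$DEST"
for img in "$SRC"/*.jpg; do
    [ -e "$img" ] || continue   # glob did not match: nothing to link
    # Link with an absolute path so the symlink stays valid from $DEST.
    ln -sf "$(cd "$SRC" && pwd)/$(basename "$img")" "$DEST/"
done
```

Run it once before invoking P2PaLA; symlinking instead of copying avoids duplicating large page scans on disk.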
Available pre-trained models:
- ALAR:
  - model:
    ```bash
    wget --no-check-certificate https://www.prhlt.upv.es/~lquirosd/P2PaLA/ALAR_min_model_17_12_18.pth
    ```
  - config:
    ```bash
    wget --no-check-certificate https://www.prhlt.upv.es/~lquirosd/P2PaLA/config_ALAR_min_model_17_12_18_inference.txt
    ```
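The two downloads and the inference call can be combined into one helper. This is only a sketch: the function name `fetch_and_run_alar` and the `./input` image folder are assumptions, and the URLs and flags are the ones shown above:

```shell
#!/bin/sh
# Sketch: fetch the ALAR model/config if missing, then run inference on CPU.
# fetch_and_run_alar and the ./input folder are assumptions; adjust to taste.
fetch_and_run_alar() {
    base=https://www.prhlt.upv.es/~lquirosd/P2PaLA
    model=ALAR_min_model_17_12_18.pth
    config=config_ALAR_min_model_17_12_18_inference.txt
    # Skip files that are already present, so re-runs are cheap.
    [ -f "$model" ]  || wget --no-check-certificate "$base/$model"
    [ -f "$config" ] || wget --no-check-certificate "$base/$config"
    python P2PaLA.py --config "$config" --prev_model "$model" \
                     --prod_data ./input --gpu -1
}
```

Call `fetch_and_run_alar` after preparing `./input`; drop `--gpu -1` to run on the GPU instead.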
WIP