⚠️ Notice: Full code will be released soon. Stay tuned for updates.
Make sure you have (ana)conda installed on your machine.
Move into the directory where you cloned the repository:
```shell
cd <PATH_TO_REPOSITORY>/satellite-vehicle-point-detection
```
Here, `<PATH_TO_REPOSITORY>` is the path to the root folder of this repository on your machine.
Run the following commands to install the environment:
```shell
conda env create -f environment.yml -n satdet
conda activate satdet
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```
Place the datasets under /var/storage/<USERNAME>/Detection, where <USERNAME> is your username on the server. Create the Detection directory manually if it does not exist; this is the location where the code looks for the datasets. For example, you might end up with the following directories:

```
/var/storage/<USERNAME>/Detection/Real/Real-LINZ_384px_0.125m_small-only
/var/storage/<USERNAME>/Detection/Synthetic/Syn-PT3D_GoogleMaps_384px_0.125m_Blur-2.4
```
The datasets are set up and ready.
Place the models under <PATH_TO_REPOSITORY>/satellite-vehicle-point-detection/saved_models. Create the saved_models directory manually if it does not exist in your repository. The model directories in this folder can be named arbitrarily. For example, you might end up with the following files:

```
saved_models/
├── RetinaNet_Real-LINZ_384px_0.125px
│   ├── config.yaml
│   └── model_best.pth
└── RetinaNet_Syn-PT3D_384px_0.125px
    ├── config.yaml
    └── model_best.pth
```
The models are set up and ready.
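As a quick sanity check, you can verify that each model directory contains both required files before running evaluation. The sketch below is illustrative only — the `find_models` helper is not part of this repository, and the demo builds a throwaway copy of the layout above rather than touching your real saved_models folder:

```python
import tempfile
from pathlib import Path

def find_models(root):
    """Return every subdirectory of `root` that contains both a
    config.yaml and a model_best.pth (i.e. a complete saved model)."""
    root = Path(root)
    return sorted(
        d for d in root.iterdir()
        if d.is_dir()
        and (d / "config.yaml").exists()
        and (d / "model_best.pth").exists()
    )

# Demo on a temporary directory mirroring the layout above.
demo = Path(tempfile.mkdtemp())
for name in ("RetinaNet_Real-LINZ_384px_0.125px",
             "RetinaNet_Syn-PT3D_384px_0.125px"):
    (demo / name).mkdir()
    (demo / name / "config.yaml").touch()
    (demo / name / "model_best.pth").touch()

print([d.name for d in find_models(demo)])
```

Running the same check against your actual saved_models directory should list every model you intend to evaluate.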
Let us evaluate the pretrained models on the real validation set.
```shell
# Evaluating the real model on the real validation set
python train.py --eval-only --num-gpus 8 --config-file saved_models/RetinaNet_Real-LINZ_384px_0.125px/config.yaml MODEL.WEIGHTS saved_models/RetinaNet_Real-LINZ_384px_0.125px/model_best.pth DATASETS.TEST '("Real/Real-LINZ_384px_0.125m_small-only/validation",)' OUTPUT_DIR saved_models/RetinaNet_Real-LINZ_384px_0.125px

# Evaluating the synthetic model on the real validation set
python train.py --eval-only --num-gpus 8 --config-file saved_models/RetinaNet_Syn-PT3D_384px_0.125px/config.yaml MODEL.WEIGHTS saved_models/RetinaNet_Syn-PT3D_384px_0.125px/model_best.pth DATASETS.TEST '("Real/Real-LINZ_384px_0.125m_small-only/validation",)' OUTPUT_DIR saved_models/RetinaNet_Syn-PT3D_384px_0.125px
```
See the Detectron2 documentation for an in-depth explanation of what each parameter in the command means.
Note that the dataset name (DATASETS.TEST) is the path to the dataset split folder (including the train/validation/test suffix), relative to the Detection directory.
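The name-to-path convention above can be sketched as a small Python helper. This is illustrative only — the `dataset_dir` function is not part of the repository code, which resolves these paths internally:

```python
import getpass
from pathlib import PurePosixPath

# Storage root convention described above: /var/storage/<USERNAME>/Detection
DETECTION_ROOT = PurePosixPath("/var/storage") / getpass.getuser() / "Detection"

def dataset_dir(name, root=DETECTION_ROOT):
    """Resolve a DATASETS.TRAIN/TEST entry to the folder the code reads,
    by joining it onto the Detection root."""
    return root / name

print(dataset_dir("Real/Real-LINZ_384px_0.125m_small-only/validation"))
```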
After running the evaluations, you should get the following numbers:
- Real Model: AP=93.16%
- Synthetic Model: AP=62.34%
We use train.py to train new models, supplying the parameter values on the command line. The following command trains a RetinaNet on the real data.

```shell
python train.py --config-file configs/RetinaConfig.yaml --num-gpus 8 SOLVER.IMS_PER_BATCH 640 SOLVER.MAX_ITER 10000 SOLVER.WARMUP_ITERS 1000 SOLVER.STEPS 4000,8000 SOLVER.BASE_LR 0.0005 DATASETS.TRAIN '("SatDet-Real-LINZ-384px-0.125m-small-cars/train",)' DATASETS.TEST '("SatDet-Real-LINZ-384px-0.125m-small-cars/validation",)' TEST.EVAL_PERIOD 500 OUTPUT_DIR saved_models/real-retinanet
```
The final weights will be saved in <PATH_TO_REPOSITORY>/satellite-vehicle-point-detection/saved_models/real-retinanet. You can use them for evaluation as described in the "Evaluation" section above.
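With the solver settings above, Detectron2's default WarmupMultiStepLR schedule applies a linear warmup over the first SOLVER.WARMUP_ITERS iterations and then multiplies the learning rate by gamma (0.1 by default) at each value in SOLVER.STEPS. The sketch below is a simplified re-implementation for illustration, not the library code; the default `warmup_factor` of 1e-3 is assumed:

```python
def lr_at(it, base_lr=0.0005, warmup_iters=1000, steps=(4000, 8000),
          gamma=0.1, warmup_factor=1e-3):
    """Simplified WarmupMultiStepLR: linear warmup, then step decay."""
    # 10x decay for every milestone already passed
    decay = gamma ** sum(it >= s for s in steps)
    if it < warmup_iters:
        # Linearly ramp from warmup_factor * lr up to the full lr
        alpha = it / warmup_iters
        return base_lr * decay * (warmup_factor * (1 - alpha) + alpha)
    return base_lr * decay

for it in (0, 500, 1000, 4000, 8000, 9999):
    print(it, lr_at(it))
```

So with SOLVER.STEPS 4000,8000 and SOLVER.BASE_LR 0.0005, the rate ramps up to 5e-4 by iteration 1000, drops to 5e-5 at iteration 4000, and to 5e-6 at iteration 8000.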