Releases: fieldsoftheworld/ftw-baselines
v3 "PRUE" release
This release contains a new series of models corresponding to the configs at `configs/release/prue/`:

- `prue_efnet{3,5,7}_checkpoint.ckpt` are 3-class U-Net models with efficientnet b{3,5,7} encoders, trained on the full FTW dataset with channel shuffle, normalization augmentation, resize augmentation, a logcosh dice loss function (sketched below), and class weighting of background: 0.05, field: 0.2, field boundary: 0.75.
- `prue_efnet{3,5,7}_standard_weight_checkpoint.ckpt` are the same as above, but use the same class weights as the v1 models.
- `prue_logcoshdice_only_checkpoint.ckpt` is the same as the v1 model, but uses a logcosh dice loss function.
We find that these models improve on the previously released models in both predictive performance and deployment metrics.
v2 model release
This release contains two models:
- `3_Class_FULL_FTW_Pretrained_v2.ckpt` was trained in the same way as the previous `3_Class_FULL_FTW_Pretrained.ckpt`, but with random shuffling of the order of window A and window B. This model has almost identical test set performance to the v1 model when the window ordering is the same, and much better performance when the window ordering is swapped.
- `3_Class_FULL_FTW_Pretrained_singleWindow_v2.ckpt` was also trained in the same way as `3_Class_FULL_FTW_Pretrained.ckpt`, but with a random selection of window A or window B for each sample. As a result, this model takes a single 4-channel input instead of the previous concatenated 8-channel input. Both input-handling strategies are sketched below.
v1
The first release of pre-trained models from the Fields of The World project.
Example Pre-trained Model
This release contains a `src.ftw.trainers.CustomSemanticSegmentationTask` checkpoint that has been pre-trained on the 3-class semantic segmentation labels using the training set from each country in FTW.