Fields of The World: A Machine Learning Benchmark Dataset For Global Agricultural Field Boundary Segmentation

1Arizona State University, Tempe, AZ 85281 USA, hkerner@asu.edu
2Microsoft AI for Good Research Lab, Redmond, WA 98052 USA
3Taylor Geospatial Institute, St. Louis, MO 63108 USA
4Washington University in St. Louis, St. Louis, MO 63130 USA
5Taylor Geospatial Engine, St. Louis, MO 63130 USA

Abstract

Crop field boundaries are foundational datasets for agricultural monitoring and assessments but are expensive to collect manually. Machine learning (ML) methods for automatically extracting field boundaries from remotely sensed images could help meet the demand for these datasets at a global scale. However, current ML methods for field instance segmentation lack sufficient geographic coverage, accuracy, and generalization capabilities. Further, research on improving ML methods is restricted by the lack of labeled datasets representing the diversity of global agricultural fields. We present Fields of The World (FTW), a novel ML benchmark dataset for agricultural field instance segmentation spanning 24 countries on four continents (Europe, Africa, Asia, and South America). With 70,462 samples, FTW is an order of magnitude larger than previous datasets; each sample contains instance and semantic segmentation masks paired with multi-date, multi-spectral Sentinel-2 satellite images. We provide results from baseline models for the new FTW benchmark, show that models trained on FTW have better zero-shot and fine-tuning performance in held-out countries than models that are not pre-trained on diverse datasets, and show promising qualitative zero-shot results of FTW models in a real-world scenario: running on Sentinel-2 scenes over Ethiopia.
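
The abstract describes each FTW sample as a pair of segmentation masks (instance and semantic) matched with multi-date, multi-spectral Sentinel-2 imagery. The sketch below illustrates one way such samples could be loaded in PyTorch; the directory layout, the file names (window_a, window_b, semantic, instance), and the channel-stacking of the two acquisition dates are assumptions made for illustration, not the official FTW release structure or data loader.

# Minimal sketch of a PyTorch-style loader for FTW-like samples.
# Paths and file names below are hypothetical, not the official FTW layout.
import os
from glob import glob

import numpy as np
import rasterio
import torch
from torch.utils.data import Dataset


class FTWSampleDataset(Dataset):
    """Loads multi-date Sentinel-2 image pairs with semantic and instance masks."""

    def __init__(self, root: str, country: str):
        # Assumed layout: <root>/<country>/{window_a,window_b,semantic,instance}/<id>.tif
        self.root = os.path.join(root, country)
        self.ids = sorted(
            os.path.basename(p)
            for p in glob(os.path.join(self.root, "window_a", "*.tif"))
        )

    def __len__(self) -> int:
        return len(self.ids)

    def _read(self, subdir: str, name: str) -> np.ndarray:
        # Read a GeoTIFF as a (bands, height, width) array.
        with rasterio.open(os.path.join(self.root, subdir, name)) as src:
            return src.read()

    def __getitem__(self, idx: int) -> dict:
        name = self.ids[idx]
        # Two Sentinel-2 acquisitions per sample (multi-date), stacked on the channel axis.
        image = np.concatenate(
            [self._read("window_a", name), self._read("window_b", name)], axis=0
        )
        return {
            "image": torch.from_numpy(image.astype(np.float32)),
            "semantic": torch.from_numpy(self._read("semantic", name)[0].astype(np.int64)),
            "instance": torch.from_numpy(self._read("instance", name)[0].astype(np.int64)),
        }

A loader like this could back the benchmark experiments described above, e.g. training on one set of countries and evaluating zero-shot on a held-out country by instantiating the dataset with a different country argument.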

BibTeX

@article{kerner2024fields,
  title={Fields of The World: A Machine Learning Benchmark Dataset For Global Agricultural Field Boundary Segmentation},
  author={Kerner, Hannah and Chaudhari, Snehal and Ghosh, Aninda and Robinson, Caleb and Ahmad, Adeel and Choi, Eddie and Jacobs, Nathan and Holmes, Chris and Mohr, Matthias and Dodhia, Rahul and others},
  journal={arXiv preprint arXiv:2409.16252},
  year={2024}
}