This project provides a FastAPI-based implementation of the Fields of the World Inference API based on the OpenAPI specification. It enables running machine learning inference on satellite imagery using the ftw-tools package.
- Install UV:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh  # macOS/Linux/WSL
  # or: brew install uv  # macOS
  # or: pip install uv   # any platform
  ```
- Clone and set up the repository:

  ```bash
  git clone https://github.com/fieldsoftheworld/ftw-inference-api
  cd ftw-inference-api
  uv sync --group dev
  ```
For rapid deployment on AWS EC2 instances using the Ubuntu Deep Learning AMI with NVIDIA drivers:
```bash
curl -L https://raw.githubusercontent.com/fieldsoftheworld/ftw-inference-api/main/deploy.sh | bash
```

To deploy a specific branch:

```bash
curl -L https://raw.githubusercontent.com/fieldsoftheworld/ftw-inference-api/main/deploy.sh | bash -s -- -b your-branch-name
```

This script will:
- Install UV package manager
- Clone the repository and checkout the specified branch
- Install production dependencies using UV
- Enable GPU support in configuration
- Configure a systemd service for automatic startup
- Set up log rotation
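The systemd unit the script installs is not shown here, but for a UV-managed FastAPI service it presumably has roughly this shape (the paths, user, and `ExecStart` below are assumptions for illustration, not the script's actual output):

```ini
[Unit]
Description=FTW Inference API
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/ftw-inference-api/server
ExecStart=/home/ubuntu/.local/bin/uv run python run.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```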
Note: Model weights (~1.5GB total across 8 models) are automatically downloaded on first use and cached at ~/.cache/torch/hub/checkpoints/. The first inference request for each model will take longer due to download time.
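To check which weights have already been downloaded (and how much disk they use), a quick stdlib-only listing of the cache directory works; this is a convenience sketch, not part of the project:

```python
from pathlib import Path

# Default torch.hub checkpoint cache location
cache = Path.home() / ".cache/torch/hub/checkpoints"

if cache.is_dir():
    for ckpt in sorted(cache.glob("*")):
        print(f"{ckpt.name}: {ckpt.stat().st_size / 1e6:.1f} MB")
else:
    print(f"No cached weights yet at {cache}")
```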
Service management:

```bash
sudo systemctl status ftw-inference-api            # Check status
sudo systemctl start ftw-inference-api             # Start service
sudo systemctl stop ftw-inference-api              # Stop service
sudo systemctl restart ftw-inference-api           # Restart service
sudo journalctl -u ftw-inference-api -f            # Follow logs
sudo journalctl -u ftw-inference-api --since today # Today's logs
```

- Docker (required for DynamoDB Local)
- Set up DynamoDB Local (required for development):

  ```bash
  # Copy example environment file and configure local DynamoDB
  cp .env.example .env
  # Edit .env to uncomment DynamoDB local settings:
  # DYNAMODB__DYNAMODB_ENDPOINT="http://localhost:8001"
  ```
- Start services:

  ```bash
  # Terminal 1: Start DynamoDB Local
  docker run -p 8001:8000 amazon/dynamodb-local:latest -jar DynamoDBLocal.jar -sharedDb -inMemory

  # Terminal 2: Export AWS credentials (local dev only) and start server
  export AWS_ACCESS_KEY_ID="test" AWS_SECRET_ACCESS_KEY="test"
  cd server && uv run python run.py --debug
  ```
```bash
cd server && uv run python run.py --debug  # Development server with debug mode and auto-reload
```

Or run with custom options:

```bash
cd server && uv run python run.py --host 127.0.0.1 --port 8080 --debug
```

Command-line options:

- `--host HOST`: Host address (default: `0.0.0.0`)
- `--port PORT`: Port number (default: `8000`)
- `--config CONFIG`: Custom config file path
- `--debug`: Enable debug mode and auto-reload
The server loads configuration from `server/config/base.toml` by default. Settings can be overridden using environment variables with a double-underscore delimiter (e.g., `SECURITY__SECRET_KEY`).
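The double-underscore convention maps an environment variable like `SECURITY__SECRET_KEY` onto the nested `[security].secret_key` setting. A minimal sketch of that mapping (illustrative only; the server's actual settings loader may differ):

```python
def nested_overrides(env: dict[str, str]) -> dict:
    """Turn SECTION__KEY=value pairs into nested {'section': {'key': value}} dicts."""
    out: dict = {}
    for name, value in env.items():
        if "__" not in name:
            continue  # not a nested override
        section, key = name.lower().split("__", 1)
        out.setdefault(section, {})[key] = value
    return out

print(nested_overrides({"SECURITY__SECRET_KEY": "s3cret"}))
# {'security': {'secret_key': 's3cret'}}
```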
You can specify a custom configuration file using the --config command-line option:
```bash
cd server && uv run python run.py --config /path/to/custom_config.toml
```

The API provides the following versioned endpoints under `/v1/`:
- `GET /`: Root endpoint that returns basic API information
- `PUT /v1/example`: Compute field boundaries for a small area quickly and return as GeoJSON
- `POST /v1/scene-selection`: Find optimal Sentinel-2 scenes for a specified area and time period
- `POST /v1/projects`: Create a new project
- `GET /v1/projects`: List all projects
- `GET /v1/projects/{project_id}`: Get details of a specific project
- `DELETE /v1/projects/{project_id}`: Delete a specific project
- `PUT /v1/projects/{project_id}/images/{window}`: Upload an image for a project (`window` can be `a` or `b`)
- `PUT /v1/projects/{project_id}/inference`: Run inference on project images
- `PUT /v1/projects/{project_id}/polygons`: Run polygonization on inference results
- `GET /v1/projects/{project_id}/inference`: Get inference results for a project
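The project endpoints above imply a typical workflow: create a project, upload imagery, run inference, then fetch results. A small sketch that builds the (method, URL) pairs for that sequence, assuming the default host and port (`localhost:8000` is an assumption based on the defaults above):

```python
BASE = "http://localhost:8000"

def endpoint(project_id: str, action: str) -> tuple[str, str]:
    """Return the (HTTP method, URL) pair for each step of the project workflow."""
    return {
        "create":   ("POST", f"{BASE}/v1/projects"),
        "upload_a": ("PUT",  f"{BASE}/v1/projects/{project_id}/images/a"),
        "infer":    ("PUT",  f"{BASE}/v1/projects/{project_id}/inference"),
        "results":  ("GET",  f"{BASE}/v1/projects/{project_id}/inference"),
    }[action]

for step in ("create", "upload_a", "infer", "results"):
    print(endpoint("my-project", step))
```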
The API uses Bearer token authentication. Include the `Authorization` header with a valid JWT token:

```
Authorization: Bearer <your_token_here>
```

For development and testing, you can disable authentication by setting `auth_disabled` to `true` in `server/config/base.toml`.

You still need to send a Bearer token with each request, but you can create one yourself (for example via jwt.io). The token must be signed with the same secret key that is set in the config file, and its `sub` claim must be set to `guest`.
For the default config, the following token can be used:

```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJndWVzdCIsIm5hbWUiOiJHdWVzdCIsImlhdCI6MTc0ODIxNzYwMCwiZXhwaXJlcyI6OTk5OTk5OTk5OX0.lJIkuuSdE7ihufZwWtLx10D_93ygWUcUrtKhvlh6M8k
```
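A token with the right claims can be minted with any JWT library (e.g. PyJWT); as a stdlib-only sketch of HS256 signing (the secret below is a placeholder, not the project's default):

```python
import base64
import hashlib
import hmac
import json

def make_jwt(payload: dict, secret: str) -> str:
    """Minimal HS256 JWT signer for illustration; use PyJWT in real code."""
    def b64(data: bytes) -> str:
        # JWTs use unpadded URL-safe base64
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64(json.dumps(payload, separators=(",", ":")).encode())
    sig = b64(hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

token = make_jwt({"sub": "guest"}, "placeholder-secret-key")
print(token)
```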
The application follows clean architecture principles with clear separation of concerns:
```
server/
├── app/              # Main application package
│   ├── api/v1/       # API endpoints and dependencies
│   ├── services/     # Business logic layer
│   ├── ml/           # ML pipeline and validation
│   ├── core/         # Infrastructure (auth, config, storage)
│   ├── schemas/      # Pydantic request/response models
│   ├── db/           # Database models and connection
│   └── main.py       # FastAPI application setup
├── config/           # Configuration files
├── data/             # ML models, results, temp files
├── tests/            # Test suite
└── run.py            # Development server runner
```
Uses Ruff for linting/formatting and pre-commit hooks for quality checks.
```bash
uv run ruff check .     # Check code without fixing
uv run ruff format .    # Auto-format code
uv run mypy server/app  # Type check
```

Set up pre-commit:

```bash
uv run pre-commit install
```

Run the test suite:

```bash
cd server && uv run pytest -v --cov=app --cov-report=xml --cov-report=term-missing
```

See the LICENSE file for details.