# ShadowCheck

SIGINT forensics platform for wireless analysis, optimized for mapping and spatial correlation using PostGIS. Built with TypeScript, React (Vite), and modern tooling.
## Table of contents
- Overview
- Key Features
- Architecture
- Requirements
- Quickstart (Docker Compose)
- Manual Local Setup
- Database (PostGIS) Setup
- Configuration
- Usage & Examples
- Security & Privacy
- Testing
- Deployment
- Roadmap
- Contributing
- License
- Acknowledgements
- Contacts
## Overview

ShadowCheck is a SIGINT-focused forensics and analysis platform for wireless network data that combines spatial analysis through PostGIS with a modern web frontend. It provides capabilities to ingest, visualize, correlate, and export wireless observations and derived artifacts for investigative workflows.
## Key Features

- Spatially-aware storage and indexing using PostgreSQL + PostGIS
- Interactive mapping and timeline visualizations (React + Vite)
- Ingest pipeline for wireless capture data (PCAP) and derived metadata
- Correlation and enrichment of observations with geospatial queries
- RESTful API backend implemented in TypeScript
- Docker-friendly for reproducible deployments
- Extensible data model for signals, sessions, devices, and annotations
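The extensible data model for signals, sessions, devices, and annotations could be sketched in TypeScript as follows. This is an illustrative sketch only; the field names and shapes are assumptions, not the project's actual schema.

```typescript
// Hypothetical data-model sketch; names and fields are illustrative,
// not the project's actual schema.
interface Device {
  id: string;
  macAddress: string;
  vendor?: string;
  firstSeen: string; // ISO-8601 timestamp
}

interface Observation {
  id: string;
  deviceId: string;
  sessionId: string;
  signalDbm: number;
  observedAt: string;
  location: { type: "Point"; coordinates: [number, number] }; // GeoJSON [lon, lat]
}

interface Annotation {
  targetId: string; // device or observation id
  author: string;
  note: string;
}

// Small helper to build a GeoJSON point for an observation.
function makePoint(lon: number, lat: number): Observation["location"] {
  return { type: "Point", coordinates: [lon, lat] };
}
```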
## Architecture

Basic high-level architecture (recommended folder layout):
- /backend — TypeScript Node.js API server (Express, Nest, or Fastify)
- /frontend — React + Vite single-page application
- /db — database scripts, migrations, SQL helpers, GIS assets
- /docker — docker-compose configuration for dev/test
- /docs — additional diagrams, data model, and SOPs
Flow:

1. Ingest (PCAP → parser)
2. Enrich (metadata, geolocation)
3. Store (Postgres/PostGIS)
4. Query & Visualize (API → frontend)
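The flow above can be sketched as composable stages. This is a minimal sketch with stubbed enrichment and storage; all type and function names here are assumptions for illustration.

```typescript
// Minimal sketch of the ingest → enrich → store pipeline.
// Types and function names are illustrative assumptions.
type RawFrame = { mac: string; rssi: number };
type Enriched = RawFrame & { lon: number; lat: number; seenAt: string };

// Ingest: parse capture-derived JSON into frames.
function parseCapture(json: string): RawFrame[] {
  return JSON.parse(json) as RawFrame[];
}

// Enrich: attach location and timestamp metadata.
// A real implementation would join GPS fixes, OUI lookups, etc.
function enrichFrames(frames: RawFrame[], lon: number, lat: number): Enriched[] {
  return frames.map((f) => ({ ...f, lon, lat, seenAt: new Date().toISOString() }));
}

// Store: a real implementation would INSERT into Postgres/PostGIS;
// here we just return the row count.
function storeRows(rows: Enriched[]): number {
  return rows.length;
}

function runPipeline(json: string, lon: number, lat: number): number {
  return storeRows(enrichFrames(parseCapture(json), lon, lat));
}
```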
## Requirements

- Docker & Docker Compose (recommended for development)
- Node.js (18+) and npm / pnpm / yarn (if running locally)
- PostgreSQL 14+ with PostGIS extension (if not using Docker)
- Modern browser for UI (Chrome, Firefox)
- Optional: tools for PCAP parsing (tshark, scapy)
## Quickstart (Docker Compose)

The fastest way to get ShadowCheck running for development/testing is with Docker.
1. Copy the example env:

   ```bash
   cp .env.example .env
   ```

2. Start services:

   ```bash
   docker compose up --build
   ```

3. Wait until Postgres + PostGIS are ready, then run migrations (if applicable):

   ```bash
   # Example (replace with your migration tool)
   docker compose exec backend npm run migrate
   ```

4. Open the frontend at http://localhost:3000 (or the port configured in .env).
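For reference, a minimal docker-compose.yml for this layout might look like the sketch below. Images, service names, ports, and paths are assumptions; adapt them to your repository.

```yaml
# Illustrative sketch only; adjust images, ports, and paths to your repo.
services:
  db:
    image: postgis/postgis:14-3.4
    environment:
      POSTGRES_USER: shadow_user
      POSTGRES_PASSWORD: strong_password
      POSTGRES_DB: shadowcheck
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    build: ./backend
    env_file: .env
    ports:
      - "4000:4000"
    depends_on:
      - db
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
volumes:
  pgdata:
```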
## Manual Local Setup

### Backend

```bash
cd backend
cp .env.example .env
npm install
npm run dev
# or
pnpm install && pnpm dev
```

### Frontend

```bash
cd frontend
cp .env.example .env
npm install
npm run dev
# default Vite URL: http://localhost:5173
```

## Database (PostGIS) Setup

If you manage the database manually, create the database and enable PostGIS.
1. Create the database and user:

   ```sql
   CREATE USER shadow_user WITH PASSWORD 'strong_password';
   CREATE DATABASE shadowcheck OWNER shadow_user;
   ```

2. Connect to the database and enable PostGIS:

   ```sql
   \c shadowcheck
   CREATE EXTENSION IF NOT EXISTS postgis;
   CREATE EXTENSION IF NOT EXISTS postgis_topology;
   ```

3. Apply schema/migrations:
   - If using migrations (TypeORM, Knex, Prisma): run the migration command provided in /backend.
   - If using raw SQL: run the files in /db/migrations/.
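As an example of what a raw-SQL migration in /db/migrations/ could contain, the sketch below creates a spatially indexed observations table. The schema is illustrative, not the project's actual one.

```sql
-- Illustrative migration: a spatially indexed observations table.
CREATE TABLE IF NOT EXISTS observations (
    id          BIGSERIAL PRIMARY KEY,
    device_id   TEXT NOT NULL,
    signal_dbm  INTEGER,
    observed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    geom        geometry(Point, 4326) NOT NULL
);

-- A GiST index makes bounding-box and distance queries fast.
CREATE INDEX IF NOT EXISTS observations_geom_idx
    ON observations USING GIST (geom);

-- Example spatial query: observations within a bounding box.
-- SELECT * FROM observations
--  WHERE geom && ST_MakeEnvelope(-122.5, 37.7, -122.3, 37.8, 4326);
```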
## Configuration

Environment configuration should be stored in .env files, with secrets handled by a secrets manager in production.
Typical variables (backend .env):

```env
PORT=4000
DATABASE_URL=postgres://shadow_user:password@db:5432/shadowcheck
JWT_SECRET=replace_with_strong_secret
NODE_ENV=development
LOG_LEVEL=info
```

Typical variables (frontend .env):

```env
VITE_API_URL=http://localhost:4000/api
```
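A backend can fail fast on missing configuration with a small loader like the sketch below. The variable names follow the examples above; the function itself is an assumption, not part of the project.

```typescript
// Sketch of a fail-fast env loader for the backend variables above.
interface BackendConfig {
  port: number;
  databaseUrl: string;
  jwtSecret: string;
}

function loadConfig(env: Record<string, string | undefined>): BackendConfig {
  // Refuse to start without the variables that have no safe default.
  const missing = ["DATABASE_URL", "JWT_SECRET"].filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return {
    port: Number(env.PORT ?? 4000),
    databaseUrl: env.DATABASE_URL!,
    jwtSecret: env.JWT_SECRET!,
  };
}

// Usage: const config = loadConfig(process.env);
```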
## Usage & Examples

API conventions (examples; confirm against your actual implementation):
- GET /api/health — health-check
- POST /api/ingest — submit parsed capture or metadata for ingestion
- GET /api/observations — query observations with spatial filters
- GET /api/devices/:id — get device/session details and history
- GET /api/map/tiles — geojson or vector tile endpoints for visualizations
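On the server side, a spatial filter such as the bbox parameter on /api/observations could be parsed and validated like this. This is a sketch; the `minLon,minLat,maxLon,maxLat` ordering is an assumption about the API.

```typescript
// Sketch: parse a "minLon,minLat,maxLon,maxLat" bbox query parameter.
interface BBox {
  minLon: number;
  minLat: number;
  maxLon: number;
  maxLat: number;
}

function parseBBox(raw: string): BBox {
  const parts = raw.split(",").map(Number);
  if (parts.length !== 4 || parts.some(Number.isNaN)) {
    throw new Error("bbox must be four comma-separated numbers");
  }
  const [minLon, minLat, maxLon, maxLat] = parts;
  if (minLon >= maxLon || minLat >= maxLat) {
    throw new Error("bbox min values must be less than max values");
  }
  return { minLon, minLat, maxLon, maxLat };
}

// The parsed box could feed a PostGIS filter such as:
//   WHERE geom && ST_MakeEnvelope($1, $2, $3, $4, 4326)
```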
Example: query observations within a bounding box:

```
GET /api/observations?bbox=-122.5,37.7,-122.3,37.8
```

### Importing PCAP-derived JSON
- Convert PCAP to JSON metadata (e.g., using tshark/scapy custom scripts).
- POST the JSON to /api/ingest or drop into an ingestion directory watched by your backend.
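A mapper from capture-derived JSON to an ingest payload might look like the sketch below. The input and output field names are assumptions about your tshark/scapy output and ingest API, not a fixed schema.

```typescript
// Sketch: map a capture-derived JSON record to an ingest payload.
// Field names are assumptions; adjust to your actual tshark/scapy output.
interface CaptureRecord {
  sourceMac: string;
  rssiDbm: number;
  timestamp: string; // ISO-8601
}

interface IngestPayload {
  device: { mac: string };
  observation: { signalDbm: number; observedAt: string };
}

function toIngestPayload(rec: CaptureRecord): IngestPayload {
  return {
    // Normalize MAC addresses so device correlation is case-insensitive.
    device: { mac: rec.sourceMac.toLowerCase() },
    observation: { signalDbm: rec.rssiDbm, observedAt: rec.timestamp },
  };
}

// Usage: POST JSON.stringify(records.map(toIngestPayload)) to /api/ingest.
```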
## Security & Privacy

- This project deals with sensitive signal data. Ensure access controls, encrypted transport (TLS), and strong authentication (JWT/OAuth + MFA) in production.
- Keep personally-identifying information (PII) handling and retention policies compliant with applicable laws.
- Use role-based access control on API endpoints and GIS data layers.
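Role-based access control on API endpoints could take the shape of Express-style middleware, as in the sketch below. This is illustrative only: the request/response shapes are simplified and token verification is stubbed, not a real JWT check.

```typescript
// Sketch of role-based access control as Express-style middleware.
// Shapes are simplified; token verification is stubbed for illustration.
type Req = { headers: Record<string, string | undefined>; user?: { roles: string[] } };
type Res = { statusCode?: number; body?: unknown };

// Stub: a real implementation would verify a signed JWT and extract claims.
function decodeToken(token: string): { roles: string[] } | null {
  return token === "valid-analyst-token" ? { roles: ["analyst"] } : null;
}

function requireRole(role: string) {
  return (req: Req, res: Res, next: () => void): void => {
    const auth = req.headers["authorization"] ?? "";
    const claims = auth.startsWith("Bearer ") ? decodeToken(auth.slice(7)) : null;
    if (!claims || !claims.roles.includes(role)) {
      res.statusCode = 403;
      res.body = { error: "forbidden" };
      return; // deny: do not call the next handler
    }
    req.user = claims;
    next();
  };
}
```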
## Testing

- Backend: unit tests and integration tests (Jest/Mocha)
- Frontend: UI tests and component tests (Vitest / React Testing Library)
Run tests:

```bash
# backend
cd backend
npm test

# frontend
cd frontend
npm test
```

## Deployment

Suggested production steps:
- Build frontend static assets and host behind CDN (or serve from backend).
- Deploy backend as containerized service (Kubernetes, ECS, or plain Docker) behind HTTPS load balancer.
- Use managed Postgres with PostGIS enabled or host in your infrastructure; ensure backups and point-in-time recovery.
- Monitor: metrics (Prometheus), logs (ELK/LogDNA), and alerts for resource and security events.
## Roadmap

Planned improvements (examples; adapt to your priorities):
- Enrichment services for device fingerprinting and signal triangulation
- Vector-tile support for large-scale mapping
- User/role management and audit trails
- Integrations: MISP, Elastic, kyber/tshark-based parsers
- Stream processing for near-real-time ingestion (Kafka)
## Contributing

We welcome contributions. Suggested workflow:
1. Fork the repo.
2. Create a branch: `git checkout -b feat/short-description`
3. Add tests for new features.
4. Open a PR against master describing the changes and rationale.
Please follow the project's coding style and add/adjust documentation where necessary.
## License

No license is currently selected for this repository. Add a LICENSE file (recommended: MIT or Apache-2.0) to make the project's license explicit.
## Acknowledgements

- Built on open-source building blocks: PostgreSQL, PostGIS, Node.js, React, Vite
- Thanks to the maintainers of the libraries and tools used by ShadowCheck
## Contacts

- Repository: https://github.com/cyclonite69/shadowcheck
- Owner: @cyclonite69
> Note: this README is a practical, developer-friendly starting point. Adjust commands and sections to reflect the exact toolchain and folder layout used in your repository.