# cmdai

🚧 Early Development Stage - Architecture defined, core implementation in progress
cmdai converts natural language descriptions into safe POSIX shell commands using local LLMs. Built with Rust for blazing-fast performance, single-binary distribution, and safety-first design.
## Quick Example

```bash
$ cmdai "list all PDF files in Downloads folder larger than 10MB"

Generated command:
  find ~/Downloads -name "*.pdf" -size +10M -ls

Execute this command? (y/N) y
```

## Project Status

This project is in active early development. The architecture and module structure are in place, with implementation ongoing.

Implemented:
- Core CLI structure with comprehensive argument parsing
- Modular architecture with trait-based backends
- Embedded model backend with MLX (Apple Silicon) and CPU variants ✨
- Remote backend support (Ollama, vLLM) with automatic fallback ✨
- Safety validation with pattern matching and risk assessment
- Configuration management with TOML support
- Interactive user confirmation flows
- Multiple output formats (JSON, YAML, Plain)
- Contract-based test structure with TDD methodology
- Multi-platform CI/CD pipeline

In progress / planned:

- Model downloading and caching system
- Advanced command execution engine
- Performance optimization
- Multi-step goal completion
- Advanced context awareness
- Shell script generation
- Command history and learning

## Features

- 🚀 Instant startup - Single binary with <100ms cold start (target)
- 🧠 Local LLM inference - Optimized for Apple Silicon with MLX
- 🛡️ Safety-first - Comprehensive command validation framework
- 📦 Zero dependencies - Self-contained binary distribution
- 🎯 Multiple backends - Extensible backend system (MLX, vLLM, Ollama)
- 💾 Smart caching - Hugging Face model management
- 🌐 Cross-platform - macOS, Linux, Windows support
## Installation

Prerequisites:

- Rust 1.75+ with Cargo
- macOS with Apple Silicon (for MLX backend, optional)
```bash
# Clone the repository
git clone https://github.com/wildcard/cmdai.git
cd cmdai
# Build the project
cargo build --release
# Run the CLI
./target/release/cmdai --version
```

### Development Commands

```bash
# Run tests
make test
# Format code
make fmt
# Run linter
make lint
# Build optimized binary
make build-release
# Run with debug logging
RUST_LOG=debug cargo run -- "your command"
```

## Usage

```
cmdai [OPTIONS] <PROMPT>
```

Examples:

```bash
# Basic command generation
cmdai "list all files in the current directory"
# With specific shell
cmdai --shell zsh "find large files"
# JSON output for scripting
cmdai --output json "show disk usage"
# Adjust safety level
cmdai --safety permissive "clean temporary files"
# Auto-confirm dangerous commands
cmdai --confirm "remove old log files"
# Verbose mode with timing info
cmdai --verbose "search for Python files"| Option | Description | Status |
|---|---|---|
| `-s, --shell <SHELL>` | Target shell (bash, zsh, fish, sh, powershell, cmd) | ✅ Implemented |
| `--safety <LEVEL>` | Safety level (strict, moderate, permissive) | ✅ Implemented |
| `-o, --output <FORMAT>` | Output format (json, yaml, plain) | ✅ Implemented |
| `-y, --confirm` | Auto-confirm dangerous commands | ✅ Implemented |
| `-v, --verbose` | Enable verbose output with timing | ✅ Implemented |
| `-c, --config <FILE>` | Custom configuration file | ✅ Implemented |
| `--show-config` | Display current configuration | ✅ Implemented |
| `--auto` | Execute without confirmation | 📅 Planned |
| `--allow-dangerous` | Allow potentially dangerous commands | 📅 Planned |
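As a rough illustration of how these flags map onto code, here is a hedged sketch using clap's derive API; the struct, field names, and the subset of flags shown are illustrative, not the actual `src/cli/` definitions.

```rust
use clap::Parser;

// Hypothetical mirror of a subset of the options above; the real CLI
// struct in src/cli/ may be named and organized differently.
#[derive(Parser)]
#[command(name = "cmdai", version)]
struct Cli {
    /// Natural-language description of the desired command
    prompt: String,

    /// Target shell (bash, zsh, fish, sh, powershell, cmd)
    #[arg(short, long)]
    shell: Option<String>,

    /// Safety level (strict, moderate, permissive)
    #[arg(long)]
    safety: Option<String>,

    /// Output format (json, yaml, plain)
    #[arg(short, long)]
    output: Option<String>,

    /// Auto-confirm dangerous commands
    #[arg(short = 'y', long)]
    confirm: bool,
}

fn main() {
    let cli = Cli::parse();
    println!("prompt: {}", cli.prompt);
}
```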
### More Examples

```bash
# Simple command generation
cmdai "compress all images in current directory"
# With specific backend
cmdai --backend mlx "find large log files"
# Verbose mode for debugging
cmdai --verbose "show disk usage"cmdai/
├── src/
│ ├── main.rs # CLI entry point
│ ├── backends/ # LLM backend implementations
│ │ ├── mod.rs # Backend trait definition
│ │ ├── mlx.rs # Apple Silicon MLX backend
│ │ ├── vllm.rs # vLLM remote backend
│ │ └── ollama.rs # Ollama local backend
│ ├── safety/ # Command validation
│ │ └── mod.rs # Safety validator
│ ├── cache/ # Model caching
│ ├── config/ # Configuration management
│ ├── cli/ # CLI interface
│ ├── models/ # Data models
│ └── execution/ # Command execution
├── tests/ # Contract-based tests
└── specs/ # Project specifications
```

### Core Components
- **CommandGenerator Trait** - Unified interface for all LLM backends
- **SafetyValidator** - Command validation and risk assessment
- **Backend System** - Extensible architecture for multiple inference engines
- **Cache Manager** - Hugging Face model management (planned)
Every backend implements the `CommandGenerator` trait:

```rust
#[async_trait]
trait CommandGenerator {
    async fn generate_command(
        &self,
        request: &CommandRequest,
    ) -> Result<GeneratedCommand, GeneratorError>;

    async fn is_available(&self) -> bool;

    fn backend_info(&self) -> BackendInfo;
}
```
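As a hedged illustration of how a backend plugs into this trait, here is a minimal stub; `StubBackend` and the constructors and accessors on the request and command types are invented placeholders, not the project's actual API.

```rust
use async_trait::async_trait;

// Hypothetical stub for illustration only; real backends live in
// src/backends/ and run actual model inference.
struct StubBackend;

#[async_trait]
impl CommandGenerator for StubBackend {
    async fn generate_command(
        &self,
        request: &CommandRequest,
    ) -> Result<GeneratedCommand, GeneratorError> {
        // A real backend would prompt an LLM here; this stub just echoes.
        Ok(GeneratedCommand::new(format!("echo {}", request.prompt())))
    }

    async fn is_available(&self) -> bool {
        true // a remote backend would health-check its endpoint instead
    }

    fn backend_info(&self) -> BackendInfo {
        BackendInfo::new("stub")
    }
}
```

The `async_trait` macro keeps the trait object-safe, which is what allows backends to be swapped behind a `Box<dyn CommandGenerator>`.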
## Development

Prerequisites:

- Rust 1.75+
- Cargo
- Make (optional, for convenience commands)
- Docker (optional, for development container)
```bash
# Clone and enter the project
git clone https://github.com/wildcard/cmdai.git
cd cmdai
# Install dependencies and build
cargo build
# Run tests
cargo test
# Check formatting
cargo fmt -- --check
# Run clippy linter
cargo clippy -- -D warnings
```

## Backends

cmdai supports multiple inference backends with automatic fallback (a sketch of the fallback idea follows the list below).

Embedded backend:
- MLX: Optimized for Apple Silicon Macs (M1/M2/M3)
- CPU: Cross-platform fallback using Candle framework
- Model: Qwen2.5-Coder-1.5B-Instruct (quantized)
- No external dependencies required
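A hedged sketch of what the automatic fallback could look like in terms of the `CommandGenerator` trait above; the function name and selection logic are assumptions, not the shipped implementation.

```rust
// Hypothetical fallback chain: try each configured backend in priority
// order and use the first one that reports itself available.
async fn select_backend(
    candidates: Vec<Box<dyn CommandGenerator>>,
) -> Option<Box<dyn CommandGenerator>> {
    for backend in candidates {
        if backend.is_available().await {
            return Some(backend);
        }
    }
    None // nothing reachable; the CLI would surface an error here
}
```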
Configure in `~/.config/cmdai/config.toml`:

```toml
[backend]
primary = "embedded" # or "ollama", "vllm"
enable_fallback = true
[backend.ollama]
base_url = "http://localhost:11434"
model_name = "codellama:7b"
[backend.vllm]
base_url = "http://localhost:8000"
model_name = "codellama/CodeLlama-7b-hf"
api_key = "optional-api-key"
```

## Configuration

The project uses several configuration files:
- `Cargo.toml` - Rust dependencies and build configuration
- `~/.config/cmdai/config.toml` - User configuration
- `clippy.toml` - Linter rules
- `rustfmt.toml` - Code formatting rules
- `deny.toml` - Dependency audit configuration
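As a hedged sketch of how the user configuration above might be deserialized (assuming the `serde` and `toml` crates; the struct and field names simply mirror the example TOML and may not match the real `src/config/` types):

```rust
use serde::Deserialize;

// Hypothetical structs mirroring the [backend] table shown earlier.
#[derive(Deserialize)]
struct BackendSection {
    primary: String,
    enable_fallback: bool,
}

#[derive(Deserialize)]
struct Config {
    backend: BackendSection,
}

fn load_config(path: &std::path::Path) -> Result<Config, Box<dyn std::error::Error>> {
    let text = std::fs::read_to_string(path)?;
    Ok(toml::from_str(&text)?)
}
```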
## Testing

The project uses contract-based testing:
- Unit tests for individual components
- Integration tests for backend implementations
- Contract tests to ensure trait compliance (see the sketch after this list)
- Property-based testing for safety validation
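As a hedged sketch of what a contract test might assert (reusing the hypothetical `StubBackend` from the backend example above, and assuming a `tokio` async test runner):

```rust
// Hypothetical contract test: any CommandGenerator implementation must
// report availability without panicking and return a non-empty command.
#[tokio::test]
async fn backend_satisfies_generation_contract() {
    let backend = StubBackend;
    assert!(backend.is_available().await);

    let request = CommandRequest::new("list files in the current directory");
    let generated = backend.generate_command(&request).await.unwrap();
    assert!(!generated.command().is_empty());
}
```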
## Safety

cmdai includes comprehensive safety validation to prevent dangerous operations (a sketch of the pattern-matching idea follows the lists below):
- ✅ System destruction patterns (`rm -rf /`, `rm -rf ~`)
- ✅ Fork bomb detection (`:(){:|:&};:`)
- ✅ Disk operations (`mkfs`, `dd if=/dev/zero`)
- ✅ Privilege escalation detection (`sudo su`, `chmod 777 /`)
- ✅ Critical path protection (`/bin`, `/usr`, `/etc`)
- ✅ Command validation and sanitization
Risk levels:

- Safe (Green) - Normal operations, no confirmation needed
- Moderate (Yellow) - Requires user confirmation in strict mode
- High (Orange) - Requires confirmation in moderate mode
- Critical (Red) - Blocked in strict mode, requires explicit confirmation
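To make the idea concrete, here is a minimal sketch of pattern-based risk assessment built from the patterns and levels listed above; it is an illustration, not the actual `src/safety/` validator, which also handles path protection and sanitization.

```rust
// Hedged sketch of pattern-based risk assessment.
#[derive(Debug, PartialEq)]
#[allow(dead_code)] // Moderate is unused in this tiny sketch
enum RiskLevel {
    Safe,
    Moderate,
    High,
    Critical,
}

fn assess_risk(command: &str) -> RiskLevel {
    // Critical: system destruction and fork bombs.
    const CRITICAL: &[&str] = &["rm -rf /", "rm -rf ~", ":(){:|:&};:"];
    // High: disk operations and privilege escalation.
    const HIGH: &[&str] = &["mkfs", "dd if=/dev/zero", "sudo su", "chmod 777 /"];

    if CRITICAL.iter().any(|p| command.contains(p)) {
        RiskLevel::Critical
    } else if HIGH.iter().any(|p| command.contains(p)) {
        RiskLevel::High
    } else {
        RiskLevel::Safe
    }
}

fn main() {
    assert_eq!(assess_risk("ls -la"), RiskLevel::Safe);
    assert_eq!(assess_risk("sudo su -"), RiskLevel::High);
    assert_eq!(assess_risk("rm -rf /"), RiskLevel::Critical);
}
```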
Configure safety levels in `~/.config/cmdai/config.toml`:

```toml
[safety]
enabled = true
level = "moderate" # strict, moderate, or permissive
require_confirmation = true
custom_patterns = ["additional", "dangerous", "patterns"]
```

## Contributing

We welcome contributions! This is an early-stage project with many opportunities to contribute.
- 🔌 Backend implementations
- 🛡️ Safety pattern definitions
- 🧪 Test coverage expansion
- 📚 Documentation improvements
- 🐛 Bug fixes and optimizations
To contribute:

1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Ensure all tests pass
5. Submit a pull request
Guidelines:

- Follow Rust best practices
- Add tests for new functionality
- Update documentation as needed
- Use conventional commit messages
- Run `make check` before submitting
## License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0); see the LICENSE file for details.
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use
- ⚠️ Network use requires source disclosure
- ⚠️ Derivative works must carry the same license
- ⚠️ Changes must be stated
## Acknowledgments

- MLX - Apple's machine learning framework
- vLLM - High-performance LLM serving
- Ollama - Local LLM runtime
- Hugging Face - Model hosting and caching
- clap - Command-line argument parsing
## Support

- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Discussions
- 📖 Documentation: See the `specs/` directory for detailed specifications
## Roadmap

Core foundation:

- CLI argument parsing
- Module architecture
- Backend trait system
- Basic command generation
Safety validation:

- Dangerous pattern detection
- POSIX compliance checking
- User confirmation workflows
- Risk assessment system
Remote backends:

- vLLM HTTP API support
- Ollama local backend
- Response parsing
- Error handling
MLX backend:

- FFI bindings with cxx (a hedged sketch follows this list)
- Metal Performance Shaders
- Unified memory handling
- Apple Silicon optimization
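For the planned FFI work, a minimal sketch of what a cxx bridge to an MLX shim could look like; the header path and function name are hypothetical, not the project's actual FFI surface.

```rust
// Hypothetical cxx bridge to a C++ MLX shim; names are illustrative only.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("cmdai/mlx_shim.h");

        /// Run one generation pass on the embedded model.
        fn mlx_generate(prompt: &str) -> String;
    }
}
```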
Release readiness:

- Comprehensive testing
- Performance optimization
- Binary distribution
- Package manager support
Built with Rust | Safety First | Open Source
Note: This is an active development project. Features and APIs are subject to change. See the `specs/` directory for detailed design documentation.