Commit eb79c6f (parent: ddebdc2)

feat(cli): load provider-specific credentials from YAML profiles

- add llm_providers.yaml support with per-provider API keys and defaults
- expose --providers-config and merge profiles into the existing flag/env precedence
- document the workflow and supply an example profile file

8 files changed: +309 −10 lines

.gitignore

Lines changed: 2 additions & 1 deletion

```diff
@@ -6,7 +6,8 @@ Cargo.lock
 coverage/
 *.profraw
 *.profdata
-/.DS_Store
+.DS_Store
 /.env
+llm_providers.yaml
 .env.local
 /dist/
```

Cargo.toml

Lines changed: 1 addition & 0 deletions

```diff
@@ -15,6 +15,7 @@ tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 serde = { version = "1", features = ["derive"] }
 serde_json = "1"
+serde_yaml = "0.9"
 aho-corasick = "1"
 regex = "1"
 clap = { version = "4", features = ["derive"] }
```

PLAN.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -118,7 +118,7 @@ This living document tracks implementation progress for the LLM-Guard project, d
 
 - [~] Replace existing LLM adapter wiring with rig.rs (OpenAI, Anthropic, Gemini, Azure now route through the rig adapter; noop remains standalone)
 - [~] Map current provider implementations (Anthropic, Gemini, Azure, noop) into rig.rs abstractions (noop client still separate)
-- [~] Ensure configuration precedence (config → env → flags) is preserved via rig.rs (CLI now exposes deployment/project/workspace overrides)
+- [~] Ensure configuration precedence (config → env → flags) is preserved via rig.rs (CLI now exposes deployment/project/workspace overrides and provider profiles)
 - [ ] Update CLI tests and documentation to reflect the new runtime
 
 ---
```

README.md

Lines changed: 28 additions & 8 deletions

````diff
@@ -56,7 +56,7 @@ A fast, explainable **Rust** CLI that scans prompts and logs for **prompt-inject
 
 ## Hackathon Context
 
-This project was developed during the **[AI Coding Accelerator](https://maven.com/nila/ai-coding-accelerator)** hackathon (Maven) as an experiment in **AI-assisted software development**.
+This project was developed during the **[AI Coding Accelerator](https://maven.com/nila/ai-coding-accelerator)** hackathon (Maven) as an experiment in **AI-assisted software development**. The entire project was built in a **single day (~7 hours)** using AI coding assistants.
 
 **Instructors:** [Vignesh Mohankumar](https://x.com/vig_xyz) and [Jason Liu](https://x.com/jxnlco)
 
@@ -208,6 +208,26 @@ export LLM_GUARD_API_VERSION=2024-02-15-preview
 
 **CLI overrides:** Use `--provider`, `--model`, `--endpoint`, `--deployment`, `--project`, and `--workspace` to override these values for a single run without touching environment variables.
 
+**Provider profiles (`llm_providers.yaml`):**
+
+The CLI also looks for an optional `llm_providers.yaml` (override with `--providers-config`). This file lets you store credentials and defaults per provider so you can keep multiple API keys side by side. Example:
+
+```yaml
+providers:
+  - name: "openai"
+    api_key: "OPENAI_API_KEY"
+    model: "gpt-4o-mini"
+  - name: "azure"
+    api_key: "AZURE_OPENAI_KEY"
+    endpoint: "https://your-resource.openai.azure.com"
+    deployment: "gpt-4o-production"
+    api_version: "2024-02-15-preview"
+    timeout_secs: 60
+    max_retries: 3
+```
+
+Credentials are merged with environment variables and CLI flags using the usual precedence (flags → env → provider profile). To get started quickly, copy `llm_providers.example.yaml` to `llm_providers.yaml` and replace the placeholder values.
+
 **Loading from `.env` file:**
 
 ```bash
@@ -368,7 +388,7 @@ This project demonstrates a **PRD-driven, multi-agent AI coding workflow** optim
 - **Multi-Agent Specialization:** GPT-5 Codex for implementation + Claude Code for reviews = better outcomes than single agent
 - **Separated Tool Contexts:** Cursor (review) + separate terminals (coding) + Tower (git) created clear mental boundaries
 - **MCP Context Servers:** RepoPrompt provided excellent repository-wide context for Codex CLI
-- **Rust Learning Accelerator:** AI assistants dramatically shortened learning curve for Rust newcomer (zero to functional CLI in days)
+- **Rust Learning Accelerator:** AI assistants dramatically shortened learning curve for Rust newcomer (zero to functional CLI in ~7 hours)
 - **Living Documentation:** [`AGENTS.md`](./AGENTS.md) successfully onboarded AI agents with consistent conventions across sessions
 - **Perplexity for Research:** Quick ramp-up on Rust best practices through targeted research queries
 
@@ -497,12 +517,12 @@ MIT License — see [`LICENSE`](./LICENSE) file for details.
 
 **AI Coding Accelerator Hackathon**
 - **Course:** [Maven's AI Coding Accelerator](https://maven.com/nila/ai-coding-accelerator)
-- **Instructors:** Vignesh Mohankumar and Jason Liu
+- **Instructors:** [Vignesh Mohankumar](https://x.com/vig_xyz) and [Jason Liu](https://x.com/jxnlco)
 - **Focus:** Practical applications of AI coding tools in modern software development
 
 **Tools & Technologies**
-- **AI Agents:** GPT-5 Codex (via Codex CLI), Claude Code (Anthropic)
-- **MCP Servers:** RepoPrompt, Context7
-- **IDE:** Cursor
-- **Research:** Perplexity
-- **Git Client:** Tower
+- **AI Agents:** [Codex CLI (OpenAI)](https://github.com/openai/codex-cli), [Claude Code (Anthropic)](https://claude.ai)
+- **MCP Servers:** [RepoPrompt](https://repoprompt.com/), [Context7](https://context7.com/)
+- **IDE:** [Cursor](https://cursor.sh)
+- **Research:** [Perplexity](https://www.perplexity.ai/)
+- **Git Client:** [Tower](https://www.git-tower.com/mac)
````
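The flags → env → provider-profile precedence that this commit documents can be sketched as a simple option chain. A minimal sketch — the function name, parameter names, and merge shape are illustrative assumptions, not the actual llm-guard internals:

```rust
/// Resolve one setting (e.g. the model name) using the documented
/// precedence: CLI flag first, then environment variable, then the
/// value from the provider profile in llm_providers.yaml.
/// NOTE: hypothetical helper for illustration only.
fn resolve_setting(
    flag: Option<&str>,
    env_var: Option<&str>,
    profile: Option<&str>,
) -> Option<String> {
    // `Option::or` keeps the first `Some`, which encodes the precedence.
    flag.or(env_var).or(profile).map(str::to_owned)
}

fn main() {
    // A CLI flag wins over both the env var and the profile.
    assert_eq!(
        resolve_setting(Some("gpt-4o"), Some("gpt-4o-mini"), Some("gpt-3.5-turbo")),
        Some("gpt-4o".to_string())
    );
    // Without a flag, the environment variable wins over the profile.
    assert_eq!(
        resolve_setting(None, Some("gpt-4o-mini"), Some("gpt-3.5-turbo")),
        Some("gpt-4o-mini".to_string())
    );
    // The provider profile is the fallback of last resort.
    assert_eq!(
        resolve_setting(None, None, Some("gpt-3.5-turbo")),
        Some("gpt-3.5-turbo".to_string())
    );
    println!("precedence ok");
}
```

Each field in a profile entry (api_key, endpoint, deployment, …) would be resolved independently this way, so a flag can override just one setting while the rest still come from the YAML profile.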

crates/llm-guard-cli/Cargo.toml

Lines changed: 3 additions & 0 deletions

```diff
@@ -11,6 +11,8 @@ clap.workspace = true
 tracing.workspace = true
 tracing-subscriber.workspace = true
 serde_json.workspace = true
+serde.workspace = true
+serde_yaml.workspace = true
 tokio.workspace = true
 llm-guard-core = { path = "../llm-guard-core" }
 config.workspace = true
@@ -19,3 +21,4 @@ config.workspace = true
 assert_cmd = "2"
 predicates = "3"
 tempfile = "3"
+once_cell.workspace = true
```
