
Commit e509d70

test/docs: add rig fallback coverage, fuzz env parsing, expand usage guide
1 parent 8fb4db6 · commit e509d70

File tree

4 files changed · +31 −6 lines changed

PLAN.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -177,8 +177,8 @@ This living document tracks implementation progress for the LLM-Guard project, d
 
 ### Next Steps (Quick Reference)
 
-1. ⚒️ Advance Phase 7 with remaining fuzzing (streaming ingestion edge cases, LLM adapter mocks).
-2. 📘 Expand Phase 8 documentation (usage guide landing in `docs/USAGE.md` can seed the README refresh).
+1. ⚒️ Advance Phase 7 with remaining fuzzing (tail CLI edge cases, LLM mock fallbacks, health command coverage).
+2. 📘 Expand Phase 8 documentation (embed rig screenshots, add FAQ/troubleshooting, refresh README examples).
 3. 🧪 Finalise Phase 9 with refreshed CLI tests demonstrating the rig-backed providers.
 
 Keep this list in sync with the checkboxes above as you iterate.
````

README.md

Lines changed: 3 additions & 4 deletions

````diff
@@ -208,11 +208,10 @@ export LLM_GUARD_API_VERSION=2024-02-15-preview
   --provider noop
 
 # Debug mode (dump raw provider verdict payloads on parse errors)
-./target/release/llm-guard scan --file samples/chat.txt --with-llm \
-  --provider anthropic --debug
+./target/release/llm-guard --debug scan --file samples/chat.txt --with-llm
 
-# Health check against configured providers (uses llm_providers.yaml if present)
-./target/release/llm-guard health --providers-config llm_providers.yaml
+# Rig-backed health diagnostics (respects llm_providers.yaml overrides)
+./target/release/llm-guard --debug health --provider openai
 ```
 
 **Streaming Mode:**
````

crates/llm-guard-core/src/llm/rig_adapter.rs

Lines changed: 16 additions & 0 deletions

````diff
@@ -615,4 +615,20 @@ mod tests {
         assert!(verdict.rationale.contains("anthropic"));
         assert!(!verdict.mitigation.is_empty());
     }
+
+    #[test]
+    fn parse_verdict_returns_fallback_for_invalid_json() {
+        let verdict =
+            parse_verdict_json("not-json", "openai", "gpt-test").expect("should fallback");
+        assert_eq!(verdict.label, "unknown");
+        assert!(verdict.rationale.contains("openai"));
+    }
+
+    #[test]
+    fn truncate_adds_ellipsis_when_exceeding_limit() {
+        let long = "abcdefghijklmnopqrstuvwxyz";
+        let truncated = truncate(long, 10);
+        assert!(truncated.ends_with('…'));
+        assert_eq!(truncated.chars().count(), 11);
+    }
 }
````
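The new `truncate_adds_ellipsis_when_exceeding_limit` test pins down a contract: counting is per `char` (not bytes), and inputs over the limit gain an ellipsis. A minimal sketch of a helper satisfying those assertions, assuming a simple take-and-append implementation (the actual `truncate` in `rig_adapter.rs` may differ):

```rust
/// Truncate `s` to at most `limit` characters, appending '…' when the
/// input exceeds the limit. Hypothetical sketch; counts chars, not bytes.
fn truncate(s: &str, limit: usize) -> String {
    if s.chars().count() <= limit {
        s.to_string()
    } else {
        let mut out: String = s.chars().take(limit).collect();
        out.push('…');
        out
    }
}

fn main() {
    let truncated = truncate("abcdefghijklmnopqrstuvwxyz", 10);
    assert!(truncated.ends_with('…'));
    // 10 kept chars plus the ellipsis itself.
    assert_eq!(truncated.chars().count(), 11);
    // Short inputs pass through unchanged.
    assert_eq!(truncate("short", 10), "short");
    println!("ok");
}
```

Counting with `chars()` rather than `len()` matters here: the ellipsis is a multi-byte UTF-8 character, so a byte-length check would over-report.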

docs/USAGE.md

Lines changed: 10 additions & 0 deletions

````diff
@@ -472,6 +472,16 @@ llm-guard scan --file /var/log/chatbot.log --tail --with-llm --json | \
 
 ---
 
+## Rig Provider Troubleshooting
+
+- **`API key must be provided`** — Ensure `LLM_GUARD_API_KEY` (or provider profile entry) is set and non-empty. The noop provider is the only exception.
+- **`requires endpoint` (Azure)** — Supply `--endpoint` or `LLM_GUARD_ENDPOINT` pointing to your Azure resource (e.g., `https://example.openai.azure.com`).
+- **`requires deployment`** — Provide `--deployment`/`LLM_GUARD_DEPLOYMENT` or reuse `--model` as the deployment name.
+- **Empty or non-JSON verdicts** — Enable `--debug` to log raw provider responses; the rig adapter will fall back to an "unknown" verdict rather than panic.
+- **HTTP 401/403** — Regenerate API keys or confirm tenant/project values (Anthropic/Gemini frequently require project/workspace settings).
+
+---
+
 ## Related Documentation
 
 - **[README.md](../README.md)** — Project overview, features, and AI workflow insights
````
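The "Empty or non-JSON verdicts" bullet describes graceful degradation: an unparseable provider response becomes an "unknown" verdict naming the provider, which is exactly what the new `parse_verdict_returns_fallback_for_invalid_json` test asserts. A hypothetical sketch of that fallback shape (the `Verdict` struct and `fallback_verdict` name are illustrative, not the crate's API):

```rust
/// Illustrative verdict shape; the real struct in llm-guard-core
/// carries more fields (e.g. mitigation).
#[derive(Debug)]
struct Verdict {
    label: String,
    rationale: String,
}

/// Hypothetical fallback: when a provider response cannot be parsed as
/// verdict JSON, return an "unknown" verdict that names the provider
/// and model instead of panicking.
fn fallback_verdict(provider: &str, model: &str) -> Verdict {
    Verdict {
        label: "unknown".to_string(),
        rationale: format!("{provider}/{model}: response was not valid verdict JSON"),
    }
}

fn main() {
    let v = fallback_verdict("openai", "gpt-test");
    assert_eq!(v.label, "unknown");
    assert!(v.rationale.contains("openai"));
    println!("ok");
}
```

Keeping the provider name in the rationale means a scan that hits a flaky endpoint still produces an actionable log line under `--debug` rather than aborting the run.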

0 commit comments