Module ID: tool-workflow
Declarative workflow orchestration for Amplifier with agent delegation and state persistence.
This module provides workflow execution capabilities that integrate seamlessly with Amplifier's coordinator to enable complex, multi-step AI workflows with:
- Agent Delegation: Leverage Amplifier's agents for specialized tasks
- Conditional Routing: Dynamic workflow paths based on outputs
- State Persistence: Checkpoint and resume long-running workflows
- Context Management: Share data between workflow nodes
- Multiple Node Types: AI, agent, bash, python, and terminal nodes
Define workflows in YAML with clear, readable syntax:
workflow:
  name: "code-review"
  description: "Automated code review workflow"
  nodes:
    - id: "analyze"
      name: "Analyze Code"
      agent: "zen-architect"
      prompt: "Analyze the code in {file_path}"
      outputs:
        - analysis_report
      next: "find-bugs"
    - id: "find-bugs"
      name: "Hunt for Bugs"
      agent: "bug-hunter"
      prompt: "Review code and analysis: {analysis_report}"
      outputs:
        - bug_report
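Before running a workflow, it can help to sanity-check the YAML by hand. A minimal sketch using pyyaml (already a dependency of this module); the structural checks below are illustrative and are not the module's own validation:

import yaml

with open("code-review.yaml") as f:
    spec = yaml.safe_load(f)["workflow"]

# Basic structural checks (illustrative only)
node_ids = {node["id"] for node in spec["nodes"]}
for node in spec["nodes"]:
    nxt = node.get("next")
    if isinstance(nxt, str) and nxt not in node_ids:
        print(f"Node {node['id']!r} points to unknown node {nxt!r}")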
Seamlessly delegate to Amplifier agents:

- zen-architect: Architecture and design analysis
- bug-hunter: Systematic bug detection
- modular-builder: Implementation tasks
- Custom agents as needed

State persistence features:
- Automatic checkpointing after each node
- Resume interrupted workflows
- Query workflow status
- List all sessions
This module is designed to be used with Amplifier. Install via:
# As part of Amplifier workspace
cd .amplifier/modules
ln -s /path/to/amplifier-module-tool-workflow tool-workflow
# Or via git
git submodule add <repo-url> .amplifier/modules/tool-workflow

When integrated with Amplifier, the module provides these tools:
Execute a workflow from a YAML file:
result = await coordinator.invoke_tool(
    "workflow_run",
    workflow_file="path/to/workflow.yaml",
    context={"file_path": "src/main.py"},
    save_checkpoints=True
)
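The exact shape of the returned result is defined by the module; as a rough sketch, assuming it exposes the final context and a session identifier (the status, session_id, and context keys below are illustrative, not confirmed field names):

# Hypothetical result fields -- check the module's actual return schema
if result.get("status") == "completed":
    print(result["context"].get("bug_report"))
else:
    print("Interrupted; resume later with:", result.get("session_id"))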
Resume an interrupted workflow:

result = await coordinator.invoke_tool(
    "workflow_resume",
    session_id="workflow_20250119_143022_a3f2"
)
Check workflow status:

status = await coordinator.invoke_tool(
    "workflow_status",
    session_id="workflow_20250119_143022_a3f2"
)
List all workflow sessions:

sessions = await coordinator.invoke_tool(
    "workflow_list",
    show_completed=False
)

The module also provides a standalone CLI for testing and development:
# Run a workflow
dotrunner run workflow.yaml
# Run with context override
dotrunner run workflow.yaml --context '{"file": "main.py"}'
# List sessions
dotrunner list
# Check status
dotrunner status workflow_20250119_143022_a3f2
# Resume workflow
dotrunner resume workflow_20250119_143022_a3f2

Workflow files follow this structure:

workflow:
name: "workflow-name"
description: "What this workflow does"
version: "1.0.0"
context:
# Global context variables
key: value
nodes:
- id: "unique-id"
name: "Human-readable name"
prompt: "Task with {context_vars}"
agent: "agent-name" # Optional
agent_mode: "ANALYZE" # Optional
outputs: ["var1", "var2"]
next: "next-node-id"
      retry_on_failure: 1

The following node types are supported:

- AI Nodes (default): Text-only LLM interaction

    - id: "analyze"
      prompt: "Analyze this code: {code}"
      outputs: ["analysis"]

- Agent Nodes: Full agent with tools

    - id: "implement"
      type: "agent"
      agent: "modular-builder"
      agent_mode: "EXECUTE"
      prompt: "Implement feature X"

- Bash Nodes: Execute shell commands

    - id: "run-tests"
      type: "bash"
      command: "pytest tests/"
      outputs: ["test_results"]

- Python Nodes: Execute Python scripts

    - id: "process"
      type: "python"
      script: |
        result = context['data'] * 2
        processed = f"Doubled: {result}"
      outputs: ["processed"]

- Terminal Nodes: Mark end points

    - id: "done"
      type: "terminal"
Route based on node outputs:

- id: "check-size"
  outputs: ["size"]
  next:
    "large": "deep-review"
    "small": "quick-review"
    default: "standard-review"
Access variables in prompts:

- Global context: Defined in workflow.context
- Node outputs: Automatically added to context
- Syntax: {variable_name} in prompts
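Interpolation substitutes context values into {variable_name} placeholders before a prompt is executed. A rough sketch of the idea, assuming simple string substitution (context.py may handle this differently):

def interpolate(prompt: str, context: dict) -> str:
    """Replace {variable_name} placeholders with values from the context (illustrative)."""
    for key, value in context.items():
        prompt = prompt.replace("{" + key + "}", str(value))
    return prompt

interpolate("Analyze the code in {file_path}", {"file_path": "src/main.py"})
# -> "Analyze the code in src/main.py"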
See the examples/ directory for complete workflow examples:
- simple_linear.yaml: Sequential code review workflow
- conditional_flow.yaml: PR review with dynamic routing
- example_agent_node.yaml: Agent node demonstration
- multi_type_workflow.yaml: All node types showcase
Module type: Tool Module (amplifier-module-tool-workflow)
def mount(coordinator, config=None) -> WorkflowTool:
    """Mount the workflow tool module"""

Integration points:

- Coordinator: For agent spawning via coordinator.spawn_agent()
- State Persistence: Local .dotrunner/sessions/ directory
- Workflow Files: YAML definitions loaded from filesystem
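As a rough usage sketch, assuming the coordinator calls mount when loading the module (Amplifier normally does this wiring for you once the module sits under .amplifier/modules; the import path below is hypothetical):

from tool_workflow import mount  # hypothetical import path

tool = mount(coordinator)
# After mounting, the workflow_* tools are available through the coordinator
result = await coordinator.invoke_tool("workflow_list", show_completed=True)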
Internally, the module is organized as:

- workflow.py: Workflow and Node data models
- engine.py: Orchestration logic
- executor.py: Node execution (coordinator integration)
- state.py: State tracking data structures
- persistence.py: Checkpoint save/load
- context.py: Variable interpolation
- evaluator.py: Conditional expression evaluation
For local development and testing:

# Run tests
pytest
# With coverage
pytest --cov

# Install in editable mode
pip install -e .
# Run CLI
dotrunner --help

This module follows Amplifier's implementation philosophy:
- Ruthless Simplicity: Minimal abstractions, clear data flow
- Modular Design: Self-contained with clear boundaries
- Coordinator Integration: Uses coordinator for agent delegation (not subprocess)
- State Management: Explicit checkpoint/resume capability
- Pure Functions: Context interpolation, validation are pure
- amplifier-core: Core Amplifier functionality
- pyyaml>=6.0: YAML parsing
- click>=8.0: CLI framework
- rich>=13.0: Terminal output formatting
MIT License - See LICENSE file for details
This module is part of the Amplifier ecosystem. Follow Amplifier's contribution guidelines.
For issues and questions:
- Check the examples/ directory for usage patterns
- Review workflow validation errors for structure issues
- Use dotrunner status to inspect workflow state
- Enable debug logging for detailed execution traces
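One generic way to turn on detailed traces, assuming the module uses Python's standard logging library (the logger name below is hypothetical):

import logging

# Enable debug-level output globally, or narrow it to the module's own logger
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("dotrunner").setLevel(logging.DEBUG)  # hypothetical logger name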