
Commit a7ccd60

update q cli doc (#115)
Signed-off-by: Manabu McCloskey <[email protected]>
1 parent 1e7a4e6 commit a7ccd60

1 file changed: examples/integrations/amazon-q-cli/README.md (84 additions & 32 deletions)

Connect Amazon Q CLI to Spark History Server for command-line Spark analysis.

## Prerequisites

1. **Install uv** (if not already installed):

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# or see https://docs.astral.sh/uv/getting-started/installation/
```

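As a quick sanity check (not part of the original steps), you can confirm the tools are on your `PATH`:

```bash
uv --version   # prints the installed uv version
which uvx      # uvx ships with uv and is the command Amazon Q will invoke
```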

2. **Start Spark History Server** (optional - for testing with sample data):

```bash
# Clone repository for sample data
git clone https://github.com/kubeflow/mcp-apache-spark-history-server.git
cd mcp-apache-spark-history-server

# Install Task and start sample server
brew install go-task   # macOS, see https://taskfile.dev/installation/ for others
task start-spark-bg    # Starts server at http://localhost:18080 with 3 sample applications

# Verify setup
curl http://localhost:18080/api/v1/applications
# Should return 3 applications
```

## Setup

1. **Add MCP server** (using default configuration):

```bash
q mcp add \
  --name spark-history-server \
  --command uvx \
  --args "--from,mcp-apache-spark-history-server,spark-mcp" \
  --env SHS_MCP_TRANSPORT=stdio \
  --scope global
```

2. **Add MCP server with custom configuration**:

```bash
# Using command line config argument
q mcp add \
  --name spark-history-server \
  --command uvx \
  --args "--from,mcp-apache-spark-history-server,spark-mcp,--config,/path/to/config.yaml" \
  --env SHS_MCP_TRANSPORT=stdio \
  --scope global

# Using environment variable
q mcp add \
  --name spark-history-server \
  --command uvx \
  --args "--from,mcp-apache-spark-history-server,spark-mcp" \
  --env SHS_MCP_CONFIG=/path/to/config.yaml \
  --env SHS_MCP_TRANSPORT=stdio \
  --scope global
```

Results should look something like this:

```bash
cat ~/.aws/amazonq/mcp.json
```
```json
{
  "mcpServers": {
    "spark-history-server": {
      "command": "uvx",
      "args": [
        "--from",
        "mcp-apache-spark-history-server",
        "spark-mcp"
      ],
      "timeout": 120000,
      "disabled": false
    }
  }
}
```

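As an optional check, the same `q mcp list` command noted in the management commands below should now show the server:

```bash
q mcp list
# spark-history-server should appear among the configured MCP servers
```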

## Usage

Start an interactive session:
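
A minimal sketch of typical usage, assuming the Amazon Q CLI `q chat` subcommand; the sample application ID is the one referenced elsewhere in this README:

```bash
# Interactive session with the Spark History Server tools available
q chat

# Or pipe a one-off question (assumes q chat accepts a prompt on stdin)
echo "What are the bottlenecks in spark-cc4d115f011443d787f03a71a476a745?" | q chat
```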

- List servers: `q mcp list`
- Remove: `q mcp remove --name spark-history-server`

## Configuration

The MCP server supports flexible configuration file paths:

### Configuration Priority
1. **Command line argument** (highest priority): `--config /path/to/config.yaml`
2. **Environment variable**: `SHS_MCP_CONFIG=/path/to/config.yaml`
3. **Default**: Uses `config.yaml` in the current directory
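
For illustration, a sketch of how each option is passed when launching the server directly with `uvx`; the `q mcp add` registrations above pass the same values through `--args` and `--env`:

```bash
# 1. Command line argument (highest priority)
uvx --from mcp-apache-spark-history-server spark-mcp --config /path/to/config.yaml

# 2. Environment variable
SHS_MCP_CONFIG=/path/to/config.yaml uvx --from mcp-apache-spark-history-server spark-mcp

# 3. Default: config.yaml in the current working directory
uvx --from mcp-apache-spark-history-server spark-mcp
```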

### Configuration File Format
Create a `config.yaml` file for your Spark History Server:

```yaml
servers:
  production:
    default: true
    url: "https://spark-history-prod.company.com:18080"
    auth:  # optional
      username: "user"
      password: "pass"
  staging:
    url: "https://spark-history-staging.company.com:18080"
```

### Remote Spark History Server Examples

**Using command line config:**
```bash
q mcp add \
  --name spark-history-server \
  --command uvx \
  --args "--from,mcp-apache-spark-history-server,spark-mcp,--config,/path/to/prod-config.yaml" \
  --scope global
```

**Using environment variable:**
```bash
q mcp add \
  --name spark-history-server \
  --command uvx \
  --args "--from,mcp-apache-spark-history-server,spark-mcp" \
  --env SHS_MCP_CONFIG=/path/to/staging-config.yaml \
  --scope global
```

**Note**: Amazon Q CLI requires local MCP server execution. For remote MCP servers, consider:
