
Commit 1bf3fd4

docs: add readme
1 parent 484f600 commit 1bf3fd4

File tree

5 files changed: +209 -0 lines changed

README.md
Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
# Model Context Protocol (MCP) - Proof of Concept

This directory contains a **proof of concept (MVP)** demonstrating how the Model Context Protocol (MCP) works. This is **NOT** the full OpenEdx AI Extensions integration yet, but rather a working example to understand MCP concepts and test the infrastructure.

## Overview

This MVP demonstrates the MCP architecture and workflow using a simple dice-rolling example. The implementation consists of three main components:

1. **`server.py`** - A FastMCP server with an example tool (`roll_dice`)
2. **`run_server.py`** - Script to run the MCP server in HTTP mode
3. **`client_example.py`** - Example client showing how to interact with the MCP server

**Note:** The actual OpenEdx-specific tools and integration will be implemented in future iterations. This MVP focuses on validating the MCP infrastructure and communication patterns.

## Architecture

The current implementation uses:

- **FastMCP** - A framework for building MCP servers
- **Streamable HTTP** transport - Allows the server to be exposed via HTTP
- **LiteLLM** - For integrating the MCP server with language models

### Example Tool: `roll_dice`

The server currently implements a simple example tool that rolls dice. **This is a demonstration tool only**, included to show how MCP tools work. The same pattern will later be extended to implement OpenEdx-specific operations such as course management, user administration, and content creation.

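For reference, this is the tool definition as it appears in `server.py` in this commit:

```python
import random

from mcp.server.fastmcp import FastMCP

# Create the FastMCP server; it listens on port 9001 in HTTP mode.
mcp = FastMCP("dice_server", port=9001)


@mcp.tool()
def roll_dice(n_dice: int) -> list[int]:
    """Roll `n_dice` 6-sided dice and return the results."""
    return [random.randint(1, 6) for _ in range(n_dice)]
```
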
## Setup

### Prerequisites

Install the required dependencies:

```bash
pip install fastmcp litellm openai
```

### Environment Variables

Set your OpenAI API key:

```bash
export OPENAI_API_KEY="your_openai_api_key_here"
```

Or update it directly in `client_example.py`.

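If you take the in-code route, a minimal sketch like the following keeps an already-exported key from being overwritten (the placeholder string is an assumption you must replace):

```python
import os

# Prefer a key exported in the shell; fall back to the inline placeholder.
os.environ.setdefault("OPENAI_API_KEY", "your_openai_api_key_here")
```
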
## Running the Server

### Step 1: Start the MCP Server

Run the server locally on port 9001:

```bash
python run_server.py
```

The server will start and listen on `http://127.0.0.1:9001/mcp`.

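To verify the server is reachable before involving any model, here is a small smoke test using the same `fastmcp` client that `client_example.py` uses:

```python
import asyncio

from fastmcp import Client


async def smoke_test():
    # Connect to the locally running MCP server and list its tools.
    async with Client("http://127.0.0.1:9001/mcp") as client:
        tools = await client.list_tools()
        print([t.name for t in tools])  # expected: ['roll_dice']


asyncio.run(smoke_test())
```
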
### Step 2: Expose the Server with ngrok

The client hands the MCP server URL to the model provider's API, which must be able to reach the server over the public internet, so you need to expose your local server using ngrok:

```bash
# Install ngrok if you haven't already
# Visit https://ngrok.com/ to download and set up

# Expose port 9001
ngrok http 9001
```

ngrok will provide you with a public URL like:

```
https://abc123.ngrok-free.app
```

**Important**: Copy the ngrok URL (including the subdomain), as you'll need it for the client configuration.

### Step 3: Update the Client Configuration

Edit `client_example.py` and update the `server_url` with your ngrok URL:

```python
tools=[
    {
        "type": "mcp",
        "server_label": "dice_server",
        "server_url": "https://<your_ngrok_subdomain>.ngrok-free.app/mcp/",
        "require_approval": "never",
    },
],
```

Replace `<your_ngrok_subdomain>` with your actual ngrok subdomain (e.g., `abc123`).

### Step 4: Run the Client Example

In a new terminal (while the server and ngrok are still running):

```bash
python client_example.py
```

## Testing Workflow

Here's the complete workflow for testing:

1. **Terminal 1** - Start the MCP server:

   ```bash
   cd backend/openedx_ai_extensions/mcp
   python run_server.py
   ```

2. **Terminal 2** - Expose with ngrok:

   ```bash
   ngrok http 9001
   ```

   Copy the ngrok URL from the output.

3. **Terminal 3** - Run the client:

   ```bash
   # Update client_example.py with your ngrok URL first
   python client_example.py
   ```

## Expected Output

When running the client, you should see:

1. The list of tools available from the MCP server
2. The AI model's response after using the `roll_dice` tool

Example:

```
Available tools: ['roll_dice']
Response: <LiteLLM response object with dice roll results>
```
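The raw response object is verbose. A hedged sketch for pulling out just the text, assuming the LiteLLM object mirrors OpenAI's Responses shape (an `output` list of message items with `content` parts; the `output_text` convenience field may or may not be present):

```python
def response_text(response) -> str:
    """Best-effort extraction of plain text from a Responses-style object."""
    # Use the convenience field when the object provides one.
    text = getattr(response, "output_text", None)
    if text:
        return text
    # Otherwise walk the output items and collect their text parts.
    parts = []
    for item in getattr(response, "output", []) or []:
        for part in getattr(item, "content", []) or []:
            parts.append(getattr(part, "text", ""))
    return "".join(parts)
```
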
__init__.py
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
"""
Model Context Protocol (MCP) integration for OpenEdx AI Extensions
"""
client_example.py
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
# client_example.py
import asyncio
import os

from fastmcp import Client
from litellm import responses

# Prefer a key exported in the shell; fall back to the inline placeholder.
os.environ.setdefault("OPENAI_API_KEY", "your_openai_api_key_here")


async def main():
    # Connect to the MCP server
    async with Client("http://127.0.0.1:9001/mcp") as client:

        # List the available tools (optional, just to show the connection works)
        tools = await client.list_tools()
        print("Available tools:", [t.name for t in tools])

        # Ask the model to roll dice. The provider's API calls the MCP server
        # itself, which is why the publicly reachable ngrok URL is used here.
        response = responses(
            model="gpt-4.1-nano",
            reasoning=None,
            tools=[
                {
                    "type": "mcp",
                    "server_label": "dice_server",
                    "server_url": "https://<your_ngrok_subdomain>.ngrok-free.app/mcp/",
                    "require_approval": "never",
                },
            ],
            input="Roll a dice for me.",
        )

        print("Response:", response)


if __name__ == "__main__":
    asyncio.run(main())
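As a sanity check, the same `fastmcp` client can also invoke the tool directly over MCP, with no LLM in the loop (a minimal sketch using the local server URL from this commit):

```python
import asyncio

from fastmcp import Client


async def direct_roll():
    # Call roll_dice directly over MCP, bypassing any model.
    async with Client("http://127.0.0.1:9001/mcp") as client:
        result = await client.call_tool("roll_dice", {"n_dice": 2})
        print(result)  # the tool result holds two values between 1 and 6


asyncio.run(direct_roll())
```
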
run_server.py
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
#!/usr/bin/env python
"""
Run the FastMCP server in streamable HTTP mode
"""
from server import mcp


if __name__ == "__main__":
    mcp.run(transport="streamable-http")
server.py
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
"""
MCP Server implementation using FastMCP for OpenEdx AI Extensions

This module provides a Model Context Protocol (MCP) server that exposes
tools and resources for AI assistants to interact with the OpenEdx system.
"""
import logging
import random

from mcp.server.fastmcp import FastMCP

logger = logging.getLogger(__name__)

# Create the server; port 9001 is used when running with the HTTP transport.
mcp = FastMCP("dice_server", port=9001)


@mcp.tool()
def roll_dice(n_dice: int) -> list[int]:
    """Roll `n_dice` 6-sided dice and return the results."""
    return [random.randint(1, 6) for _ in range(n_dice)]
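
The README notes that this pattern will later carry OpenEdx-specific operations. A purely hypothetical sketch of what such a tool could look like; `get_course_title` and its stub body are illustrative inventions, not part of this commit:

```python
@mcp.tool()
def get_course_title(course_id: str) -> str:
    """Return the display title for `course_id`.

    Hypothetical example only: a real implementation would query the
    OpenEdx platform instead of returning a placeholder.
    """
    return f"Course {course_id} (placeholder title)"
```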
