
Commit 8230d70

Adding swagger for EigenAI API (#216)
1 parent d9b015b commit 8230d70

File tree

7 files changed: +228 −37 lines


docs/eigenai/howto/use-eigenai.mdx

Lines changed: 5 additions & 31 deletions
```diff
@@ -12,7 +12,11 @@ See [Try EigenAI](../try-eigenai.md#get-started-for-free) for information on obt
 
 We're starting off with supporting the `gpt-oss-120b-f16` and `qwen3-32b-128k-bf16` models based on initial demand and expanding from there. To get started or request another model, visit our [onboarding page](https://onboarding.eigencloud.xyz/).
 
-## Chat Completions API
+## Chat Completions API Reference
+
+Refer to the [swagger documentation for the EigenAI API](https://docs.eigencloud.xyz/api).
+
+## Chat Completions API Examples
 
 <Tabs>
 <TabItem value="testnet" label="Testnet Request">
```
```diff
@@ -233,33 +237,3 @@ We're starting off with supporting the `gpt-oss-120b-f16` and `qwen3-32b-128k-bf
 </TabItem>
 </Tabs>
 
-## Supported parameters
-
-This list will be expanding to cover the full parameter set of the Chat Completions API.
-
-- `messages: array`
-  - A list of messages comprising the conversation so far
-- `model: string`
-  - Model ID used to generate the response, like `gpt-oss-120b-f16`
-- `max_tokens: (optional) integer`
-  - The maximum number of [tokens](https://platform.openai.com/tokenizer) that can be generated in the chat completion. This value can be used to control [costs](https://openai.com/api/pricing/) for text generated via API.
-- `seed: (optional) integer`
-  - If specified, our system will run the inference deterministically, such that repeated requests with the same `seed` and parameters should return the same result.
-- `stream: (optional) bool`
-  - If set to true, the model response data will be streamed to the client as it is generated using Server-Side Events (SSE).
-- `temperature: (optional) number`
-  - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-- `top_p: (optional) number`
-  - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
-- `logprobs: (optional) bool`
-  - Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`
-- `frequency_penalty: (optional) number`
-  - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-- `presence_penalty: (optional) number`
-  - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
-- `tools: array`
-  - A list of tools ([function tools](https://platform.openai.com/docs/guides/function-calling)) the model may call.
-- `tool_choice: (optional) string`
-  - "auto", "required", "none"
-  - Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
-  - `none` is the default when no tools are present. `auto` is the default if tools are present.
```
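The parameter list removed here now lives in the OpenAPI spec this commit adds (`static/openapi.yaml`). As a quick sanity check against that schema, a request body using those parameters can be sketched like this; the payload shape follows the spec, while transport details (endpoint URL, auth headers) are deliberately left out since they vary by environment:

```python
import json

# Sketch of a chat completion request body matching the parameters in this
# commit's OpenAPI spec (static/openapi.yaml). Values are illustrative.
payload = {
    "model": "gpt-oss-120b-f16",  # required: model ID
    "messages": [                 # required: conversation so far
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is EigenAI?"},
    ],
    "max_tokens": 256,    # optional: cap on generated tokens
    "seed": 42,           # optional: same seed + params -> same result
    "temperature": 0.2,   # optional: 0-2, lower = more deterministic
    "stream": False,      # optional: True streams via SSE
}

body = json.dumps(payload)
print(body)
```

This is only a shape check for the documented fields; consult the swagger docs at `https://docs.eigencloud.xyz/api` for the authoritative request format.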
Lines changed: 6 additions & 0 deletions
```diff
@@ -0,0 +1,6 @@
+---
+title: EigenAI API
+sidebar_position: 1
+---
+
+Refer to the [swagger documentation for the EigenAI API](https://docs.eigencloud.xyz/api).
```

docusaurus.config.js

Lines changed: 0 additions & 4 deletions
```diff
@@ -402,10 +402,6 @@ const redirects = [
   },
 
   //External references
-  {
-    from: '/api',
-    to: '/eigenlayer/reference/apis-and-dashboards'
-  },
   {
     from: '/developers/slashing-background',
     to: '/eigenlayer/developers/concepts/slashing/slashing-concept-developers'
```

package-lock.json

Lines changed: 18 additions & 1 deletion
Some generated files are not rendered by default.

package.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -35,7 +35,8 @@
     "rehype-katex": "^7.0.0",
     "remark-gfm": "^4.0.0",
     "remark-math": "^6.0.0",
-    "repomix": "^0.3.6"
+    "repomix": "^0.3.6",
+    "swagger-ui-dist": "^5.30.3"
   },
   "devDependencies": {
     "@docusaurus/eslint-plugin": "^3.8.1",
```

src/pages/api.jsx

Lines changed: 19 additions & 0 deletions
```diff
@@ -0,0 +1,19 @@
+import React, { useEffect } from "react";
+import SwaggerUI from "swagger-ui-dist/swagger-ui-es-bundle";
+import "swagger-ui-dist/swagger-ui.css";
+
+export default function ApiDocs() {
+  useEffect(() => {
+    SwaggerUI({
+      dom_id: "#swagger-container",
+      url: "/openapi.yaml",
+      deepLinking: true,
+    });
+  }, []);
+
+  return (
+    <div style={{ height: "100%" }}>
+      <div id="swagger-container" />
+    </div>
+  );
+}
```

static/openapi.yaml

Lines changed: 178 additions & 0 deletions
```diff
@@ -0,0 +1,178 @@
+openapi: 3.1.0
+info:
+  title: EigenAI Chat API
+  version: 0.1.0
+  description: Chat completion API for EigenAI.
+
+servers:
+  - url: https://api.eigencloud.xyz
+
+paths:
+  /chat:
+    post:
+      summary: Create a chat completion
+      operationId: createChatCompletion
+      description: Generates a model response for a given chat conversation.
+
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              type: object
+              properties:
+
+                model:
+                  type: string
+                  description: >
+                    Model ID used to generate the response, e.g. `gpt-oss-120b-f16`.
+
+                messages:
+                  type: array
+                  description: A list of messages representing the conversation so far.
+                  items:
+                    type: object
+                    properties:
+                      role:
+                        type: string
+                        enum: [system, user, assistant, tool]
+                      content:
+                        type: string
+
+                disable_auto_reasoning_format:
+                  type: boolean
+                  description: >
+                    Controls response parsing and separating out the reasoning trace from the content of the response. For client calls, this is a custom parameter. For example, in the OpenAI client, it's set in the `extra_body` field. Refer to the relevant client SDK documentation for information on how to set this parameter.
+
+                max_tokens:
+                  type: integer
+                  nullable: true
+                  description: >
+                    Optional. Maximum number of tokens to generate.
+
+                seed:
+                  type: integer
+                  nullable: true
+                  description: >
+                    Optional. If provided, inference becomes deterministic for repeated (seed + params).
+
+                stream:
+                  type: boolean
+                  nullable: true
+                  description: >
+                    Optional. If true, response is streamed using Server-Sent Events (SSE).
+
+                temperature:
+                  type: number
+                  format: float
+                  nullable: true
+                  description: >
+                    Optional. Sampling temperature between 0 and 2.
+
+                top_p:
+                  type: number
+                  format: float
+                  nullable: true
+                  description: >
+                    Optional. Nucleus sampling threshold (top-p).
+
+                logprobs:
+                  type: boolean
+                  nullable: true
+                  description: >
+                    Optional. If true, includes token-level log probabilities in the response.
+
+                frequency_penalty:
+                  type: number
+                  format: float
+                  nullable: true
+                  description: >
+                    Optional. Number between -2.0 and 2.0 penalizing token repetition frequency.
+
+                presence_penalty:
+                  type: number
+                  format: float
+                  nullable: true
+                  description: >
+                    Optional. Number between -2.0 and 2.0 penalizing previously seen tokens.
+
+                tools:
+                  type: array
+                  description: A list of tools the model may call.
+                  items:
+                    type: object
+                    properties:
+                      type:
+                        type: string
+                        enum: [function]
+                      function:
+                        type: object
+                        properties:
+                          name:
+                            type: string
+                            description: Name of the function.
+                          description:
+                            type: string
+                          parameters:
+                            type: object
+                            description: JSON schema of function parameters.
+
+                tool_choice:
+                  description: >
+                    Optional. Controls how the model uses tools.
+                    - `none`: never call tools
+                    - `auto`: model decides (default if tools exist)
+                    - `required`: must call tools
+                    - or specify a particular function
+                  oneOf:
+                    - type: string
+                      enum: [none, auto, required]
+                    - type: object
+                      properties:
+                        type:
+                          type: string
+                          enum: [function]
+                        function:
+                          type: object
+                          properties:
+                            name:
+                              type: string
+
+              required:
+                - model
+                - messages
+
+      responses:
+        "200":
+          description: Successful completion response. The response includes a cryptographic signature field that proves the response was generated by the EigenAI Operator (see [Verify Signature](https://docs.eigencloud.xyz/eigenai/howto/verify-signature) for more information).
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  id:
+                    type: string
+                  object:
+                    type: string
+                  created:
+                    type: integer
+                  model:
+                    type: string
+                  choices:
+                    type: array
+                    items:
+                      type: object
+                      properties:
+                        index:
+                          type: integer
+                        message:
+                          type: object
+                          properties:
+                            role:
+                              type: string
+                            content:
+                              type: string
+                        finish_reason:
+                          type: string
+                  signature:
+                    type: string
```
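Per the `200` response schema in this spec, each completion carries a top-level `signature` field alongside the usual OpenAI-style fields. A small sketch of pulling the answer and signature out of a response body; the sample values here are made up for illustration, not real EigenAI output:

```python
import json

# Illustrative response body shaped like the 200 schema in static/openapi.yaml.
# All field values are fabricated examples, not real API output.
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "gpt-oss-120b-f16",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "signature": "0xabc123",
})

resp = json.loads(raw)
answer = resp["choices"][0]["message"]["content"]
# The signature proves the response came from the EigenAI Operator;
# see the Verify Signature how-to linked in the spec for verification.
signature = resp["signature"]
print(answer, signature)
```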

0 commit comments