
Getting openai.BadRequestError when trying to run with local vllm openai entrypoints #2129

@amitli1

I'm trying to run the get_weather example against my local LLM server (vLLM, OpenAI-compatible endpoints):

import asyncio
import os

from agents import Agent, Runner, function_tool, set_tracing_disabled, OpenAIChatCompletionsModel
from openai import AsyncOpenAI


async def main():
    set_tracing_disabled(True)

    # Local vLLM OpenAI-compatible server
    llm_url = "http://localhost:8090/v1"
    llm_model = "models/Qwen3-4B-AWQ"
    api_key = "EMPTY"
    os.environ["OPENAI_API_KEY"] = api_key

    client = AsyncOpenAI(base_url=llm_url, api_key=api_key)
    chat_model = OpenAIChatCompletionsModel(model=llm_model, openai_client=client)

    @function_tool
    def get_weather(city: str) -> str:
        return f"The weather in {city} is sunny."

    agent = Agent(
        name="Hello world",
        model=chat_model,
        instructions="You are a helpful agent.",
        tools=[get_weather],
    )

    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

And I'm getting this error:

raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': '"auto" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set', 'type': 'BadRequestError', 'param': None, 'code': 400}

How can I run this example with my LLM?
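
From the error message it looks like the fix is on the vLLM side rather than in the agent code: the server has to be started with --enable-auto-tool-choice and a --tool-call-parser. Here is a sketch of what I believe the launch command should look like (the hermes parser is my assumption for Qwen models, and the port matches the base_url above):

# launch vLLM with tool calling enabled so "auto" tool choice is accepted
vllm serve models/Qwen3-4B-AWQ \
    --port 8090 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes

Is that the intended way to make the Agents SDK tool calls work against a local vLLM server, or is something also needed in the Python code?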
