Issue: Ollama Models (Qwen, Llama 3.1) Stuck on 'Thinking' / 'Does Not Support Tools' - Likely API Formatting Mismatch #216

@chrisjeffries24

Description

When using Ollama models (specifically Qwen and Llama 3.1) within Dive AI, the application either gets stuck on "Thinking" indefinitely without producing any output (Qwen), or explicitly displays the error "does not support tools" (Llama 3.1).

Crucially, both Qwen and Llama 3.1 models work perfectly and respond as expected when run directly from the Ollama command-line interface (e.g., ollama run qwen or ollama run llama3.1). This indicates that the Ollama server and the models themselves are functional. The issue appears to be in the communication or interpretation layer between Dive AI and the Ollama API, particularly regarding tool calling.

Based on recent Ollama community discussions, it appears that Ollama versions v0.8.0 and later (v0.9.6 in my case) changed how tool calls are returned in API responses. Instead of a dedicated tool_calls JSON field, they might be embedded within the content field as a formatted string (e.g., tool_call\n{...}). Dive AI might be expecting the older format or a direct tool_calls field, causing it to fail to process the response or detect tool capabilities.
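To illustrate the suspected mismatch, here is a minimal sketch of how a client could handle both response shapes: the dedicated `tool_calls` field and a tool call embedded in `content` as a `tool_call\n{...}` string. This is a hypothetical parser written for this report, not Dive AI's actual code; the exact embedded-string format is an assumption based on the community discussions mentioned above.

```python
import json
import re

def extract_tool_calls(message: dict) -> list:
    """Extract tool calls from an Ollama chat response message.

    Handles both the dedicated `tool_calls` field and (hypothetically)
    a tool call embedded in `content` as a "tool_call\\n{...}" string,
    as described in this issue.
    """
    # Standard format: dedicated tool_calls field on the message.
    if message.get("tool_calls"):
        return message["tool_calls"]

    # Fallback (assumed format): tool call embedded in the content string.
    content = message.get("content", "")
    match = re.search(r"tool_call\s*\n(\{.*\})", content, re.DOTALL)
    if match:
        try:
            return [{"function": json.loads(match.group(1))}]
        except json.JSONDecodeError:
            pass  # Malformed embedded JSON; treat as plain content.
    return []
```

A fallback like this would let a client detect tool calls regardless of which format the Ollama version returns, instead of concluding the model "does not support tools."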
