When using Ollama models (specifically Qwen and Llama 3.1) within Dive AI, the application either gets stuck on "Thinking" indefinitely without producing any output or, in the case of Llama 3.1, explicitly displays the error message "does not support tools."
Crucially, both Qwen and Llama 3.1 work perfectly and respond as expected when run directly from the Ollama command-line interface (e.g., `ollama run qwen` or `ollama run llama3.1`). This indicates that the Ollama server and the models themselves are functional. The issue appears to lie in the communication or interpretation layer between Dive AI and the Ollama API, particularly around tool calling.
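To isolate the issue from Dive AI entirely, a raw API call shows exactly what Ollama returns when a tool is supplied. Below is a minimal sketch against Ollama's `/api/chat` endpoint, assuming the default `localhost:11434` address; the `get_weather` tool is a hypothetical placeholder, not anything Dive AI actually registers:

```python
import requests  # pip install requests

# Hypothetical placeholder tool; any valid function schema works for the repro.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",  # default Ollama address
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": TOOLS,
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json().get("message", {})

# If the server returns a structured tool call, it shows up here...
print("tool_calls field:", message.get("tool_calls"))
# ...otherwise the call may be embedded in the plain content string.
print("content field:", repr(message.get("content")))
```

If `tool_calls` prints `None` while `content` contains a `tool_call`-style string, that would confirm the format change described below.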
Based on recent Ollama community discussions, it appears that Ollama v0.8.0 and later (v0.9.6 in my case) changed how tool calls are returned in API responses. Instead of arriving in a dedicated `tool_calls` JSON field, they may be embedded in the `content` field as a formatted string (e.g., `tool_call\n{...}`). Dive AI may still expect a dedicated `tool_calls` field, causing it to fail to parse the response or to conclude that the model does not support tools.
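If that is what's happening, a defensive parser on Dive AI's side could accept both shapes. The following is only a sketch under the assumption that the embedded form is literally a `tool_call` marker followed by a JSON object, as reported in the community discussions; the exact framing may differ:

```python
import json
import re
from typing import Optional

def extract_tool_call(message: dict) -> Optional[dict]:
    """Return a tool call from an Ollama chat message, tolerating both
    the structured `tool_calls` field and the assumed embedded-string form."""
    # Structured format: a dedicated tool_calls array on the message.
    tool_calls = message.get("tool_calls")
    if tool_calls:
        return tool_calls[0].get("function")

    # Assumed embedded format: a "tool_call" marker followed by a JSON
    # object inside the content string (framing per community reports).
    content = message.get("content", "")
    match = re.search(r"tool_call\s*\n(\{.*\})", content, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            return None
    return None
```

With something like `extract_tool_call(resp.json()["message"])`, the caller would get the function name and arguments regardless of which format the server used, instead of hanging on "Thinking" or reporting missing tool support.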