Description
Hi, I've been using this script to test different models, but only OpenAI CUA and Anthropic CUA worked successfully.
When I tested UI-TARS (deployed via TGI on Hugging Face, following this guide) and Omniparser + [other thinking model], both failed.
For UI-TARS as a computer-use model, it failed right after the first turn:
raise APIError(
litellm.exceptions.APIError: litellm.APIError: HuggingfaceException - Failed to deserialize the JSON body into the target type: messages[2]: data did not match any variant of untagged enum MessageBody at line 1 column 1985
LiteLLM Retried: 3 times
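(For context on that deserialization error: TGI's Messages API generally expects each message's `content` to be a plain string, while CUA-style agents often emit OpenAI-style lists of content parts. A minimal sketch of flattening the parts before the litellm call, assuming that mismatch is the cause; the helper name `flatten_content` is my own, not from the script.)

```python
def flatten_content(messages):
    """Convert list-of-parts message content into a single text string,
    which TGI's untagged MessageBody enum can deserialize."""
    flat = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            # Keep only the text parts; image parts would need separate handling.
            text = "\n".join(
                part.get("text", "")
                for part in content
                if isinstance(part, dict) and part.get("type") == "text"
            )
            flat.append({**msg, "content": text})
        else:
            flat.append(msg)
    return flat

# Example: a CUA-style message with list content becomes a plain string.
messages = [{"role": "user", "content": [{"type": "text", "text": "click the button"}]}]
print(flatten_content(messages))
```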
For the composed model, the message handling between components doesn't seem to work as expected:
raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - Missing required parameter: 'tools[0].function'. LiteLLM Retried: 3 times
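(For context on that error: the OpenAI Chat Completions API requires each entry in `tools` to nest its definition under a `"function"` key, and passing a bare definition produces exactly this "Missing required parameter: 'tools[0].function'" message. A minimal sketch of the two shapes; the `click` tool and its parameters are illustrative, not from the actual script.)

```python
# Shape that triggers the error: no "function" wrapper around the definition.
bad_tool = {
    "type": "function",
    "name": "click",
    "parameters": {"type": "object", "properties": {}},
}

# Correct shape: the definition lives under the "function" key.
good_tool = {
    "type": "function",
    "function": {
        "name": "click",
        "description": "Click at the given screen coordinates.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer"},
                "y": {"type": "integer"},
            },
            "required": ["x", "y"],
        },
    },
}

print("function" in good_tool)  # the wrapper key the API validates
```

If the composed-model path builds the tool list itself, checking where it assembles `tools` for this wrapper might be a good starting point.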
I’ve been trying to fix this by reading the docs for days, but I still can’t get it working.
Has anyone else run into similar issues? Or is there another test example I could follow?
Any suggestions would be helpful, thanks!