[Bug]: blank model name in settings causes problems #5169
Comments
I didn't know it was possible to have no model name :D. That's an interesting configuration.
I put aaa as the provider and ccc as the model name, and got the same error.
The OpenHands settings are hard to understand. Why does it define a fixed set of available providers and models, and even show a list of models when ollama is chosen as the provider? Developers using Ollama usually define their own models or try the newest ones, e.g. qwen2.5. Ollama works with these models very well, but OpenHands rules out a lot of these scenarios. My understanding was that OpenHands should be a simple prompting agent, responsible for translating coding instructions into prompts and sending them to any backend LLM provider (ChatGPT, Claude, X-AI, ...) or a local one (Ollama, llama.cpp, or other engines). However, I was obviously wrong. Could you please let me know where I went wrong?
Hi @adrianzhang, sorry for the frustration! You were actually not wrong, but let's figure out how to do this. OpenHands uses the liteLLM SDK under the hood to support a lot of LLM APIs/providers, including custom ones. LiteLLM is not an inference engine, but it can work with Ollama, LM Studio, and llama.cpp just the same, as long as you give liteLLM (via our settings) an endpoint to call. An important detail, which should make everything easier, is whether that endpoint is OpenAI-compatible. In my understanding, llama.cpp can do that. Then liteLLM just needs to receive a model name it can route to that endpoint. Does that make sense?
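For illustration, here is a minimal sketch of such a liteLLM call against an OpenAI-compatible endpoint; the local URL, model name, and key below are assumptions for the example, not OpenHands' actual settings plumbing:

```python
# Minimal sketch: call liteLLM against an OpenAI-compatible endpoint
# (e.g. a local llama.cpp server). Host, port, and model name are
# assumptions for illustration.
from litellm import completion

response = completion(
    model="openai/qwen2.5",               # "openai/" prefix selects the OpenAI-compatible route
    api_base="http://localhost:8080/v1",  # hypothetical local llama.cpp server
    api_key="sk-no-key-needed",           # local servers usually ignore the key, but a value is expected
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```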
Hi @enyst, thank you so much for the detailed explanation. I assume the root cause of the problem is that OpenHands doesn't use the Ollama API to query its models before passing parameters to liteLLM. I read the liteLLM source code after getting your reply and found that it validates parameters (for security/robustness reasons, of course), so liteLLM's behavior is correct. What OpenHands could do is query the model names from Ollama before sending the model name parameter to liteLLM. Also, Ollama is based on llama.cpp, so they typically expose a lot of similar functionality, and the same solution would work for both. What do you think?
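To illustrate that suggestion, a hedged sketch of querying a running Ollama instance for its installed models through its documented /api/tags endpoint; the LAN address below is hypothetical:

```python
# Hedged sketch: list the models a running Ollama instance reports via its
# /api/tags endpoint. The host/port are assumptions for illustration.
import requests

OLLAMA_BASE = "http://192.168.1.10:11434"  # hypothetical Ollama host on the LAN

def list_ollama_models(base_url: str = OLLAMA_BASE) -> list[str]:
    """Return model names Ollama reports, e.g. ['qwen2.5:latest', ...]."""
    resp = requests.get(f"{base_url}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

if __name__ == "__main__":
    print(list_ollama_models())
```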
OpenHands can do that; there is already code that is (or was) doing it to populate the list from Ollama. But also, if you're running with Ollama, ...
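As a companion to the comment above, a hedged sketch of liteLLM's Ollama route, which takes a model name prefixed with ollama/ plus the address of the Ollama server; the host and model name are illustrative:

```python
# Hedged sketch: calling liteLLM's Ollama route directly. Host and model name
# are assumptions; by default Ollama listens on port 11434.
from litellm import completion

response = completion(
    model="ollama/qwen2.5",                # "ollama/" prefix selects liteLLM's Ollama route
    api_base="http://192.168.1.10:11434",  # hypothetical Ollama host on the LAN
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```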
In my case, OpenHands did not obtain the model names from the Ollama instance (my instance is running on another host on the LAN). I did successfully get the model names by using ...
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
When I create a local LLM service with llama.cpp, I have verified that it works fine without any key or model name. However, when I set the same URL in OpenHands, it always shows an error in the Docker console.
Am I right that users have no way to use a local model other than Ollama?
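As context for the report above, a hedged sketch of the kind of direct check that works against a llama.cpp server's OpenAI-compatible endpoint without a key or model name; the port is an assumption:

```python
# Hedged sketch: hit a local llama.cpp server's OpenAI-compatible
# /v1/chat/completions endpoint directly, with no API key and no model name.
# The port is an assumption for illustration.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "ping"}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```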
OpenHands Installation
Docker command in README
OpenHands Version
latest docker image
Operating System
Linux
Logs, Errors, Screenshots, and Additional Context