40 VARCHAR Limit on Model Name #6615
For those interested, we are hosting a Hugging Face Text Embeddings Inference server, which exposes an OpenAI-compatible API. To ask it to run inference with a particular model, we pass the model name to the server, and these names are usually very long.
Related: #1857 @crazywoola
Actually, it's pretty easy to fix this; I will update the migrations later.
I have tested this change, but the issue still persists, unfortunately. The PR updates
@mbbyn |
Self Checks
Dify version
0.6.14
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
First, add an OpenAI API compatible model with a long name.
Then, try to use the model in a Knowledge Base settings. Observe the failure and error logs.
✔️ Expected Behavior
It should work with long model names.
❌ Actual Behavior
It fails to save the model name because it exceeds the column's VARCHAR(40) limit.
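A minimal sketch of the failure mode, for anyone wanting to reproduce it outside Dify. SQLite does not enforce VARCHAR lengths, so a CHECK constraint stands in for PostgreSQL's VARCHAR(40) behavior; the table and column names below are illustrative assumptions, not Dify's actual schema. The fix would presumably amount to widening the column (in PostgreSQL terms, something like `ALTER TABLE ... ALTER COLUMN model_name TYPE VARCHAR(255)` via a migration):

```python
import sqlite3

# Emulate a VARCHAR(40) column. SQLite ignores declared VARCHAR lengths,
# so a CHECK constraint reproduces PostgreSQL's enforcement. Table and
# column names are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE provider_models ("
    "model_name VARCHAR(40) CHECK (length(model_name) <= 40))"
)

# Embedding model identifiers routinely exceed 40 characters:
long_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"

try:
    conn.execute(
        "INSERT INTO provider_models (model_name) VALUES (?)", (long_name,)
    )
except sqlite3.IntegrityError as exc:
    # The insert is rejected, mirroring the save failure reported above.
    print("rejected:", exc)
```

This is only a stand-in for the real schema; the point is that any model name longer than 40 characters is rejected at the database layer, regardless of what the API or UI accepts.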