Hello,
I really like your app, it looks great!
However, I couldn't find any information about using local models through llama.cpp. I'm running the packaged client + server on Linux and pass the path to my llama.cpp repository (with llama-server etc. already compiled) via the "--llama" parameter, but no models show up. How should I set this up?
Thank you.
The directory where lluminous looks for models is hardcoded as "models" inside the llama.cpp directory you pass through the --llama parameter, so you'll need to create it and move your models in there.
It's worth noting, though, that local model support currently only works with models that use the ChatML template format.
The feature I'm currently working on is adding support for Ollama, which means it's going to work with any model, and hopefully with even less hassle. Sorry for the inconvenience!