Why do LLMs like Llama 10B use very little RAM?
Before loading
After loading
What about adding the ability to attach files to the text? Maybe accept file=path/to/file after the prompt, or add a special button? The attached file could then be rendered in the prompt like this:
[begin of something.txt file]
Content of something.txt
...
...
[end of something.txt file]
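The proposed marker layout could be assembled with a small helper; this is only a sketch of the idea, and the function name, parameters, and marker strings are illustrative rather than part of the app:

```python
from pathlib import Path

def attach_file(prompt: str, path: str) -> str:
    """Append a file's contents to the prompt, wrapped in the
    [begin of ... file] / [end of ... file] markers proposed above.
    Illustrative sketch only; not an existing app API."""
    name = Path(path).name
    content = Path(path).read_text(encoding="utf-8", errors="replace")
    return (
        f"{prompt}\n"
        f"[begin of {name} file]\n"
        f"{content}\n"
        f"[end of {name} file]"
    )
```

The markers give the model an unambiguous boundary between the user's instruction and the attached document.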
Please ensure that auto offload/load is disabled in the app (on the Settings page); otherwise, the model is offloaded whenever the app goes into the background. I am going to close this issue since it is not a direct app issue, but if the model still doesn't work, feel free to reopen it.
As for file attachments, they are not planned yet. RAG is challenging to implement on-device, unless we can fit the entire text into the context.
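The "fit the entire text in the context" fallback mentioned above could be gated by a rough size check before stuffing a file into the prompt. A minimal sketch, assuming a ~4-characters-per-token heuristic (a real check would use the model's own tokenizer, and the function name and defaults here are illustrative):

```python
def fits_in_context(text: str, context_tokens: int = 4096,
                    chars_per_token: int = 4) -> bool:
    """Rough check whether `text` fits in the model's context window.

    Uses the common ~4 characters per token rule of thumb for English;
    this over- or under-estimates for code and non-English text, so a
    production check should count tokens with the actual tokenizer.
    """
    return len(text) / chars_per_token <= context_tokens
```

If the check fails, the app would have to fall back to chunking and retrieval (i.e. real RAG), which is exactly the part that is hard to do well on-device.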