Releases · twinnydotdev/twinny
v3.0.0
- Added support for hosted llama.cpp servers
- Added configuration options for separate FIM and chat completion server endpoints, since a llama.cpp server can only host one model at a time and FIM and chat do not work interchangeably with the same model (see the endpoint sketch after this list)
- Some settings have been renamed, but the defaults stay the same
- Removed support for Deepseek models, as they were causing code smell inside the prompt templates (model support needs improvement)
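As a rough illustration of how separate FIM and chat endpoints might be read from the extension settings; the setting keys below are hypothetical, not necessarily twinny's actual option names:

```typescript
import * as vscode from "vscode"

// Hypothetical setting keys for illustration only; the real
// twinny option names may differ.
const config = vscode.workspace.getConfiguration("twinny")

const fimEndpoint = {
  hostname: config.get<string>("fimApiHostname", "localhost"),
  port: config.get<number>("fimApiPort", 8080),
  path: config.get<string>("fimApiPath", "/completion"),
}

const chatEndpoint = {
  hostname: config.get<string>("chatApiHostname", "localhost"),
  port: config.get<number>("chatApiPort", 8081),
  path: config.get<string>("chatApiPath", "/v1/chat/completions"),
}
```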
v2.6.14
Enabled cancellation of the model download when starting twinny, along with an option to re-enable it.
v2.6.13
- Added an option to click the status bar icon to stop generation and destroy the stream (a cancellation sketch follows this list)
- Added max token options for FIM and chat
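A minimal sketch of how status-bar cancellation can work, assuming an `AbortController` shared between the in-flight request and a hypothetical stop command (the command id and setting names are illustrative):

```typescript
import * as vscode from "vscode"

let abortController: AbortController | undefined

export function registerStopCommand(context: vscode.ExtensionContext) {
  const statusBar = vscode.window.createStatusBarItem(
    vscode.StatusBarAlignment.Right
  )
  statusBar.text = "$(debug-stop) twinny"
  statusBar.command = "twinny.stopGeneration" // hypothetical command id
  statusBar.show()

  context.subscriptions.push(
    vscode.commands.registerCommand("twinny.stopGeneration", () => {
      abortController?.abort() // destroys the in-flight stream
    })
  )
}

async function streamCompletion(url: string, body: unknown) {
  abortController = new AbortController()
  // Global fetch requires a Node 18+ extension host.
  const response = await fetch(url, {
    method: "POST",
    body: JSON.stringify(body),
    signal: abortController.signal, // aborting rejects pending reads
  })
  return response.body
}
```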
v2.6.11
- Major refactoring and maintenance.
v2.6.9
Added an option to disable file context from neighbouring tabs, as the extra context can confuse the model and cause completions to vary (a sketch follows).
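A rough sketch of gathering file context from neighbouring tabs, gated behind an assumed setting name (not necessarily twinny's actual key):

```typescript
import * as vscode from "vscode"

function getNeighbouringTabContext(): string {
  const enabled = vscode.workspace
    .getConfiguration("twinny")
    .get<boolean>("useFileContext", true) // assumed setting name
  if (!enabled) return ""

  // Concatenate the text of every visible editor except the active one.
  const active = vscode.window.activeTextEditor
  return vscode.window.visibleTextEditors
    .filter((editor) => editor !== active)
    .map((editor) => editor.document.getText())
    .join("\n")
}
```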
v2.6.8
Fixed prompt context creation from related open files.
v2.6.1
- Added a completion cache (a minimal sketch follows)
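A minimal sketch of one way a completion cache can work, assuming completions are keyed by the normalised prefix before the cursor; this is an illustration, not twinny's actual implementation:

```typescript
class CompletionCache {
  private cache = new Map<string, string>()
  private readonly maxEntries = 50

  // Collapse whitespace so trivially different prefixes share an entry.
  private normalise(prefix: string): string {
    return prefix.replace(/\s+/g, " ").trim()
  }

  get(prefix: string): string | undefined {
    return this.cache.get(this.normalise(prefix))
  }

  set(prefix: string, completion: string): void {
    // Evict the oldest entry (Map preserves insertion order).
    if (this.cache.size >= this.maxEntries) {
      const oldest = this.cache.keys().next().value
      if (oldest !== undefined) this.cache.delete(oldest)
    }
    this.cache.set(this.normalise(prefix), completion)
  }
}
```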
v2.6.0
- Improved the inline completion handler and fixed double closing brackets (see the sketch below)
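One way such a fix can look, as a sketch rather than twinny's actual code: when the character after the cursor is already the closing bracket the completion ends with, drop it from the completion.

```typescript
// nextChar is the document character immediately after the cursor.
function dedupeClosingBracket(completion: string, nextChar: string): string {
  const trimmed = completion.trimEnd()
  const last = trimmed.slice(-1)
  if ([")", "]", "}"].includes(last) && last === nextChar) {
    return trimmed.slice(0, -1) // avoid emitting the bracket twice
  }
  return completion
}
```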
v2.5.12
- Enabled inline completions in the middle of a line. This was previously disabled on purpose, but it adds value and feels more Copilot-like (a suffix-handling sketch follows).
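A sketch of what mid-line completion involves, assuming a FIM-style prompt: instead of bailing out when text follows the cursor, pass everything after the cursor as the suffix.

```typescript
import * as vscode from "vscode"

// Split the document at the cursor into FIM prefix and suffix parts.
function getPromptParts(
  document: vscode.TextDocument,
  position: vscode.Position
) {
  const start = new vscode.Position(0, 0)
  const end = document.lineAt(document.lineCount - 1).range.end
  const prefix = document.getText(new vscode.Range(start, position))
  const suffix = document.getText(new vscode.Range(position, end))
  return { prefix, suffix }
}
```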
v2.5.11
Refactor and clean up.