
Releases: twinnydotdev/twinny

v3.0.0

25 Jan 14:46
  • Added support for hosted llama.cpp servers
  • Added configuration options for separate FIM and chat completion server endpoints, since a llama.cpp server can only host one model at a time and FIM and chat do not work interchangeably with the same model
  • Some settings have been renamed, but the defaults stay the same
  • Removed support for Deepseek models, as it was causing a code smell inside the prompt templates (model support needs improving)
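
A split-endpoint setup might look roughly like the following in VS Code's settings.json, with one llama.cpp instance serving a FIM model and a second serving a chat model. The setting keys and ports below are illustrative assumptions, not the extension's actual configuration names; check the twinny settings UI for the real keys.

```jsonc
{
  // Hypothetical key: llama.cpp server hosting a FIM-capable model
  "twinny.fimServerEndpoint": "http://localhost:8080/completion",
  // Hypothetical key: a second llama.cpp server hosting a chat model
  "twinny.chatServerEndpoint": "http://localhost:8081/v1/chat/completions"
}
```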

v2.6.14

22 Jan 15:00

Enabled cancellation of the model download when starting twinny, with an option to re-enable it.

v2.6.13

21 Jan 19:40
  1. Add an option to click the status bar icon to stop generation and destroy the stream.
  2. Add max token options for FIM and chat.

v2.6.11

19 Jan 21:13
  • Major refactoring and maintenance.

v2.6.9

17 Jan 20:26

Add an option to disable neighbouring-tab file context, as completions can vary when the extra context confuses the model.

v2.6.8

16 Jan 20:35

Fix prompt context creation from related open files.

v2.6.1

15 Jan 21:36
  • Add completion cache

v2.6.0

15 Jan 20:28
  • Improved the inline completion handler; fixed doubled closing brackets.

v2.5.12

15 Jan 14:13
  • Enable inline completions in the middle of a line. This was previously disabled on purpose, but it adds value and is more Copilot-like.

v2.5.11

12 Jan 20:24

Refactor and clean up.