LMDeploy Release V0.0.11
What's Changed
💥 Improvements
- Make IPv6 compatible and safely handle coroutine interruption by @AllentDan in #487
- Support deploying qwen-14b-chat by @irexyc in #482
- Add tp hint for deployment by @irexyc in #555
- Move `tokenizer.py` to the folder of lmdeploy by @grimoire in #543
🐞 Bug fixes
- Change `shared_instance` type from `weak_ptr` to `shared_ptr` by @lvhan028 in #507
- [Fix] Set the default value of `step` to 0 by @lvhan028 in #532
- [bug] Fix mismatched shape for decoder output tensor by @akhoroshev in #517
- Fix typing of OpenAI protocol by @mokeyish in #554
📚 Documentations
- Fix typo in `docs/en/pytorch.md` by @shahrukhx01 in #539
- [Doc] Update huggingface internlm-chat-7b model url by @AllentDan in #546
- [doc] Update benchmark command in w4a16.md by @del-zhenwu in #500
New Contributors
- @shahrukhx01 made their first contribution in #539
- @mokeyish made their first contribution in #554
Full Changelog: v0.0.10...v0.0.11