Codepilot.el provides AI-based code completion for Emacs, similar to copilot.el and GitHub Copilot, but powered by open-source language models.
straight.el
(use-package codepilot
  :straight (:host github :repo "ludvb/codepilot.el" :files ("*.el"))
  :hook ((prog-mode . codepilot-mode)
         (text-mode . codepilot-mode))
  :bind (:map evil-insert-state-map
              ("M-j" . 'codepilot-accept-line)
              ("M-l" . 'codepilot-accept-char)
              ("M-w" . 'codepilot-accept-word)
              ("M-<return>" . 'codepilot-accept-all)
              ("M-J" . 'codepilot-complete-next)
              ("M-K" . 'codepilot-complete-previous))
  :config
  ;; Configure exactly one backend; if both are evaluated, the later
  ;; `setq' of `codepilot-backend' wins.
  ;; For the llama.cpp backend:
  (require 'codepilot-llamacpp)
  (setq codepilot-backend 'llamacpp)
  (setq codepilot-llamacpp-address "http://localhost:8080")
  ;; For the Ollama backend:
  (require 'codepilot-ollama)
  (setq codepilot-backend 'ollama)
  (setq codepilot-ollama-address "http://localhost:11434")
  ;; Optionally, start the Ollama server from Emacs:
  (require 'codepilot-ollama-server)
  (codepilot-ollama-start))
Doom Emacs
;; packages.el
(package! codepilot
  :recipe (:host github :repo "ludvb/codepilot.el" :files ("*.el")))

;; config.el
(use-package! codepilot
  :hook ((prog-mode . codepilot-mode)
         (text-mode . codepilot-mode))
  :bind (:map evil-insert-state-map
              ("M-j" . 'codepilot-accept-line)
              ("M-l" . 'codepilot-accept-char)
              ("M-w" . 'codepilot-accept-word)
              ("M-<return>" . 'codepilot-accept-all)
              ("M-J" . 'codepilot-complete-next)
              ("M-K" . 'codepilot-complete-previous))
  :config
  ;; Configure exactly one backend; if both are evaluated, the later
  ;; `setq' of `codepilot-backend' wins.
  ;; For the llama.cpp backend:
  (require 'codepilot-llamacpp)
  (setq codepilot-backend 'llamacpp)
  (setq codepilot-llamacpp-address "http://localhost:8080")
  ;; For the Ollama backend:
  (require 'codepilot-ollama)
  (setq codepilot-backend 'ollama)
  (setq codepilot-ollama-address "http://localhost:11434")
  ;; Optionally, start the Ollama server from Emacs:
  (require 'codepilot-ollama-server)
  (codepilot-ollama-start))
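After adding these snippets, run doom sync and restart Emacs so the package is installed and picked up.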
Codepilot.el requires one of the following backends, which must be installed separately:
llama.cpp
Requirements:
- llama.cpp or a wrapper, such as llama-cpp-python
- cURL
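If llama.cpp runs on the same machine, its server can also be launched from inside Emacs. The snippet below is a minimal sketch, not part of codepilot.el: the binary name llama-server, the model path, and the flags are assumptions about a typical llama.cpp installation and should be adjusted to your setup.

;; Sketch: launch a local llama.cpp server from Emacs. Binary name,
;; model path, and flags are assumptions about your installation.
(start-process
 "llama-server" "*llama-server*"
 "llama-server"
 "-m" (expand-file-name "~/models/deepseek-coder-6.7b-base.Q4_K_M.gguf")
 "--port" "8080")

The server then listens on port 8080, matching the codepilot-llamacpp-address shown in the configuration above.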
The default prompt is designed for the DeepSeek Coder base models, which can be found on Hugging Face. If you are using another model, you may need to customize codepilot-prompt-fun and codepilot-postprocess-fun.
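As an illustration, a configuration for a StarCoder-style fill-in-the-middle model could look roughly like the following. The argument lists assumed here for codepilot-prompt-fun and codepilot-postprocess-fun are guesses, not taken from codepilot.el; consult the variables' docstrings for the actual contract.

;; Hypothetical sketch for a StarCoder-style FIM model. The assumed
;; contract -- a prompt function receiving the text before and after
;; point, and a postprocess function receiving the raw completion --
;; is an assumption; verify against the package docstrings.
(setq codepilot-prompt-fun
      (lambda (prefix suffix)
        (concat "<fim_prefix>" prefix
                "<fim_suffix>" suffix
                "<fim_middle>")))
(setq codepilot-postprocess-fun
      (lambda (completion)
        ;; Keep only the text before the model's end-of-text marker.
        (car (split-string completion "<|endoftext|>"))))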
Ollama
Requirements:
- Ollama
- cURL
By default, the Ollama backend uses the deepseek-coder:6.7b-base model, which needs to be pulled before use by running:

ollama pull deepseek-coder:6.7b-base
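The pull can also be triggered from inside Emacs, for example:

;; Run the pull asynchronously; output goes to a shell output buffer.
(async-shell-command "ollama pull deepseek-coder:6.7b-base")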
To use another model, customize codepilot-ollama-model.
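For example, to switch to the smaller 1.3B variant (the tag below is only an illustration; any base/code model pulled into Ollama should work):

(setq codepilot-ollama-model "deepseek-coder:1.3b-base")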
You may also need to customize codepilot-prompt-fun and codepilot-postprocess-fun (see the sketch in the llama.cpp section above).
Contributions are always welcome! Feel free to open an issue if you have found a bug or have a feature request, or submit a PR that implements a proposed change.