fix: Adding an LLM param to fix broken generator from llamacpp (#1519)
naveenk2022 authored Jan 17, 2024
1 parent e326126 commit 869233f
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion in private_gpt/components/llm/llm_component.py

@@ -42,7 +42,7 @@ def __init__(self, settings: Settings) -> None:
         context_window=settings.llm.context_window,
         generate_kwargs={},
         # All to GPU
-        model_kwargs={"n_gpu_layers": -1},
+        model_kwargs={"n_gpu_layers": -1, "offload_kqv": True},
         # transform inputs into Llama2 format
         messages_to_prompt=prompt_style.messages_to_prompt,
         completion_to_prompt=prompt_style.completion_to_prompt,
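The change passes `offload_kqv: True` through `model_kwargs`, which llama.cpp uses to keep the K/V cache on the GPU alongside the fully offloaded layers (`n_gpu_layers: -1`); per the commit title, generation from llamacpp was broken without it. A minimal sketch of the changed kwargs (the dict values are taken from the diff; the before/after comparison itself is illustrative):

```python
# Before and after the fix: kwargs forwarded to llama.cpp's model loader.
old_model_kwargs = {"n_gpu_layers": -1}  # offload every layer to the GPU
new_model_kwargs = {"n_gpu_layers": -1, "offload_kqv": True}  # also offload the K/V cache

# The fix adds exactly one key and leaves the existing one untouched.
added_keys = sorted(set(new_model_kwargs) - set(old_model_kwargs))
print(added_keys)  # ['offload_kqv']
```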
