
Looking forward to providing OpenVINO backend support for Llama.cpp! #27736

Open
1 task done
Torinlq opened this issue Nov 26, 2024 · 1 comment
Labels: enhancement (New feature or request), feature (New feature request)

Comments


Torinlq commented Nov 26, 2024

Request Description

Llama.cpp is a popular, high-quality LLM/VLM inference framework implemented in plain C/C++ with no external dependencies and broad cross-platform support. Through its SYCL and Vulkan backends it can accelerate inference on some Intel integrated GPUs, but those backends still have many compatibility issues, and there is no NPU support at all. Could Intel provide an OpenVINO backend for this project?
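For context, here is a minimal sketch of the OpenVINO 2.x C++ API that such a backend could build on. This is not existing llama.cpp or ggml code; the model path is a placeholder, and the point is only that the target device (CPU, Intel iGPU, or NPU) is selected by a single string passed to compile_model():

```cpp
// Minimal sketch of OpenVINO's unified inference API (OpenVINO 2.x C++ API).
// Illustrates why one backend could cover CPU, iGPU and NPU: the device is
// just a string. "llama.xml" is a hypothetical model in OpenVINO IR format.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Load a model in OpenVINO IR format (placeholder path).
    auto model = core.read_model("llama.xml");

    // Compile for the desired device: "CPU", "GPU" (Intel iGPU/dGPU) or "NPU".
    auto compiled = core.compile_model(model, "NPU");

    // Run one inference request; input/output tensor handling omitted.
    auto request = compiled.create_infer_request();
    request.infer();

    return 0;
}
```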

Feature Use Case

No response

Issue submission checklist

  • The feature request or improvement must be related to OpenVINO
Torinlq added the enhancement and feature labels on Nov 26, 2024
rkazants (Contributor) commented:

@ynimmaga, please take a look as well

Regards,
Roman
