node:events:497 throw er; // Unhandled 'error' event ^ Error: spawn ./server ENOENT #2

Open
shammyfiveducks opened this issue Jun 10, 2024 · 3 comments

Comments

@shammyfiveducks

Hi, I have tried all of the methods you give for running this, but I get the same error each time no matter what. Below is the first install method you gave, used as an example.

I am using a fresh install of Ubuntu 22.04 LTS.
Installed for this project:
NPM [email protected]
Node.js v20.14.0

(I can reach the localhost URL, but if I click anything on that page, I also get the error below.)
(If I make any curl request, etc., I get the same error.)

Any idea what I might be doing wrong? Thank you

Host terminal (note: the models were already downloaded in a previous run, which crashed similarly to the output below):

LLAMANET_DEBUG=true npx llamanet@latest
no valid release

█ llamanet running at http://localhost:42424

[QUICKSTART] Try opening a new terminal and run the following command.

curl --request POST \
--url http://127.0.0.1:42424/v1/chat/completions \
--header "Content-Type: application/json" \
--data '{
"model": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Do aliens exist?" }
]
}'
./server -c 2048 --embeddings -m /home/my/llamanet/models/huggingface/microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-q4.gguf --port 8000
node:events:497
throw er; // Unhandled 'error' event
^

Error: spawn ./server ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:292:12)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn ./server',
path: './server',
spawnargs: [
'-c',
'2048',
'--embeddings',
'-m',
'/home/my/llamanet/models/huggingface/microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-q4.gguf',
'--port',
'8000'
]
}

Node.js v20.14.0

Client terminal:

curl --request POST \
--url http://127.0.0.1:42424/v1/chat/completions \
--header "Content-Type: application/json" \
--data '{
"model": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Do aliens exist?" }
]
}'
curl: (52) Empty reply from server

@cocktailpeanut
Contributor

Can you check whether the server file is actually located somewhere inside your ~/llamanet path?
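A quick way to check is a sketch like the one below (the ~/llamanet path is taken from the log output above; your layout may differ):

# Search the llamanet home directory for any server binary
find ~/llamanet -type f -iname "*server*"

# Or just inspect the directory tree directly
ls -R ~/llamanet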

@ArturJS

ArturJS commented Jun 13, 2024

@cocktailpeanut
I'm facing the exact same error in a Windows 10 environment.
The ~/llamanet folder has the following structure:

.
|   llamacpp.zip
|
\---models
    \---huggingface
        \---microsoft
            \---Phi-3-mini-4k-instruct-gguf
                    Phi-3-mini-4k-instruct-fp16.gguf
                    Phi-3-mini-4k-instruct-q4.gguf

For some unknown reason, there is no server binary anywhere within ~/llamanet.

@handsfelloff

I found that the issue is caused by the latest llama.cpp build binaries from https://github.com/ggerganov/llama.cpp: the server executable has been renamed from server to llama-server.
I tried creating a symlink (ln -s ~/llamanet/build/bin/llama-server ~/llamanet/build/bin/server), which let the server spawn, but it segfaulted on the test payload.
I was able to quickly get things working by downloading a somewhat older build from roughly a month ago, e.g. https://github.com/ggerganov/llama.cpp/releases/tag/b3091.
Hopefully @cocktailpeanut will push an update soon, but in the meantime this is a straightforward workaround.
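Roughly, the symlink attempt looked like this (a sketch only; paths assume the ~/llamanet/build/bin layout mentioned above, and your install may differ):

# Confirm what the new llama.cpp builds actually ship
ls ~/llamanet/build/bin
# Expect to see llama-server instead of server

# Point the old name at the renamed binary so llamanet can spawn ./server
ln -s ~/llamanet/build/bin/llama-server ~/llamanet/build/bin/server

# Note: this got the process to start for me, but it segfaulted on the test payload,
# so pinning the older b3091 release was the more reliable fix.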
