
Port stuck if host computer crashes #353

Closed
2 tasks done
msrheidema opened this issue Jan 9, 2024 · 13 comments · Fixed by #380
Labels
bug Something isn't working upstream

Comments

msrheidema commented Jan 9, 2024

⚠️ Please verify that this bug has NOT been reported before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

Description

If the host computer crashes and Docker does not shut down the container normally, Node.js gets stuck in a loop where it can't start because it thinks the port is in use.

Rebooting the host does not fix the problem, only removing the container and recreating it.
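The mechanism is easy to reproduce outside Docker: tsx's IPC server listens on a unix-domain socket file (the `/tmp/tsx-0/7.pipe` in the log), and a unix socket that is not unlinked before its process dies leaves a stale filesystem entry that makes the next `bind()` fail with `EADDRINUSE`, even though nothing is listening. A minimal Python sketch of that failure mode (the path here is illustrative, not the one tsx uses):

```python
import errno
import os
import socket
import tempfile

# A unix-domain socket is a filesystem entry. If the listening process
# dies without unlinking it (e.g. a host crash), the file survives and
# blocks the next bind() even though nobody is listening anymore.
path = os.path.join(tempfile.mkdtemp(), "demo.pipe")

s1 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s1.bind(path)
s1.listen(1)
s1.close()  # closing (or crashing) does NOT remove the socket file

s2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
err = None
try:
    s2.bind(path)  # stale socket file is still on disk
except OSError as e:
    err = e.errno  # EADDRINUSE (-98 in the log is errno 98 on Linux)

os.unlink(path)  # removing the stale file is what actually clears it
s2.bind(path)    # now succeeds
s2.close()
```

This also explains why rebooting the host doesn't help: the container's filesystem, including the stale socket file, survives the reboot, while recreating the container starts from a fresh filesystem.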

👟 Reproduction steps

Halt the host computer (power loss).

👀 Expected behavior

Dockge starts normally, as the port is not actually held by anything.

😓 Actual Behavior

Dockge gets stuck in a loop where the web server can't start.

Dockge Version

1.4

💻 Operating System and Arch

Debian GNU/Linux 11 (bullseye)

🌐 Browser

Firefox osx 20.0.1

🐋 Docker Version

Docker version 20.10.21, build baeda1f

🟩 NodeJS Version

No response

📝 Relevant log output

Node.js v18.17.1
node:internal/errors:496
    ErrorCaptureStackTrace(err);
    ^

Error: listen EADDRINUSE: address already in use /tmp/tsx-0/7.pipe
    at Server.setupListenHandle [as _listen2] (node:net:1734:21)
    at listenInCluster (node:net:1799:12)
    at Server.listen (node:net:1898:5)
    at file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:53:31317
    at new Promise (<anonymous>)
    at yn (file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:53:31295)
    at async file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:55:459 {
  code: 'EADDRINUSE',
  errno: -98,
  syscall: 'listen',
  address: '/tmp/tsx-0/7.pipe',
  port: -1
}
msrheidema added the bug label Jan 9, 2024
cloning5480 commented Jan 9, 2024

I have also just experienced this.
Changing the image tag from "louislam/dockge:1" to "louislam/dockge:1.4.1" let me start the container (using the Unraid template).

@fjimenezone

I am experiencing the same effect. Even when I shut down or reboot the server properly, Dockge afterwards ends up in a restart loop with the same log messages.

Node.js v18.17.1
node:internal/errors:496
    ErrorCaptureStackTrace(err);
    ^
Error: listen EADDRINUSE: address already in use /tmp/tsx-0/7.pipe
    at Server.setupListenHandle [as _listen2] (node:net:1734:21)
    at listenInCluster (node:net:1799:12)
    at Server.listen (node:net:1898:5)
    at file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:53:31317
    at new Promise (<anonymous>)
    at yn (file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:53:31295)
    at async file:///pnpm/global/5/.pnpm/[email protected]/node_modules/tsx/dist/cli.mjs:55:459 {
  code: 'EADDRINUSE',
  errno: -98,
  syscall: 'listen',
  address: '/tmp/tsx-0/7.pipe',
  port: -1
}

x1ao4 commented Jan 17, 2024

Same here.

AmIBeingObtuse commented Jan 17, 2024

Glad this wasn't just me. I was racking my brain trying multiple ports. It only worked after deleting the container and redeploying it, and obviously that's not a permanent fix.

louislam (Owner) commented Jan 17, 2024

The error is thrown from tsx, which may be an upstream bug.

https://github.com/privatenumber/tsx/blob/985bbb8cff1f750ad02e299874e542b6f63495ef/src/utils/ipc/server.ts#L40

Let's see if I can reproduce with minimal steps.

@louislam (Owner)

Possible workaround:

docker compose down
docker compose up

@AmIBeingObtuse

> docker compose down
> docker compose up

Is this considered closed because we can down and up again, or is a permanent fix on the roadmap?

@louislam (Owner)

@AmIBeingObtuse No, I implemented a workaround, which deletes the tmp folder before start. See #380.
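The spirit of that fix, delete any stale state before listening, can be sketched as a clean-before-bind step. The helper below is illustrative only (the function name and path handling are made up here, not Dockge's or tsx's actual code):

```python
import os
import socket

def listen_on_pipe(path):
    """Bind a unix-domain socket, clearing any stale socket file first.

    This is safe at process start, when we know no live instance owns
    the path; without the unlink, a file left behind by a crashed
    process would make bind() fail with EADDRINUSE.
    """
    try:
        os.unlink(path)  # remove leftover from a previous crash, if any
    except FileNotFoundError:
        pass             # nothing stale to clean up
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    return srv
```

The trade-off of cleaning at startup (rather than at shutdown) is that it also covers unclean exits such as power loss, which a shutdown hook can never handle.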

AmIBeingObtuse commented Jan 19, 2024 via email

@louislam (Owner)

> Awesome thanks for the fast reply and workaround will test this soon.

Thanks. You can try the nightly image:

services:
  dockge:
    image: louislam/dockge:nightly  

PS: You should switch back to 1 when 1.4.2 is released.

@AmIBeingObtuse

> Thanks. You can try the nightly image:
>
> services:
>   dockge:
>     image: louislam/dockge:nightly
>
> PS: You should switch back to 1 when 1.4.2 is released.

Seems to be working perfectly.

Once again thank you for your fast turn around on this.

bwirt commented Jul 11, 2024

Are you sure this is fixed? I experienced this in 1.4.2 as well. Taking the container down and back up fixed it.

Zackptg5 commented Sep 6, 2024

I'm having this bug now on nightly. It's a new deployment on Debian stable and I can't get it to initialize at all.

Edit: Removing AppArmor fixed the issue, so something in its config is causing the problem.
