Cluster won't start if I set "node-external-ip" option #10615
-
What errors do you get? Or what pods are failing?

local-path-provisioner-6795b5f9d8-5gvxt   0/1   CrashLoopBackOff
metrics-server-557ff575fb-8rhbl           0/1   CrashLoopBackOff
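For anyone hitting the same thing, a quick sketch of how to pull the underlying errors out of those two pods (pod names are the ones from the output above; adjust to your cluster):

# Events and last-restart reason for a crashing pod
kubectl -n kube-system describe pod local-path-provisioner-6795b5f9d8-5gvxt

# Logs from the previous (crashed) container instance
kubectl -n kube-system logs metrics-server-557ff575fb-8rhbl --previous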
-
@brandond First, setting up the k3s cluster without node-external-ip: no failure. Here is the output after the third test:

NAME                                      READY   STATUS      RESTARTS        AGE
coredns-576bfc4dc7-t9cqb                  1/1     Running     4 (113m ago)    2d2h
helm-install-traefik-crd-b7ws4            0/1     Completed   0               2d2h
helm-install-traefik-dhphg                0/1     Completed   2               2d2h
local-path-provisioner-6795b5f9d8-5gvxt   1/1     Running     128 (93m ago)   2d2h
metrics-server-557ff575fb-8rhbl           1/1     Running     127 (93m ago)   2d2h
svclb-traefik-8ad237a4-8t4zh              2/2     Running     2 (101m ago)    17h
svclb-traefik-8ad237a4-n7z8g              2/2     Running     8 (113m ago)    2d2h
svclb-traefik-8ad237a4-pv2fx              2/2     Running     0               40h
traefik-86d48d664-jttrn                   1/1     Running     0               52m

You can see many restarts for local-path-provisioner (128) and metrics-server (127), which are left over from the second test using an incorrect node-external-ip. So the result is that node-external-ip should be the host's main NIC IP; in my case:

lo     UNKNOWN   127.0.0.1/8 ::1/128
ens3   UP        192.168.1.20/24 fe80::5054:ff:fe30:e318/64

After setting node-external-ip to that address (192.168.1.20, the third test), everything runs without failures.
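A minimal sketch of that conclusion, assuming the single-NIC layout shown above (ens3 at 192.168.1.20; substitute your own interface and address; the config path is the k3s default):

# Find the main NIC and its address (same format as the output above)
ip -brief addr show

# Point node-external-ip at that address via the k3s config file
cat <<'EOF' >> /etc/rancher/k3s/config.yaml
node-external-ip: 192.168.1.20
EOF
systemctl restart k3s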
-
I am not sure I understand you correctly, but here is my reply (please correct me if I am wrong). I think you are talking about the cloud, where we have a private network and can ask for public IPs to be assigned to our node. My setup (test) is on-prem, so the external IP will be the IP of the ingress or the load balancer (please correct me if I am wrong) allowing the traffic to come to the node. Since my node has no external load balancer, its own IP will be the node-external-ip. I tried it and it worked well. Here is a test:

root@208:~# kubectl get node -o wide
NAME   STATUS   ROLES                       AGE     VERSION        INTERNAL-IP    EXTERNAL-IP
208    Ready    control-plane,etcd,master   2d22h   v1.30.3+k3s1   172.31.208.1   65.108.61.152

The internal IP 172.31.208.1 is the private VPN set by --flannel-iface. The external IP 65.108.61.152 is a public IP set by --node-external-ip. Here is the service for ingress:

root@208:~# kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP
kube-dns         ClusterIP      10.43.0.10     <none>
metrics-server   ClusterIP      10.43.49.240   <none>
traefik          LoadBalancer   10.43.21.114   2a01:4f9:c012:5985::1,65.108.61.152

The traefik EXTERNAL-IP has been set to both the IPv4 and the IPv6 of the node (both of which I set with --node-external-ip). Everything looks fine to me. Do you see any issue with this setup? Is this setup incorrect?
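For reference, a sketch of the server flags this setup implies (the exact install command was not posted, so the interface name is a placeholder; the addresses are the ones shown above):

k3s server \
  --flannel-iface=<vpn-interface> \
  --node-external-ip=65.108.61.152,2a01:4f9:c012:5985::1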
-
Hi Brad,
I think my setup is exactly a NATed network environment. My k3s is installed in my WSL 2 environment. It has a *private* IP that systems outside of my Windows host cannot access. My Windows host's IP is the *external* IP that other k3s agents can use to talk to the master node.
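If that is the topology, one way to make the Windows host pass traffic into WSL 2 is a port proxy; a sketch assuming the k3s API server listens on the default 6443 (the portproxy approach and the placeholder address are assumptions, not something confirmed in this thread):

# In an elevated PowerShell on the Windows host:
# find the WSL 2 address, then forward 6443 from the host to it
wsl hostname -I
netsh interface portproxy add v4tov4 listenport=6443 listenaddress=0.0.0.0 connectport=6443 connectaddress=<wsl2-ip>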
…On Wed, 14 Aug 2024 at 17:52, Shakiba Moshiri wrote (quoting @brandond's earlier reply):

node-external-ip is not intended for use with dual-interface nodes. It is intended to be used when there is a public (external) IP that is NATed to the node's private address. That IP is not expected to be bound to an interface on the node, but the node should be reachable (via NAT) at that address.

If you are trying to use it to get the node to use different interfaces for different things, it is not going to do what you want.
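To make that intended use concrete, a minimal sketch with placeholder addresses (10.0.0.5 as the node's private address behind a 1:1 NAT, 203.0.113.10 as the public IP; neither address is from this thread):

# On the server node: 203.0.113.10 is NOT bound to any local interface
k3s server --node-external-ip=203.0.113.10

# On an agent outside the NAT: register against the NATed public address
k3s agent --server https://203.0.113.10:6443 --token <token>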
-
Hi,
I installed k3s in my WSL 2 and I want to access it from any other computer on the same network. I used this configuration file and it used to work. But recently, some system pods started to fail. I did some troubleshooting and discovered that the node-external-ip option is causing the problem. However, I did not find any update relating to this option on the official website. What is the right/new way to expose a different cluster IP?
Thanks
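For anyone comparing notes, a guess at the shape of the config file being described (the actual file was not posted; the value is a placeholder for the Windows host's LAN address, which the replies above treat as the external IP in a NATed setup):

# inside WSL 2
cat <<'EOF' > /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "644"              # illustrative extra option
node-external-ip: <windows-host-lan-ip>   # placeholder
EOF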