kubernetes - Load Balancer External IP is the same as the K3s cluster nodes' Internal IP

Reposted. Author: 行者123. Updated: 2023-12-01 23:18:56

I have set up a service in my k3s cluster using the following manifest:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 9012
      targetPort: 9011
      protocol: TCP

kubectl get svc -n mynamespace

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                  PORT(S)          AGE
minio           ClusterIP      None            <none>                                       9011/TCP         42m
minio-service   LoadBalancer   10.32.178.112   192.168.40.74,192.168.40.88,192.168.40.170   9012:32296/TCP   42m

kubectl describe svc myservice -n mynamespace

Name:                     myservice
Namespace:                mynamespace
Labels:                   app=myapp
Annotations:              <none>
Selector:                 app=myapp
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.32.178.112
IPs:                      10.32.178.112
LoadBalancer Ingress:     192.168.40.74, 192.168.40.88, 192.168.40.170
Port:                     <unset>  9012/TCP
TargetPort:               9011/TCP
NodePort:                 <unset>  32296/TCP
Endpoints:                10.42.10.43:9011,10.42.10.44:9011
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

From the above, I assumed I could reach the MinIO console at http://192.168.40.74:9012, but that is not possible.

Error message:

curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out

Also, if I run

kubectl get node -o wide -n mynamespace

NAME           STATUS   ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
antonis-dell   Ready    control-plane,master   6d      v1.21.2+k3s1   192.168.40.74    <none>        Ubuntu 18.04.1 LTS               4.15.0-147-generic   containerd://1.4.4-k3s2
knodeb         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.88    <none>        Raspbian GNU/Linux 10 (buster)   5.4.51-v7l+          containerd://1.4.4-k3s2
knodea         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.170   <none>        Raspbian GNU/Linux 10 (buster)   5.10.17-v7l+         containerd://1.4.4-k3s2

As shown above, the nodes' internal IPs are the same as the load balancer's external IPs. Am I doing something wrong here?

Best answer

Initial K3s cluster configuration

To reproduce the environment, I created a two-node k3s cluster with the following steps:

  1. Install the k3s control plane on the desired host:

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -
  2. Verify that it works:

    k3s kubectl get nodes -o wide
  3. To add a worker node, run this command on the worker node:

    curl -sfL https://get.k3s.io | K3S_URL=https://control-plane:6443 K3S_TOKEN=mynodetoken sh -

where K3S_URL is the control-plane URL (with an IP or a DNS name).

K3S_TOKEN can be obtained with:

sudo cat /var/lib/rancher/k3s/server/node-token

You should then have a running cluster:

$ k3s kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
k3s-cluster    Ready    control-plane,master   27m   v1.21.2+k3s1   10.186.0.17   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2
k3s-worker-1   Ready    <none>                 18m   v1.21.2+k3s1   10.186.0.18   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2

Reproduction and testing

I created a simple deployment based on the nginx image:

$ k3s kubectl create deploy nginx --image=nginx

and exposed it:

$ k3s kubectl expose deploy nginx --type=LoadBalancer --port=8080 --target-port=80
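For reference, the expose command above produces roughly the equivalent of applying a manifest like this (a sketch; the app=nginx selector is the default label that `kubectl create deploy nginx` assigns):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx        # default label set by `kubectl create deploy nginx`
  ports:
    - port: 8080      # service port inside the cluster
      targetPort: 80  # container port in the nginx pod
      protocol: TCP
```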

This means that the nginx container in the pod listens on port 80, and the service is reachable inside the cluster on port 8080:

$ k3s kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP               PORT(S)          AGE   SELECTOR
kubernetes   ClusterIP      10.43.0.1     <none>                    443/TCP          29m   <none>
nginx        LoadBalancer   10.43.169.6   10.186.0.17,10.186.0.18   8080:31762/TCP   25m   app=nginx

The service is accessible via the node IPs or localhost on port 8080, as well as via the NodePort (31762).
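The PORT(S) column encodes both ports. A small shell sketch of how to split the 8080:31762/TCP value (taken from the output above) into the service port and the NodePort:

```shell
# PORT(S) value as printed by `kubectl get svc` for the nginx service above
ports="8080:31762/TCP"

# Text before the colon is the service port; text between the colon and the
# protocol suffix is the NodePort assigned by Kubernetes.
svc_port="${ports%%:*}"
node_port="${ports#*:}"
node_port="${node_port%%/*}"

echo "service port: ${svc_port}"   # 8080
echo "node port:    ${node_port}"  # 31762
```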

Considering the error you got, curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out, it indicates that the service is configured but is not responding properly (it is neither a 404 from an ingress nor a connection refused).
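When debugging such a timeout, it can help to probe the port directly, independent of kubectl. A minimal sketch (the probe helper is hypothetical; it relies on bash's /dev/tcp feature, so it needs bash rather than plain sh):

```shell
# Hypothetical helper: attempt a TCP connect with a 2-second timeout and
# report whether anything is listening at host:port.
probe() {
  host="$1"; port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open: ${host}:${port}"
  else
    echo "closed or filtered: ${host}:${port}"
  fi
}

# Example from the question: a "closed or filtered" result here matches the
# curl timeout (no listener on the node port, or a firewall in between).
probe 192.168.40.74 9012
```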

Answer to the second question - the load balancer

According to the rancher k3s official documentation about LoadBalancer, the Klipper Load Balancer is used. From their GitHub repository:

This is the runtime image for the integrated service load balancer in klipper. This works by using a host port for each service load balancer and setting up iptables to forward the request to the cluster IP.

From how the service loadbalancer works:

K3s creates a controller that creates a Pod for the service load balancer, which is a Kubernetes object of kind Service.

For each service load balancer, a DaemonSet is created. The DaemonSet creates a pod with the svc prefix on each node.

The Service LB controller listens for other Kubernetes Services. After it finds a Service, it creates a proxy Pod for the service using a DaemonSet on all of the nodes. This Pod becomes a proxy to the other Service, so that for example, requests coming to port 8000 on a node could be routed to your workload on port 8888.

If the Service LB runs on a node that has an external IP, it uses the external IP.

In other words: yes, it is expected that the load balancer has the same IP addresses as the hosts' internal IPs. Each service of type LoadBalancer in a k3s cluster gets its own DaemonSet, with a pod on every node, to route traffic directly to the backing service.
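On a live cluster these proxy pods can be listed directly. A small sketch (assumes kubectl is on the PATH and configured for the cluster; prints a fallback message otherwise):

```shell
# Sketch: list the svclb proxy pods that Klipper creates (one pod per node
# for each LoadBalancer service). Falls back to a message when no cluster
# answers or kubectl is unavailable.
list_svclb() {
  kubectl get pods --all-namespaces -o wide 2>/dev/null | grep svclb \
    || echo "no svclb pods found (is kubectl configured for the cluster?)"
}

list_svclb
```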

For example, I created a second deployment, nginx-two, and exposed it on port 8090. You can see there are two pods from the two different deployments, plus four pods acting as load balancers (note the names starting with svclb):

$ k3s kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-7m4v4       1/1     Running   0          47m   10.42.0.9    k3s-cluster    <none>           <none>
svclb-nginx-jc4rz            1/1     Running   0          45m   10.42.0.10   k3s-cluster    <none>           <none>
svclb-nginx-qqmvk            1/1     Running   0          39m   10.42.1.3    k3s-worker-1   <none>           <none>
nginx-two-6fb6885597-8bv2w   1/1     Running   0          38s   10.42.1.4    k3s-worker-1   <none>           <none>
svclb-nginx-two-rm594        1/1     Running   0          2s    10.42.0.11   k3s-cluster    <none>           <none>
svclb-nginx-two-hbdc7        1/1     Running   0          2s    10.42.1.5    k3s-worker-1   <none>           <none>

Both services have the same EXTERNAL-IP:

$ k3s kubectl get svc
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)          AGE
nginx       LoadBalancer   10.43.169.6    10.186.0.17,10.186.0.18   8080:31762/TCP   50m
nginx-two   LoadBalancer   10.43.118.82   10.186.0.17,10.186.0.18   8090:31780/TCP   4m44s

Regarding kubernetes - Load Balancer External IP is the same as the K3s cluster nodes' Internal IP, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/68378269/
