
kubernetes - Bare-metal K8s: How to preserve the source IP of the client and direct traffic to the nginx replica on the current server


I'd like to ask for your help:

The cluster's entry point for http/https is NGINX: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0, running as a DaemonSet.

I want to achieve two things:

  1. Preserve the source IP of the client
  2. Direct traffic to the nginx replica on the current server (so if a request hits server A, which is listed as the external IP address, the nginx on node A should handle it)

Questions:

  • How can this be achieved?
  • Is it possible without NodePort? The control plane could be started with a custom --service-node-port-range so that I could add node ports for 80 and 443, but that feels a bit like a hack (after reading about the intended use of node ports).

I'm considering MetalLB, but its layer 2 configuration would create a bottleneck (the cluster handles high traffic). I'm not sure whether BGP mode would solve this.

  • Kubernetes v15
  • Bare metal
  • Ubuntu 18.04
  • Docker (18.9) and WeaveNet (2.6)

Best Answer

You can preserve the client's source IP by setting externalTrafficPolicy to Local, which makes kube-proxy route requests only to node-local endpoints. This is explained in Source IP for Services with Type=NodePort.
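As a minimal sketch (the Service name, namespace, and selector labels below are placeholders, not taken from the question), a LoadBalancer or NodePort Service in front of the ingress controller would set the policy like this:

```yaml
# Hypothetical Service for the ingress controller; names and labels are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # NodePort also works with this policy
  externalTrafficPolicy: Local  # preserve client source IP; only node-local endpoints receive traffic
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

With Local, a node that has no ready ingress pod stops receiving traffic for the Service, which also gives you the "nginx on the receiving node handles the request" behavior from point 2 of the question.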

Also take a look at Using Source IP.

In the case of MetalLB:

MetalLB respects the service’s externalTrafficPolicy option, and implements two different announcement modes depending on what policy you select. If you’re familiar with Google Cloud’s Kubernetes load balancers, you can probably skip this section: MetalLB’s behaviors and tradeoffs are identical.

“Local” traffic policy

With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no “horizontal” traffic flow between nodes.

This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.

The downside of this policy is that it treats each cluster node as one “unit” of load-balancing, regardless of how many of the service’s pods are running on that node. This may result in traffic imbalances to your pods.

For example, if your service has 2 pods running on node A and one pod running on node B, the Local traffic policy will send 50% of the service’s traffic to each node. Node A will split the traffic it receives evenly between its two pods, so the final per-pod load distribution is 25% for each of node A’s pods, and 50% for node B’s pod. In contrast, if you used the Cluster traffic policy, each pod would receive 33% of the overall traffic.

In general, when using the Local traffic policy, it’s recommended to finely control the mapping of your pods to nodes, for example using node anti-affinity, so that an even traffic split across nodes translates to an even traffic split across pods.
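One way to read the anti-affinity suggestion above is pod anti-affinity keyed on the node hostname, so that replicas land on different nodes. A hedged sketch, with placeholder names and labels:

```yaml
# Spread backend pods across nodes so the per-node split of the Local
# policy translates into an even per-pod split. Names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-backend
            topologyKey: kubernetes.io/hostname  # at most one pod per node
      containers:
      - name: app
        image: nginx:1.17
        ports:
        - containerPort: 80
```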

You need to take into account the limitations of the BGP routing protocol when using it with MetalLB.
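For reference, MetalLB's BGP mode (in the pre-CRD ConfigMap format used by the 0.x releases) is configured roughly like this; the peer address, ASNs, and address pool are placeholder values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1   # placeholder: your router's address
      peer-asn: 64500          # placeholder: router ASN
      my-asn: 64501            # placeholder: ASN MetalLB speaks as
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.0.2.0/24           # placeholder pool advertised over BGP
```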

Also check out this blog post: Using MetalLb with Kind.

Regarding kubernetes - Bare-metal K8s: How to preserve the source IP of the client and direct traffic to the nginx replica on the current server, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57170956/
