
kubernetes-helm - Allowing a pod to reach a service's domain name via ingress

Reposted. Author: 行者123. Updated: 2023-12-05 02:13:48

I have a service and an ingress set up on my minikube Kubernetes cluster, exposing the domain name hello.life.com. Now I need to access this domain from inside another pod: curl http://hello.life.com should return the correct HTML.

My service is as follows:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-svc
  namespace: abc
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    name: bulging-zorse-key
  type: ClusterIP
status:
  loadBalancer: {}
```

My ingress is as follows:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-ingress
  namespace: dev
spec:
  rules:
  - host: hello.life.com
    http:
      paths:
      - backend:
          serviceName: bulging-zorse-key-svc
          servicePort: 80
        path: /
status:
  loadBalancer:
    ingress:
    - {}
```

Could someone help me figure out what changes I need to make to get this working?

Thanks in advance!

Best Answer

I found a good explanation of your problem, along with its solution, in the article Custom DNS Entries For Kubernetes:

Suppose we have a service, foo.default.svc.cluster.local that is available to outside clients as foo.example.com. That is, when looked up outside the cluster, foo.example.com will resolve to the load balancer VIP - the external IP address for the service. Inside the cluster, it will resolve to the same thing, and so using this name internally will cause traffic to hairpin - travel out of the cluster and then back in via the external IP.

The solution is:

Instead, we want foo.example.com to resolve to the internal ClusterIP, avoiding the hairpin.

To do this in CoreDNS, we make use of the rewrite plugin. This plugin can modify a query before it is sent down the chain to whatever backend is going to answer it.

To get the behavior we want, we just need to add a rewrite rule mapping foo.example.com to foo.default.svc.cluster.local:

```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name foo.example.com foo.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-09T15:02:52Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "8309112"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: a2ef5ff1-141f-11e9-9043-42010a9c0003
```

Note: in your case, the rewrite target must be the name of your ingress controller's service (for example: rewrite name hello.life.com ingress-service-name.ingress-namespace.svc.cluster.local). Make sure you use the correct service name and namespace.
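As a concrete sketch for this question, the rewrite rule in the Corefile could look like the fragment below. The controller service name `ingress-nginx` and namespace `ingress-nginx` are assumptions for illustration; on minikube, check what the ingress addon actually created (e.g. with `kubectl get svc -A | grep ingress`) and substitute those names:

```yaml
# Hypothetical excerpt of the patched Corefile (only the rewrite line changes):
.:53 {
    errors
    health
    # Map the external hostname to the in-cluster ingress controller service,
    # so pods resolve it to the controller's ClusterIP instead of hairpinning.
    rewrite name hello.life.com ingress-nginx.ingress-nginx.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      upstream
      fallthrough in-addr.arpa ip6.arpa
    }
    proxy . /etc/resolv.conf
    cache 30
    reload
}
```

The rewrite must point at the ingress controller's service (not at bulging-zorse-key-svc directly), because it is the controller that matches the Host header hello.life.com and routes to the backend service.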

Once we add that to the ConfigMap via kubectl edit configmap coredns -n kube-system or kubectl apply -f patched-coredns-deployment.yaml -n kube-system, we have to wait 10-15 minutes. Recent CoreDNS versions include the reload plugin.


reload

Name

reload - allows automatic reload of a changed Corefile.

Description

This plugin periodically checks if the Corefile has changed by reading it and calculating its MD5 checksum. If the file has changed, it reloads CoreDNS with the new Corefile. This eliminates the need to send a SIGHUP or SIGUSR1 after changing the Corefile.

The reloads are graceful - you should not see any loss of service when the reload happens. Even if the new Corefile has an error, CoreDNS will continue to run the old config and an error message will be printed to the log. But see the Bugs section for failure modes.

In some environments (for example, Kubernetes), there may be many CoreDNS instances that started very near the same time and all share a common Corefile. To prevent these all from reloading at the same time, some jitter is added to the reload check interval. This is jitter from the perspective of multiple CoreDNS instances; each instance still checks on a regular interval, but all of these instances will have their reloads spread out across the jitter duration. This isn't strictly necessary given that the reloads are graceful, and can be disabled by setting the jitter to 0s.

Jitter is re-calculated whenever the Corefile is reloaded.


Running our test pod, we can see this works:

```
$ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
/ # host foo
foo.default.svc.cluster.local has address 10.0.0.72
/ # host foo.example.com
foo.example.com has address 10.0.0.72
/ # host bar.example.com
Host bar.example.com not found: 3(NXDOMAIN)
/ #
```
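Applied to this question's domain, a quick check from inside the cluster might look like the sketch below (the dnstools image is the same one used above; the exact resolved address will be your controller's ClusterIP, so no output is shown):

```shell
# Spin up a throwaway debugging pod with DNS tools:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools

# Then, inside the pod:
# hello.life.com should now resolve to the ingress controller's ClusterIP
host hello.life.com
# and curling it should return the HTML served behind the ingress
curl -s http://hello.life.com
```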

On the topic of kubernetes-helm - allowing a pod to reach a service's domain name via ingress, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54566758/
