DNS in Kubernetes is not working

I followed the example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns, but I cannot get the nslookup output shown there.

When I run

kubectl exec busybox -- nslookup kubernetes

it should return

Server:    10.0.0.10
Address 1: 10.0.0.10

Name: kubernetes
Address 1: 10.0.0.1

but all I get is

nslookup: can't resolve 'kubernetes'
Server: 10.0.2.3
Address 1: 10.0.2.3

error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1
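
As an aside, the 10.0.2.3 shown above is the VM's own resolver (eth0 sits on the 10.0.2.x NAT network), not a cluster DNS service. This can be confirmed from inside the pod:

kubectl exec busybox -- cat /etc/resolv.conf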

My Kubernetes cluster runs in a VM; its ifconfig output is as follows:

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2899 (2.8 KB)  TX bytes:2343 (2.3 KB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:ed:09:81
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4735 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:367445 (367.4 KB)  TX bytes:280749 (280.7 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:1f:0d:84
          inet addr:192.168.144.17  Bcast:192.168.144.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:330 (330.0 B)  TX bytes:1746 (1.7 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:127976 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13742978 (13.7 MB)  TX bytes:13742978 (13.7 MB)

veth142cdac Link encap:Ethernet  HWaddr e2:b6:29:d1:f5:dc
          inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1336 (1.3 KB)  TX bytes:1336 (1.3 KB)

Here are the steps I take to start Kubernetes:

vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh 
+++ [0623 11:18:47] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/hyperkube
cmd/kubernetes
plugin/cmd/kube-scheduler
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
examples/k8petstore/web-server
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0623 11:18:52] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Starting etcd

etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null

Waiting for etcd to come up.
+++ [0623 11:18:53] etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [0623 11:18:55] apiserver:
{
  "kind": "PodList",
  "apiVersion": "v1beta3",
  "metadata": {
    "selfLink": "/api/v1beta3/pods",
    "resourceVersion": "11"
  },
  "items": []
}
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log

To start using your cluster, open up another terminal/tab and run:

cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
cluster/kubectl.sh

Then, in a new terminal window, I run:

cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local

After that, I create the busybox pod with

kubectl create -f busybox.yaml

The contents of busybox.yaml are taken from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md
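
For context, that busybox.yaml is essentially the following (a sketch from memory of the README; the apiVersion is written as v1beta3 here only because that is what this cluster reports, and it may differ in other releases):

apiVersion: v1beta3
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    # keep the container alive so nslookup can be exec'd into it
    command:
      - sleep
      - "3600"
  restartPolicy: Always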

Best Answer

It seems that local-up-cluster.sh does not support DNS out of the box. For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local at startup. These flags are not included in the set of flags passed to the kubelet, so the kubelet will not try to contact the DNS pod that you have created for name resolution.
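
As a rough illustration, the additions to the kubelet command line in hack/local-up-cluster.sh would look something like the lines below. The 10.0.0.10 address is only an example; any free IP inside the apiserver's portal/service IP range works, as long as it matches the IP you give the DNS service.

# flags appended to the kubelet invocation in hack/local-up-cluster.sh
--cluster_dns=10.0.0.10 --cluster_domain=cluster.local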

To fix this, you can modify the script to add these two flags to the kubelet, and then, when creating the DNS service, make sure that the IP address you passed to --cluster_dns is the same one you set in the portalIP field of the service spec (see the example here).
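
In concrete terms, the relevant part of the DNS service definition would be pinned like the sketch below. The field names follow the v1beta3 API this cluster reports (portalIP later became clusterIP); the name, selector, and ports are assumptions based on the skydns addon of that era.

apiVersion: v1beta3
kind: Service
metadata:
  name: kube-dns
  namespace: default
  labels:
    k8s-app: kube-dns
spec:
  portalIP: 10.0.0.10    # must equal the --cluster_dns value given to the kubelet
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP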

A similar question about DNS in Kubernetes not working can be found on Stack Overflow: https://stackoverflow.com/questions/30992961/
