I am trying to get Heapster running on my Kubernetes cluster, and I am using Kube-DNS for DNS resolution.
My Kube-DNS appears to be set up correctly:
Name: kube-dns-v20-z2dd2
Namespace: kube-system
Node: 172.31.48.201/172.31.48.201
Start Time: Mon, 22 Jan 2018 09:21:49 +0000
Labels: k8s-app=kube-dns
version=v20
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Running
IP: 172.17.29.4
Controlled By: ReplicationController/kube-dns-v20
Containers:
kubedns:
Container ID: docker://13f95bdf8dee273ca18a2eee1b99fe00e5fff41279776cdef5d7e567472a39dc
Image: gcr.io/google_containers/kubedns-amd64:1.8
Image ID: docker-pullable://gcr.io/google_containers/kubedns-amd64@sha256:39264fd3c998798acdf4fe91c556a6b44f281b6c5797f464f92c3b561c8c808c
Ports: 10053/UDP, 10053/TCP
Args:
--domain=cluster.local.
--dns-port=10053
State: Running
Started: Mon, 22 Jan 2018 09:22:05 +0000
Ready: True
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
dnsmasq:
Container ID: docker://576ebc30e8f7aae13000a2d06541c165a3302376ad04c604b12803463380d9b5
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
Image ID: docker-pullable://gcr.io/google_containers/kube-dnsmasq-amd64@sha256:a722df15c0cf87779aad8ba2468cf072dd208cb5d7cfcaedd90e66b3da9ea9d2
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
--log-facility=-
State: Running
Started: Mon, 22 Jan 2018 09:22:20 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
healthz:
Container ID: docker://3367d05fb0e13c892243a4c86c74a170b0a9a2042387a70f6690ed946afda4d2
Image: gcr.io/google_containers/exechealthz-amd64:1.2
Image ID: docker-pullable://gcr.io/google_containers/exechealthz-amd64@sha256:503e158c3f65ed7399f54010571c7c977ade7fe59010695f48d9650d83488c0a
Port: 8080/TCP
Args:
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
--url=/healthz-dnsmasq
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
--url=/healthz-kubedns
--port=8080
--quiet
State: Running
Started: Mon, 22 Jan 2018 09:22:32 +0000
Ready: True
Restart Count: 0
Limits:
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-9zxzd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9zxzd
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned kube-dns-v20-z2dd2 to 172.31.48.201
Normal SuccessfulMountVolume 43m kubelet, 172.31.48.201 MountVolume.SetUp succeeded for volume "default-token-9zxzd"
Normal Pulling 43m kubelet, 172.31.48.201 pulling image "gcr.io/google_containers/kubedns-amd64:1.8"
Normal Pulled 43m kubelet, 172.31.48.201 Successfully pulled image "gcr.io/google_containers/kubedns-amd64:1.8"
Normal Created 43m kubelet, 172.31.48.201 Created container
Normal Started 43m kubelet, 172.31.48.201 Started container
Normal Pulling 43m kubelet, 172.31.48.201 pulling image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
Normal Pulled 42m kubelet, 172.31.48.201 Successfully pulled image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
Normal Created 42m kubelet, 172.31.48.201 Created container
Normal Started 42m kubelet, 172.31.48.201 Started container
Normal Pulling 42m kubelet, 172.31.48.201 pulling image "gcr.io/google_containers/exechealthz-amd64:1.2"
Normal Pulled 42m kubelet, 172.31.48.201 Successfully pulled image "gcr.io/google_containers/exechealthz-amd64:1.2"
Normal Created 42m kubelet, 172.31.48.201 Created container
Normal Started 42m kubelet, 172.31.48.201 Started container
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.254.0.2
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 172.17.29.4:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 172.17.29.4:53
Session Affinity: None
Events: <none>
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Subsets:
Addresses: 172.17.29.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
dns 53 UDP
dns-tcp 53 TCP
Events: <none>
Server: 10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
However, if I try to resolve http://monitoring-influxdb from a busybox container (outside the kube-system namespace), it does not resolve:
Server: (null)
Address 1: 127.0.0.1 localhost
Address 2: ::1 localhost
nslookup: can't resolve 'http://monitoring-influxdb': Try again
command terminated with exit code 1
nameserver 10.254.0.2
search kube-system.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
Server: 10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'http://monitoring-influxdb'
command terminated with exit code 1
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
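Note that nslookup expects a bare hostname, so the http:// prefix by itself will prevent the lookup from succeeding regardless of the DNS setup. For reference, a sketch of how these checks can be run from a busybox pod (the pod name "busybox" and its namespace are assumptions):
kubectl exec -ti busybox -- nslookup monitoring-influxdb
kubectl exec -ti busybox -- nslookup monitoring-influxdb.kube-system
kubectl exec -ti busybox -- cat /etc/resolv.conf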
Finally, here are the logs from the heapster pod. I could not find any errors in the DNS pod logs:
E0122 09:22:46.966896 1 influxdb.go:217] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp: lookup monitoring-influxdb on 10.254.0.2:53: server misbehaving, will retry on use
Any pointers are greatly appreciated.
monitoring-influxdb is in the same namespace as heapster (kube-system).
Server: (null)
Address 1: 127.0.0.1 localhost
Address 2: ::1 localhost
nslookup: can't resolve 'monitoring-influxdb.kube-system': Name does not resolve
command terminated with exit code 1
But for whatever reason, busybox is able to resolve it:
Server: 10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local
Name: monitoring-influxdb.kube-system
Address 1: 10.254.48.109 monitoring-influxdb.kube-system.svc.cluster.local
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.254.193.208 <none> 80/TCP 1h
kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard NodePort 10.254.89.241 <none> 80:32431/TCP 1h
monitoring-grafana ClusterIP 10.254.176.96 <none> 80/TCP 1h
monitoring-influxdb ClusterIP 10.254.48.109 <none> 8083/TCP,8086/TCP 1h
NAME ENDPOINTS AGE
heapster 172.17.29.7:8082 1h
kube-controller-manager <none> 1h
kube-dns 172.17.29.6:53,172.17.29.6:53 1h
kubernetes-dashboard 172.17.29.5:9090 1h
monitoring-grafana 172.17.29.3:3000 1h
monitoring-influxdb 172.17.29.3:8086,172.17.29.3:8083 1h
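For completeness, the service and endpoint listings above can be reproduced with commands along these lines (assuming everything is deployed in the kube-system namespace):
kubectl get svc -n kube-system
kubectl get endpoints -n kube-system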
Best Answer
In Kubernetes, a service can be resolved by its name alone only from within the same namespace.
Services can also be reached through a DNS name of the form:
<service name>.<namespace>
It is not clear from your question which namespace influxdb is deployed in, but try the form above.
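As a concrete sketch of that suggestion: from a pod in another namespace, the service can be reached by qualifying its name with the namespace, or by the fully qualified cluster DNS name:
nslookup monitoring-influxdb.kube-system
nslookup monitoring-influxdb.kube-system.svc.cluster.local
Likewise, if heapster is not deployed in the same namespace as influxdb, its sink URL needs the qualified name as well (typical heapster deployments pass the sink as a container argument; the exact manifest may differ):
--sink=influxdb:http://monitoring-influxdb.kube-system.svc.cluster.local:8086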
Regarding "linux - Cannot resolve monitoring-influxdb on Kubernetes with heapster and kube-dns", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/48379169/