kubernetes - Unable to access the Kubernetes Dashboard

I have created a Kubernetes v1.3.3 cluster on CoreOS based on the contrib repo. My cluster appears healthy, and I would like to use the Dashboard, but I cannot access the UI even with all authentication disabled. Below are details of the kubernetes-dashboard components, plus some API server configuration/output. What am I missing here?

Dashboard components

core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP

core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            k8s-app=kubernetes-dashboard
                   kubernetes.io/cluster-service=true
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                172.20.164.194
Port:              <unset> 80/TCP
Endpoints:         172.16.100.9:9090
Session Affinity:  None
No events.

core@ip-10-178-153-240 ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z

API server configuration
/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws
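
The accepted answer below mentions basic authentication on the API server. For reference, a minimal sketch of enabling it alongside the flags above (the file path and credentials are hypothetical placeholders; --basic-auth-file expects one password,user,uid entry per line):

/opt/bin/kube-apiserver <existing flags above> --basic-auth-file=/etc/kubernetes/basic-auth.csv

# /etc/kubernetes/basic-auth.csv (hypothetical contents)
mypassword,admin,1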

API server is reachable from a remote host (laptop)
$ curl http://10.178.153.240:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}
UI is not reachable remotely
$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'
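
As a debugging workaround, kubectl port-forward may succeed here even though the /ui proxy path does not, because it tunnels through the kubelet on the pod's node instead of having the master dial the pod IP over the overlay (a sketch; running it from the laptop against this cluster and the local port 9090 are assumptions):

$ kubectl port-forward kubernetes-dashboard-v1.1.0-nog8g 9090:9090 --namespace=kube-system
$ curl http://localhost:9090/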

UI is reachable from the minion node
core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
<!doctype html> <html ng-app="kubernetesDashboard">...

API server route table
core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1

Minion (where the pod runs) route table
core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1

Flannel logs
These routes do not look consistent with flannel's expected behavior. I am getting the errors below in the logs, and restarting the daemon does not seem to resolve them.
...Watch subnets: client: etcd cluster is unavailable or misconfigured

... L3 miss: 172.16.100.9

... calling NeighSet: 172.16.100.9
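
The first error suggests flanneld cannot read its subnet leases from etcd, which would explain why the L3 miss for 172.16.100.9 never gets resolved into a working VXLAN route. A quick check, assuming flannel's default etcd prefix /coreos.com/network and the same etcd endpoint the API server uses (both assumptions):

core@ip-10-178-153-240 ~ $ etcdctl --endpoints http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 ls /coreos.com/network/subnets
core@ip-10-178-153-240 ~ $ cat /run/flannel/subnet.env

If the first command fails, or no lease for 172.16.100.0/24 is listed, flanneld on the master has no way to map that subnet to the minion's VXLAN endpoint.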

Best answer

You must expose the service outside the cluster using a service of type NodePort, as mentioned in the previous answer, or, if you have basic authentication enabled on the API server, you can reach the service with the following URL:
http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name
See also: http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls
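
A minimal NodePort sketch for this particular dashboard, reusing the selector and ports from the Service shown in the question (the name and the nodePort value 30090 are arbitrary choices; nodePort must fall in the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
    protocol: TCP

After a kubectl create -f on that manifest, the UI would be served at http://<node-ip>:30090/. Note that the apiserver proxy URL above still requires the master to reach the pod IP over the overlay network, so it will keep timing out until the flannel/etcd problem is fixed, whereas hitting the NodePort on the node where the pod runs (ip-10-178-153-57) avoids the broken overlay hop entirely.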

Regarding "kubernetes - Unable to access the Kubernetes Dashboard", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38733801/
