I am trying to deploy an etcd + flanneld + kubernetes cluster on CentOS machines. etcd and flanneld are running fine, but kubernetes is not.
My environment:
coreos05: CentOS7 - 192.168.0.114
coreos08: CentOS7 - 192.168.2.57
[root@coreos05 ~]# etcdctl -C 192.168.0.114:4001 member list
e83ffc60b9b71862: name=coreos05 peerURLs=http://coreos05:2380,http://coreos05:7001 clientURLs=http://192.168.0.114:2379,http://192.168.0.114:4001
f877fb31ab0f7105: name=coreos08 peerURLs=http://coreos08:2380,http://coreos08:7001 clientURLs=http://192.168.2.57:2379,http://192.168.2.57:4001
[root@coreos05 ~]# etcdctl -C 192.168.2.57:4001 member list
e83ffc60b9b71862: name=coreos05 peerURLs=http://coreos05:2380,http://coreos05:7001 clientURLs=http://192.168.0.114:2379,http://192.168.0.114:4001
f877fb31ab0f7105: name=coreos08 peerURLs=http://coreos08:2380,http://coreos08:7001 clientURLs=http://192.168.2.57:2379,http://192.168.2.57:4001
[root@coreos05 ~]# netstat -putona | egrep 'etcd|flanneld' |grep 2.57
tcp 0 0 192.168.0.114:4001 192.168.2.57:42996 ESTABLISHED 16288/etcd keepalive (14,65/0/0)
tcp 0 0 192.168.0.114:2380 192.168.2.57:32817 ESTABLISHED 16288/etcd off (0.00/0/0)
[root@coreos05 ~]#
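As an additional sanity check (not in the original post; this uses the same etcdctl v2 syntax as the commands above), the overall health of the etcd cluster can be queried directly:
# Reports whether each member is healthy and reachable
[root@coreos05 ~]# etcdctl -C 192.168.0.114:4001 cluster-health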
[root@coreos05 ~]# for SERVICES in etcd flanneld kube-apiserver kube-controller-manager kube-scheduler; do systemctl status $SERVICES ; done
etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled)
Active: active (running) since mar 2015-05-12 11:54:16 CEST; 33min ago
Main PID: 16590 (etcd)
CGroup: /system.slice/etcd.service
└─16590 /usr/bin/etcd
may 12 11:54:16 coreos05 etcd[16590]: 2015/05/12 11:54:16 raft: e83ffc60b9b71862 became follower at term 46
may 12 11:54:16 coreos05 etcd[16590]: 2015/05/12 11:54:16 raft: newRaft e83ffc60b9b71862 [peers: [], term: 46, commit: 5235, applied: 0, lastindex: 5235, lastterm: 46]
may 12 11:54:16 coreos05 etcd[16590]: 2015/05/12 11:54:16 etcdserver: added local member e83ffc60b9b71862 [http://coreos05:2380 http://coreos05:7001] to cluster 85bb0f76f652d0f6
may 12 11:54:16 coreos05 etcd[16590]: 2015/05/12 11:54:16 etcdserver: added member f877fb31ab0f7105 [http://coreos08:2380 http://coreos08:7001] to cluster 85bb0f76f652d0f6
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 raft: e83ffc60b9b71862 [term: 46] received a MsgVote message with higher term from f877fb31ab0f7105 [term: 47]
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 raft: e83ffc60b9b71862 became follower at term 47
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 raft: e83ffc60b9b71862 [logterm: 46, index: 5235, vote: 0] voted for f877fb31ab0f7105 [logterm: 46, index: 5235] at term 47
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 raft.node: e83ffc60b9b71862 elected leader f877fb31ab0f7105 at term 47
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 rafthttp: starting client stream to f877fb31ab0f7105 at term 47
may 12 11:54:17 coreos05 etcd[16590]: 2015/05/12 11:54:17 etcdserver: published {Name:coreos05 ClientURLs:[http://192.168.0.114:2379 http://192.168.0.114:4001]} to cluster 85bb0f76f652d0f6
flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled)
Active: active (running) since mar 2015-05-12 11:54:17 CEST; 33min ago
Main PID: 16611 (flanneld)
CGroup: /system.slice/flanneld.service
└─16611 /usr/bin/flanneld -etcd-endpoints=http://192.168.0.114:4001 -etcd-prefix=/kuberdock/network/ --iface=enp3s0
may 12 11:54:17 coreos05 systemd[1]: Starting Flanneld overlay address etcd agent...
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.024119 16611 main.go:247] Installing signal handlers
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.025078 16611 main.go:205] Using 192.168.0.114 as external interface
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.868493 16611 subnet.go:83] Subnet lease acquired: 10.10.93.0/24
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.869081 16611 main.go:215] UDP mode initialized
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.869106 16611 udp.go:239] Watching for new subnet leases
may 12 11:54:17 coreos05 flanneld[16611]: I0512 11:54:17.871602 16611 udp.go:264] Subnet added: 10.10.65.0/24
may 12 11:54:17 coreos05 systemd[1]: Started Flanneld overlay address etcd agent.
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
Drop-In: /etc/systemd/system/kube-apiserver.service.d
└─pre-start.conf
Active: active (running) since mar 2015-05-12 11:54:17 CEST; 33min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 16690 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─16690 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://coreos05:4001 --address=0.0.0.0 --port=8080 --kubelet_port=10250 --allow_privileged=false --portal_net=10.10.0.0/16 --admission_control=Namespac...
may 12 11:54:17 coreos05 kube-apiserver[16690]: E0512 11:54:17.985524 16690 reflector.go:123] Failed to list *api.Namespace: Get http://0.0.0.0:8080/api/v1beta3/namespaces: dial tcp 0.0.0.0:8080: connection refused
may 12 11:54:17 coreos05 kube-apiserver[16690]: I0512 11:54:17.986149 16690 master.go:236] Will report 192.168.0.114 as public IP address.
may 12 11:54:17 coreos05 kube-apiserver[16690]: E0512 11:54:17.987132 16690 reflector.go:123] Failed to list *api.LimitRange: Get http://0.0.0.0:8080/api/v1beta3/limitranges: dial tcp 0.0.0.0:8080: connection refused
may 12 11:54:17 coreos05 kube-apiserver[16690]: E0512 11:54:17.987437 16690 reflector.go:123] Failed to list *api.ResourceQuota: Get http://0.0.0.0:8080/api/v1beta3/resourcequotas: dial tcp 0.0.0.0:8080: connection refused
may 12 11:54:18 coreos05 kube-apiserver[16690]: [restful] 2015/05/12 11:54:18 log.go:30: [restful/swagger] listing is available at https://192.168.0.114:6443/swaggerapi/
may 12 11:54:18 coreos05 kube-apiserver[16690]: [restful] 2015/05/12 11:54:18 log.go:30: [restful/swagger] https://192.168.0.114:6443/swaggerui/ is mapped to folder /swagger-ui/
may 12 11:54:18 coreos05 kube-apiserver[16690]: I0512 11:54:18.093361 16690 server.go:353] Serving read-only insecurely on 0.0.0.0:7080
may 12 11:54:18 coreos05 kube-apiserver[16690]: I0512 11:54:18.093784 16690 server.go:390] Serving securely on 0.0.0.0:6443
may 12 11:54:18 coreos05 kube-apiserver[16690]: I0512 11:54:18.100679 16690 server.go:418] Serving insecurely on 0.0.0.0:8080
may 12 11:54:18 coreos05 kube-apiserver[16690]: I0512 11:54:18.925329 16690 server.go:400] Using self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
Active: active (running) since mar 2015-05-12 11:54:18 CEST; 33min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 16714 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─16714 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --machines=coreos08
may 12 12:26:48 coreos05 kube-controller-manager[16714]: I0512 12:26:48.282325 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:26:48.282313291 +0200 CEST is later than 2015-05-12 12:26:48.282311109 +0200 CEST + 4m20s
may 12 12:26:53 coreos05 kube-controller-manager[16714]: I0512 12:26:53.468254 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:26:53.468242266 +0200 CEST is later than 2015-05-12 12:26:53.468240541 +0200 CEST + 4m20s
may 12 12:26:58 coreos05 kube-controller-manager[16714]: I0512 12:26:58.677179 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:26:58.677166286 +0200 CEST is later than 2015-05-12 12:26:58.67716449 +0200 CEST + 4m20s
may 12 12:27:03 coreos05 kube-controller-manager[16714]: I0512 12:27:03.778387 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:03.778376111 +0200 CEST is later than 2015-05-12 12:27:03.778374466 +0200 CEST + 4m20s
may 12 12:27:08 coreos05 kube-controller-manager[16714]: I0512 12:27:08.879548 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:08.879537205 +0200 CEST is later than 2015-05-12 12:27:08.879535608 +0200 CEST + 4m20s
may 12 12:27:13 coreos05 kube-controller-manager[16714]: I0512 12:27:13.980986 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:13.980974374 +0200 CEST is later than 2015-05-12 12:27:13.980972639 +0200 CEST + 4m20s
may 12 12:27:19 coreos05 kube-controller-manager[16714]: I0512 12:27:19.574960 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:19.574947254 +0200 CEST is later than 2015-05-12 12:27:19.574945586 +0200 CEST + 4m20s
may 12 12:27:24 coreos05 kube-controller-manager[16714]: I0512 12:27:24.699798 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:24.699787548 +0200 CEST is later than 2015-05-12 12:27:24.699785704 +0200 CEST + 4m20s
may 12 12:27:29 coreos05 kube-controller-manager[16714]: I0512 12:27:29.876981 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:29.876968588 +0200 CEST is later than 2015-05-12 12:27:29.876966413 +0200 CEST + 4m20s
may 12 12:27:34 coreos05 kube-controller-manager[16714]: I0512 12:27:34.988483 16714 nodecontroller.go:504] Evicting pods2: 2015-05-12 12:27:34.988471519 +0200 CEST is later than 2015-05-12 12:27:34.988469853 +0200 CEST + 4m20s
kube-scheduler.service - Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
Active: active (running) since mar 2015-05-12 11:54:18 CEST; 33min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 16734 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─16734 /usr/bin/kube-scheduler --logtostderr=true --v=0
may 12 11:54:18 coreos05 systemd[1]: kube-scheduler.service: main process exited, code=exited, status=2/INVALIDARGUMENT
may 12 11:54:18 coreos05 systemd[1]: Unit kube-scheduler.service entered failed state.
may 12 11:54:18 coreos05 systemd[1]: Starting Kubernetes Scheduler Plugin...
may 12 11:54:18 coreos05 systemd[1]: Started Kubernetes Scheduler Plugin.
may 12 11:54:18 coreos05 kube-scheduler[16734]: W0512 11:54:18.139880 16734 server.go:83] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
may 12 12:09:18 coreos05 kube-scheduler[16734]: E0512 12:09:18.150197 16734 reflector.go:158] watch of *api.Service ended with: very short watch
may 12 12:09:18 coreos05 kube-scheduler[16734]: E0512 12:09:18.156710 16734 reflector.go:158] watch of *api.Node ended with: very short watch
may 12 12:24:19 coreos05 kube-scheduler[16734]: E0512 12:24:19.154734 16734 reflector.go:158] watch of *api.Service ended with: very short watch
may 12 12:24:19 coreos05 kube-scheduler[16734]: E0512 12:24:19.160947 16734 reflector.go:158] watch of *api.Node ended with: very short watch
Failed to list *api.Namespace: Get http://0.0.0.0:8080/api/v1beta3/namespaces: dial tcp 0.0.0.0:8080: connection refused
[root@coreos05 ~]# kubectl get node
NAME LABELS STATUS
coreos08 <none> NotReady
Best answer
I can think of three possible scenarios in which you might be hitting this problem. Whenever I see one or more nodes in the NotReady state, it usually turns out that DNS is misconfigured, the network plugin is not running, or the kubelet is not running.
Since I can see you are using flannel for networking and it is working fine, the problem is most likely one of the other two causes.
Try the following to pinpoint the problem:
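A minimal set of checks (a sketch, not part of the original answer; the node and unit names assume the setup above) is to confirm the kubelet is actually running on the worker, and to read the node's reported condition from the master:
# On the worker (coreos08): is the kubelet running, and what is it logging?
[root@coreos08 ~]# systemctl status kubelet
[root@coreos08 ~]# journalctl -u kubelet --no-pager -n 50
# From the master: the Conditions section explains why a node is NotReady
[root@coreos05 ~]# kubectl describe node coreos08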
To rule out DNS, check cluster DNS from a busybox pod (output truncated in the original source):
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes
Server:    10.100.0.10
Address 1: 1
About kubernetes - Kubernetes: kubectl node01 not ready yet, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30190086/