I'm a Kubernetes newbie. I'm trying to follow a few how-tos to get a small cluster up and running, but I'm running into trouble...
I have one master and (4) nodes, all running Ubuntu 16.04.
Install Docker on all the nodes:
$ sudo apt-get update
$ sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
$ sudo apt-get update && sudo apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
$ sudo docker version
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:40 2018
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:16:13 2018
OS/Arch: linux/amd64
Experimental: false
$ sudo swapoff -a
$ sudo vi /etc/fstab    # comment out the swap entry so swap stays off across reboots
$ sudo mount -a
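For reference, the fstab edit can be done non-interactively too; a sketch that comments out any swap entry (the pattern is a guess at the fstab layout, so check the result before rebooting):
$ sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab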
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubeadm kubectl
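kubelet gets pulled in as a dependency of kubeadm. It also helps to pin all three packages so an unattended upgrade doesn't move the cluster out from under you:
$ sudo apt-mark hold kubelet kubeadm kubectl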
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4",
GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean",
BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
$ sudo groupadd --system etcd
$ sudo useradd --home-dir "/var/lib/etcd" \
--system \
--shell /bin/false \
-g etcd \
etcd
$ sudo mkdir -p /etc/etcd
$ sudo chown etcd:etcd /etc/etcd
$ sudo mkdir -p /var/lib/etcd
$ sudo chown etcd:etcd /var/lib/etcd
$ sudo rm -rf /tmp/etcd && mkdir -p /tmp/etcd
$ sudo curl -L https://github.com/coreos/etcd/releases/download/v3.3.0/etcd-v3.3.0-linux-amd64.tar.gz -o /tmp/etcd-3.3.0-linux-amd64.tar.gz
$ sudo tar xzvf /tmp/etcd-3.3.0-linux-amd64.tar.gz -C /tmp/etcd --strip-components=1
$ sudo cp /tmp/etcd/etcd /usr/bin/etcd
$ sudo cp /tmp/etcd/etcdctl /usr/bin/etcdctl
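A quick sanity check that the copied binaries actually run:
$ etcd --version
$ ETCDCTL_API=3 etcdctl version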
$ sudo ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr 1e:00:51:00:00:28
inet addr:172.20.43.30 Bcast:172.20.43.255 Mask:255.255.254.0
inet6 addr: fe80::27b5:3d06:94c9:9d0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3194023 errors:0 dropped:0 overruns:0 frame:0
TX packets:3306456 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:338523846 (338.5 MB) TX bytes:3682444019 (3.6 GB)
$ sudo kubeadm init --pod-network-cidr=172.20.43.0/16 \
--apiserver-advertise-address=172.20.43.30 \
--ignore-preflight-errors=cri \
--kubernetes-version stable-1.9
[init] Using Kubernetes version: v1.9.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [jenkins-kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.20.43.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 37.502640 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node jenkins-kube-master as master by adding a label and a taint
[markmaster] Master jenkins-kube-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 6be4b1.9a8dacf89f71e53c
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 6be4b1.9a8dacf89f71e53c 172.20.43.30:6443 --discovery-token-ca-cert-hash sha256:524d29b032d7bfd319b147ab03a936bd429805258425bccca749de71bcb1efaf
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=$HOME/.kube/config
$ echo "export KUBECONFIG=$HOME/.kube/config" | tee -a ~/.bashrc
$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
clusterrole "flannel" configured
clusterrolebinding "flannel" configured
$ sudo kubeadm join --token 6be4b1.9a8dacf89f71e53c 172.20.43.30:6443 \
--discovery-token-ca-cert-hash sha256:524d29b032d7bfd319b147ab03a936bd429805258425bccca749de71bcb1efaf \
--ignore-preflight-errors=cri
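Aside: bootstrap tokens expire after 24 hours by default, so if a node joins later, a fresh token and the CA cert hash can be regenerated on the master (the openssl pipeline is the one from the kubeadm reference docs):
$ sudo kubeadm token create
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'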
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"https:kubernetes- dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
}
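The 503 just means the kubernetes-dashboard Service has no ready endpoints, i.e. the pod behind it is not healthy, as the listing below shows. For reference, the URL being requested through the proxy (for the standard recommended deployment) is:
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/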
$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default guids-74487d79cf-zsj8q 1/1 Running 0 4h
kube-system etcd-jenkins-kube-master 1/1 Running 1 21h
kube-system kube-apiserver-jenkins-kube-master 1/1 Running 1 21h
kube-system kube-controller-manager-jenkins-kube-master 1/1 Running 2 21h
kube-system kube-dns-6f4fd4bdf-7pr9q 3/3 Running 0 1d
kube-system kube-flannel-ds-pvk8m 1/1 Running 0 4h
kube-system kube-flannel-ds-q4fsl 1/1 Running 0 4h
kube-system kube-flannel-ds-qhxn6 1/1 Running 0 21h
kube-system kube-flannel-ds-tkspz 1/1 Running 0 4h
kube-system kube-flannel-ds-vgqsb 1/1 Running 0 4h
kube-system kube-proxy-7np9b 1/1 Running 0 4h
kube-system kube-proxy-9lx8h 1/1 Running 1 1d
kube-system kube-proxy-f46d8 1/1 Running 0 4h
kube-system kube-proxy-fdtx9 1/1 Running 0 4h
kube-system kube-proxy-kmnjf 1/1 Running 0 4h
kube-system kube-scheduler-jenkins-kube-master 1/1 Running 1 21h
kube-system kubernetes-dashboard-5bd6f767c7-xf42n 0/1 CrashLoopBackOff 53 4h
$ sudo kubectl logs kubernetes-dashboard-5bd6f767c7-xf42n --namespace=kube-system
2018/03/20 17:56:25 Starting overwatch
2018/03/20 17:56:25 Using in-cluster config to connect to apiserver
2018/03/20 17:56:25 Using service account token for csrf signing
2018/03/20 17:56:25 No request provided. Skipping authorization
2018/03/20 17:56:55 Error while initializing connection to Kubernetes apiserver.
This most likely means that the cluster is misconfigured (e.g., it has invalid
apiserver certificates or service accounts configuration) or the
--apiserver-host param points to a server that does not exist.
Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
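To confirm this is generic pod-to-Service connectivity rather than anything dashboard-specific, a throwaway pod can probe the same address; a sketch, assuming the busybox image is pullable:
$ kubectl run net-test -it --rm --restart=Never --image=busybox -- \
    nc -zv -w 5 10.96.0.1 443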
I've put the output of sudo kubectl describe pod --namespace=kube-system on pastebin:
Best answer
The 10.96.0.1 address the dashboard is timing out against comes straight from the apiserver's default --service-cluster-ip-range=10.96.0.0/12: .1 in the Service CIDR is always kubernetes (IIRC kube-dns also gets a fairly low IP allocation, but I can't recall whether it is always fixed the way the kubernetes one is). You will want to update the flannel ConfigMap (err, now that the network has already been pushed into etcd, you are in dangerous territory) to keep it aligned with the Service and Pod CIDRs specified for your apiserver.
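The setting to compare lives in the kube-flannel-cfg ConfigMap created above: the Network value inside net-conf.json has to match the --pod-network-cidr handed to kubeadm init (the stock manifest ships 10.244.0.0/16, not the 172.20.43.0/16 used here). A sketch of where to look:
$ kubectl -n kube-system get configmap kube-flannel-cfg -o yaml
# net-conf.json should carry a "Network" matching the pod CIDR, e.g.
#   { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
$ kubectl -n kube-system edit configmap kube-flannel-cfg
After editing, the kube-flannel-ds pods have to be recreated to pick the change up; given how young this cluster is, a kubeadm reset followed by a fresh kubeadm init with matching CIDRs is arguably the safer route.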
Regarding "docker - CrashLoopBackOff on kubernetes-dashboard", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49457053/