I am using microk8s on Ubuntu. I am trying to run a simple hello-world program, but I get the following error when the pod is created:

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy

Here are the manifests:
apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
        - containerPort: 50051
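I apply them with a command like the following (the file name is just what I call it locally):

microk8s.kubectl apply -f grpc-hello.yaml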
This is what I get when I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
Normal Started 30s kubelet, azeem-ubuntu Started container python-grpc-hello
Normal Pulled 30s kubelet, azeem-ubuntu Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
Normal Created 30s kubelet, azeem-ubuntu Created container python-grpc-hello
Normal Pulled 12s (x3 over 31s) kubelet, azeem-ubuntu Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
Normal Created 12s (x3 over 31s) kubelet, azeem-ubuntu Created container esp
Normal Started 12s (x3 over 30s) kubelet, azeem-ubuntu Started container esp
Warning MissingClusterDNS 8s (x10 over 31s) kubelet, azeem-ubuntu pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning BackOff 8s (x2 over 23s) kubelet, azeem-ubuntu Back-off restarting failed container
I have also created kube-dns, but I do not know why it still does not work. The kube-dns pod is running, in the kube-system namespace:
NAME READY STATUS RESTARTS AGE
kube-dns-6dbd676f7-dfbjq 3/3 Running 0 22m
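(A listing like the one above comes from a command along these lines; the exact flags are my assumption, with the label selector matching the manifest below:)

microk8s.kubectl get pods -n kube-system -l k8s-app=kube-dns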
Here is the content of the kube-dns manifest I created:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
  # Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
Best Answer
You have not said how you deployed kube-dns, but with microk8s the recommended setup is CoreDNS. You should not deploy kube-dns or CoreDNS yourself; instead, enable DNS with the command microk8s.enable dns, which deploys CoreDNS and configures the kubelet's cluster DNS for you.
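A minimal sketch of that approach (the rollout restart of the grpc-hello deployment is an extra step I am assuming, so that already-created pods are recreated with the cluster DNS settings):

# Enable the DNS addon; this deploys CoreDNS and configures the kubelet's cluster DNS
microk8s.enable dns

# Check that the CoreDNS pod is up in kube-system
microk8s.kubectl get pods -n kube-system

# Recreate the workload's pods so they pick up the new DNS configuration (assumed step)
microk8s.kubectl rollout restart deployment/grpc-hello

After enabling the addon, the kubelet arguments (under /var/snap/microk8s/current/args/kubelet on a snap install) should include a --cluster-dns entry, which is exactly what the MissingClusterDNS warning says is absent.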
Regarding "kubernetes - kubelet does not have ClusterDNS IP configured in microk8s", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59550564/