
Docker microservice application restarting over and over in Kubernetes


I am trying to run a microservices application on Kubernetes. I run RabbitMQ, Elasticsearch, and the Eureka discovery service on Kubernetes. In addition to those, I have three microservice applications. When I run two of them, everything works fine; however, when I start the third one, all of them begin restarting over and over for no apparent reason.

One of my configuration files:

apiVersion: v1
kind: Service
metadata:
  name: hrm
  labels:
    app: suite
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 30001
  selector:
    app: suite
    tier: hrm-core
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hrm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: suite
        tier: hrm-core
    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
      imagePullSecrets:
      - name: regsecret

Output of kubectl describe pod hrm:

State:          Running
  Started:      Mon, 12 Jun 2017 12:08:28 +0300
Last State:     Terminated
  Reason:       Error
  Exit Code:    137
  Started:      Mon, 01 Jan 0001 00:00:00 +0000
  Finished:     Mon, 12 Jun 2017 12:07:05 +0300
Ready:          True
Restart Count:  5

18m  18m  1  kubelet, minikube  Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)"
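Exit code 137 is 128 + 9, i.e. the container was terminated with SIGKILL. On a single-node minikube VM that is most often the kernel OOM killer or the kubelet killing the container under memory pressure, rather than an application crash. A few standard commands that can help confirm this (the pod and node names are taken from the output above; this is a suggested check, not something from the original question):

kubectl describe pod hrm-3288407936-cwvgz   # look for Reason: OOMKilled and back-off events
kubectl get events                          # node-level warnings such as SystemOOM
kubectl describe node minikube              # memory capacity, allocatable, and pressure conditions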

Output of kubectl get pods:

NAME                        READY     STATUS    RESTARTS   AGE
discserv-189146465-s599x    1/1       Running   0          2d
esearch-3913228203-9sm72    1/1       Running   0          2d
hrm-3288407936-cwvgz        1/1       Running   6          46m
parabot-1262887100-6098j    1/1       Running   9          2d
rabbitmq-279796448-9qls3    1/1       Running   0          2d
suite-ui-1725964700-clvbd   1/1       Running   3          2d

kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}

minikube version:

minikube version: v0.18.0

When I look at the pod logs, there are no errors. The application appears to start up without any problems. What could be the issue here?
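Note that for a pod in CrashLoopBackOff, kubectl logs shows the freshly restarted container, which can look perfectly healthy. The logs of the previous, crashed instance can be retrieved with the --previous flag (pod name assumed from the listing above):

kubectl logs hrm-3288407936-cwvgz --previous

If those logs also end abruptly without a stack trace, that again suggests the process was killed from outside (for example by the OOM killer) rather than exiting on its own.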

Edit: output of kubectl get events:

19m        19m         1         discserv-189146465-lk3sm    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Pulling kubelet, minikube pulling image "private repo"
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Created kubelet, minikube Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Started kubelet, minikube Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m 19m 1 esearch-3913228203-6l3t7 Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Pulled kubelet, minikube Container image "elasticsearch:2.4" already present on machine
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Created kubelet, minikube Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Started kubelet, minikube Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
18m 18m 1 hrm-3288407936-d2vhh Pod Normal Scheduled default-scheduler Successfully assigned hrm-3288407936-d2vhh to minikube
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Pulling kubelet, minikube pulling image "private repo"
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Created kubelet, minikube Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Started kubelet, minikube Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m 18m 1 hrm-3288407936 ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: hrm-3288407936-d2vhh
18m 18m 1 hrm Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set hrm-3288407936 to 1
19m 19m 1 minikube Node Normal RegisteredNode controllermanager Node minikube event: Registered Node minikube in NodeController
19m 19m 1 minikube Node Normal Starting kubelet, minikube Starting kubelet.
19m 19m 1 minikube Node Warning ImageGCFailed kubelet, minikube unable to find data for container /
19m 19m 1 minikube Node Normal NodeAllocatableEnforced kubelet, minikube Updated Node Allocatable limit across pods
19m 19m 1 minikube Node Normal NodeHasSufficientDisk kubelet, minikube Node minikube status is now: NodeHasSufficientDisk
19m 19m 1 minikube Node Normal NodeHasSufficientMemory kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
19m 19m 1 minikube Node Normal NodeHasNoDiskPressure kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
19m 19m 1 minikube Node Warning Rebooted kubelet, minikube Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300
19m 19m 1 minikube Node Normal NodeNotReady kubelet, minikube Node minikube status is now: NodeNotReady
19m 19m 1 minikube Node Normal Starting kube-proxy, minikube Starting kube-proxy.
19m 19m 1 minikube Node Normal NodeReady kubelet, minikube Node minikube status is now: NodeReady
8m 8m 1 minikube Node Warning SystemOOM kubelet, minikube System OOM encountered
18m 18m 1 parabot-1262887100-r84kf Pod Normal Scheduled default-scheduler Successfully assigned parabot-1262887100-r84kf to minikube
8m 18m 2 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Pulling kubelet, minikube pulling image "private repo"
8m 18m 2 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
18m 18m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Created kubelet, minikube Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
18m 18m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Started kubelet, minikube Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
8m 8m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Created kubelet, minikube Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
8m 8m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Started kubelet, minikube Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
18m 18m 1 parabot-1262887100 ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: parabot-1262887100-r84kf
18m 18m 1 parabot Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set parabot-1262887100 to 1
19m 19m 1 rabbitmq-279796448-pcqqh Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Pulling kubelet, minikube pulling image "rabbitmq"
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Pulled kubelet, minikube Successfully pulled image "rabbitmq"
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Created kubelet, minikube Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Started kubelet, minikube Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m 19m 1 suite-ui-1725964700-ssshn Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Pulling kubelet, minikube pulling image "private repo"
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Created kubelet, minikube Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Started kubelet, minikube Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a

Best answer

Check the kubectl events and logs for any obvious errors. In this case, as suspected, it looks like a resource shortage (or a service with a resource leak). If possible, try increasing the resources and see if that helps.
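As a minimal sketch of what increasing resources could look like here: give the container explicit requests and limits inside the Deployment's pod template (the values below are placeholders chosen for illustration, not measured requirements), and make sure the minikube VM itself has enough memory for all of the pods.

      # fragment of spec.template.spec in the hrm Deployment shown in the question
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
        resources:
          requests:
            memory: "512Mi"   # placeholder; tune to the service's real footprint
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"

The VM size can be raised when minikube is (re)created, for example minikube start --memory=4096; with the default VM size, running Elasticsearch, RabbitMQ, Eureka, and several JVM services on one node can easily produce the SystemOOM event seen in the kubectl get events output.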

Regarding the Docker microservice application restarting over and over in Kubernetes, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44495957/
