
hadoop - Docker container runs standalone but fails in Kubernetes


I have a Docker container (a Hadoop setup, https://github.com/kiwenlau/hadoop-cluster-docker) that I can run without any problem using sudo docker run -itd -p 50070:50070 -p 8088:8088 --name hadoop-master kiwenlau/hadoop:1.0, but when I try to deploy the same image to Kubernetes, the pod fails to start. To create the deployment I used kubectl run hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070.
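For reference, the same deployment expressed as a manifest would look roughly like the sketch below (untested; the labels and the apps/v1 apiVersion are my own choices):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hadoop-master
  template:
    metadata:
      labels:
        app: hadoop-master
    spec:
      containers:
      - name: hadoop-master
        image: kiwenlau/hadoop:1.0
        ports:
        - containerPort: 8088    # YARN ResourceManager web UI
        - containerPort: 50070   # HDFS NameNode web UI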

Here is the output of kubectl describe pod:

Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 default-scheduler Normal Scheduled Successfully assigned hadoop-master-2828539450-rnwsd to gke-mtd-cluster-default-pool-6b97d4d0-hcbt
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37
6m 6m 2 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 10s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f
6m 6m 2 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44
5m 5m 3 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e
5m 3m 7 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

3m 3m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f
3m 3m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f
3m 1m 12 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

6m 50s 7 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Pulled Container image "kiwenlau/hadoop:1.0" already present on machine
50s 50s 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d
50s 50s 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d
6m 0s 31 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Warning BackOff Back-off restarting failed container
49s 0s 5 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"

kubectl logs output:

kubectl logs hadoop-master-2828539450-rnwsd
* Starting OpenBSD Secure Shell server sshd
...done.

Note that the Docker container itself does not start Hadoop. To start it, I have to connect to the container and launch Hadoop manually, but for now I simply want to be able to run the container in K8s at all.

Thanks

Best Answer

The equivalent command in Kubernetes is:

kubectl run -it hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070
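The -it flags matter here: they keep stdin open and allocate a TTY, which is what kept the trailing bash in the image's CMD alive when you ran docker run -itd. In a manifest, the same effect comes from the stdin and tty fields on the container (a sketch, not specific to this image):

      containers:
      - name: hadoop-master
        image: kiwenlau/hadoop:1.0
        stdin: true   # keep stdin open, like docker run -i
        tty: true     # allocate a pseudo-TTY, like docker run -t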

That said, this is not really a Kubernetes problem; the issue is in the Dockerfile.

A docker container exits when its main process finishes.

With CMD [ "sh", "-c", "service ssh start; bash"], the SSH service is started in the background and bash becomes the container's main process; when bash exits (as it does immediately when no TTY is attached), the main process finishes and the container stops.
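You can reproduce the difference with Docker alone (a quick sketch; the container names are arbitrary):

# exits almost immediately: without -it, bash has no TTY/stdin and returns right away
sudo docker run -d --name hadoop-test kiwenlau/hadoop:1.0
sudo docker ps -a   # the container shows up as Exited

# stays up: -itd keeps stdin open and allocates a TTY, so bash keeps running
sudo docker run -itd --name hadoop-master kiwenlau/hadoop:1.0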

Instead, the CMD should end with an executable script/program that keeps running in the foreground, such as ~/start-hadoop.sh; see the sketch below.
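One way to fix the image (a sketch; I have not rebuilt this particular image, and it assumes ~/start-hadoop.sh returns after launching the daemons) is to end the CMD with something that stays in the foreground instead of an interactive bash:

# last line of the Dockerfile: start sshd, launch Hadoop, then keep a
# foreground process alive so the container's main process never exits
CMD ["sh", "-c", "service ssh start; ~/start-hadoop.sh; tail -f /dev/null"]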

FYI, you normally don't need to SSH into a container at all, since docker exec -it some_container bash is usually enough.
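The Kubernetes equivalent, once the pod stays up, is kubectl exec (the pod name below is the one from your describe output):

kubectl exec -it hadoop-master-2828539450-rnwsd -- bash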

Regarding "hadoop - Docker container runs standalone but fails in Kubernetes", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46523406/
