
Kubernetes stopped working after upgrade

Reposted · Author: 行者123 · Updated: 2023-12-02 03:01:41

I'm on Ubuntu, and today I upgraded the Kubernetes packages using apt-get upgrade / apt-get update. Since then Kubernetes has stopped working: none of the services are running and the api-server does not start. When I run any command I get the following error:

root:~# kubectl get pods
The connection to the server 172.31.139.86:6443 was refused - did you specify the right host or port?

root:~# kubelet --version
Kubernetes v1.7.3

root:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 172.31.139.86:6443 was refused - did you specify the right host or port?
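Note the version skew in the output above: kubectl is still at v1.6.0 while kubelet has moved to v1.7.3, which suggests the upgrade only replaced some of the packages. To see exactly what apt changed, and to keep the Kubernetes packages from moving again while debugging, something like the following can help (a minimal sketch, assuming the stock dpkg log location on Ubuntu and apt-managed packages):

# Show which packages today's apt run actually upgraded
grep ' upgrade ' /var/log/dpkg.log | tail -20

# Pin the Kubernetes packages so apt-get upgrade leaves them alone
# (include kubeadm/kubernetes-cni only if they are installed)
apt-mark hold kubelet kubectl kubeadm kubernetes-cni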

When I run kubelet directly, I get the output below; it reports that no API server is running, along with various other errors:

root:~# kubelet
I0803 15:27:47.289047 20182 feature_gate.go:144] feature gates: map[]
W0803 15:27:47.289162 20182 server.go:496] No API client: no api servers specified
I0803 15:27:47.289208 20182 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I0803 15:27:47.289221 20182 client.go:92] Start docker client with request timeout=2m0s
I0803 15:27:47.310348 20182 manager.go:143] cAdvisor running in container: "/user.slice/user-0.slice/session-1.scope"
W0803 15:27:47.342781 20182 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0803 15:27:47.370809 20182 fs.go:117] Filesystem partitions: map[/dev/mapper/noiro--server1--vg-root:{mountpoint:/var/lib/docker/aufs major:253 minor:0 fsType:ext4 blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext2 blockSize:0}]
I0803 15:27:47.375524 20182 manager.go:198] Machine: {NumCores:32 CpuFrequency:3600000 MemoryCapacity:270376665088 MachineID:95972d262b4e4cc64d46557758b0c9ea SystemUUID:36E3B4D4-D196-FC41-AC15-EABB4D086392 BootID:223527cc-aaef-482f-8871-726f48e853e7 Filesystems:[{Device:/dev/mapper/noiro--server1--vg-root DeviceMajor:253 DeviceMinor:0 Capacity:712231378944 Type:vfs Inodes:44179456 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:494512128 Type:vfs Inodes:124928 HasInodes:true}] DiskMap:map[253:1:{Name:dm-1 Major:253 Minor:1 Size:274760466432 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:998999326720 Scheduler:cfq} 253:0:{Name:dm-0 Major:253 Minor:0 Size:723722960896 Scheduler:none}] NetworkDevices:[{Name:enp1s0f0 MacAddress:28:6f:7f:31:8d:06 Speed:1000 Mtu:1500} {Name:enp1s0f1 MacAddress:28:6f:7f:31:8d:07 Speed:-1 Mtu:1500} {Name:enp6s0f0 MacAddress:90:e2:ba:d4:af:94 Speed:-1 Mtu:1500} {Name:enp6s0f1 MacAddress:90:e2:ba:d4:af:95 Speed:10000 Mtu:1500} {Name:virbr0 MacAddress:00:00:00:00:00:00 Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:1e:95:dd Speed:0 Mtu:1500} {Name:virbr1 MacAddress:00:00:00:00:00:00 Speed:0 Mtu:1500} {Name:virbr1-nic MacAddress:52:54:00:80:4d:52 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:135105630208 Cores:[{Id:0 Threads:[0 16] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 17] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 18] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 19] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:4 Threads:[4 20] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:5 Threads:[5 21] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:6 Threads:[6 22] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:7 Threads:[7 23] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:20971520 Type:Unified Level:3}]} {Id:1 Memory:135271034880 Cores:[{Id:0 Threads:[8 24] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[9 25] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[10 26] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[11 27] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:4 Threads:[12 28] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:5 Threads:[13 29] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:6 Threads:[14 30] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:7 Threads:[15 31] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:20971520 
Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0803 15:27:47.376993 20182 manager.go:204] Version: {KernelVersion:4.10.0-28-generic ContainerOsVersion:Ubuntu 16.04.3 LTS DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
W0803 15:27:47.377969 20182 server.go:356] No api server defined - no events will be sent to API server.
I0803 15:27:47.377997 20182 server.go:536] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
W0803 15:27:47.380752 20182 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I0803 15:27:47.380845 20182 container_manager_linux.go:246] container manager verified user specified cgroup-root exists: /
I0803 15:27:47.380871 20182 container_manager_linux.go:251] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
W0803 15:27:47.386826 20182 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0803 15:27:47.386880 20182 kubelet.go:508] Hairpin mode set to "hairpin-veth"
I0803 15:27:47.414780 20182 docker_service.go:208] Docker cri networking managed by kubernetes.io/no-op
I0803 15:27:47.443874 20182 docker_service.go:225] Setting cgroupDriver to cgroupfs
I0803 15:27:47.483153 20182 remote_runtime.go:42] Connecting to runtime service unix:///var/run/dockershim.sock
I0803 15:27:47.485332 20182 kuberuntime_manager.go:166] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
I0803 15:27:47.487348 20182 server.go:943] Started kubelet v1.7.3
E0803 15:27:47.487409 20182 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
I0803 15:27:47.487496 20182 server.go:132] Starting to listen on 0.0.0.0:10250
W0803 15:27:47.487532 20182 kubelet.go:1313] No api server defined - no node status update will be sent.
I0803 15:27:47.487837 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
I0803 15:27:47.489587 20182 server.go:310] Adding debug handlers to kubelet server.
E0803 15:27:47.491669 20182 kubelet.go:1729] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0803 15:27:47.491703 20182 kubelet.go:1737] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0803 15:27:47.492891 20182 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0803 15:27:47.492966 20182 status_manager.go:136] Kubernetes client is nil, not starting status manager.
I0803 15:27:47.492971 20182 volume_manager.go:245] Starting Kubelet Volume Manager
E0803 15:27:47.492981 20182 container_manager_linux.go:543] [ContainerManager]: Fail to get rootfs information unable to find data for container /
I0803 15:27:47.492981 20182 kubelet.go:1809] Starting kubelet main sync loop.
I0803 15:27:47.493066 20182 kubelet.go:1820] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
I0803 15:27:47.549343 20182 factory.go:351] Registering Docker factory
W0803 15:27:47.549379 20182 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0803 15:27:47.549389 20182 factory.go:54] Registering systemd factory
I0803 15:27:47.549675 20182 factory.go:86] Registering Raw factory
I0803 15:27:47.549959 20182 manager.go:1121] Started watching for new ooms in manager
I0803 15:27:47.552051 20182 oomparser.go:185] oomparser using systemd
I0803 15:27:47.552852 20182 manager.go:288] Starting recovery of all containers
I0803 15:27:47.641407 20182 manager.go:293] Recovery completed
I0803 15:27:47.800854 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
E0803 15:27:47.821579 20182 helpers.go:771] Could not find capacity information for resource storage.kubernetes.io/scratch
W0803 15:27:47.821613 20182 helpers.go:782] eviction manager: no observation found for eviction signal allocatableNodeFs.available
E0803 15:27:52.501201 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
[the "Orphaned pod 12c13b83-762a-11e7-af75-286f7f318d06" error above then repeats every ~2 seconds, interleaved with the periodic "Setting node annotation to enable volume controller attach/detach" message]

Best Answer

I see you have upgraded to DockerVersion: 17.06.0-ce. As far as I can tell, that version has not been validated against Kubernetes according to this page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information

I'm not sure whether this is what is causing the problem.
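If Docker does turn out to be the culprit, one option is to downgrade it to a release listed as validated on that CHANGELOG page. A minimal sketch, assuming the Docker CE apt repository is configured; the version string below is only an example and must be replaced with one actually printed by madison:

# List the Docker versions the configured apt repository offers
apt-cache madison docker-ce

# Example downgrade to an older, validated release (replace the version
# string with one taken from the madison output for this machine)
apt-get install --allow-downgrades docker-ce=17.03.2~ce-0~ubuntu-xenial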

These are the processes that need to be running on the master node. See if you can get etcd and the API server started first; a quick status check is sketched after the list below.

etcd
kubelet
kube-controller-manager
kube-scheduler
kube-apiserver
kube-proxy
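A quick way to see where the control plane is stuck (a sketch assuming a systemd-managed kubelet and a kubeadm-style setup where etcd and the API server run as Docker containers; adjust the names for other install methods):

# Is the kubelet service itself running, and what is it logging?
systemctl status kubelet
journalctl -u kubelet -e

# Are the etcd and API server containers up? (kubeadm runs them as static pods)
docker ps -a | grep -E 'etcd|kube-apiserver'

# If the kube-apiserver container exists but keeps exiting, read its logs
docker logs $(docker ps -a -q --filter name=kube-apiserver | head -1)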

Regarding "Kubernetes stopped working after upgrade", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/45495771/
