
google-cloud-platform - Pods referencing a persistent disk are not automatically scheduled to the same zone as the disk when using a regional cluster


According to https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters#pd, "Once a persistent disk is provisioned, any Pods referencing the disk are scheduled to the same zone as the disk." But in my test that is not what happens.

Creating the disk:

gcloud compute disks create mongodb --size=1GB --zone=asia-east1-c
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/ornate-ensign-234106/zones/asia-east1-c/disks/mongodb].
NAME ZONE SIZE_GB TYPE STATUS
mongodb asia-east1-c 1 pd-standard READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

The cluster's nodes:

Name                                  Zone          In use by                                                       Internal IP          External IP
gke-kubia-default-pool-08dd2133-qbz6  asia-east1-a  k8s-ig--c4addd497b1e0a6d, gke-kubia-default-pool-08dd2133-grp   10.140.0.17 (nic0)   35.201.224.238
gke-kubia-default-pool-183639fa-18vr  asia-east1-c  gke-kubia-default-pool-183639fa-grp, k8s-ig--c4addd497b1e0a6d   10.140.0.18 (nic0)   35.229.152.12
gke-kubia-default-pool-42725220-43q8  asia-east1-b  gke-kubia-default-pool-42725220-grp, k8s-ig--c4addd497b1e0a6d   10.140.0.16 (nic0)   34.80.225.6

The YAML used to create the pod:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

The pod was expected to be scheduled on gke-kubia-default-pool-183639fa-18vr, which is in zone asia-east1-c. But:

C:\kube>kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
fortune 2/2 Running 0 4h9m 10.56.3.5 gke-kubia-default-pool-42725220-43q8 <none>
kubia-4jmzg 1/1 Running 0 9d 10.56.1.6 gke-kubia-default-pool-183639fa-18vr <none>
kubia-j2lnr 1/1 Running 0 9d 10.56.3.4 gke-kubia-default-pool-42725220-43q8 <none>
kubia-lrt9x 1/1 Running 0 9d 10.56.0.14 gke-kubia-default-pool-08dd2133-qbz6 <none>
mongodb 0/1 ContainerCreating 0 55s <none> gke-kubia-default-pool-42725220-43q8 <none>

C:\kube>kubectl describe pod mongodb
Name: mongodb
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-kubia-default-pool-42725220-43q8/10.140.0.16
Start Time: Thu, 20 Jun 2019 15:39:13 +0800
Labels: <none>
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mongodb
Status: Pending
IP:
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/data/db from mongodb-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sd57s (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongodb-data:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: mongodb
FSType: ext4
Partition: 0
ReadOnly: false
default-token-sd57s:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sd57s
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/mongodb to gke-kubia-default-pool-42725220-43q8
Warning FailedMount 106s (x4 over 8m36s) kubelet, gke-kubia-default-pool-42725220-43q8 Unable to mount volumes for pod "mongodb_default(7fe9c096-932e-11e9-bb3d-42010a8c00de)": timeout expired waiting for volumes to attach or mount for pod "default"/"mongodb". list of unmounted volumes=[mongodb-data]. list of unattached volumes=[mongodb-data default-token-sd57s]
Warning FailedAttachVolume 9s (x13 over 10m) attachdetach-controller AttachVolume.Attach failed for volume "mongodb-data" : GCE persistent disk not found: diskName="mongodb" zone="asia-east1-b"

C:\kube>

Does anyone know why?

Best Answer

The problem here is that the pod was scheduled onto a node in asia-east1-b, and the disk cannot be attached there because it was provisioned in asia-east1-c.

What you can do here is use a nodeSelector: add a label to a node in asia-east1-c (or reuse an existing one) and reference that label in the pod's YAML, as sketched below. That way the pod is scheduled onto a node in asia-east1-c and the disk can be attached.
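A minimal sketch of that approach, assuming you rely on the zone label that GKE already sets on every node (failure-domain.beta.kubernetes.io/zone on clusters of that era; newer clusters use topology.kubernetes.io/zone) rather than adding a custom one; the rest of the spec is the pod from the question:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  # Assumption: the built-in zone label; swap in your own custom
  # label key/value if you prefer to label the node manually.
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: asia-east1-c
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

If you would rather use a custom label, labeling the node with kubectl label nodes gke-kubia-default-pool-183639fa-18vr disk=mongodb (the key/value pair here is arbitrary, just an example) and matching it in nodeSelector works the same way.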

On the topic of google-cloud-platform - pods referencing a persistent disk not being automatically scheduled to the same zone as the disk in a regional cluster, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/56681433/
