kubernetes - Problem mounting a persistent volume as ReadOnlyMany across multiple pods

I'm having some trouble getting a ReadOnlyMany persistent volume to mount across multiple pods on GKE. Right now it mounts on only one pod and fails to mount on any other (since the volume is already in use by the first pod), which limits the deployment to a single pod.
I suspect the issue is related to the volume being populated from a volume snapshot.
Looking through related questions, I have already verified that
spec.containers.volumeMounts.readOnly = true
and
spec.containers.volumes.persistentVolumeClaim.readOnly = true
which seem to be the most common fixes for problems like this.
I've included the relevant yaml below. Any help would be greatly appreciated!
Here is (most of) the deployment spec:

spec:
  containers:
  - env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    image: eu.gcr.io/myimage
    imagePullPolicy: IfNotPresent
    name: monsoon-server-sha256-1
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-cluster-1-default-pool-3d6123cf-kcjo
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 29
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: sample-ssd-read-snapshot-pvc-snapshot-5
      readOnly: true
The storage class (which is also the default storage class for this cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-ssd
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
parameters:
  type: pd-ssd
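Since the setup relies on this being the cluster default, it can be worth confirming which StorageClass the cluster actually marks as default. A quick check with standard kubectl (nothing assumed here beyond the class name above):

# The default class is flagged "(default)" next to its name:
kubectl get storageclass

# The flag is driven by this annotation:
kubectl get storageclass sample-ssd \
  -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'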
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-ssd-read-snapshot-pvc-snapshot-5
spec:
  storageClassName: sample-ssd
  dataSource:
    name: sample-snapshot-5
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
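For completeness, the attach failure and the access modes the volume actually carries can be inspected with standard kubectl commands (the claim name is the one from the spec above; the pod name is a placeholder):

# Events on a pod stuck in ContainerCreating usually show the volume-in-use / attach error:
kubectl describe pod <pod-name>

# Check the access modes the claim actually reports once bound:
kubectl get pvc sample-ssd-read-snapshot-pvc-snapshot-5 -o jsonpath='{.status.accessModes}'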

Best Answer

Google engineers are aware of this issue.
You can find more details about it in the issue report and pull request on GitHub.
There is a temporary workaround if you are trying to provision a PD from a snapshot and make it ROX:

1. Provision a PVC with the data source as RWO; this will create a new Compute Disk with the contents of the source disk.
2. Take the PV that was provisioned and copy it to a new PV that's ROX, according to the docs.

You can do this with the following commands:

Step 1

Provision a PVC with the data source as RWO:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workaround-pvc
spec:
  storageClassName: ''
  dataSource:
    name: sample-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
You can look up the disk name with kubectl get pvc, checking the VOLUME column; that value is the disk_name used in Step 2. A jsonpath query can also pull it directly, as in the sketch below.
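A minimal sketch of that lookup, assuming the workaround-pvc name from the manifest above (for a dynamically provisioned GCE PD, the bound PV name doubles as the Compute Engine disk name):

# Grab the name of the PV bound to the claim; this is the disk_name for Step 2.
DISK_NAME=$(kubectl get pvc workaround-pvc -o jsonpath='{.spec.volumeName}')
echo "${DISK_NAME}"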

Step 2

Take the PV that was provisioned and copy it to a new PV that's ROX.


As described in the docs, you need to create another disk using the previous disk (created in Step 1) as the source:
# Create a disk snapshot:
gcloud compute disks snapshot <disk_name>

# Create a new disk using the snapshot as source:
gcloud compute disks create pvc-rox --source-snapshot=<snapshot_name>
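A slightly fuller sketch of those two commands, reusing the DISK_NAME variable from the lookup above; the snapshot name, zone, and disk type here are illustrative assumptions, not values from the question:

# Snapshot the RWO disk created in Step 1 (snapshot name and zone are assumptions):
gcloud compute disks snapshot "${DISK_NAME}" \
    --snapshot-names=workaround-snapshot \
    --zone=europe-west1-b

# Create the new disk that the ROX PV will point at:
gcloud compute disks create pvc-rox \
    --source-snapshot=workaround-snapshot \
    --type=pd-ssd \
    --zone=europe-west1-b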
Create a new ReadOnlyMany PV and PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ''
  capacity:
    storage: 20Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: pvc-rox
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  storageClassName: ''
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
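The pair above can then be applied and checked with standard kubectl; the file name is a hypothetical placeholder:

# Apply the PV/PVC pair (file name is hypothetical):
kubectl apply -f readonly-pv-pvc.yaml

# The PV should report ROX in ACCESS MODES and end up Bound:
kubectl get pv my-readonly-pv
kubectl get pvc my-readonly-pvc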
Add readOnly: true to your volumes and volumeMounts, as mentioned here:

readOnly: true
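Putting it together, here is a sketch of how the deployment from the question could consume the new claim across several replicas; the deployment name and labels are illustrative assumptions, while the image and mount path are taken from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monsoon-server          # hypothetical name for illustration
spec:
  replicas: 3                   # multiple pods can now attach the ROX volume
  selector:
    matchLabels:
      app: monsoon-server
  template:
    metadata:
      labels:
        app: monsoon-server
    spec:
      containers:
      - name: monsoon-server
        image: eu.gcr.io/myimage
        volumeMounts:
        - mountPath: /mnt/sample-ssd
          name: sample-ssd
          readOnly: true        # readOnly on the mount ...
      volumes:
      - name: sample-ssd
        persistentVolumeClaim:
          claimName: my-readonly-pvc
          readOnly: true        # ... and on the claim reference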

Regarding kubernetes - problem mounting a persistent volume as ReadOnlyMany across multiple pods, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64393551/
