kubernetes - How does glusterfs create the volume when the heketi endpoints are not in the same namespace as the PV and PVC?


I have two namespaces, 'runsdata' and 'monitoring'. The heketi pod and the glusterfs DaemonSet pods all run in the 'runsdata' namespace. Now I want to set up Prometheus monitoring in the 'monitoring' namespace, and I need persistent storage for the Prometheus data. So I created a PVC (in the 'monitoring' namespace) and a PV, and in the PVC YAML I declared the StorageClass so that the corresponding volume is provided to Prometheus. But after I create the PVC, it binds to the PV, and then when I apply prometheus-server.yaml I get this error:

Warning  FailedMount  18m (x3 over 43m)     kubelet, 172.16.5.151  Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-rules-volume prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume]: timed out waiting for the condition
Warning  FailedMount  13m (x5 over 50m)     kubelet, 172.16.5.151  Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume prometheus-rules-volume]: timed out waiting for the condition
Warning  FailedMount  3m58s (x35 over 59m)  kubelet, 172.16.5.151  MountVolume.NewMounter initialization failed for volume "data-prometheus-pv" : endpoints "heketi-storage-endpoints" not found

From the log above it is not hard to see that the volume cannot be mounted because the heketi endpoints cannot be found: the 'heketi-storage-endpoints' Endpoints object lives under 'runsdata', while the PVC and the Prometheus pod are in 'monitoring'. How do I solve this?
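A quick way to confirm this, as a sketch assuming the Endpoints object keeps the default name 'heketi-storage-endpoints' referenced by the PV below, is to look it up in both namespaces; the kubelet resolves the endpoints named in a glusterfs PV within the namespace of the pod that mounts the volume, so the lookup from 'monitoring' fails:

# exists in the namespace where heketi/glusterfs are deployed ...
kubectl get endpoints heketi-storage-endpoints -n runsdata
# ... but is not found in the namespace of the Prometheus pod, matching the mount error
kubectl get endpoints heketi-storage-endpoints -n monitoring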

Additional information:

1. PV and PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    endpoints: "heketi-storage-endpoints"
    path: "runsdata-glusterfs-static-class"
    readOnly: true

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-prometheus-claim
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: runsdata-static-class
  selector:
    matchLabels:
      pv: data-prometheus-pv
      release: stable
[root@localhost online-prometheus]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS            REASON   AGE
data-config-pv                             1Gi        RWX            Retain           Bound    runsdata/data-config-claim         runsdata-static-class            5d22h
data-mongo-pv                              1Gi        RWX            Retain           Bound    runsdata/data-mongo-claim          runsdata-static-class            4d4h
data-prometheus-pv                         2Gi        RWX            Recycle          Bound    monitoring/data-prometheus-claim   runsdata-static-class            151m
data-static-pv                             1Gi        RWX            Retain           Bound    runsdata/data-static-claim         runsdata-static-class            7d15h
pvc-02f5ce74-db7c-40ba-b0e1-ac3bf3ba1b37   3Gi        RWX            Delete           Bound    runsdata/data-test-claim           runsdata-static-class            3d5h
pvc-085ec0f1-6429-4612-9f71-309b94a94463   1Gi        RWX            Delete           Bound    runsdata/data-file-claim           runsdata-static-class            3d17h
[root@localhost online-prometheus]# kubectl get pvc -n monitoring
NAME                    STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS            AGE
data-prometheus-claim   Bound    data-prometheus-pv   2Gi        RWX            runsdata-static-class   151m
[root@localhost online-prometheus]#
2. heketi and glusterfs pods

[root@localhost online-prometheus]# kubectl get pods -n runsdata | egrep "heketi|gluster"
glusterfs-5btbl          1/1     Running   1     11d
glusterfs-7gmbh          1/1     Running   3     11d
glusterfs-rmx7k          1/1     Running   7     11d
heketi-78ccdb6fd-97tkv   1/1     Running   2     10d
[root@localhost online-prometheus]#
3. StorageClass definition

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: runsdata-static-class
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  resturl: "http://10.10.11.181:8080"
  volumetype: "replicate:3"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "runsdata-gf-admin"
  #secretNamespace: "runsdata"
  #secretName: "heketi-secret"

Best Answer

The solution is to create an Endpoints object and a matching Service in the current namespace ('monitoring'), and then reference that service in the PV YAML; a sketch of the adjusted PV follows the two manifests below.

[root@localhost gluster]# cat glusterfs-endpoints.yaml
---
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: monitoring
subsets:
  - addresses:
      - ip: 172.16.5.150
      - ip: 172.16.5.151
      - ip: 172.16.5.152
    ports:
      - port: 1
        protocol: TCP
[root@localhost gluster]# cat glusterfs-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: monitoring
spec:
  ports:
    - port: 1
[root@localhost gluster]#
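A minimal sketch of the adjusted PV (not the author's exact manifest; the values are carried over from the PV in the question): the glusterfs section now references the 'glusterfs-cluster' Endpoints/Service created above instead of 'heketi-storage-endpoints'.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    # reference the Endpoints/Service created in the 'monitoring' namespace above
    endpoints: "glusterfs-cluster"
    # the Gluster volume name backing this PV (value carried over from the question)
    path: "runsdata-glusterfs-static-class"
    # Prometheus has to write its data, so the volume is presumably not read-only
    readOnly: false

After applying glusterfs-endpoints.yaml, glusterfs-service.yaml and the PV/PVC, the claim binds as before and the kubelet can resolve the endpoints inside its own namespace when mounting the volume.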

Regarding "kubernetes - How does glusterfs create the volume when the heketi endpoints are not in the same namespace as the PV and PVC", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/60585732/
