
mongodb - Running mongo with a persistent volume throws an error - Kubernetes


Context

I want to create a MongoDB stateful deployment in which all mongodb pods mount the host's local directory /mnt/nfs/data/myproject/production/permastore/mongo (a Network File System directory) at /data/db. My Kubernetes cluster runs on three virtual machines.

Problem

When I do not use a Persistent Volume Claim, I can start mongo without any problem! However, when I start mongodb with a Persistent Volume Claim, I get this error.

Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :

Question

Does anyone know why mongo fails to start when /data/db is mounted from a Persistent Volume? And how can this be fixed?

Code

The following configuration files will not run in your environment because the paths differ; however, they should convey the idea behind my setup.

Persistent Volume - pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-mongo
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /mnt/nfs/data/phenex/production/permastore/mongo
  claimRef:
    name: phenex-mongo
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  volumeMode: Filesystem

Persistent Volume Claim - pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-mongo
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Deployment - deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
        - image: mongo:4.2.0-bionic
          name: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: phenex-mongo
              mountPath: /data/db
      volumes:
        - name: phenex-mongo
          persistentVolumeClaim:
            claimName: phenex-mongo

Applying the configuration

$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f deployment.yaml

Checking cluster state

$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 38m mongo mongo:4.2.0-bionic run=mongo

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-59f669657d-fpkgv 1/1 Running 0 35m 10.44.0.2 web01 <none> <none>

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-mongo 1Gi RWO Retain Bound phenex/phenex-mongo manual 124m Filesystem

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-mongo Bound phenex-mongo 1Gi RWO manual 122m Filesystem


Running the mongo shell in the pod

$ kubectl exec -it mongo-59f669657d-fpkgv mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-08-14T14:25:25.452+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-08-14T14:25:25.453+0000 F - [main] exception: connect failed
2019-08-14T14:25:25.453+0000 E - [main] exiting with code 1
command terminated with exit code 1

Logs

$ kubectl logs mongo-59f669657d-fpkgv 
2019-08-14T14:00:32.287+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-59f669657d-fpkgv
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] db version v4.2.0
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] modules: none
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] build environment:
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }

root@mongo-59f669657d-fpkgv:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mongodb 1 0.0 2.7 208324 27920 ? Dsl 14:00 0:00 mongod --bind_ip_all
root 67 0.0 0.2 18496 2060 pts/1 Ss 15:12 0:00 bash
root 81 0.0 0.1 34388 1536 pts/1 R+ 15:13 0:00 ps aux

Best Answer

I found the cause and the solution! In my setup I use NFS to share a directory over the network, so that all my cluster nodes (minions) have access to a common directory located under /mnt/nfs/data/.

Cause

The reason mongo could not start was an invalid Persistent Volume. More precisely, I was using a persistent volume of the hostPath type - this works for single-node testing, or if you manually create the same directory structure on all cluster nodes, e.g. /tmp/your_pod_data_dir/. However, mounting an NFS directory as a hostPath causes problems - and that was exactly my issue!
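
As an aside, if the data really should live on one node's local disk rather than on NFS, a local PersistentVolume with node affinity is the usual alternative to hostPath. The following is only a minimal sketch; the PV name, the path /mnt/disks/mongo, and the node name web01 are placeholders for your own environment, and the directory must already exist on that node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-mongo-local        # hypothetical name, for illustration only
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  local:
    path: /mnt/disks/mongo        # assumed directory on the node's local disk
  nodeAffinity:                   # required for local volumes: pins the PV to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - web01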

Solution

For directories shared over the network with NFS, use the NFS Persistent Volume type (NFS Example)! Below you will find my setup and two solutions.

Setup

/etc/hosts - my cluster nodes.

# Cluster nodes
192.168.123.130 master
192.168.123.131 web01
192.168.123.132 compute01
192.168.123.133 compute02

List of exported NFS directories.

[vagrant@master]$ showmount -e
Export list for master:
/nfs/data compute*,web*
/nfs/www compute*,web*

First solution

This solution shows a deployment that mounts the NFS directory directly via volumes - see the volumes and volumeMounts sections.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
        - image: mongo:4.2.0-bionic
          name: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: phenex-nfs
              mountPath: /data/db
      volumes:
        - name: phenex-nfs
          nfs:
            # IP of master node
            server: 192.168.123.130
            path: /nfs/data/phenex/production/permastore/mongo

Second solution

This solution shows a deployment that mounts the NFS directory via a volume claim - see persistentVolumeClaim; the Persistent Volume and Persistent Volume Claim are defined below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
        - image: mongo:4.2.0-bionic
          name: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: phenex-nfs
              mountPath: /data/db
      volumes:
        - name: phenex-nfs
          persistentVolumeClaim:
            claimName: phenex-nfs

Persistent Volume - NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    # IP of master node
    server: 192.168.123.130
    path: /nfs/data
  claimRef:
    name: phenex-nfs
  persistentVolumeReclaimPolicy: Retain
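
Note that the claimRef above omits a namespace, which is why the CLAIM column in the expected output below reads /phenex-nfs. A minimal sketch of the same PV with the namespace spelled out (assuming the claim lives in the default namespace) and with NFS mount options pinned (the version shown is an assumption, not something from my setup) would be:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  mountOptions:               # optional NFS client mount options; version is an assumption
    - nfsvers=4.1
  nfs:
    server: 192.168.123.130   # IP of master node
    path: /nfs/data
  claimRef:
    name: phenex-nfs
    namespace: default        # assumed namespace of the claim
  persistentVolumeReclaimPolicy: Retain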

Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
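
One caveat: if the cluster has a default StorageClass, a claim without storageClassName may be dynamically provisioned instead of binding to the phenex-nfs PV above. A sketch that pins the binding explicitly (an empty storageClassName disables dynamic provisioning, and volumeName names the target PV) could look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs
spec:
  storageClassName: ""      # empty string: do not fall back to the default StorageClass
  volumeName: phenex-nfs    # bind to this specific PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi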

Expected output

# Checking cluster state
[vagrant@master ~]$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 18s mongo mongo:4.2.0-bionic run=mongo

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-65b7d6fb9f-mcmvj 1/1 Running 0 18s 10.44.0.2 web01 <none> <none>

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-nfs 1Gi RWO Retain Bound /phenex-nfs 27s Filesystem

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-nfs Bound phenex-nfs 1Gi RWO 27s Filesystem

# Attaching to pod and checking network bindings
[vagrant@master ~]$ kubectl exec -it mongo-65b7d6fb9f-mcmvj -- bash
root@mongo-65b7d6fb9f-mcmvj:/$ apt update
root@mongo-65b7d6fb9f-mcmvj:/$ apt install net-tools
root@mongo-65b7d6fb9f-mcmvj:/$ netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN -

# Running mongo client
root@mongo-65b7d6fb9f-mcmvj:/$ mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("45287a0e-7d41-4484-a267-5101bd20fad3") }
MongoDB server version: 4.2.0
Server has startup warnings:
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

>

Regarding "mongodb - Running mongo with a persistent volume throws an error - Kubernetes", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57497077/
