
cluster-computing - Configuring an LVM resource on a Red Hat 7.4 cluster with pacemaker


I am configuring a Red Hat cluster with pacemaker and want to add an LVM resource. I have installed the following packages.

OS: Red Hat 7.4

Installed packages: lvm2-cluster, pacemaker, corosync, pcs, fence-agents-all
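
For reference, these packages would typically be installed with yum; the exact command is not shown in the question, so this is only a sketch:

[root@node1 ~]# yum install -y lvm2-cluster pacemaker corosync pcs fence-agents-all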

However, my LVM resource is in a failed state with the following error:

[root@node1 ~]# pcs status
Cluster name: jcluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: node2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Sat Mar 10 11:54:41 2018
Last change: Sat Mar 10 11:17:13 2018 by hacluster via cibadmin on node1

2 nodes configured
3 resources configured (2 DISABLED)

Online: [ node1 node2 ]

Full list of resources:

Clone Set: juris-clvmd-clone [juris-clvmd]
Stopped (disabled): [ node1 node2 ]
juris-lvm (ocf::heartbeat:LVM): FAILED node1

Failed Actions:
* juris-lvm_monitor_0 on node1 'unknown error' (1): call=15, status=complete, exitreason='WARNING: jurisvg is active without the cluster tag, "pacemaker"',
last-rc-change='Fri Mar 9 20:38:50 2018', queued=0ms, exec=255ms
* juris-lvm_monitor_10000 on node1 'unknown error' (1): call=16, status=complete, exitreason='WARNING: jurisvg is active without the cluster tag, "pacemaker"',
last-rc-change='Sat Mar 10 10:24:55 2018', queued=0ms, exec=0ms


Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
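
The exitreason complains that jurisvg is active without the cluster tag "pacemaker". When the ocf:heartbeat:LVM agent activates a non-clustered VG exclusively, it relies on VG tags, so it can help to see which tags (if any) the VG currently carries. A diagnostic sketch, not from the original post:

[root@node1 ~]# vgs -o vg_name,vg_attr,vg_tags jurisvg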

I am using iSCSI to share a disk between my two nodes. After presenting the shared disk to the nodes, I ran pvcreate, vgcreate, and lvcreate on the newly presented disk.
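
The exact commands are not shown in the question; based on the VG and LV names that appear in the error output (jurisvg, ha_lv), they would have looked roughly like this (the device name and LV size are assumptions):

[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate jurisvg /dev/sdb
[root@node1 ~]# lvcreate -l 100%FREE -n ha_lv jurisvg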

After that, I changed the new VG to a clustered VG with the following command:

[root@node1 ~]# vgchange -cy jurisvg
/dev/jurisvg/ha_lv: read failed after 0 of 4096 at 0: Input/output error
/dev/jurisvg/ha_lv: read failed after 0 of 4096 at 53687025664: Input/output error
/dev/jurisvg/ha_lv: read failed after 0 of 4096 at 53687083008: Input/output error
/dev/jurisvg/ha_lv: read failed after 0 of 4096 at 4096: Input/output error
LVM cluster daemon (clvmd) is not running. Make volume group "jurisvg" clustered anyway? [y/n]: y
Volume group "jurisvg" successfully changed
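
Whether the clustered flag really got set can be checked with vgs; a 'c' in the sixth position of the VG attributes marks a clustered VG. A verification sketch, not part of the original question:

[root@node1 ~]# vgs -o vg_name,vg_attr jurisvg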

Do we need the clvmd service running in order to configure an LVM resource with pacemaker? I can find /usr/sbin/clvmd but cannot start it:

[root@node1 ~]# /usr/sbin/clvmd
clvmd could not connect to cluster manager
Consult syslog for more information
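
Note that clvmd is not normally started by hand: on RHEL 7 it depends on the cluster's DLM and is usually run as a pacemaker-managed clone alongside a dlm resource. A sketch of that setup, if clustered LVM were really wanted (resource names are illustrative; the accepted answer below uses HA-LVM with exclusive activation instead, which does not need clvmd at all):

[root@node1 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node1 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node1 ~]# pcs constraint order start dlm-clone then clvmd-clone
[root@node1 ~]# pcs constraint colocation add clvmd-clone with dlm-clone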

Does anyone know why my LVM resource fails with this error?

Best Answer

I solved my problem and created the LVM resource with the following steps. sdb is the shared disk presented to the nodes from the iSCSI host.

[root@rhel-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
sr0 11:0 1 3.8G 0 rom /mnt

Then I created a new partition on sdb:

[root@rhel-1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf8a80986.

Command (m for help): p

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33550336 bytes
Disk label type: dos
Disk identifier: 0xf8a80986

Device Boot Start End Blocks Id System

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (65528-104857599, default 65528):
Using default value 65528
Last sector, +sectors or +size{K,M,G} (65528-104857599, default 104857599):
Using default value 104857599
Partition 1 of type Linux and of size 50 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@rhel-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
└─sdb1 8:17 0 50G 0 part
sr0 11:0 1 3.8G 0 rom /mnt

Then I created the physical volume, volume group, and logical volume:

[root@rhel-1 ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@rhel-1 ~]# vgcreate cluster_vg /dev/sdb1
Volume group "cluster_vg" successfully created
[root@rhel-1 ~]# lvcreate -L 40G -n cluster_lv cluster_vg
Logical volume "cluster_lv" created.

Create an ext4 file system on the logical volume cluster_lv:

[root@rhel-1 ~]# mkfs.ext4 /dev/mapper/cluster_vg-cluster_lv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=8191 blocks
2621440 inodes, 10485760 blocks
524288 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

After that, I need to activate the volume group exclusively in the cluster. Before doing so, I need to make sure that locking_type is set to 1 and use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. I used the following command to apply these changes to lvm.conf on both nodes:

[root@rhel-1 ~]# lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.
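
To confirm what lvmconf changed, the two settings can be checked directly; both nodes should now report locking_type = 1 and use_lvmetad = 0. A verification sketch, assuming the stock lvm.conf layout:

[root@rhel-1 ~]# grep -E "^[[:space:]]*(locking_type|use_lvmetad)" /etc/lvm/lvm.conf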

After that, I need to make sure that volume groups other than the cluster volume group are listed as entries in volume_list in /etc/lvm/lvm.conf. I made this change on both of my nodes:

[root@rhel-1 ~]# grep "volume_list = " /etc/lvm/lvm.conf
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "rhel" ]
# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]

Rebuild the initramfs boot image to make sure the boot image does not try to activate the volume group controlled by the cluster. A reboot is also required after rebuilding the initramfs:

[root@rhel-1 ~]# dracut -f -v
[root@rhel-1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@rhel-1 ~]# init 6
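
After the reboot, cluster_vg should no longer be auto-activated, because it is not listed in volume_list; lvscan should report its logical volume as inactive. A quick check, not part of the original answer:

[root@rhel-1 ~]# lvscan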

Create the LVM resource:

[root@rhel-1 ~]# pcs resource create db2inst1_lvm LVM volgrpname=cluster_vg exclusive=true
Assumed agent name 'ocf:heartbeat:LVM' (deduced from 'LVM')
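
The db2inst1_scsi fencing resource that shows up in the status below was created separately and is not shown here; a typical fence_scsi definition for a shared iSCSI disk would look roughly like this (the devices value is an assumption):

[root@rhel-1 ~]# pcs stonith create db2inst1_scsi fence_scsi pcmk_host_list="rhel-1 rhel-2" devices="/dev/sdb1" meta provides=unfencing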

Check the cluster status with pcs status:

[root@rhel-1 ~]# pcs status
Cluster name: juriscluster
Stack: corosync
Current DC: rhel-1 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Thu Mar 15 14:27:16 2018
Last change: Thu Mar 15 14:14:33 2018 by root via cibadmin on rhel-1

2 nodes configured
2 resources configured

Online: [ rhel-1 rhel-2 ]

Full list of resources:

db2inst1_scsi (stonith:fence_scsi): Started rhel-1
db2inst1_lvm (ocf::heartbeat:LVM): Started rhel-2

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
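
A possible next step, not covered in the original answer, is to manage the ext4 file system created earlier with a Filesystem resource and group it with the LVM resource so that both always run on the same node (the group name and mount point below are assumptions):

[root@rhel-1 ~]# pcs resource group add db2inst1_group db2inst1_lvm
[root@rhel-1 ~]# pcs resource create db2inst1_fs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/db2data" fstype="ext4" --group db2inst1_group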

Regarding cluster-computing - configuring an LVM resource on a Red Hat 7.4 cluster with pacemaker, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49188003/
