Today my cluster suddenly complained about 38 scrub errors. ceph pg repair fixed the inconsistency, but ceph -s still reports a warning:
ceph -s
  cluster:
    id:     86bbd6c5-ae96-4c78-8a5e-50623f0ae524
    health: HEALTH_WARN
            Too many repaired reads on 1 OSDs

  services:
    mon: 4 daemons, quorum s0,mbox,s1,r0 (age 35m)
    mgr: s0(active, since 10d), standbys: s1, r0
    mds: fs:1 {0=s0=up:active} 3 up:standby
    osd: 10 osds: 10 up, 10 in

  data:
    pools:   6 pools, 289 pgs
    objects: 1.29M objects, 1.6 TiB
    usage:   3.3 TiB used, 7.4 TiB / 11 TiB avail
    pgs:     289 active+clean
After reading the docs I tried:
ceph tell osd.8 clear_shards_repaired
no valid command found; 10 closest matches:
0
1
2
abort
assert
bench [<count:int>] [<size:int>] [<object_size:int>] [<object_num:int>]
bluefs stats
bluestore allocator dump block
bluestore allocator dump bluefs-db
bluestore allocator fragmentation block
Error EINVAL: invalid command
As you can see, there is a problem. My ceph version is:
ceph version
ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)
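As far as I can tell, clear_shards_repaired simply is not part of this Octopus build and only appeared in a later release, which would explain the EINVAL above. A sketch of how one could check which admin commands a running OSD actually exposes (the grep pattern is just illustrative):

ceph tell osd.8 help | grep -i repair

and, on the host where osd.8 runs, the same list via the admin socket:

ceph daemon osd.8 help | grep -i repair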
ceph health detail
HEALTH_WARN Too many repaired reads on 1 OSDs
[WRN] OSD_TOO_MANY_REPAIRS: Too many repaired reads on 1 OSDs
    osd.8 had 38 reads repaired
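For what it's worth, this health check seems to fire once an OSD has repaired more reads than mon_osd_warn_num_repaired allows (default 10), so 38 repaired reads on osd.8 is well past the threshold. A sketch for inspecting that option, and raising it if you only want to quiet the warning (assuming the option name is unchanged in your release):

ceph config get osd mon_osd_warn_num_repaired
ceph config set osd mon_osd_warn_num_repaired 100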
How do I get rid of the warning, and how do I find out what the problem really was? All disks are healthy, there is nothing in the journal, and smartctl -t short /dev/sdd is happy.
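As a sketch of where the original errors might still be visible (assuming a default systemd deployment where osd.8 logs to /var/log/ceph/ceph-osd.8.log and sits on /dev/sdd), one could grep both the OSD log and the kernel log:

grep -iE 'repair|error|shard' /var/log/ceph/ceph-osd.8.log
dmesg -T | grep -iE 'sdd|i/o error'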
Any help appreciated.
Magnus
I stumbled over this post with the same issue and reached out to the mailing list. The simple answer is: restart the OSD that has the issue. On Pacific, that was enough for me to get rid of the warning.
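A minimal sketch of that restart, assuming a classic systemd-managed deployment (unit name ceph-osd@8); on a cephadm/orchestrator-managed cluster the equivalent would be ceph orch daemon restart osd.8:

ceph osd set noout               # optional: avoid rebalancing during the short restart
systemctl restart ceph-osd@8     # run on the host where osd.8 lives
ceph osd unset noout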
Greetings
.sascha