
cassandra - Does Cassandra delete the data duplicated on a new node when replication_factor is 1?


I set replication_factor to 1, and I have a one-node cluster where N1 hosts all the data (100%, 1G). When I add a new node N2 to the cluster to take half of the data, what I see is N1 (50%, 1G) and N2 (50%, 0.5G).

It looks like node N1 still holds all of the data, even though half of it has been copied over to N2. Why does this happen when the cluster keeps only one copy of each row (replication_factor=1)?
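For context, the two numbers are how nodetool status reports a node: "Owns" is the share of the token ring the node is responsible for, while "Load" is what is actually on disk. A hypothetical, abbreviated output matching the situation above might look like the sketch below (addresses are made up and some columns are omitted):

    $ nodetool status
    Datacenter: datacenter1
    --  Address    Load     Owns (effective)  Rack
    UN  10.0.0.1   1 GB     50.0%             rack1
    UN  10.0.0.2   512 MB   50.0%             rack1

Here the first row would be N1 (owns 50% of the ring but still stores 1 GB) and the second N2 (owns 50% and stores only its half).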

Best Answer

Did you run nodetool cleanup on node N1? Read through the documentation for Nodetool's cleanup command:

Use this command to remove unwanted data after adding a new node to the cluster. Cassandra does not automatically remove data from nodes that lose part of their partition range to a newly added node. Run nodetool cleanup on the source node and on neighboring nodes that shared the same subrange after the new node is up and running. Failure to run this command after adding a node causes Cassandra to include the old data to rebalance the load on that node.
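Based on that documentation, a minimal sketch of the steps would be to run cleanup on N1, the node that gave up part of its token range, once N2 has fully joined (the keyspace name below is only a placeholder):

    # On N1, after N2 has finished bootstrapping and is Up/Normal:
    nodetool cleanup
    # Or limit it to a single keyspace (hypothetical name):
    nodetool cleanup my_keyspace
    # Then re-check the reported load:
    nodetool status

Cleanup rewrites the SSTables and drops the partitions N1 no longer owns, so the extra ~0.5G of data should disappear; note that it can be I/O-intensive on large datasets.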

Regarding "cassandra - Does Cassandra delete the data duplicated on a new node when replication_factor is 1", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27971836/
