
exception - TombstoneOverwhelmingException in Cassandra

Reposted. Author: 行者123. Updated: 2023-12-02 00:06:25

I'm hitting this exception when querying data from a table. I've read a lot about it online, and as I understand it, it happens because I have a lot of null rows. Is there any way to work around this? Can I easily get rid of all those nulls?

Update: I ran nodetool compact and also tried cleanup. In both cases I get the following.

Exception in thread "main" java.lang.AssertionError: [SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-538-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-710-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-627-Data.db'), SSTableReader(path='/var/lib/cassandra/data/bitcoin/okcoin_order_book_btc_usd/bitcoin-okcoin_order_book_btc_usd-jb-437-Data.db')]
at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2132)
at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2129)
at org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2111)
at org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2142)
at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getMaximalTask(SizeTieredCompactionStrategy.java:254)
at org.apache.cassandra.db.compaction.CompactionManager.submitMaximal(CompactionManager.java:290)
at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:282)
at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1941)
at org.apache.cassandra.service.StorageService.forceKeyspaceCompaction(StorageService.java:2182)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

These are the last few lines of system.log:

INFO [CompactionExecutor:1888] 2015-01-03 07:22:54,272 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:1972-05 (225021398 bytes) incrementally
INFO [CompactionExecutor:1888] 2015-01-03 07:23:07,528 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:1972-06 (217772702 bytes) incrementally
INFO [CompactionExecutor:1888] 2015-01-03 07:23:20,508 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-05 (121911398 bytes) incrementally
INFO [ScheduledTasks:1] 2015-01-03 07:23:30,941 GCInspector.java (line 116) GC for ParNew: 223 ms for 1 collections, 5642103584 used; max is 8375238656
INFO [CompactionExecutor:1888] 2015-01-03 07:23:33,436 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-07 (106408526 bytes) incrementally
INFO [CompactionExecutor:1888] 2015-01-03 07:23:38,787 CompactionController.java (line 192) Compacting large row bitcoin/okcoin_trade_btc_cny:2014-02 (112031822 bytes) incrementally
INFO [CompactionExecutor:1888] 2015-01-03 07:23:46,055 ColumnFamilyStore.java (line 794) Enqueuing flush of Memtable-compactions_in_progress@582986122(0/0 serialized/live bytes, 1 ops)
INFO [FlushWriter:62] 2015-01-03 07:23:46,055 Memtable.java (line 355) Writing Memtable-compactions_in_progress@582986122(0/0 serialized/live bytes, 1 ops)
INFO [FlushWriter:62] 2015-01-03 07:23:46,268 Memtable.java (line 395) Completed flushing /var/lib/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-jb-22-Data.db (42 bytes) for commitlog position ReplayPosition(segmentId=1420135510457, position=14938165)
INFO [CompactionExecutor:1888] 2015-01-03 07:23:46,354 CompactionTask.java (line 287) Compacted 2 sstables to [/var/lib/cassandra/data/bitcoin/okcoin_trade_btc_cny/bitcoin-okcoin_trade_btc_cny-jb-554,]. 881,267,752 bytes to 881,266,793 (~99% of original) in 162,878ms = 5.159945MB/s. 24 total partitions merged to 23. Partition merge counts were {1:22, 2:1, }
WARN [RMI TCP Connection(39)-128.31.5.27] 2015-01-03 07:24:46,452 ColumnFamilyStore.java (line 2103) Unable to cancel in-progress compactions for okcoin_order_book_btc_usd. Probably there is an unusually large row in progress somewhere. It is also possible that buggy code left some sstables compacting after it was done with them

I'm not sure what that last line means. There doesn't seem to be a very large row (and I don't know how to check whether there is one). Note that compaction is still stuck at 60.33%, on okcoin_order_book_btc_usd. I'm running Cassandra 2.0.11.
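For what it's worth, one way to check whether a table has unusually large rows is via nodetool's per-table statistics; this is a sketch using the keyspace/table names from the logs above, and the exact label of the "maximum row size" field varies by Cassandra version:

```shell
# Per-table stats; look for the compacted row/partition maximum size.
nodetool -h localhost cfstats bitcoin.okcoin_order_book_btc_usd

# Row-size percentiles for the same table:
nodetool -h localhost cfhistograms bitcoin okcoin_order_book_btc_usd
```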

Best answer

Tombstones are created when you delete rows, or when rows expire, in Cassandra. They are removed when SSTables are compacted, once the row's gc_grace_seconds has elapsed.
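To make that concrete, here is a sketch (with hypothetical table and column names) of the kinds of writes that produce tombstones; note that inserting NULL into a column also writes a tombstone, which is likely why the "null rows" mentioned in the question matter:

```cql
-- Explicit delete: writes a tombstone for the row.
DELETE FROM ks.orders WHERE id = 'abc';

-- TTL'd insert: the cell becomes a tombstone once the TTL expires.
INSERT INTO ks.orders (id, price) VALUES ('abc', 42) USING TTL 3600;

-- Inserting NULL also writes a tombstone for that cell.
INSERT INTO ks.orders (id, price) VALUES ('abc', null);
```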

A few things I can think of that may help reduce the number of tombstones:

  1. Set a lower gc_grace_seconds on tables with a lot of tombstones. gc_grace_seconds should generally be about one day longer than the interval at which you run repairs; if you repair more frequently than that, you can consider lowering gc_grace_seconds.
  2. Look at how your compactions are progressing. Do you have a lot of pending compactions? (`nodetool -h localhost compactionstats` on each node will show this.) You may be falling behind on compaction, so data is not being cleaned up as promptly as it could be. It may also be worth changing your compaction strategy, if appropriate. For example, if you are using SizeTieredCompactionStrategy, it may be worth looking into LeveledCompactionStrategy; that strategy generally causes more compaction activity (so make sure you have SSDs), which can clean up your tombstones faster.
  3. Look at your data model and the queries you run. Are you frequently deleting, or expiring, data in partitions that are read often? Consider changing your partitioning (primary key) strategy so that deleted or expired rows are less likely to sit among "live" data. A good example is adding a time/date component to the primary key.
  4. Tune tombstone_failure_threshold in cassandra.yaml. You probably shouldn't consider this, though, since hitting the threshold is a good indication that you need to look at your data.
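Items 1 and 2 can both be applied with ALTER TABLE; this is a sketch using the table from the question, and the specific values (2 days, versus the 864000-second default for gc_grace_seconds) are only examples:

```cql
-- Lower gc_grace_seconds to 2 days:
ALTER TABLE bitcoin.okcoin_order_book_btc_usd
  WITH gc_grace_seconds = 172800;

-- Switch to leveled compaction:
ALTER TABLE bitcoin.okcoin_order_book_btc_usd
  WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
```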
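For item 3, a date-bucketed primary key might look like the following hypothetical schema: each day's data lives in its own partition, so queries for recent data never scan the tombstones left behind in older partitions.

```cql
CREATE TABLE bitcoin.order_book_by_day (
    day     text,       -- bucket, e.g. '2015-01-03'
    ts      timestamp,  -- clustering column within the day
    price   decimal,
    amount  decimal,
    PRIMARY KEY (day, ts)
);
```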

Regarding "exception - TombstoneOverwhelmingException in Cassandra", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27734745/
