java - Timeout exception in Cassandra


I am using a Cassandra database to fetch some frequently requested data. Below is my code:

public Map<String, String> loadObject(ArrayList<Integer> tradigAccountList) {

    com.datastax.driver.core.Session session;
    Map<String, String> orderListMap = new HashMap<>();
    List<ResultSetFuture> futures = new ArrayList<>();

    try {
        session = jdbcUtils.getCassandraSession();
        PreparedStatement statement = jdbcUtils.getCassandraPS(CassandraPS.LOAD_ORDER_LIST);

        // Fire one asynchronous query per trading account
        for (Integer tradingAccount : tradigAccountList) {
            futures.add(session.executeAsync(statement.bind(tradingAccount).setFetchSize(3000)));
        }

        // Block on each future and collect all rows into the result map
        for (ResultSetFuture future : futures) {
            for (Row row : future.get().all()) {
                orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return orderListMap;
}

I am sending around 30 of these requests at the same time, and my query looks like this:

"SELECT cliordid, ordermsg FROM omsks_v1.ordersStringV1 WHERE tradacntid = ?"

Each execution of this query fetches at least around 30,000 rows. But when I send several requests concurrently, it throws a timeout exception.

My Cassandra cluster has 2 nodes, each configured with 32 concurrent read and write threads. Can anyone suggest a solution?
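One common client-side knob for this kind of read timeout, sketched below under the assumption that the session returned by jdbcUtils is built with the DataStax Java driver 3.x, is the per-request read timeout on SocketOptions (the contact point and the 60-second value are illustrative placeholders, not values from the question):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

public class CassandraSessionFactory {
    public static Session newSession() {
        // Raise the client-side read timeout (the driver default is 12 s) so that
        // slow multi-thousand-row reads are not cancelled prematurely.
        // Contact point and timeout value are placeholders for illustration only.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withSocketOptions(new SocketOptions().setReadTimeoutMillis(60_000))
                .build();
        return cluster.connect();
    }
}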

Best Answer

CREATE TABLE omsks_v1.ordersstringv1_copy1 (
tradacntid int,
cliordid text,
ordermsg text,
PRIMARY KEY (tradacntid, cliordid)
) WITH bloom_filter_fp_chance = 0.01
AND comment = ''
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE'
AND caching = {
'keys' : 'ALL',
'rows_per_partition' : 'NONE'
}
AND compression = {
'sstable_compression' : 'LZ4Compressor'
}
AND compaction = {
'class' : 'SizeTieredCompactionStrategy'
};

This is the table schema.
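Given this schema, every row for a single tradacntid lives in one partition, so each query pulls an entire partition of roughly 30,000 rows. Below is a minimal sketch (assuming the same DataStax 3.x driver as the question code) of collecting such a result by iterating the ResultSet, so the driver pages through the partition in 3,000-row chunks according to the configured fetch size rather than materializing everything at once with all():

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;

public class OrderLoader {
    // Sketch only: collects the async results by iterating each ResultSet,
    // letting the driver page through the partition instead of calling all().
    static Map<String, String> collect(List<ResultSetFuture> futures) {
        Map<String, String> orderListMap = new HashMap<>();
        for (ResultSetFuture future : futures) {
            ResultSet rs = future.getUninterruptibly(); // blocks until the first page arrives
            for (Row row : rs) {                        // later pages are fetched transparently
                orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
            }
        }
        return orderListMap;
    }
}

Iterating still blocks once per page, but it avoids holding a full 30,000-row list per account in memory while the other futures are completing.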

Regarding java - Timeout exception in Cassandra, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47179286/
