
java - How to safely move an expensive database operation outside a synchronized block to avoid contention


My requirements are as follows:

  1. Maintain a global thread-safe cache, so that all writes to it are thread-safe.
  2. The cache should have an expiry time.
  3. When the cache expires, its data should be flushed to the database, while ensuring that no data is written to the cache during the flush.

So cache write operations may be performed by many threads concurrently, but flushing the cache to the database should happen periodically on a single thread (once per second). Below is my implementation:

static Map<String, Long> cacheTimer = new ConcurrentHashMap<String, Long>();
static Map<String, Date> cache = new ConcurrentHashMap<String, Date>();
private Map<String, Object> bucketTimeUpdatedLockMap = new ConcurrentHashMap<String, Object>();


public void updateCacheLastDequeuedTime(String queueName,
        int bucketId, Date lastDequeued, boolean force) {

    // Below is the non-expensive cache write operation
    String queueBucketId = queueName + bucketId;
    // If the cache structures don't exist yet, set them up.
    if (!cacheTimer.containsKey(queueBucketId)
            || !cache.containsKey(queueBucketId)) {
        log.debug(
                "Setting up bucketmapPendingLastDequeuedWrites cache for queue {}",
                queueName);
        cacheTimer.putIfAbsent(queueBucketId, 0L);
        cache.putIfAbsent(queueBucketId, new Date());
    }

    if (!cache.containsKey(queueBucketId)) {
        cache.put(queueBucketId, lastDequeued);
    } else {
        Date tempDate = cache.get(queueBucketId);
        if (tempDate != null && tempDate.equals(lastDequeued)) {
            cache.put(queueBucketId,
                    QueueServiceUtils.incrementDateByMilliSeconds(
                            lastDequeued, 1));
        } else {
            cache.put(queueBucketId, lastDequeued);
        }
    }

    // Below is the expensive cache flush operation
    if (force
            || System.currentTimeMillis()
                    - cacheTimer.get(queueBucketId) > 1000) {
        flushCache(queueName, bucketId);
    }
}

private void flushCache(String queueName, int bucketId) {
    String queueBucketId = queueName + bucketId;
    Object bucketTimeUpdatedLock = getBucketTimeUpdatedLock(queueBucketId); // taking a lock over an object
    synchronized (bucketTimeUpdatedLock) {
        // rechecking
        if (cacheTimer.containsKey(queueBucketId) && System.currentTimeMillis()
                - cacheTimer.get(queueBucketId) > 1000) {

            // Why set the timer here? To keep track of when this cache was last flushed.
            cacheTimer.put(queueBucketId, System.currentTimeMillis());
            if (cache.containsKey(queueBucketId)
                    && cache.get(queueBucketId) != null) {
                Date lastDequeuedTime = cache.get(queueBucketId);
                // Below is an expensive operation.
                queueServiceMetaDao.updateLastDequed(lastDequeuedTime, queueName, bucketId);
                cache.remove(queueBucketId); // reset cache
            }
        }
    }
}
// This method enables lock splitting: instead of taking a lock on the whole queue,
// take a lock on a bucket of the queue (1 queue has 100 buckets).
private Object getBucketTimeUpdatedLock(String queueName) {
    Object readBucketAssignLock = bucketTimeUpdatedLockMap.get(queueName);
    if (readBucketAssignLock == null) {
        log.debug(
                "getting bucketTimeUpdatedLock for newly created queue {}",
                queueName);
        Object lock = new Object();
        readBucketAssignLock = bucketTimeUpdatedLockMap.putIfAbsent(
                queueName, lock);
        if (readBucketAssignLock == null)
            readBucketAssignLock = lock;
    }
    return readBucketAssignLock;
}
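As a side note, on Java 8+ a lock-per-bucket helper like the one above can be collapsed into a single atomic call with `ConcurrentHashMap.computeIfAbsent`, which guarantees every caller for the same key receives the same lock object. A minimal sketch (class and field names are illustrative, not from the original code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BucketLocks {
    static final Map<String, Object> lockMap = new ConcurrentHashMap<>();

    // computeIfAbsent on a ConcurrentHashMap is atomic: the mapping
    // function runs at most once per absent key, so two racing threads
    // can never end up with different lock objects for the same bucket.
    static Object getBucketTimeUpdatedLock(String queueBucketId) {
        return lockMap.computeIfAbsent(queueBucketId, k -> new Object());
    }
}
```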

Here are my doubts:

1. My implementation above does not fully satisfy the third requirement, "ensure that no data is written to the cache during the flush."

  • How can I safely move my database operation outside the synchronized block?

  • Should I use a ReentrantReadWriteLock instead of a synchronized block, given that I will have many parallel cache writes (15-50 per second, so assign those the read lock?) but only one flush operation (assign that the write lock?)
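One common pattern addressing both bullets (a minimal sketch, not the poster's code; the DAO call is replaced by a counter for illustration) inverts the lock roles: every cheap cache write takes the read lock, so writes run in parallel, while the flusher takes the write lock only long enough to snapshot and clear the cache, then performs the expensive database call outside any lock. This relaxes requirement 3 slightly: writes resume into a fresh cache while the snapshot is flushed, so no write is lost and the DB call never blocks writers.

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FlushGuardedCache {
    static final Map<String, Date> cache = new ConcurrentHashMap<>();
    static final ReentrantReadWriteLock flushLock = new ReentrantReadWriteLock();
    static int flushedCount = 0; // stand-in for the DB write, for illustration

    static void write(String queueBucketId, Date lastDequeued) {
        flushLock.readLock().lock(); // shared: many writers proceed in parallel
        try {
            cache.put(queueBucketId, lastDequeued);
        } finally {
            flushLock.readLock().unlock();
        }
    }

    static void flush() {
        Map<String, Date> snapshot;
        flushLock.writeLock().lock(); // exclusive: no cache write can overlap
        try {
            snapshot = new HashMap<>(cache); // copy while writers are blocked
            cache.clear();
        } finally {
            flushLock.writeLock().unlock();
        }
        // The expensive DB operation runs OUTSIDE any lock, on the private
        // snapshot; here it would be queueServiceMetaDao.updateLastDequed(...).
        for (Map.Entry<String, Date> e : snapshot.entrySet()) {
            flushedCount++;
        }
    }
}
```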

Best Answer

I would strongly consider using a Guava Cache with time-based expiration, and, if you really need to prevent entries from being written to the cache while they are being flushed to the database, a synchronous removal listener.

Consider carefully whether a synchronous listener is absolutely necessary, since it can significantly degrade the cache's performance.

From the documentation:

    Warning: removal listener operations are executed synchronously by default, and since cache maintenance is normally performed during normal cache operations, expensive removal listeners can slow down normal cache function! If you have an expensive removal listener, use RemovalListeners.asynchronous(RemovalListener, Executor) to decorate a RemovalListener to operate asynchronously.

For more details, see https://github.com/google/guava/wiki/CachesExplained.
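The suggestion above might be sketched roughly as follows (assuming Guava on the classpath; the DAO call is replaced by a counter, and the asynchronous decorator from the quoted warning is used so the expensive write-behind never runs on a caller's thread):

```java
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalListeners;

public class GuavaFlushCache {
    // Stand-in for queueServiceMetaDao.updateLastDequed(...), for illustration.
    static volatile int flushed = 0;

    static final RemovalListener<String, Date> flushToDb =
            notification -> flushed++;

    // Entries expire 1 second after the last write; on eviction the removal
    // listener flushes them. Swap in a plain synchronous listener instead if
    // writes really must be blocked while an entry is being flushed.
    static final Cache<String, Date> cache = CacheBuilder.newBuilder()
            .expireAfterWrite(1, TimeUnit.SECONDS)
            .removalListener(RemovalListeners.asynchronous(
                    flushToDb, Executors.newSingleThreadExecutor()))
            .build();
}
```

Note that Guava evicts expired entries lazily, during other cache operations or an explicit `cache.cleanUp()` call, so a strictly periodic flush still needs a scheduled `cleanUp()` tick.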

Regarding "java - How to safely move an expensive database operation outside a synchronized block to avoid contention," we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38062763/
