
java - Implementing a double-buffered Java HashMap with unsynchronized reads

Reposted. Author: 行者123. Updated: 2023-12-01 09:29:59

So I thought I had this genius idea to solve a very specific problem, but I can't get rid of one last potential thread-safety issue. I'm wondering if you folks can see a way around it.

The problem:

A large number of threads need to read from a HashMap that is only rarely updated. The problem is that in ConcurrentHashMap, i.e. the thread-safe version, the read methods can still hit a mutex, because the write methods still lock bins (i.e. portions of the map).

The idea:

Have two hidden HashMaps acting as one: one that threads read from without synchronization, and one that threads write to, synchronized of course, and flip the two every so often.

The obvious caveat is that the map is only eventually consistent, but let's assume that's good enough for its intended purpose.

But the problem that arises is that even with AtomicInteger and the like, it still leaves a race condition: right when the flip happens, I can't be sure a reader hasn't slipped in... The problem lies between lines 262-272 in the startRead() method and lines 241-242 in the flip() method.
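For what it's worth, the missing "increment only if reads are allowed" primitive can be approximated on Java 8+ with AtomicInteger.getAndUpdate, by folding the read-allowed flag and the reader count into one atomic integer. A hedged sketch (the class and method names here are hypothetical, not part of the question's code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: emulate the missing "compareGreaterThanAndIncrement" by encoding
// "reads closed" as -1 in the same AtomicInteger that holds the reader count,
// so the check and the increment happen in one atomic step.
class ReadGate {
    // >= 0: number of active readers (gate open); -1: gate closed
    private final AtomicInteger gate = new AtomicInteger(0);

    /** Atomically register a reader, unless the gate is closed. */
    boolean tryStartRead() {
        int prev = gate.getAndUpdate(c -> c < 0 ? c : c + 1);
        return prev >= 0;
    }

    void endRead() {
        gate.decrementAndGet();
    }

    /** Close the gate; succeeds only when no readers are active. */
    boolean tryClose() {
        return gate.compareAndSet(0, -1);
    }

    void open() {
        gate.set(0);
    }
}
```

With this shape, flip() would spin on tryClose() instead of checking a separate AtomicBoolean, and a reader that gets false from tryStartRead() waits and retries; the window between "check" and "increment" disappears because both are one getAndUpdate.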

---

Obviously ConcurrentHashMap is a very, very good class for this problem; I just wanted to see if I could push this idea a bit further.

Does anyone have any ideas?

---

Here's the full code for the class. (Not yet fully debugged/tested, but you get the idea...)

package org.nectarframework.base.tools;

import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

/**
*
* This map is intended to be both thread safe, and have (mostly) non mutex'd
* reads.
*
* HOWEVER, if you insert something into this map, and immediately try to read
* the same key from the map, it probably won't give you the result you expect.
*
* The idea is that this map is in fact 2 maps, one that handles writes, the
* other reads, and every so often the two maps switch places.
*
* As a result, this map will be eventually consistent, and while writes are
* still synchronized, reads are not.
*
* This map can be very effective if handling a massive number of reads per unit
* time vs a small number of writes per unit time, especially in a massively
* multithreaded use case.
*
* This class isn't such a good idea because it's possible that between
* readAllowed.get() and readCounter.increment(), the flip() happens,
* potentially leaving one or more threads reading the map that flip() is
* about to update. The solution would be an
* AtomicInteger.compareGreaterThanAndIncrement(), but that doesn't exist.
*
*
* @author schuttek
*
*/

public class DoubleBufferHashMap<K, V> implements Map<K, V> {

    private Map<K, V> readMap = new HashMap<>();
    private Map<K, V> writeMap = new HashMap<>();
    private LinkedList<Triple<Operation, Object, V>> operationList = new LinkedList<>();

    private AtomicBoolean readAllowed = new AtomicBoolean(true);
    private AtomicInteger readCounter = new AtomicInteger(0);

    private long lastFlipTime = System.currentTimeMillis();
    private long flipTimer = 3000; // 3 seconds

    private enum Operation {
        Put, Delete;
    }

    @Override
    public int size() {
        startRead();
        RuntimeException rethrow = null;
        int n = 0;
        try {
            n = readMap.size();
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return n;
    }

    @Override
    public boolean isEmpty() {
        startRead();
        RuntimeException rethrow = null;
        boolean b = false;
        try {
            b = readMap.isEmpty();
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return b;
    }

    @Override
    public boolean containsKey(Object key) {
        startRead();
        RuntimeException rethrow = null;
        boolean b = false;
        try {
            b = readMap.containsKey(key);
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return b;
    }

    @Override
    public boolean containsValue(Object value) {
        startRead();
        RuntimeException rethrow = null;
        boolean b = false;
        try {
            b = readMap.containsValue(value);
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return b;
    }

    @Override
    public V get(Object key) {
        startRead();
        RuntimeException rethrow = null;
        V v = null;
        try {
            v = readMap.get(key);
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return v;
    }

    @Override
    public synchronized V put(K key, V value) {
        operationList.add(new Triple<>(Operation.Put, key, value));
        writeMap.put(key, value);
        checkFlipTimer();
        return value;
    }

    @Override
    public synchronized V remove(Object key) {
        // Not entirely sure if we should return the value from the read map or
        // the write map...
        operationList.add(new Triple<>(Operation.Delete, key, null));
        V v = writeMap.remove(key);
        checkFlipTimer();
        return v;
    }

    @Override
    public synchronized void putAll(Map<? extends K, ? extends V> m) {
        for (K k : m.keySet()) {
            V v = m.get(k);
            operationList.add(new Triple<>(Operation.Put, k, v));
            writeMap.put(k, v);
        }
        checkFlipTimer();
    }

    @Override
    public synchronized void clear() {
        writeMap.clear();
        checkFlipTimer();
    }

    @Override
    public Set<K> keySet() {
        startRead();
        RuntimeException rethrow = null;
        Set<K> sk = null;
        try {
            sk = readMap.keySet();
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return sk;
    }

    @Override
    public Collection<V> values() {
        startRead();
        RuntimeException rethrow = null;
        Collection<V> cv = null;
        try {
            cv = readMap.values();
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return cv;
    }

    @Override
    public Set<java.util.Map.Entry<K, V>> entrySet() {
        startRead();
        RuntimeException rethrow = null;
        Set<java.util.Map.Entry<K, V>> se = null;
        try {
            se = readMap.entrySet();
        } catch (RuntimeException t) {
            rethrow = t;
        }
        endRead();
        if (rethrow != null) {
            throw rethrow;
        }
        return se;
    }

    private void checkFlipTimer() {
        long now = System.currentTimeMillis();
        if (this.flipTimer > 0 && now > this.lastFlipTime + this.flipTimer) {
            flip();
            this.lastFlipTime = now;
        }
    }

    /**
     * Flips the two maps, and updates the map that was being read from to the
     * latest state.
     */
    @SuppressWarnings("unchecked")
    private synchronized void flip() {
        readAllowed.set(false);
        while (readCounter.get() != 0) {
            Thread.yield();
        }

        Map<K, V> temp = readMap;
        readMap = writeMap;
        writeMap = temp;

        readAllowed.set(true);
        this.notifyAll();

        // Replay the buffered operations onto the map that was just read from,
        // then discard them so they aren't applied again on the next flip.
        for (Triple<Operation, Object, V> t : operationList) {
            switch (t.getLeft()) {
            case Delete:
                writeMap.remove(t.getMiddle());
                break;
            case Put:
                writeMap.put((K) t.getMiddle(), t.getRight());
                break;
            }
        }
        operationList.clear();
    }

    private void startRead() {
        if (!readAllowed.get()) {
            synchronized (this) {
                try {
                    wait();
                } catch (InterruptedException e) {
                }
            }
        }
        readCounter.incrementAndGet();
    }

    private void endRead() {
        readCounter.decrementAndGet();
    }

}
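One way to sidestep the startRead()/flip() race entirely is to never mutate the map readers see: publish an immutable snapshot through a volatile reference and have writers copy, mutate, and republish. Reads then need no counter and no gate at all. A minimal sketch (hypothetical class, not from the question; it trades write cost for lock-free, immediately consistent reads, so it only pays off when writes are rare):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a copy-on-write map: readers dereference a volatile snapshot
// that is never mutated after publication; writers copy under a lock.
class SnapshotMap<K, V> {
    private volatile Map<K, V> snapshot = new HashMap<>();

    public V get(Object key) {
        return snapshot.get(key); // lock-free: snapshot is never mutated
    }

    public synchronized V put(K key, V value) {
        Map<K, V> next = new HashMap<>(snapshot); // copy current state
        V old = next.put(key, value);
        snapshot = next; // safe publication via the volatile write
        return old;
    }

    public synchronized V remove(Object key) {
        Map<K, V> next = new HashMap<>(snapshot);
        V old = next.remove(key);
        snapshot = next;
        return old;
    }
}
```

This is the same trick java.util.concurrent.CopyOnWriteArrayList uses for lists; the flip-timer and operation log become unnecessary because every write publishes a complete, consistent map.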

Best Answer

I strongly suggest you learn how to use JMH; it's the first thing you should learn on the road to optimizing algorithms and data structures.

For example, once you know how to use it, you'll quickly find that with only 10% writes, ConcurrentHashMap's performance is very close to that of an unsynchronized HashMap.

4 threads (10% writes):

Benchmark                    Mode  Cnt   Score   Error  Units
SO_Benchmark.concurrentMap  thrpt    2  69,275          ops/s
SO_Benchmark.usualMap       thrpt    2  78,490          ops/s

8 threads (10% writes):

Benchmark                    Mode  Cnt    Score   Error  Units
SO_Benchmark.concurrentMap  thrpt    2   93,721          ops/s
SO_Benchmark.usualMap       thrpt    2  100,725          ops/s

The smaller the write percentage, the closer ConcurrentHashMap's performance tends to be to HashMap's.
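That behavior is expected: since Java 8, ConcurrentHashMap retrieval operations do not block at all; only writers contend for bin locks. For reference, the mixed read/write pattern such a benchmark measures looks roughly like this (a plain-Java sketch, not the actual JMH benchmark; class name, key range, and write ratio are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of a ~10%-write mixed workload over a shared map, the kind of
// operation mix each JMH thread would execute.
class MixedWorkload {
    static long run(Map<Integer, Integer> map, int ops) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        long hits = 0;
        for (int i = 0; i < ops; i++) {
            int key = rnd.nextInt(1024);
            if (rnd.nextInt(10) == 0) {          // ~10% of ops are writes
                map.put(key, key);
            } else if (map.get(key) != null) {   // ~90% of ops are reads
                hits++;
            }
        }
        return hits;
    }
}
```

Under JMH each thread would run this loop body as a @Benchmark method against a shared @State map; timing it by hand with System.nanoTime is exactly the mistake JMH exists to prevent.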

Now, I modified your startRead and endRead into versions that are functionally useless, but as simple as possible:

private void startRead() {
    readCounter.incrementAndGet();
    readAllowed.compareAndSet(false, true);
}

private void endRead() {
    readCounter.decrementAndGet();
    readAllowed.compareAndSet(true, false);
}

Let's look at the performance:

Benchmark                      Mode  Cnt    Score   Error  Units
SO_Benchmark.concurrentMap    thrpt   10   98,275 ± 2,018  ops/s
SO_Benchmark.doubleBufferMap  thrpt   10   80,224 ± 8,993  ops/s
SO_Benchmark.usualMap         thrpt   10  106,224 ± 4,205  ops/s

These results show that with just one atomic-counter and one atomic-boolean modification per operation, we can't get better performance than ConcurrentHashMap. (I tried 30%, 10% and 5% writes, but never got better performance out of DoubleBufferHashMap.)

The benchmark is on Pastebin if you're interested.

Regarding "java - Implementing a double-buffered Java HashMap with unsynchronized reads", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39519986/
