c++ - How does Facebook folly::AccessSpreader work?

Here is the AccessSpreader code from Facebook's Folly library: https://github.com/facebook/folly/blob/master/folly/concurrency/CacheLocality.h#L212

/// AccessSpreader arranges access to a striped data structure in such a
/// way that concurrently executing threads are likely to be accessing
/// different stripes. It does NOT guarantee uncontended access.
/// Your underlying algorithm must be thread-safe without spreading, this
/// is merely an optimization. AccessSpreader::current(n) is typically
/// much faster than a cache miss (12 nanos on my dev box, tested fast
/// in both 2.6 and 3.2 kernels).
///
/// If available (and not using the deterministic testing implementation)
/// AccessSpreader uses the getcpu system call via VDSO and the
/// precise locality information retrieved from sysfs by CacheLocality.
/// This provides optimal anti-sharing at a fraction of the cost of a
/// cache miss.
///
/// When there are not as many stripes as processors, we try to optimally
/// place the cache sharing boundaries. This means that if you have 2
/// stripes and run on a dual-socket system, your 2 stripes will each get
/// all of the cores from a single socket. If you have 16 stripes on a
/// 16 core system plus hyperthreading (32 cpus), each core will get its
/// own stripe and there will be no cache sharing at all.
///
/// AccessSpreader has a fallback mechanism for when __vdso_getcpu can't be
/// loaded, or for use during deterministic testing. Using sched_getcpu
/// or the getcpu syscall would negate the performance advantages of
/// access spreading, so we use a thread-local value and a shared atomic
/// counter to spread access out. On systems lacking both a fast getcpu()
/// and TLS, we hash the thread id to spread accesses.
///
/// AccessSpreader is templated on the template type that is used
/// to implement atomics, as a way to instantiate the underlying
/// heuristics differently for production use and deterministic unit
/// testing. See DeterministicScheduler for more. If you aren't using
/// DeterministicScheduler, you can just use the default template parameter
/// all of the time.
template <template <typename> class Atom = std::atomic>
struct AccessSpreader {
  /// Returns the stripe associated with the current CPU. The returned
  /// value will be < numStripes.
  static size_t current(size_t numStripes) {
    // widthAndCpuToStripe[0] will actually work okay (all zeros), but
    // something's wrong with the caller
    assert(numStripes > 0);

    unsigned cpu;
    getcpuFunc(&cpu, nullptr, nullptr);
    return widthAndCpuToStripe[std::min(size_t(kMaxCpus), numStripes)]
                              [cpu % kMaxCpus];
  }

 private:
  /// If there are more cpus than this nothing will crash, but there
  /// might be unnecessary sharing
  enum { kMaxCpus = 128 };

  typedef uint8_t CompactStripe;

  static_assert(
      (kMaxCpus & (kMaxCpus - 1)) == 0,
      "kMaxCpus should be a power of two so modulo is fast");
  static_assert(
      kMaxCpus - 1 <= std::numeric_limits<CompactStripe>::max(),
      "stripeByCpu element type isn't wide enough");

  /// Points to the getcpu-like function we are using to obtain the
  /// current cpu. It should not be assumed that the returned cpu value
  /// is in range. We use a static for this so that we can prearrange a
  /// valid value in the pre-constructed state and avoid the need for a
  /// conditional on every subsequent invocation (not normally a big win,
  /// but 20% on some inner loops here).
  static Getcpu::Func getcpuFunc;

  /// For each level of splitting up to kMaxCpus, maps the cpu (mod
  /// kMaxCpus) to the stripe. Rather than performing any inequalities
  /// or modulo on the actual number of cpus, we just fill in the entire
  /// array.
  static CompactStripe widthAndCpuToStripe[kMaxCpus + 1][kMaxCpus];

  static bool initialized;

  /// Returns the best getcpu implementation for Atom
  static Getcpu::Func pickGetcpuFunc() {
    auto best = Getcpu::resolveVdsoFunc();
    return best ? best : &FallbackGetcpuType::getcpu;
  }

  /// Always claims to be on CPU zero, node zero
  static int degenerateGetcpu(unsigned* cpu, unsigned* node, void*) {
    if (cpu != nullptr) {
      *cpu = 0;
    }
    if (node != nullptr) {
      *node = 0;
    }
    return 0;
  }

  // The function to call for fast lookup of getcpu is a singleton, as
  // is the precomputed table of locality information. AccessSpreader
  // is used in very tight loops, however (we're trying to race an L1
  // cache miss!), so the normal singleton mechanisms are noticeably
  // expensive. Even a not-taken branch guarding access to getcpuFunc
  // slows AccessSpreader::current from 12 nanos to 14. As a result, we
  // populate the static members with simple (but valid) values that can
  // be filled in by the linker, and then follow up with a normal static
  // initializer call that puts in the proper version. This means that
  // when there are initialization order issues we will just observe a
  // zero stripe. Once a sanitizer gets smart enough to detect this as
  // a race or undefined behavior, we can annotate it.

  static bool initialize() {
    getcpuFunc = pickGetcpuFunc();

    auto& cacheLocality = CacheLocality::system<Atom>();
    auto n = cacheLocality.numCpus;
    for (size_t width = 0; width <= kMaxCpus; ++width) {
      auto numStripes = std::max(size_t{1}, width);
      for (size_t cpu = 0; cpu < kMaxCpus && cpu < n; ++cpu) {
        auto index = cacheLocality.localityIndexByCpu[cpu];
        assert(index < n);
        // as index goes from 0..n, post-transform value goes from
        // 0..numStripes
        widthAndCpuToStripe[width][cpu] =
            CompactStripe((index * numStripes) / n);
        assert(widthAndCpuToStripe[width][cpu] < numStripes);
      }
      for (size_t cpu = n; cpu < kMaxCpus; ++cpu) {
        widthAndCpuToStripe[width][cpu] = widthAndCpuToStripe[width][cpu - n];
      }
    }
    return true;
  }
};

template <template <typename> class Atom>
Getcpu::Func AccessSpreader<Atom>::getcpuFunc =
    AccessSpreader<Atom>::degenerateGetcpu;

template <template <typename> class Atom>
typename AccessSpreader<Atom>::CompactStripe
    AccessSpreader<Atom>::widthAndCpuToStripe[kMaxCpus + 1][kMaxCpus] = {};

template <template <typename> class Atom>
bool AccessSpreader<Atom>::initialized = AccessSpreader<Atom>::initialize();

// Suppress this instantiation in other translation units. It is
// instantiated in CacheLocality.cpp
extern template struct AccessSpreader<std::atomic>;

As far as I can tell, it is supposed to wrap some data in an atomic class so that, when the data is accessed by multiple threads, false cache sharing is reduced? Can someone who has worked with Folly elaborate on how it works? I've been looking at it for a while and I don't even see where they put the atomic member variable.

Best Answer

No, this class does not do what you think it does.

The general idea is this: when you have several equivalent resources or data structures, and you want different threads to access different instances in order to minimize contention and maximize data locality, you use AccessSpreader to suggest the best resource/data instance to use for the current core/thread.
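
To make that concrete, here is a minimal sketch of the pattern (my illustration, not Folly code; the header path is taken from the link above, and kNumStripes, PaddedCounter, increment, and total are hypothetical names): a counter striped across several cache-line-padded slots, where each writer bumps the slot that AccessSpreader recommends for the CPU it is running on.

#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

#include <folly/concurrency/CacheLocality.h> // path taken from the link above

constexpr size_t kNumStripes = 8; // any small power of two works

// Pad each counter to its own cache line so the stripes don't falsely share.
struct alignas(128) PaddedCounter {
  std::atomic<uint64_t> value{0};
};

std::array<PaddedCounter, kNumStripes> counters;

void increment() {
  // Ask which of the kNumStripes slots best matches the CPU we're on.
  size_t stripe = folly::AccessSpreader<>::current(kNumStripes);
  counters[stripe].value.fetch_add(1, std::memory_order_relaxed);
}

uint64_t total() {
  // Readers pay for the striping by summing every slot.
  uint64_t sum = 0;
  for (const auto& c : counters) {
    sum += c.value.load(std::memory_order_relaxed);
  }
  return sum;
}

Writers on different cores tend to land on different slots, so the hot fetch_add rarely bounces a cache line between sockets.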

For an example of its use, see https://github.com/facebook/folly/blob/master/folly/IndexedMemPool.h . This memory pool implementation maintains several lists of free objects to reduce thread contention on allocation/deallocation. Here is how AccessSpreader is used there:

AtomicStruct<TaggedPtr,Atom>& localHead() {
  auto stripe = AccessSpreader<Atom>::current(NumLocalLists);
  return local_[stripe].head;
}

That is, it gives the index of the element (in some array, vector, etc.) recommended for use by the current thread.

Update (in response to a comment): it is not always possible to assign a distinct index to each thread; for example, the number of possible indices (stripes) may be smaller than the number of CPUs, and the comment explicitly states that "It does NOT guarantee uncontended access". The class can be used not only to minimize contention but also to maximize data locality; for example, you might want threads that share a common cache to share some instances of the data. So the recommended index is a function of two variables: the current CPU (obtained internally via getcpuFunc) and the number of stripes (passed as the numStripes parameter), which is why a two-dimensional array is needed. The array is filled at program initialization using system-specific information (via the CacheLocality class), so the recommended index takes data locality into account.
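
As a rough illustration of how that table is built, here is a standalone sketch (a hypothetical 8-CPU, dual-socket topology; only the stripe = (index * numStripes) / n transform is taken from initialize() above):

#include <cstddef>
#include <cstdio>

int main() {
  const size_t n = 8; // pretend we have 8 CPUs on two sockets
  // Hypothetical locality order: localityIndexByCpu gives CPUs that share
  // caches adjacent indices, so CPUs 0-3 are socket 0 and 4-7 are socket 1.
  const size_t localityIndexByCpu[8] = {0, 1, 2, 3, 4, 5, 6, 7};

  for (size_t numStripes : {size_t(2), size_t(4)}) {
    std::printf("numStripes = %zu:", numStripes);
    for (size_t cpu = 0; cpu < n; ++cpu) {
      // The same transform used to fill widthAndCpuToStripe.
      size_t stripe = (localityIndexByCpu[cpu] * numStripes) / n;
      std::printf("  cpu %zu -> stripe %zu", cpu, stripe);
    }
    std::printf("\n");
  }
  // With numStripes = 2, socket 0 (CPUs 0-3) maps to stripe 0 and socket 1
  // (CPUs 4-7) maps to stripe 1: each stripe gets a whole socket, exactly
  // the dual-socket behavior promised in the header comment.
  return 0;
}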

As for std::atomic, it is used only to provide separate AccessSpreader instantiations for testing and for production use, as explained in the comment before the class declaration. The class does not have (and does not need) any atomic member variables.
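
For illustration, a minimal sketch of the two instantiations (DeterministicAtomic and its header path are assumptions based on Folly versions I have seen; check your tree). Because every static member is keyed on the Atom parameter, the test instantiation gets its own getcpuFunc and tables, independent of the production ones:

#include <atomic>
#include <folly/concurrency/CacheLocality.h>
#include <folly/test/DeterministicSchedule.h> // assumed location of DeterministicAtomic

// Production instantiation: real getcpu and real topology tables.
using ProdSpreader = folly::AccessSpreader<std::atomic>; // same as AccessSpreader<>

// Test instantiation: a distinct type, so its static members (getcpuFunc,
// widthAndCpuToStripe, initialized) are separate copies that the
// deterministic scheduler can drive for reproducible interleavings.
using TestSpreader = folly::AccessSpreader<folly::test::DeterministicAtomic>;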

Regarding "c++ - How does Facebook folly::AccessSpreader work?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47006451/
