
c++ - What is the logic behind this simple hash function?

Reposted · Author: 行者123 · Updated: 2023-11-30 03:45:46

Can anyone tell me the mathematical logic behind this simple hash function?

    #define HASHSIZE 101

    unsigned hash_function(char *s) {
        unsigned hashval;
        for (hashval = 0; *s != '\0'; s++)
            // THIS NEXT LINE HOW DOES IT WORK WHY 31
            hashval = *s + 31 * hashval;
        return hashval % HASHSIZE;
    }

I'm not asking about pointers or programming here. I'm only asking how the following statement works.

    hashval = *s + 31 * hashval

Best Answer

Say you have a value x with these bits...

            b7 b6 b5 b4 b3 b2 b1 b0

When you multiply by 31, you are effectively summing shifted copies of those bits: the value shifted left by one place (which has the same effect as multiplying by 2, just as appending a trailing zero multiplies by 10 in decimal), by two places (4x), by three (8x), and by four (16x), plus the unshifted value itself:

                b7 b6 b5 b4 b3 b2 b1 b0   +   // 1x
             b7 b6 b5 b4 b3 b2 b1 b0  0   +   // + 2x = 3x
          b7 b6 b5 b4 b3 b2 b1 b0  0  0   +   // + 4x = 7x
       b7 b6 b5 b4 b3 b2 b1 b0  0  0  0   +   // + 8x = 15x
    b7 b6 b5 b4 b3 b2 b1 b0  0  0  0  0       // + 16x = 31x

Many of the individual output bits are affected by the values of many input bits, both directly and via "carries" from less significant columns; e.g. if b1 = 1 and b0 = 1, the second least significant output bit (the "10s" column) will be 0, but a 1 will be carried into the "100s" column to its left.

This property of output bits being affected by many input bits helps "mix" the output hash value, improving its quality.

That said, while this may be better than multiplying by, say, 17 (16+1) or 12 (8+4), which add only a couple of shifted copies of the original value rather than five, it is a very weak hash compared to hash functions that do more work per input, which I'll illustrate with some statistical analysis...

As a sample of hash quality, I hashed all combinations of four printable ASCII characters and looked at how many times the same hash values were produced. I chose four because it's the most that's feasible in a reasonable timeframe (tens of seconds). The code is available here (not sure whether it will time out there - just run it locally) and at the bottom of this post.

It may help to explain the format of each line using the first line or two of the output below:

  • Only 62 inputs (0.000076% of inputs) hashed to a unique value
  • A further 62 times, 2 inputs hashed to the same value (i.e. 124 inputs fell into these 2-way collision groups, which is 0.000152%)
    • this group and the lower-collision groups cumulatively account for 0.000228% of inputs

Full output:

81450625 4-character printable ASCII combinations
#collisions 1 with #times 62 0.000076% 0.000076%
#collisions 2 with #times 62 0.000152% 0.000228%
#collisions 3 with #times 1686 0.006210% 0.006438%
#collisions 4 with #times 170 0.000835% 0.007273%
#collisions 5 with #times 62 0.000381% 0.007654%
#collisions 6 with #times 1686 0.012420% 0.020074%
#collisions 7 with #times 62 0.000533% 0.020606%
#collisions 8 with #times 170 0.001670% 0.022276%
#collisions 9 with #times 45534 0.503134% 0.525410%
#collisions 10 with #times 3252 0.039926% 0.565336%
#collisions 11 with #times 3310 0.044702% 0.610038%
#collisions 12 with #times 4590 0.067624% 0.677662%
#collisions 13 with #times 340 0.005427% 0.683089%
#collisions 14 with #times 456 0.007838% 0.690927%
#collisions 15 with #times 1566 0.028840% 0.719766%
#collisions 16 with #times 224 0.004400% 0.724166%
#collisions 17 with #times 124 0.002588% 0.726754%
#collisions 18 with #times 45422 1.003793% 1.730548%
#collisions 19 with #times 116 0.002706% 1.733254%
#collisions 20 with #times 3414 0.083830% 1.817084%
#collisions 21 with #times 1632 0.042077% 1.859161%
#collisions 22 with #times 3256 0.087945% 1.947106%
#collisions 23 with #times 58 0.001638% 1.948744%
#collisions 24 with #times 4702 0.138548% 2.087292%
#collisions 25 with #times 66 0.002026% 2.089317%
#collisions 26 with #times 286 0.009129% 2.098447%
#collisions 27 with #times 1969365 65.282317% 67.380763%
#collisions 28 with #times 498 0.017120% 67.397883%
#collisions 29 with #times 58 0.002065% 67.399948%
#collisions 30 with #times 284614 10.482940% 77.882888%
#collisions 31 with #times 5402 0.205599% 78.088487%
#collisions 32 with #times 108 0.004243% 78.092730%
#collisions 33 with #times 289884 11.744750% 89.837480%
#collisions 34 with #times 5344 0.223075% 90.060555%
#collisions 35 with #times 5344 0.229636% 90.290191%
#collisions 36 with #times 146792 6.487994% 96.778186%
#collisions 38 with #times 5344 0.249319% 97.027505%
#collisions 39 with #times 20364 0.975064% 98.002569%
#collisions 40 with #times 9940 0.488148% 98.490718%
#collisions 42 with #times 14532 0.749342% 99.240060%
#collisions 43 with #times 368 0.019428% 99.259488%
#collisions 44 with #times 10304 0.556627% 99.816114%
#collisions 45 with #times 368 0.020331% 99.836446%
#collisions 46 with #times 368 0.020783% 99.857229%
#collisions 47 with #times 736 0.042470% 99.899699%
#collisions 48 with #times 368 0.021687% 99.921386%
#collisions 49 with #times 368 0.022139% 99.943524%
#collisions 50 with #times 920 0.056476% 100.000000%

Overall observations:

  • 65.3% of inputs ended up in 27-way collisions; 11.7% in 33-way; 10.5% in 30-way; 6.5% in 36-way...
  • less than 2.1% of inputs avoided a 27-way-or-worse collision

That's despite the inputs occupying only 1.9% of the hash space (calculated as 81450625 out of 2^32, since we're hashing to 32-bit values). It's terrible.

To illustrate how bad that is, let's compare putting the 4 printable ASCII characters into a std::string and hashing with GCC's std::hash<std::string>, which I believe from memory uses a MURMUR32 hash:

81450625 4-character printable ASCII combinations
#collisions 1 with #times 79921222 98.122294% 98.122294%
#collisions 2 with #times 757434 1.859860% 99.982155%
#collisions 3 with #times 4809 0.017713% 99.999867%
#collisions 4 with #times 27 0.000133% 100.000000%

So - back to the question of why + 31 * previous - you'd have to compare against other equally simple hash functions to see whether this is better than average for the CPU effort spent generating the hash. Regardless, it is quite possibly terrible in an absolute sense, and given that the extra cost of a massively better hash is very small, I'd suggest using one and forgetting "*31" entirely.

The code:

#include <map>
#include <iostream>
#include <iomanip>
#include <string>      // for the std::hash<std::string> variant below
#include <functional>  // std::hash

int main()
{
    // hash value -> number of inputs producing it
    std::map<unsigned, unsigned> histogram;

    for (int i = ' '; i <= '~'; ++i)
        for (int j = ' '; j <= '~'; ++j)
            for (int k = ' '; k <= '~'; ++k)
                for (int l = ' '; l <= '~'; ++l)
                {
                    unsigned hv = ((i * 31 + j) * 31 + k) * 31 + l;
                    /*
                    // use "31*" hash above OR std::hash<std::string> below...
                    char c[] = { (char)i, (char)j, (char)k, (char)l, '\0' };
                    unsigned hv = std::hash<std::string>()(c); */
                    ++histogram[hv];
                }

    // histogram of the histogram: collision-group size -> number of such groups
    std::map<unsigned, unsigned> histohisto;
    for (auto& hv_freq : histogram)
        ++histohisto[hv_freq.second];

    unsigned n = '~' - ' ' + 1; n *= n; n *= n;  // 95^4 inputs
    std::cout << n << " 4-character printable ASCII combinations\n";
    double cumulative_percentage = 0;
    for (auto& freq_n : histohisto)
    {
        double percent = (double)freq_n.first * freq_n.second / n * 100;
        cumulative_percentage += percent;
        std::cout << "#collisions " << freq_n.first << " with #times " << freq_n.second
                  << "\t\t" << std::fixed << percent << "% " << cumulative_percentage << "%\n";
    }
}

Regarding "c++ - What is the logic behind this simple hash function?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34523702/
