
c - Karger minimum cut algorithm crashes on large input


I am trying to implement Karger's minimum cut algorithm ( Karger wiki page ).

So far I have tried my algorithm on small examples (inputs of size 10) and it seems to work. But when I try a larger input, say 200, it crashes.

To store the min-cut data I create a 2D array: GraphCut[SIZE_ARRAY][SIZE_ARRAY_2]. In this case SIZE_ARRAY = 200, but I cannot find a suitable length for SIZE_ARRAY_2.

The problem is that SIZE_ARRAY_2 has to be large, because I modify the initial array to merge the different vertices.

If I declare SIZE_ARRAY_2 = 200 it is not big enough, but if I declare SIZE_ARRAY_2 = 1000 the program crashes.

The thing is, I have to run the algorithm 100000 times.

Here is part of the code:

#define ARRAY_SIZE 200
#define ARRAY_SIZE_2 200

int main()
{
    int minCut, minMinCut;

    for (int k = 0; k < ARRAY_SIZE * ARRAY_SIZE * 4; k++) {
        minCut = kargerMinCut(k);
        if (k == 0)
            minMinCut = minCut;
        else if (minMinCut > minCut)
            minMinCut = minCut;
    }
    printf("\n minMinCut = %d\n", minMinCut);

    return 0;
}

int kargerMinCut(int k) {
    // 1st dimension: each different node
    // 2nd dimension: vertices
    long graphCut[ARRAY_SIZE + 1][ARRAY_SIZE_2] = {0};
    populateIntegerArray(graphCut); // import data from a file

    int nodeToMain[ARRAY_SIZE + 1];
    int sizeOfMainNode, indexToMerge, initialRand, i, j, m, nodeToMerge, nodeRemaining = ARRAY_SIZE;
    for (m = 0; m < ARRAY_SIZE + 1; m++) // initialization of nodeToMain
        nodeToMain[m] = m;

    while (nodeRemaining > 2) {
        i = 0;
        j = 0;
        srand(time(NULL) + nodeRemaining); // initialise rand
        initialRand = nodeToMain[rand() % (ARRAY_SIZE) + 1]; // pick a random initial node, but not a merged one
        sizeOfMainNode = sizeOfArray(graphCut[initialRand]); // size of the initial node

        srand(time(NULL) + k); // initialise rand
        indexToMerge = rand() % sizeOfMainNode; // pick another random node in the linked nodes (its index to be precise)
        nodeToMerge = nodeToMain[graphCut[initialRand][indexToMerge]];

        for (m = 0; m < ARRAY_SIZE + 1; m++) // update the nodeToMain array, initialRand is now the main node for nodeToMerge
            if (nodeToMain[m] == nodeToMerge)
                nodeToMain[m] = initialRand;

        // remove the nodeToMerge numbers from graphCut[initialRand] (as they are going to be merged)
        while (graphCut[initialRand][j] > 0 && j < sizeOfMainNode) {
            if (initialRand == nodeToMain[graphCut[initialRand][j]]) {
                // if this is the last element, do nothing
                while (nodeToMain[graphCut[initialRand][sizeOfMainNode - 1]] == initialRand && j < sizeOfMainNode - 1) {
                    graphCut[initialRand][sizeOfMainNode - 1] = 0;
                    sizeOfMainNode--;
                }
                graphCut[initialRand][j] = nodeToMain[graphCut[initialRand][sizeOfMainNode - 1]];
                graphCut[initialRand][sizeOfMainNode - 1] = 0;
                sizeOfMainNode--;
            }
            j++;
        }

        i = 0;
        while (graphCut[nodeToMerge][i] > 0 && sizeOfMainNode < ARRAY_SIZE_2 && i < ARRAY_SIZE_2) { // add each vertex of nodeToMerge to the merged nodes
            if (nodeToMain[graphCut[nodeToMerge][i]] != initialRand) {
                graphCut[initialRand][sizeOfMainNode] = nodeToMain[graphCut[nodeToMerge][i]];
                sizeOfMainNode++;
            }
            i++;
        }

        nodeRemaining--;
    }

    return sizeOfArray(graphCut[nodeToMain[1]]);
}

I am sure the code is not very clean, maybe even quite bad (I am a beginner in C), so I welcome any other advice.

The error I get in the debugger seems random. The error is:

Impossible to divide by 0

It stops at line 62 in time64.c:

tim = (__time64_t)((nt_time.ft_scalar - EPOCH_BIAS) / 10000000i64);

Best answer

The change in array size is likely causing a stack overflow. A common default stack size is 1 MB (1048576 bytes). If you have:

long graphCut[200][1000];

With 4 == sizeof(long), the graphCut array takes up 200 * 1000 * 4 = 800000 bytes, leaving 248576 bytes, which may not be enough for the stack variables in the populateIntegerArray() function (which I have not seen). If 8 == sizeof(long), the array needs 1600000 bytes, which is more than 1 MB.
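
As a quick sanity check of that arithmetic (a minimal sketch, not part of the original answer), you can print the footprint of the array type directly:

#include <stdio.h>

int main(void)
{
    /* size in bytes of one 200 x 1000 array of long */
    printf("%zu bytes\n", sizeof(long[200][1000])); /* 800000 with a 4-byte long, 1600000 with an 8-byte long */
    return 0;
}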

If you need an array of that size, allocate it (in whole or in part) on the heap instead of on the stack. For example:

long* graphCut[ARRAY_SIZE_1];
int i;
for (i = 0; i < sizeof(graphCut)/sizeof(graphCut[0]); i++)
{
    /* allocate and zero one row per node on the heap */
    graphCut[i] = malloc(ARRAY_SIZE_2 * sizeof(graphCut[0][0]));
    memset(graphCut[i], 0, ARRAY_SIZE_2 * sizeof(graphCut[0][0]));
}

for (i = 0; i < sizeof(graphCut)/sizeof(graphCut[0]); i++)
{
    /* release the rows when done */
    free(graphCut[i]);
}
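
If a contiguous block is preferred, another option (a sketch assuming C99, reusing the answer's ARRAY_SIZE_1 / ARRAY_SIZE_2 names) is a single calloc through a pointer to an array, which keeps the graphCut[i][j] indexing unchanged:

#include <stdlib.h>

/* one zero-initialized heap block holding all rows */
long (*graphCut)[ARRAY_SIZE_2] = calloc(ARRAY_SIZE_1, sizeof *graphCut);
if (graphCut == NULL) {
    /* handle allocation failure */
}
/* ... use graphCut[i][j] as before ... */
free(graphCut);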

Regarding "c - Karger minimum cut algorithm crashes on large input", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/11578549/
