
c++ - Splitting a string in C++ takes progressively more time, even though the lines are roughly similar in length


For my project I need to read and process large files containing the energies of seismic receivers. For versatility it has to be able to handle both .dat and .segy files. My problem is with the .dat files. My current implementation splits a string at the '\t' character, puts the match into a substring, and pushes the value as a float onto a std::vector<float>. The substring and the tab are then erased from the line, and it searches for the next value. See below:

std::vector<float> parseLine(std::string& number, std::ifstream& file)
{
    getline(file, number); // read the number
    std::vector<float> datalist = selectData(number);

    //for (auto y : datalist) std::cout << y << " ";
    //std::cout << std::endl;
    return datalist;
}


std::vector<float> selectData(std::string& line)
{
    std::vector<float> returnVec;
    //auto parsing_start = std::chrono::high_resolution_clock::now();

    // The question is about this part
    while (true)
    {
        int index = line.find_first_of("\t");
        std::string match = line.substr(0, index);
        if (!line.empty()) {
            returnVec.push_back(std::stof(match));
            line.erase(0, match.length());
        }
        if (line[0] == '\t') line.erase(0, 1);
        if (line.empty()) {
            //std::cout << "line is empty" << std::endl;
            break;
        }
    }
    return returnVec;
}

Every 100 lines I print the time elapsed since the previous 100-line interval. This shows that the first 100 lines take only 1.3 seconds, while the last 100 lines take over 40 seconds, increasing steadily in between (see the figure below). Since my file has 6000 lines with roughly 4000 data points each, just reading the file takes far too long (about 38 minutes when I timed it). The lines are all similar in length and composition, and I don't understand why the time per line increases so much. The lines look like this (the first 2 columns are coordinates):

400 1   200.0   205.1   80.1    44.5
400 2 250.0 209.1 70.1 40.0

Except, of course, with 4000 columns instead of 6.

Here is the main function, along with how I measure the time and the #includes:

#include <stdio.h>
#include <fstream>
#include <string>
#include <iostream>
#define _SILENCE_EXPERIMENTAL_FILESYSTEM_DEPRECATION_WARNING
#include <experimental/filesystem>
#include <regex>
#include <iterator>
#include <chrono>
#include <Eigen/Dense>
#include "readSeis.h"

MatrixXf extractSeismics(std::string file)
{
    MatrixXf M;

    auto start = std::chrono::high_resolution_clock::now();
    auto interstart = std::chrono::high_resolution_clock::now();
    checkExistence(file);
    std::ifstream myfile(file);
    if (!myfile)
    {
        std::cout << "Could not open file " << file << std::endl;
        exit(1);
    }
    int skipCols = 2; // I don't need the coordinates now
    size_t linecount = 0;
    size_t colcount = 0;
    while (!myfile.eof()) // while not at End Of File (eof)
    {
        std::string number;
        std::vector<float> data = parseLine(number, myfile);
        if (linecount == 0) colcount = data.size() - skipCols;
        //auto resize_start = std::chrono::high_resolution_clock::now();
        M.conservativeResize(linecount + 1, colcount); // preserves old values :)
        //printElapsedTime(resize_start);
        for (int i = skipCols; i < data.size(); i++)
        {
            M(linecount, i - skipCols) = data[i];
        }
        linecount++;
        // Measure interval time
        if (linecount % 100 == 0)
        {
            std::cout << "Parsing line " << linecount << ", ";
            printElapsedTime(interstart);
            interstart = std::chrono::high_resolution_clock::now();
        }
    }
    myfile.close();
    printElapsedTime(start);
    return M;
}

As a side note, I also tried parsing the lines with a regex, which gave a constant time of 300 ms per line (30 minutes for the whole file). The splitting method is much faster at the start (12 ms per line) but much slower at the end (440 ms per line). The increase in time is linear.

[Figure: time needed to read each batch of 100 lines]

For completeness, the output looks like this:

testSeis1500_1510_290_832.dat exists, continuing program
Parsing line 100, Execution time : 1204968 Microseconds
Parsing line 200, Execution time : 1971723 Microseconds
Parsing line 300, Execution time : 2727474 Microseconds
Parsing line 400, Execution time : 3640131 Microseconds
Parsing line 500, Execution time : 4392584 Microseconds
Parsing line 600, Execution time : 5150465 Microseconds
Parsing line 700, Execution time : 5944256 Microseconds
Parsing line 800, Execution time : 6680841 Microseconds
Parsing line 900, Execution time : 7456237 Microseconds
Parsing line 1000, Execution time : 8201579 Microseconds
Parsing line 1100, Execution time : 8999075 Microseconds
Parsing line 1200, Execution time : 9860883 Microseconds
Parsing line 1300, Execution time : 10524525 Microseconds
Parsing line 1400, Execution time : 11286452 Microseconds
Parsing line 1500, Execution time : 12134566 Microseconds
Parsing line 1600, Execution time : 12872876 Microseconds
Parsing line 1700, Execution time : 13815265 Microseconds
Parsing line 1800, Execution time : 14528233 Microseconds
Parsing line 1900, Execution time : 15221609 Microseconds
Parsing line 2000, Execution time : 15989419 Microseconds
Parsing line 2100, Execution time : 16850944 Microseconds
Parsing line 2200, Execution time : 17717721 Microseconds
Parsing line 2300, Execution time : 18318276 Microseconds
Parsing line 2400, Execution time : 19286148 Microseconds
Parsing line 2500, Execution time : 19828358 Microseconds
Parsing line 2600, Execution time : 20678683 Microseconds
Parsing line 2700, Execution time : 21648089 Microseconds
Parsing line 2800, Execution time : 22229266 Microseconds
Parsing line 2900, Execution time : 23398151 Microseconds
Parsing line 3000, Execution time : 23915173 Microseconds
Parsing line 3100, Execution time : 24523879 Microseconds
Parsing line 3200, Execution time : 25547811 Microseconds
Parsing line 3300, Execution time : 26087140 Microseconds
Parsing line 3400, Execution time : 26991734 Microseconds
Parsing line 3500, Execution time : 27795577 Microseconds
Parsing line 3600, Execution time : 28367321 Microseconds
Parsing line 3700, Execution time : 29127089 Microseconds
Parsing line 3800, Execution time : 29998775 Microseconds
Parsing line 3900, Execution time : 30788170 Microseconds
Parsing line 4000, Execution time : 31456488 Microseconds
Parsing line 4100, Execution time : 32458102 Microseconds
Parsing line 4200, Execution time : 33345031 Microseconds
Parsing line 4300, Execution time : 33853183 Microseconds
Parsing line 4400, Execution time : 34676522 Microseconds
Parsing line 4500, Execution time : 35593187 Microseconds
Parsing line 4600, Execution time : 37059032 Microseconds
Parsing line 4700, Execution time : 37118954 Microseconds
Parsing line 4800, Execution time : 37824417 Microseconds
Parsing line 4900, Execution time : 38756924 Microseconds
Parsing line 5000, Execution time : 39446184 Microseconds
Parsing line 5100, Execution time : 40194553 Microseconds
Parsing line 5200, Execution time : 41051359 Microseconds
Parsing line 5300, Execution time : 41498345 Microseconds
Parsing line 5400, Execution time : 42524946 Microseconds
Parsing line 5500, Execution time : 43252436 Microseconds
Parsing line 5600, Execution time : 44145627 Microseconds
Parsing line 5700, Execution time : 45081208 Microseconds
Parsing line 5800, Execution time : 46072319 Microseconds
Parsing line 5900, Execution time : 46603417 Microseconds
Execution time : 1442777428 Microseconds

Can anyone see why this is happening? Any help would be greatly appreciated. :)

Best answer

Here is some code that reads a file roughly as you describe. It reads a line at a time, parses the floats out of the line, skips the first N columns, and puts the rest into a vector<float>. The main function stores each line in a vector<vector<float>>, and (to make sure the rest doesn't get optimized away) adds up all the values it read and prints the total at the end.

#include <iostream>
#include <sstream>
#include <vector>
#include <iterator>
#include <numeric>
#include <fstream>
#include <cstdlib> // EXIT_FAILURE

std::vector<float> selectData(std::string const &line, int skip_count) {
    std::istringstream buffer(line);

    float ignore;
    for (int i = 0; i < skip_count; i++)
        buffer >> ignore;

    return std::vector<float>{std::istream_iterator<float>(buffer), {}};
}

int main(int argc, char **argv) {
    std::string line;
    float total = 0.0f;

    if (argc != 2) {
        std::cerr << "Usage: accum <infile>\n";
        return EXIT_FAILURE;
    }

    std::vector<std::vector<float>> matrix;

    std::ifstream infile(argv[1]);
    while (std::getline(infile, line)) {
        auto vals = selectData(line, 2);
        matrix.push_back(vals);
        total += std::accumulate(vals.begin(), vals.end(), 0.0f);
    }
    std::cout << "total: " << total << "\n";
}

On my machine, this reads a file of 6000 lines with 4000 numbers each in about 22 seconds (but, as they say, your mileage may vary; my machine is fairly old, and on a newer machine I wouldn't be surprised at all to see this speed doubled). That still leaves room for further improvement, but at a guess, cutting the read time from 38 minutes to 22 seconds (or so) is probably enough that further improvement isn't a high priority.

Regarding "c++ - Splitting a string in C++ takes progressively more time, even though the lines are roughly similar in length", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58342613/
