
azure - Download a large file and split it into 100 MB chunks in blob storage


I have a 2GB file in blob storage and I'm building a console application that will download this file to the desktop. The requirement is to split it into 100MB chunks and append a number to each file name. I do not need to re-combine the files afterwards; all I need are the file chunks.

I currently have code from Azure download blob part:

However, I can't figure out how to stop the download once a file reaches 100MB in size and then create a new file.

Any help would be appreciated.

Update: here is my code

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
var file = uri;
var blob = container.GetBlockBlobReference(file);

// First fetch the size of the blob. We use this to create an empty file with size = blob's size.
blob.FetchAttributes();
var blobSize = blob.Properties.Length;
long blockSize = 1 * 1024 * 1024; // 1 MB chunk
blockSize = Math.Min(blobSize, blockSize);

// Create an empty file of the blob's size.
using (FileStream fs = new FileStream(file, FileMode.Create))
{
    fs.SetLength(blobSize); // Set its size.
}

var blobRequestOptions = new BlobRequestOptions
{
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3),
    MaximumExecutionTime = TimeSpan.FromMinutes(60),
    ServerTimeout = TimeSpan.FromMinutes(60)
};

long startPosition = 0;
long currentPointer = 0;
long bytesRemaining = blobSize;
do
{
    var bytesToFetch = Math.Min(blockSize, bytesRemaining);
    using (MemoryStream ms = new MemoryStream())
    {
        // Download a range (1 MB by default).
        blob.DownloadRangeToStream(ms, currentPointer, bytesToFetch, null, blobRequestOptions);
        ms.Position = 0;
        var contents = ms.ToArray();
        using (var fs = new FileStream(file, FileMode.Open)) // Open that file.
        {
            fs.Position = currentPointer; // Seek to the current offset.
            fs.Write(contents, 0, contents.Length); // Write the downloaded range at that offset.
        }
        startPosition += blockSize;
        currentPointer += contents.Length; // Update the pointer.
        bytesRemaining -= contents.Length; // Update the bytes left to fetch.

        Console.WriteLine(fileName + dateTimeStamp + ".csv " + (startPosition / 1024 / 1024) + "/" + (blob.Properties.Length / 1024 / 1024) + " MB downloaded...");
    }
}
while (bytesRemaining > 0);
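For illustration, one possible shape of what the question is asking for: instead of pre-allocating a single large file, the same sequential loop can roll over to a new numbered part file every time 100MB has been written. This is only a hedged sketch, assuming the `blob`, `blobSize`, and `blobRequestOptions` from the code above; `basePath` and the 100MB constant are assumptions for illustration:

const long partSize = 100L * 1024 * 1024; // 100MB per output file (assumed from the requirement)
long blockSize = 1 * 1024 * 1024;         // still download 1MB at a time
long offset = 0;
int partNumber = 0;
FileStream fs = null;

while (offset < blobSize)
{
    // Open the next numbered part file whenever the current one is full.
    if (fs == null || fs.Length >= partSize)
    {
        fs?.Dispose();
        fs = new FileStream(basePath + "_" + (++partNumber), FileMode.Create);
    }
    var bytesToFetch = Math.Min(blockSize, blobSize - offset);
    using (var ms = new MemoryStream())
    {
        blob.DownloadRangeToStream(ms, offset, bytesToFetch, null, blobRequestOptions);
        ms.Position = 0;
        ms.CopyTo(fs); // append the downloaded range to the current part file
    }
    offset += bytesToFetch;
}
fs?.Dispose();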

Best Answer

As I understand it, you can split the blob file into the expected parts (100MB), then leverage CloudBlockBlob.DownloadRangeToStream to download each file chunk. Here is my code snippet, which you can refer to:

ParallelDownloadBlob

private static void ParallelDownloadBlob(Stream outPutStream, CloudBlockBlob blob, long startRange, long endRange)
{
    blob.FetchAttributes();
    int bufferLength = 1 * 1024 * 1024; // 1 MB chunk for download
    long blobRemainingLength = endRange - startRange;
    Queue<KeyValuePair<long, long>> queues = new Queue<KeyValuePair<long, long>>();
    long offset = startRange;

    // Break the [startRange, endRange) span into 1 MB (offset, length) pairs.
    while (blobRemainingLength > 0)
    {
        long chunkLength = (long)Math.Min(bufferLength, blobRemainingLength);
        queues.Enqueue(new KeyValuePair<long, long>(offset, chunkLength));
        offset += chunkLength;
        blobRemainingLength -= chunkLength;
    }

    Parallel.ForEach(queues,
        new ParallelOptions()
        {
            MaxDegreeOfParallelism = 5
        }, (queue) =>
        {
            using (var ms = new MemoryStream())
            {
                // Download this sub-range into memory.
                blob.DownloadRangeToStream(ms, queue.Key, queue.Value);
                lock (outPutStream)
                {
                    // Write it at the matching offset within the output stream.
                    outPutStream.Position = queue.Key - startRange;
                    var bytes = ms.ToArray();
                    outPutStream.Write(bytes, 0, bytes.Length);
                }
            }
        });
}

Main program

var container = storageAccount.CreateCloudBlobClient().GetContainerReference(defaultContainerName);
var blob = container.GetBlockBlobReference("code.txt");
blob.FetchAttributes();
long blobTotalLength = blob.Properties.Length;
long chunkLength = 10 * 1024; // divide the blob into files of up to 10KB each

for (long i = 0; i < blobTotalLength; i += chunkLength)
{
    long startRange = i;
    long endRange = (i + chunkLength) > blobTotalLength ? blobTotalLength : (i + chunkLength);

    using (var fs = new FileStream(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"resources\\code_[{startRange}]_[{endRange}].txt"), FileMode.Create))
    {
        Console.WriteLine($"\nParallelDownloadBlob from range [{startRange}] to [{endRange}] start...");
        Stopwatch sp = new Stopwatch();
        sp.Start();

        ParallelDownloadBlob(fs, blob, startRange, endRange);
        sp.Stop();
        Console.WriteLine($"download done, time cost: {sp.ElapsedMilliseconds / 1000.0}s");
    }
}
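To match the question's actual requirement (100MB chunks with a number appended to the file name), presumably only the chunk size and naming scheme in the driver loop above would change. A minimal sketch, with `partNumber` introduced for illustration:

long chunkLength = 100L * 1024 * 1024; // 100MB per chunk, per the question
int partNumber = 0;
for (long i = 0; i < blobTotalLength; i += chunkLength)
{
    long startRange = i;
    long endRange = Math.Min(i + chunkLength, blobTotalLength);
    // e.g. code_1.txt, code_2.txt, ... (numbered names, as the question requires)
    string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"code_{++partNumber}.txt");
    using (var fs = new FileStream(path, FileMode.Create))
    {
        ParallelDownloadBlob(fs, blob, startRange, endRange);
    }
}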

Result: (screenshots omitted)

Update:

Per your requirement, I'd suggest downloading the blob into a single file, then leveraging LumenWorks.Framework.IO to read the large file's records line by line, checking the number of bytes you have read so far and saving the records to a new CSV file capped at 100MB in size. Here is a code snippet you can refer to:

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string[] headers = csv.GetFieldHeaders();
    while (csv.ReadNextRecord())
    {
        for (int i = 0; i < fieldCount; i++)
            Console.Write(string.Format("{0} = {1};",
                headers[i],
                csv[i] == null ? "MISSING" : csv[i]));
        //TODO:
        //1. Read the current record and track the total bytes read so far;
        //2. When the running total reaches 100MB, create a new CSV file and save the current record to it.
    }
}
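Filling in those TODOs, here is a minimal sketch of the splitting logic, using the same LumenWorks reader as above. The 100MB cap, the `data_N.csv` naming, and the naive comma join (no quoting of fields that contain delimiters) are illustrative assumptions, not part of the original answer:

using System;
using System.IO;
using System.Text;
using LumenWorks.Framework.IO.Csv;

class CsvSplitter
{
    static void Main()
    {
        const long maxChunkBytes = 100L * 1024 * 1024; // assumed 100MB cap per output file

        using (var csv = new CsvReader(new StreamReader("data.csv"), true))
        {
            int fieldCount = csv.FieldCount;
            string headerLine = string.Join(",", csv.GetFieldHeaders());
            StreamWriter writer = null;
            long bytesWritten = 0;
            int partNumber = 0;

            while (csv.ReadNextRecord())
            {
                // Rebuild the record as one CSV line (naive join: assumes no embedded commas/quotes).
                var fields = new string[fieldCount];
                for (int i = 0; i < fieldCount; i++)
                    fields[i] = csv[i];
                string line = string.Join(",", fields);
                long lineBytes = Encoding.UTF8.GetByteCount(line) + 2; // +2 for CRLF

                // Start a new numbered part before the current one would exceed the cap.
                if (writer == null || bytesWritten + lineBytes > maxChunkBytes)
                {
                    writer?.Dispose();
                    writer = new StreamWriter($"data_{++partNumber}.csv", false, Encoding.UTF8);
                    writer.WriteLine(headerLine); // repeat the header in every part
                    bytesWritten = Encoding.UTF8.GetByteCount(headerLine) + 2;
                }
                writer.WriteLine(line);
                bytesWritten += lineBytes;
            }
            writer?.Dispose();
        }
    }
}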

Also, you can refer to A Fast CSV Reader and CsvHelper for more details.

Update 2

Here is a code sample for breaking a large CSV file into smaller CSV files of a fixed byte size, using CsvHelper 2.16.3, which you can refer to:

string[] headers = new string[0];
using (var sr = new StreamReader(@"C:\Users\v-brucch\Desktop\BlobHourMetrics.csv")) // 83.9KB
{
    using (CsvHelper.CsvReader csvReader = new CsvHelper.CsvReader(sr,
        new CsvHelper.Configuration.CsvConfiguration()
        {
            Delimiter = ",",
            Encoding = Encoding.UTF8
        }))
    {
        // Read the header record.
        if (csvReader.ReadHeader())
        {
            headers = csvReader.FieldHeaders;
        }

        TextWriter writer = null;
        CsvWriter csvWriter = null;
        long readBytesCount = 0;
        long chunkSize = 30 * 1024; // divide the CSV file into files of up to 30KB each

        while (csvReader.Read())
        {
            var curRecord = csvReader.CurrentRecord;
            var curRecordByteCount = curRecord.Sum(r => Encoding.UTF8.GetByteCount(r)) + headers.Count() + 1;
            readBytesCount += curRecordByteCount;

            // Check the bytes read so far; roll over to a new file once the chunk size is reached.
            if (writer == null || readBytesCount > chunkSize)
            {
                readBytesCount = curRecordByteCount + headers.Sum(h => Encoding.UTF8.GetByteCount(h)) + headers.Count() + 1;
                if (writer != null)
                {
                    writer.Flush();
                    writer.Close();
                }
                string fileName = $"BlobHourMetrics_{Guid.NewGuid()}.csv";
                writer = new StreamWriter(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName), true);
                csvWriter = new CsvWriter(writer);
                csvWriter.Configuration.Encoding = Encoding.UTF8;
                // Output the header fields.
                foreach (var header in headers)
                {
                    csvWriter.WriteField(header);
                }
                csvWriter.NextRecord();
            }
            // Output the record fields.
            foreach (var field in curRecord)
            {
                csvWriter.WriteField(field);
            }
            csvWriter.NextRecord();
        }
        if (writer != null)
        {
            writer.Flush();
            writer.Close();
        }
    }
}

Result: (screenshot omitted)

This entry on "azure - Download a large file and split it into 100 MB chunks in blob storage" is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/44381097/
