
Azure Blob Storage append blob 409/modified error when refreshing the portal/Storage Explorer during an append

Reposted · Author: 行者123 · Updated: 2023-12-03 03:44:09

I'm getting an error while uploading blocks to an append blob in Azure. The process works fine on its own, but the problem appears when I refresh the container in Storage Explorer (latest version) or refresh the page in the Azure portal while the upload is running. My process throws the following:

An exception of type 'Azure.RequestFailedException' occurred in System.Private.CoreLib.dll but was not handled in user code: 'The blob has been modified while being read.
RequestId:62778adb-001e-011e-29a4-d589bf000000
Time:2021-11-09T20:02:29.3183234Z
Status: 409 (The blob has been modified while being read.)
ErrorCode: BlobModifiedWhileReading

Leasing the blob makes no difference.
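
For reference, the lease variant that was tried would look roughly like this sketch. The names `appendClient` and `chunk` mirror the test code below, and it assumes Azure.Storage.Blobs 12.x; as noted, it does not avoid the 409:

```csharp
using Azure.Storage.Blobs.Models;        // AppendBlobRequestConditions, BlobLease
using Azure.Storage.Blobs.Specialized;   // GetBlobLeaseClient extension

// Acquire a 60-second lease on the append blob...
BlobLeaseClient leaseClient = appendClient.GetBlobLeaseClient();
BlobLease lease = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(60));

// ...and pass the lease id with each append, so only the lease holder can write.
await appendClient.AppendBlockAsync(
    new MemoryStream(chunk),
    conditions: new AppendBlobRequestConditions { LeaseId = lease.LeaseId });

await leaseClient.ReleaseAsync();
```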

The test code is:

using System;
using System.IO;
using System.Buffers;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

namespace str
{
    static class Program
    {
        static async Task Main(string[] args)
        {
            const string ContainerName = "files";
            const string BlobName = "my.blob";
            const int ChunkSize = 4194304; // 4 MB, the append blob block limit

            const string connstr = "some-connecting-string-to-your-datalake-gen-2-account";

            BlobServiceClient blobClient = new(connstr);

            BlobContainerClient containerClient = blobClient.GetBlobContainerClient(ContainerName);
            await containerClient.CreateIfNotExistsAsync();

            AppendBlobClient appendClient = containerClient.GetAppendBlobClient(BlobName);
            await appendClient.CreateIfNotExistsAsync();

            using FileStream fs = await FileMaker.CreateNonsenseFileAsync();
            using BinaryReader reader = new(fs);

            bool readLoop = true;

            while (readLoop)
            {
                byte[] chunk = reader.ReadBytes(ChunkSize);

                if (chunk.Length > 0)
                    await appendClient.AppendBlockAsync(new MemoryStream(chunk));

                // A short read means we've reached the end of the file.
                readLoop = chunk.Length == ChunkSize;
            }

            fs.Close();
            File.Delete(fs.Name);
        }
    }

    public static class FileMaker
    {
        public static async Task<FileStream> CreateNonsenseFileAsync(int blocks = 30000)
        {
            string tempFile = Path.GetTempFileName();
            FileStream fs = File.OpenWrite(tempFile);
            byte[] buffer = ArrayPool<byte>.Shared.Rent(1024);

            Random randy = new();

            try
            {
                using StreamWriter writer = new(fs);
                for (int i = 0; i < blocks; i++)
                {
                    randy.NextBytes(buffer);
                    await writer.WriteAsync(Encoding.UTF8.GetString(buffer));
                }
            }
            finally
            {
                // Return the rented buffer to the pool.
                ArrayPool<byte>.Shared.Return(buffer);
            }

            return File.OpenRead(tempFile);
        }
    }
}

The csproj is:

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Azure.Storage.Files.DataLake" Version="12.8.0" />
<PackageReference Include="System.Buffers" Version="4.5.1" />
</ItemGroup>

</Project>

I can only imagine that refreshing the page somehow triggers a metadata change and the service enthusiastically gives up, but that seems rather arbitrary, since you presumably can't know who is viewing a blob while you're uploading to it?

As noted above, when nobody refreshes the page in the portal or Storage Explorer, this code works fine and uploads the junk in 4 MB blocks (the limit for append blob writes; switching to 2 MB blocks makes no difference) without a single problem.
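
One workaround worth trying (not from the original post) is simply retrying the append when the service returns 409 `BlobModifiedWhileReading`, since the conflict appears transient. A minimal sketch, reusing `appendClient` and `chunk` from the test code above:

```csharp
// Retry a single AppendBlockAsync call on a 409 conflict, with a short backoff.
const int MaxAttempts = 3;

for (int attempt = 1; ; attempt++)
{
    try
    {
        await appendClient.AppendBlockAsync(new MemoryStream(chunk));
        break; // append succeeded
    }
    catch (Azure.RequestFailedException ex)
        when (ex.Status == 409 && attempt < MaxAttempts)
    {
        // Transient conflict while the blob is being read; back off and retry.
        await Task.Delay(TimeSpan.FromMilliseconds(500 * attempt));
    }
}
```

Because appends are sequential, a retried block is re-sent as-is; if exact-once semantics matter, `AppendBlobRequestConditions.IfAppendPositionEqual` can guard against writing the same block twice.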

Best Answer

I tried to reproduce the scenario on my system but did not run into the problem you are facing. After refreshing the portal, the appended data was retrieved.


Output

[screenshot: blob contents before refreshing]

[screenshot: blob contents after refreshing]

A similar question about the Azure Blob Storage append blob 409/modified error when refreshing the portal/Storage Explorer during an append was found on Stack Overflow: https://stackoverflow.com/questions/69904460/

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号