
node.js - Azure Blob Storage - Uploading a file with progress


I have the following code that uploads a file to Azure Blob Storage, and the upload itself works fine. However, instead of onProgress being executed multiple times during the upload, it is executed only (and always) once, with a value equal to file.size. So the file is sent to Azure (slowly), but the progress callback fires just a single time, when the upload has finished.

const requestOptions = this.mergeWithDefaultOptions(perRequestOptions);
const client = this.getRequestClient(requestOptions);
const containerClient = await client.getContainerClient(this.options.containerName);
const blobClient = await containerClient.getBlockBlobClient(file.name);
const uploadStatus = await blobClient.upload(file.buffer, file.size, { onProgress: progressCallBack });

I would like to know whether this behavior is normal for this library (the same approach works fine when downloading a file from Azure, where progress is reported multiple times).

Best Answer

Based on my tests, the upload method is a non-parallel upload: it sends a single Put Blob request to the Azure Storage server, which is why onProgress is reported only once, when that single request completes. For more details, please refer to the Put Blob documentation.

So if you want onProgress to be executed multiple times, I suggest you use the uploadStream method instead. It uploads the blob with the Put Block and Put Block List operations, so progress is reported as each block is uploaded. For more details, please refer to the uploadStream documentation.

For example:

const fs = require("fs");
const {
  BlobServiceClient,
  StorageSharedKeyCredential,
} = require("@azure/storage-blob");

async function main() {
  try {
    // accountName / accountKey are the storage account name and access key
    const creds = new StorageSharedKeyCredential(accountName, accountKey);
    const blobServiceClient = new BlobServiceClient(
      `https://${accountName}.blob.core.windows.net`,
      creds
    );
    const containerClient = blobServiceClient.getContainerClient("upload");
    const blob = containerClient.getBlockBlobClient(
      "spark-3.0.1-bin-hadoop3.2.tgz"
    );

    const maxConcurrency = 20; // max number of blocks uploaded in parallel
    const blockSize = 4 * 1024 * 1024; // block size (4 MB) in the uploaded block blob
    const res = await blob.uploadStream(
      fs.createReadStream("d:/spark-3.0.1-bin-hadoop3.2.tgz", {
        highWaterMark: blockSize,
      }),
      blockSize,
      maxConcurrency,
      { onProgress: (ev) => console.log(ev) } // fires as each block is uploaded
    );
    console.log(res._response.status);
  } catch (error) {
    console.log(error);
  }
}

main();
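If the file is already in memory as a Buffer, as in the question's code, the same uploadStream call can be reused by wrapping the Buffer in a stream. The following is a minimal sketch and not part of the original answer; uploadBufferWithProgress is a hypothetical helper, and it assumes file.buffer is a Node.js Buffer and progressCallBack is the same callback as in the question.

const { Readable } = require("stream");

// Sketch (assumption, not from the original answer): wrap the in-memory
// Buffer in a Readable stream so uploadStream can split it into blocks
// and report progress per staged block instead of once at the end.
async function uploadBufferWithProgress(containerClient, file, progressCallBack) {
  const blobClient = containerClient.getBlockBlobClient(file.name);
  const blockSize = 4 * 1024 * 1024; // stage the blob in 4 MB blocks
  const maxConcurrency = 5;          // number of blocks uploaded in parallel
  return blobClient.uploadStream(
    Readable.from(file.buffer),      // file.buffer is assumed to be a Buffer
    blockSize,
    maxConcurrency,
    { onProgress: progressCallBack } // called as blocks complete
  );
}

With this approach, onProgress should report cumulative loadedBytes roughly once per staged block rather than a single time at the end, as long as the buffer is larger than blockSize.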


Regarding node.js - Azure Blob Storage - uploading a file with progress, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64910728/
