I think I'm close on this. I have the following dropzone configuration:

Dropzone.options.myDZ = {
    chunking: true,
    chunkSize: 500000,
    retryChunks: true,
    retryChunksLimit: 3,
    chunksUploaded: function(file, done) {
        done();
    }
};

But because of the done() call it finishes after one chunk. I think at this point I need to check whether all chunks have been uploaded, and only then call done().

Here is the wiki on chunked uploads: https://gitlab.com/meno/dropzone/wikis/faq#chunked-uploads
And here are the configuration options: http://www.dropzonejs.com/#configuration

Has anyone done chunking with dropzone?
Best Answer
After working on this for a while, I can confirm (using the latest version of dropzone, 5.4.0) that chunksUploaded is only called once all chunks of a file have been uploaded. Calling done() then finishes processing the file successfully. What is the size of the file you are trying to upload? If it is below chunkSize, the file is not actually chunked (because forceChunking defaults to false) and chunksUploaded is never called (source).

Below is my working front-end implementation of chunking with dropzone.js. A few notes up front: myDropzone and currentFile are global variables declared outside of $(document).ready(), like so:

var currentFile = null;
var myDropzone = null;

This is because I need them in scope when doing error handling on the PUT request inside the chunksUploaded function (the done() passed in there does not accept an error message as a parameter, so we have to handle errors ourselves, which requires those globals; I can elaborate if necessary).
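As an aside, the point above about forceChunking can be sketched as a tiny predicate. This is a simplified model of the documented behavior, not dropzone's actual source code:

```javascript
// Simplified model of dropzone's chunking decision (assumption: mirrors
// the documented behavior of chunking/forceChunking/chunkSize, not
// copied from dropzone's implementation).
function willChunk(fileSize, options) {
  if (!options.chunking) return false;
  // With forceChunking, even files smaller than chunkSize are chunked,
  // so chunksUploaded is guaranteed to fire for every file.
  return options.forceChunking || fileSize > options.chunkSize;
}

// A 400 KB file with a 500 KB chunkSize is only chunked when forced:
console.log(willChunk(400000, { chunking: true, chunkSize: 500000, forceChunking: false })); // false
console.log(willChunk(400000, { chunking: true, chunkSize: 500000, forceChunking: true }));  // true
```

This is why the configuration below sets forceChunking: true — it guarantees the chunksUploaded callback runs regardless of file size.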
$(function () {
    myDropzone = new Dropzone("#attachDZ", {
        url: "/api/ChunkedUpload",
        params: function (files, xhr, chunk) {
            if (chunk) {
                return {
                    dzUuid: chunk.file.upload.uuid,
                    dzChunkIndex: chunk.index,
                    dzTotalFileSize: chunk.file.size,
                    dzCurrentChunkSize: chunk.dataBlock.data.size,
                    dzTotalChunkCount: chunk.file.upload.totalChunkCount,
                    dzChunkByteOffset: chunk.index * this.options.chunkSize,
                    dzChunkSize: this.options.chunkSize,
                    dzFilename: chunk.file.name,
                    userID: <%= UserID %>,
                };
            }
        },
        parallelUploads: 1, // since we're using a global 'currentFile', we could have issues if parallelUploads > 1, so keep it at 1
        maxFilesize: 1024, // max individual file size, in MB
        chunking: true, // enable chunking
        forceChunking: true, // forces chunking even when file.size < chunkSize
        parallelChunkUploads: true, // allows chunks to be uploaded in parallel (this is independent of the parallelUploads option)
        chunkSize: 1000000, // chunk size 1,000,000 bytes (~1 MB)
        retryChunks: true, // retry chunks on failure
        retryChunksLimit: 3, // retry a maximum of 3 times (default is 3)
        chunksUploaded: function (file, done) {
            // All chunks have been uploaded. Perform any other actions here.
            currentFile = file;
            // This calls server-side code to merge all chunks for the currentFile
            $.ajax({
                type: "PUT",
                url: "/api/ChunkedUpload?dzIdentifier=" + currentFile.upload.uuid
                    + "&fileName=" + encodeURIComponent(currentFile.name)
                    + "&expectedBytes=" + currentFile.size
                    + "&totalChunks=" + currentFile.upload.totalChunkCount
                    + "&userID=" + <%= UserID %>,
                success: function (data) {
                    // Must call done() on success
                    done();
                },
                error: function (msg) {
                    currentFile.accepted = false;
                    myDropzone._errorProcessing([currentFile], msg.responseText);
                }
            });
        },
        init: function () {
            // This calls server-side code to delete the temporary files created if the file failed to upload.
            // It also gets called if the upload is canceled.
            this.on('error', function (file, errorMessage) {
                $.ajax({
                    type: "DELETE",
                    url: "/api/ChunkedUpload?dzIdentifier=" + file.upload.uuid
                        + "&fileName=" + encodeURIComponent(file.name)
                        + "&expectedBytes=" + file.size
                        + "&totalChunks=" + file.upload.totalChunkCount
                        + "&userID=" + <%= UserID %>,
                    success: function (data) {
                        // nothing to do
                    }
                });
            });
        }
    });
});
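A side note on the URL building above: manual string concatenation works, but it is easy to forget an encodeURIComponent. An alternative (my suggestion, not part of the original answer; the file fields mirror the dropzone file object used above) is URLSearchParams, which encodes every value for you:

```javascript
// Alternative to manual string concatenation for the commit (PUT) URL.
// URLSearchParams form-encodes every value, so no encodeURIComponent calls
// are needed. 'userID' here is a hypothetical stand-in for whatever the
// server-side template injects.
function buildCommitUrl(file, userID) {
  const params = new URLSearchParams({
    dzIdentifier: file.upload.uuid,
    fileName: file.name,
    expectedBytes: file.size,
    totalChunks: file.upload.totalChunkCount,
    userID: userID
  });
  return "/api/ChunkedUpload?" + params.toString();
}

// Example with a dummy dropzone-style file object:
const url = buildCommitUrl(
  { name: "my report.pdf", size: 12345, upload: { uuid: "abc-123", totalChunkCount: 13 } },
  42
);
console.log(url);
```

The same helper would serve the DELETE request in the error handler, since both endpoints take identical query parameters.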
If anyone is interested in my server-side code, let me know and I'll post it. I'm using C#/ASP.NET.

Edit: server-side code added.

ChunkedUploadController.cs:
public class ChunkedUploadController : ApiController
{
    private class DzMeta
    {
        public int intChunkNumber = 0;
        public string dzChunkNumber { get; set; }
        public string dzChunkSize { get; set; }
        public string dzCurrentChunkSize { get; set; }
        public string dzTotalSize { get; set; }
        public string dzIdentifier { get; set; }
        public string dzFilename { get; set; }
        public string dzTotalChunks { get; set; }
        public string dzCurrentChunkByteOffset { get; set; }
        public string userID { get; set; }

        public DzMeta(Dictionary<string, string> values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFilename"]; // key must match the client-side param name exactly; Dictionary lookups are case-sensitive
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }

        public DzMeta(NameValueCollection values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFilename"];
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }
    }
    [HttpPost]
    public async Task<HttpResponseMessage> UploadChunk()
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.Created };
        try
        {
            if (!Request.Content.IsMimeMultipartContent("form-data"))
            {
                // No files uploaded
                response.StatusCode = HttpStatusCode.BadRequest;
                response.Content = new StringContent("No file uploaded or MIME multipart content not as expected!");
                throw new HttpResponseException(response);
            }
            var meta = new DzMeta(HttpContext.Current.Request.Form);
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, meta.dzIdentifier);
            // Zero-pad the chunk number so a plain lexicographic sort puts the chunks in order when merging
            var filename = string.Format(@"{0}.{1}.{2}.tmp", meta.dzFilename, (meta.intChunkNumber + 1).ToString().PadLeft(4, '0'), meta.dzTotalChunks.PadLeft(4, '0'));
            Directory.CreateDirectory(path);
            // Await instead of blocking with .Wait(), which risks deadlocks in an async action
            await Request.Content.LoadIntoBufferAsync();
            await Request.Content.ReadAsMultipartAsync(new CustomMultipartFormDataStreamProvider(path, filename)).ContinueWith((task) =>
            {
                if (task.IsFaulted || task.IsCanceled)
                {
                    response.StatusCode = HttpStatusCode.InternalServerError;
                    response.Content = new StringContent("Chunk upload task is faulted or canceled!");
                    throw new HttpResponseException(response);
                }
            });
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error uploading/saving chunk to filesystem", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error uploading/saving chunk to filesystem: {0}", ex.Message));
        }
        return response;
    }
    [HttpPut]
    public HttpResponseMessage CommitChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };
        string path = "";
        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);
            var dest = Path.Combine(path, HttpUtility.UrlDecode(fileName));
            FileInfo info = null;

            // Get all chunk files in the directory, in order
            var files = Directory.EnumerateFiles(path).Where(s => !s.Equals(dest)).OrderBy(s => s);

            // Check that the number of chunks is as expected
            if (files.Count() != totalChunks)
            {
                response.Content = new StringContent(string.Format("Total number of chunks: {0}. Expected: {1}!", files.Count(), totalChunks));
                throw new HttpResponseException(response);
            }

            // Merge the chunks into one file
            using (var fStream = new FileStream(dest, FileMode.Create))
            {
                foreach (var file in files)
                {
                    using (var sourceStream = System.IO.File.OpenRead(file))
                    {
                        sourceStream.CopyTo(fStream);
                    }
                }
                fStream.Flush();
            }

            // Check that the merged file length is as expected
            info = new FileInfo(dest);
            if (info != null)
            {
                if (info.Length == expectedBytes)
                {
                    // Save the file in the database
                    tTempAtt file = tTempAtt.NewInstance();
                    file.ContentType = MimeMapping.GetMimeMapping(info.Name);
                    file.File = System.IO.File.ReadAllBytes(info.FullName);
                    file.FileName = info.Name;
                    file.Title = info.Name;
                    file.TemporaryID = userID;
                    file.Description = info.Name;
                    file.User = userID;
                    file.Date = SafeDateTime.Now;
                    file.Insert();
                }
                else
                {
                    response.Content = new StringContent(string.Format("Total file size: {0}. Expected: {1}!", info.Length, expectedBytes));
                    throw new HttpResponseException(response);
                }
            }
            else
            {
                response.Content = new StringContent("Chunks failed to merge and file not saved!");
                throw new HttpResponseException(response);
            }
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error merging chunked upload!", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error merging chunked upload: {0}", ex.Message));
        }
        finally
        {
            // No matter what happens, delete the temporary files if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }
        return response;
    }
    [HttpDelete]
    public HttpResponseMessage DeleteCanceledChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };
        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);
            // Delete abandoned chunks if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error deleting canceled chunks", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error deleting canceled chunks: {0}", ex.Message));
        }
        return response;
    }
}
Finally, CustomMultipartFormDataStreamProvider.cs:

public class CustomMultipartFormDataStreamProvider : MultipartFormDataStreamProvider
{
    public readonly string _filename;

    public CustomMultipartFormDataStreamProvider(string path, string filename) : base(path)
    {
        _filename = filename;
    }

    // Override so the uploaded part is saved under our pre-computed chunk filename
    public override string GetLocalFileName(HttpContentHeaders headers)
    {
        return _filename;
    }
}
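The zero-padded chunk numbering written by UploadChunk (PadLeft(4, '0')) is what makes the simple OrderBy in CommitChunks reassemble the chunks in the right order. A quick sketch of that idea (the filenames here are hypothetical examples, not taken from a real upload):

```javascript
// Why the zero-padding matters: a plain lexicographic sort would put
// "report.pdf.10..." before "report.pdf.2...", but padded names sort
// in true numeric order.
function chunkName(fileName, index, totalChunks) {
  const pad = n => String(n).padStart(4, "0");
  // Mirrors the server-side format "{name}.{chunkNo}.{totalChunks}.tmp"
  return `${fileName}.${pad(index + 1)}.${pad(totalChunks)}.tmp`;
}

// Chunks arrive out of order (parallelChunkUploads: true), but sorting
// the padded names restores the original order for merging:
const names = [9, 1, 10, 0].map(i => chunkName("report.pdf", i, 11));
names.sort();
console.log(names);
// → ["report.pdf.0001.0011.tmp", "report.pdf.0002.0011.tmp",
//    "report.pdf.0010.0011.tmp", "report.pdf.0011.0011.tmp"]
```

Note that 4-digit padding caps this scheme at 9999 chunks per file, which at a ~1 MB chunkSize comfortably covers the 1024 MB maxFilesize configured above.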
This Q&A is based on the Stack Overflow question "Dropzone JS - chunking": https://stackoverflow.com/questions/49769853/