
mongodb - Query performance issue with large nested data in MongoDB


I am trying to query results from a large dataset of 187,297 documents called 'tasks'. These documents are nested in another array called 'workers', which in turn is nested in a collection called 'production_units':

production_units -> workers -> tasks

(By the way, this is a simplified version of production_units):

[{
    "_id": ObjectId("5aca27b926974863ed9f01ab"),
    "name": "Z",
    "workers": [{
        "name": "X Y",
        "worker_number": 655,
        "employed": false,
        "_id": ObjectId("5aca27bd26974863ed9f0425"),
        "tasks": [{
            "_id": ObjectId("5ac9f6c2e1a668d6d39c1fd1"),
            "inbound_order_number": 3296,
            "task_number": 90,
            "minutes_elapsed": 120,
            "date": "2004-11-30",
            "start": 1101823200,
            "pieces_actual": 160,
            "pause_from": 1101812400,
            "pause_to": 1101814200
        }]
    }]
}]

To achieve this, I used the following aggregation command:

db.production_units.aggregate([{
    '$project': {
        'workers': '$workers'
    }
}, {
    '$unwind': '$workers'
}, {
    '$project': {
        'tasks': '$workers.tasks',
        'worker_number': '$workers.worker_number'
    }
}, {
    '$unwind': '$tasks'
}, {
    '$project': {
        'task_number': '$tasks.task_number',
        'pieces_actual': '$tasks.pieces_actual',
        'minutes_elapsed': '$tasks.minutes_elapsed',
        'worker_number': 1,
        'start': '$tasks.start',
        'inbound_order_number': '$tasks.inbound_order_number',
        'pause_from': '$tasks.pause_from',
        'date': '$tasks.date',
        '_id': '$tasks._id',
        'pause_to': '$tasks.pause_to'
    }
}, {
    '$match': {
        'start': {
            '$exists': true
        }
    }
}, {
    '$group': {
        'entries_count': {
            '$sum': 1
        },
        '_id': null,
        'entries': {
            '$push': '$$ROOT'
        }
    }
}, {
    '$project': {
        'entries_count': 1,
        '_id': 0,
        'entries': 1
    }
}, {
    '$unwind': '$entries'
}, {
    '$project': {
        'task_number': '$entries.task_number',
        'pieces_actual': '$entries.pieces_actual',
        'minutes_elapsed': '$entries.minutes_elapsed',
        'worker_number': '$entries.worker_number',
        'start': '$entries.start',
        'inbound_order_number': '$entries.inbound_order_number',
        'pause_from': '$entries.pause_from',
        'date': '$entries.date',
        'entries_count': 1,
        '_id': '$entries._id',
        'pause_to': '$entries.pause_to'
    }
}, {
    '$sort': {
        'start': 1
    }
}, {
    '$skip': 187290
}, {
    '$limit': 10
}], {
    allowDiskUse: true
})

The returned documents are:

{ "entries_count" : 187297, "task_number" : 100, "pieces_actual" : 68, "minutes_elapsed" : 102, "worker_number" : 411, "start" : 1594118400, "inbound_order_number" : 8569, "pause_from" : 1594119600, "date" : "2020-07-07", "_id" : ObjectId("5ac9f6d3e1a668d6d3a06351"), "pause_to" : 1594119600 } { "entries_count" : 187297, "task_number" : 130, "pieces_actual" : 20, "minutes_elapsed" : 30, "worker_number" : 549, "start" : 1596531600, "inbound_order_number" : 7683, "pause_from" : 1596538800, "date" : "2020-08-04", "_id" : ObjectId("5ac9f6cde1a668d6d39f1b26"), "pause_to" : 1596538800 } { "entries_count" : 187297, "task_number" : 210, "pieces_actual" : 84, "minutes_elapsed" : 180, "worker_number" : 734, "start" : 1601276400, "inbound_order_number" : 8330, "pause_from" : 1601290800, "date" : "2020-09-28", "_id" : ObjectId("5ac9f6d0e1a668d6d39fd677"), "pause_to" : 1601290800 } { "entries_count" : 187297, "task_number" : 20, "pieces_actual" : 64, "minutes_elapsed" : 90, "worker_number" : 114, "start" : 1601800200, "inbound_order_number" : 7690, "pause_from" : 1601809200, "date" : "2020-10-04", "_id" : ObjectId("5ac9f6cee1a668d6d39f3032"), "pause_to" : 1601811900 } { "entries_count" : 187297, "task_number" : 140, "pieces_actual" : 70, "minutes_elapsed" : 84, "worker_number" : 49, "start" : 1603721640, "inbound_order_number" : 4592, "pause_from" : 1603710000, "date" : "2020-10-26", "_id" : ObjectId("5ac9f6c8e1a668d6d39df664"), "pause_to" : 1603712700 } { "entries_count" : 187297, "task_number" : 80, "pieces_actual" : 20, "minutes_elapsed" : 30, "worker_number" : 277, "start" : 1796628600, "inbound_order_number" : 4655, "pause_from" : 1796641200, "date" : "2026-12-07", "_id" : ObjectId("5ac9f6c8e1a668d6d39e1fc0"), "pause_to" : 1796643900 } { "entries_count" : 187297, "task_number" : 40, "pieces_actual" : 79, "minutes_elapsed" : 123, "worker_number" : 96, "start" : 3802247580, "inbound_order_number" : 4592, "pause_from" : 3802244400, "date" : "2090-06-27", "_id" : ObjectId("5ac9f6c8e1a668d6d39de218"), "pause_to" : 3802244400 }

However, the query takes several seconds to return results instead of a few milliseconds. Here is what the profiler reports:

db.system.profile.findOne().millis
3216
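
(For context, a sketch of how profiler output like the above is typically captured; the profiling level and threshold here are assumptions, not values from the question:)

// Assumed setup: enable the profiler so slow operations are recorded
// in db.system.profile; level 1 with a 100 ms threshold is illustrative.
db.setProfilingLevel(1, 100)
// ...run the aggregation, then inspect the most recent profile entry:
db.system.profile.find().sort({ ts: -1 }).limit(1).pretty()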

(Update)

Even the following simplified count query executes in 312 ms, rather than in just a few milliseconds:

db.production_units.aggregate([{
    "$unwind": "$workers"
}, {
    "$unwind": "$workers.tasks"
}, {
    "$count": "entries_count"
}])

Here is what explain() returns for the query above:

{
    "stages" : [
        {
            "$cursor" : {
                "query" : { },
                "fields" : {
                    "workers" : 1,
                    "_id" : 0
                },
                "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "my_db.production_units",
                    "indexFilterSet" : false,
                    "parsedQuery" : { },
                    "winningPlan" : {
                        "stage" : "COLLSCAN",
                        "direction" : "forward"
                    },
                    "rejectedPlans" : [ ]
                },
                "executionStats" : {
                    "executionSuccess" : true,
                    "nReturned" : 28,
                    "executionTimeMillis" : 13,
                    "totalKeysExamined" : 0,
                    "totalDocsExamined" : 28,
                    "executionStages" : {
                        "stage" : "COLLSCAN",
                        "nReturned" : 28,
                        "executionTimeMillisEstimate" : 0,
                        "works" : 30,
                        "advanced" : 28,
                        "needTime" : 1,
                        "needYield" : 0,
                        "saveState" : 1,
                        "restoreState" : 1,
                        "isEOF" : 1,
                        "invalidates" : 0,
                        "direction" : "forward",
                        "docsExamined" : 28
                    },
                    "allPlansExecution" : [ ]
                }
            }
        },
        {
            "$unwind" : {
                "path" : "$workers"
            }
        },
        {
            "$unwind" : {
                "path" : "$workers.tasks"
            }
        },
        {
            "$group" : {
                "_id" : {
                    "$const" : null
                },
                "entries_count" : {
                    "$sum" : {
                        "$const" : 1
                    }
                }
            }
        },
        {
            "$project" : {
                "_id" : false,
                "entries_count" : true
            }
        }
    ],
    "ok" : 1
}

I am not an experienced DBA, so I don't know what exactly is missing from my aggregation pipeline to fix the performance problem I'm facing. I have also investigated the issue and done some research, but found no solution.

What am I missing?

Best Answer

Without the explain() output of the query, it is impossible to say for sure where its bottleneck is. However, here are some suggestions on how to improve this query.


Use a single $project stage at the end of the pipeline

The query contains five $project stages where only one is actually needed. This adds a lot of overhead, especially when applied to a large number of documents. Instead, use dot notation to access nested fields, for example:

{ "$unwind": "$workers.tasks" }

Call $match as early as possible

$match discards documents, so place it as early as possible so that the later aggregation stages run on a smaller number of documents; a minimal sketch follows.
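
For instance, assuming the same production_units collection from the question, a document-level filter can run before any $unwind (a per-task $match is still needed afterwards to drop individual tasks without a start, as in the full query below):

db.production_units.aggregate([{
    // Document-level filter: drops production units in which no task
    // has a "start" value, before the expensive $unwind stages run.
    "$match": { "workers.tasks.start": { "$exists": true } }
}, {
    "$unwind": "$workers"
}, {
    "$unwind": "$workers.tasks"
}])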

Call $skip and $limit before $project

Since the query returns only 10 documents, there is no need to apply the $project stage to the other 180,000+ documents.

Properly index the field used for sorting

This is most likely the bottleneck. Make sure the field workers.tasks.start is indexed (see MongoDB ensureIndex() for details; in current versions it is called createIndex()).
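
A minimal sketch of creating that index from the shell (it becomes a multikey index, since start lives inside nested arrays):

// ensureIndex() is the deprecated alias; createIndex() is the current name.
db.production_units.createIndex({ "workers.tasks.start": 1 })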

Don't count the number of matching documents in the same query

Instead of the $group/$unwind stages used to count the matching documents, run a second query at the same time that only counts them.


The main query would now look like this:

db.collection.aggregate([{
    "$unwind": "$workers"
}, {
    "$unwind": "$workers.tasks"
}, {
    "$match": {
        "workers.tasks.start": {
            "$ne": null
        }
    }
}, {
    "$sort": {
        "workers.tasks.start": 1
    }
}, {
    "$skip": 0
}, {
    "$limit": 10
}, {
    "$project": {
        "task_number": "$workers.tasks.task_number",
        "pieces_actual": "$workers.tasks.pieces_actual",
        "minutes_elapsed": "$workers.tasks.minutes_elapsed",
        "worker_number": "$workers.worker_number",
        "start": "$workers.tasks.start",
        "inbound_order_number": "$workers.tasks.inbound_order_number",
        "pause_from": "$workers.tasks.pause_from",
        "date": "$workers.tasks.date",
        "_id": "$workers.tasks._id",
        "pause_to": "$workers.tasks.pause_to"
    }
}])

You can try it here: mongoplayground.net/p/yua7qspo2Jj

And the count query would be:

db.collection.aggregate([{
    "$unwind": "$workers"
}, {
    "$unwind": "$workers.tasks"
}, {
    "$match": {
        "workers.tasks.start": {
            "$ne": null
        }
    }
}, {
    "$count": "entries_count"
}])
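
Since the shell runs commands one at a time, "at the same time" would typically happen in the application. A hedged sketch using the Node.js driver, where client, mainPipeline and countPipeline are illustrative placeholders (the last two holding the two aggregation arrays above):

// Illustrative only: "client" is an already-connected MongoClient.
const coll = client.db("my_db").collection("production_units");
const [entries, countResult] = await Promise.all([
    coll.aggregate(mainPipeline).toArray(),   // the 10-document page
    coll.aggregate(countPipeline).toArray(),  // the total count
]);
const entries_count = countResult.length ? countResult[0].entries_count : 0;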


Regarding "mongodb - Query performance issue with large nested data in MongoDB", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/49750465/
