
database - MongoDB document update performance degrades as records grow


I have an iOS app that sends batches of data to an API endpoint, which stores the data in a MongoDB database. My data is modeled like this:

{
    "_id" : ObjectId,
    "device_id" : Uuid,
    "rtfb_status" : bool,
    "repetitions" : [
        {
            "session_id" : Uuid,
            "set_id" : Uuid,
            "level" : String,
            "exercise" : String,
            "number" : i32,
            "rom" : f64,
            "duration" : f64,
            "time" : i64
        },
        ...,
    ],
    "imu_data" : [
        {
            "session_id" : Uuid,
            "data" : [
                {
                    "acc" : {
                        "y" : f64,
                        "z" : f64,
                        "x" : f64,
                        "time" : i64
                    },
                    "gyro" : {
                        "y" : f64,
                        "z" : f64,
                        "x" : f64,
                        "time" : i64
                    }
                },
                ...,
            ]
        },
        ...,
    ]
}
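
On the server side, the incoming payloads are converted into serde-serializable types before being pushed onto these arrays. A minimal sketch of what one repetition entry might look like as a Rust struct (the name and exact field types are assumptions inferred from the schema above, not the asker's actual code):

use serde::Serialize;
use uuid::Uuid;

// Hypothetical type mirroring one element of the "repetitions" array above;
// something like this would feed the `S: Serialize` generic in append_to_list below.
// Assumes the uuid crate's serde feature, which serializes Uuid as a string.
#[derive(Serialize)]
struct Repetition {
    session_id: Uuid,
    set_id: Uuid,
    level: String,
    exercise: String,
    number: i32,
    rom: f64,
    duration: f64,
    time: i64,
}
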
My application simply appends to the relevant array:
async fn append_to_list<S: Serialize + From<I>, I>(
    self,
    collection: Collection,
    source: I,
    field_name: &str,
) -> Result<i64, CollectionError> {
    let new_records =
        bson::to_bson(&S::from(source)).map_err(CollectionError::DbSerializationError)?;

    self.update_user_record(
        collection,
        bson::doc! { "$push": { field_name: new_records } },
    )
    .await
}

async fn update_user_record(
    self,
    collection: Collection,
    document: bson::Document,
) -> Result<i64, CollectionError> {
    let query = self.try_into()?;

    let update_options = mongodb::options::UpdateOptions::builder()
        .upsert(true)
        .build();

    let updated_res = collection
        .update_one(query, document, update_options)
        .await
        .map_err(CollectionError::DbError)?;

    Ok(updated_res.modified_count)
}

pub async fn add_imu_records(
    self,
    collection: Collection,
    imurecords: JsonImuRecordSet,
) -> Result<i64, CollectionError> {
    self.append_to_list::<ImuDataUpdate, _>(collection, imurecords, "imu_data")
        .await
}
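
The /v1/add_repetition endpoint seen in the logs below presumably goes through the same helper. A hedged sketch of what that call would look like (JsonRepetitionSet and RepetitionUpdate are assumed names mirroring the IMU types above, not confirmed by the question):

pub async fn add_repetitions(
    self,
    collection: Collection,
    repetitions: JsonRepetitionSet, // assumed request type, analogous to JsonImuRecordSet
) -> Result<i64, CollectionError> {
    // $push onto the "repetitions" array, just as add_imu_records pushes onto "imu_data".
    self.append_to_list::<RepetitionUpdate, _>(collection, repetitions, "repetitions")
        .await
}
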
Everything works, but write performance degrades over time. From my application's logger output:

With small records:
 INFO  data_server > 127.0.0.1:50789 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 16.78034ms
INFO data_server > 127.0.0.1:50816 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 7.737755ms
INFO data_server > 127.0.0.1:50817 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 7.143721ms
INFO data_server > 127.0.0.1:50789 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 5.021643ms
INFO data_server > 127.0.0.1:50818 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 7.644989ms
INFO data_server > 127.0.0.1:50816 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 4.456604ms
INFO data_server > 127.0.0.1:50817 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 2.822192ms
INFO data_server > 127.0.0.1:50789 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 1.820112ms
INFO data_server > 127.0.0.1:50818 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 1.850234ms
INFO data_server > 127.0.0.1:50816 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 1.801561ms
INFO data_server > 127.0.0.1:50789 "PUT /v1/add_imu_records HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 26.722725ms
Note: the add_imu_records call carries a much larger payload, so I expect it to take somewhat longer.
But after a relatively short time (maybe 10 minutes or so), the writes start taking considerably longer.

After about 10 minutes of data:
INFO  data_server > 127.0.0.1:50816 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 23.000502ms
INFO data_server > 127.0.0.1:50818 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 23.23503ms
INFO data_server > 127.0.0.1:50789 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 114.679434ms
INFO data_server > 127.0.0.1:50817 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 143.392153ms
INFO data_server > 127.0.0.1:50816 "PUT /v1/add_repetition HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 65.101141ms
INFO data_server > 127.0.0.1:50818 "PUT /v1/add_imu_records HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 117.456596ms
Am I doing something wrong? Is MongoDB just the wrong tool here, and should I be using an RDBMS instead? I have a branch that runs on Postgres; its response times are slower than Mongo's best-case numbers, but they stay fairly stable.

Logs from the Postgres-backed server:
INFO  data_server > 172.17.0.1:54918 "PUT /v1/add_repetition HTTP/1.1" 201 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 7.300945ms
INFO data_server > 172.17.0.1:54906 "PUT /v1/add_repetition HTTP/1.1" 201 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 5.927394ms
INFO data_server > 172.17.0.1:54910 "PUT /v1/add_repetition HTTP/1.1" 201 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 6.025674ms
INFO data_server > 172.17.0.1:54914 "PUT /v1/add_imu_records HTTP/1.1" 200 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 45.430983ms
INFO data_server > 172.17.0.1:54906 "PUT /v1/add_repetition HTTP/1.1" 201 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 11.442257ms
INFO data_server > 172.17.0.1:54910 "PUT /v1/add_repetition HTTP/1.1" 201 "-" "client/2.0 (edu.odu.cs.nightly; build:385; iOS 13.7.0) Alamofire/5.2.1" 6.875235ms
Mongo is running in a Docker container on my machine. According to Object.bsonsize, the document is 4484480 bytes (about 4.48 MB).

Best Answer

To update a document, MongoDB has to fetch the entire document from disk (unless it is already in the cache), mutate it in memory, and then write it back to disk. The modification is also written to the oplog so it can be replicated to the secondary nodes.
As the documents grow, each of these steps takes longer, and because each document occupies an ever larger slice of memory, cache churn starts to eat into the performance of unrelated queries as well.
The maximum document size in MongoDB is 16 MB. If this document already reaches 4 MB after 10 minutes, it will need to be split up (or its growth capped) fairly soon.
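
One way to act on that advice is to stop growing a single per-device document and instead insert each incoming batch as its own document in a separate collection, keyed by device_id and session_id, so the cost of a write stays proportional to the batch rather than to the device's entire history. A minimal sketch using the same driver API as the asker's code (the collection and field names here are illustrative assumptions, not the asker's schema):

use mongodb::{bson, bson::doc, Collection};

// Hypothetical replacement for the $push-based append: every IMU batch becomes
// its own small document, so MongoDB never has to rewrite a multi-megabyte document.
async fn insert_imu_batch(
    collection: Collection,           // e.g. a dedicated "imu_batches" collection
    device_id: bson::Bson,
    session_id: bson::Bson,
    samples: Vec<bson::Document>,     // one batch of acc/gyro samples
) -> mongodb::error::Result<()> {
    let batch = doc! {
        "device_id": device_id,
        "session_id": session_id,
        "data": samples,
    };
    collection.insert_one(batch, None).await?;
    Ok(())
}

Reading a session back then becomes a query over many small documents (ideally backed by an index on device_id and session_id) instead of scanning one ever-growing array.
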

Regarding "database - MongoDB document update performance degrades as records grow", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/63877288/
