
node.js - How can I do capacity control in this situation?

Reposted. Author: 太空宇宙. Updated: 2023-11-04 00:18:49

My application reads from DynamoDB, which has pre-provisioned read capacity that limits read throughput. I want to keep my queries from hitting that limit. This is what I do now:

const READ_CAPACITY = 80

// Promise-based sleep helper (the original snippet calls sleep without defining it)
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

async function query(params) {
  const consumed = await getConsumedReadCapacity()
  if (consumed > READ_CAPACITY) {
    // Back off in proportion to how far over the limit we are
    await sleep((consumed - READ_CAPACITY) * 1000 / READ_CAPACITY)
  }
  // ConsumedCapacity is only present in the response when ReturnConsumedCapacity is requested
  const result = await dynamoDB
    .query({ ...params, ReturnConsumedCapacity: 'TOTAL' })
    .promise()
  await addConsumedReadCapacity(result.ConsumedCapacity.CapacityUnits)
  return result.Items
}

async function getConsumedReadCapacity() {
  // Per-second counter keyed by the current epoch second; redis.get returns a string or null
  return Number(await redis.get(`read-capacity:${Math.floor(Date.now() / 1000)}`)) || 0
}

async function addConsumedReadCapacity(n) {
  return redis.incrby(`read-capacity:${Math.floor(Date.now() / 1000)}`, n)
}

As you can see, a query first checks the read capacity consumed in the current second; if it exceeds READ_CAPACITY, it sleeps briefly. It then executes the query and adds the capacity the query consumed to the counter.

The problem is that this code runs on multiple servers, so there is a race condition: the `consumed > READ_CAPACITY` check passes, and before `dynamoDB.query` executes, queries from processes on other servers push DynamoDB past the capacity limit. How can I improve this?
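This isn't from the accepted answer, but one hedged way to close this particular race on the questioner's Redis counter is to make the check and the reservation a single atomic step with a server-side Lua script. The script body, the short key TTL, and the ioredis-style `eval` signature here are all assumptions:

```javascript
// Atomic check-and-reserve: the GET, compare, and INCRBY happen in one Redis-side
// step, so two servers can no longer both pass the check before either records usage.
const RESERVE_SCRIPT = `
  local key = KEYS[1]
  local cost = tonumber(ARGV[1])
  local limit = tonumber(ARGV[2])
  local used = tonumber(redis.call('GET', key) or '0')
  if used + cost > limit then
    return -1                      -- over budget; caller should wait and retry
  end
  redis.call('INCRBY', key, cost)
  redis.call('EXPIRE', key, 2)     -- per-second keys can expire almost immediately
  return used + cost
`

// `redis` is assumed to be an ioredis-style client, as in the question's snippet.
async function reserveCapacity(estimatedCost, limit) {
  const key = `read-capacity:${Math.floor(Date.now() / 1000)}`
  return redis.eval(RESERVE_SCRIPT, 1, key, estimatedCost, limit)
}
```

The caller would reserve an estimated cost before querying and retry after a short sleep whenever the script returns -1; the estimate can be corrected afterwards from the actual `ConsumedCapacity`.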

Best Answer

Instead of trying to avoid hitting the capacity limit, here are a few things to try...

Try, then back off

From DynamoDB Error Handling:

ProvisionedThroughputExceededException: The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests, using Error Retries and Exponential Backoff.
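The quoted guidance can be sketched as a small retry wrapper. This is an illustration, not the SDK's internal implementation: the error-code check and delay constants are assumptions, and AWS SDK for JavaScript v2 clients can also be tuned natively via the `maxRetries` and `retryDelayOptions` client options instead.

```javascript
// Exponential backoff: the delay doubles on each attempt (base * 2^attempt ms).
function backoffDelay(attempt, base = 50) {
  return base * 2 ** attempt
}

// Retry fn only for throttling errors, up to maxRetries times, backing off between tries.
async function withBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (err.code !== 'ProvisionedThroughputExceededException' || attempt >= maxRetries) {
        throw err
      }
      await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt)))
    }
  }
}
```

Usage would wrap the DynamoDB call, e.g. `withBackoff(() => dynamoDB.query(params).promise())`.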

Burst capacity

From Best Practices for Tables:

DynamoDB provides some flexibility in the per-partition throughput provisioning. When you are not fully utilizing a partition's throughput, DynamoDB retains a portion of your unused capacity for later bursts of throughput usage. DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed very quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table.
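To make the numbers concrete for the questioner's table, here is the arithmetic implied by that 300-second window (an upper bound; DynamoDB retains "up to" this much unused capacity):

```javascript
// Worked example of the 5-minute burst window described above.
const provisionedRCU = 80        // the READ_CAPACITY from the question
const burstWindowSeconds = 300   // up to 5 minutes of unused capacity is retained
const maxBurstUnits = provisionedRCU * burstWindowSeconds
console.log(maxBurstUnits)       // 24000 read capacity units available for a burst
```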

DynamoDB Auto Scaling

From Managing Throughput Capacity Automatically with DynamoDB Auto Scaling:

DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity.
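As a sketch of how this is wired up (not part of the original answer), the table's read capacity is registered as a scalable target with the Application Auto Scaling service. The table name and the capacity bounds below are illustrative assumptions:

```javascript
// Hypothetical parameters for ApplicationAutoScaling.registerScalableTarget;
// the table name and min/max bounds are illustrative, not from the question.
const scalableTargetParams = {
  ServiceNamespace: 'dynamodb',
  ResourceId: 'table/MyTable',                           // assumed table name
  ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
  MinCapacity: 20,
  MaxCapacity: 200
}

// With an AWS SDK v2 client this would be submitted as:
// const autoscaling = new AWS.ApplicationAutoScaling()
// await autoscaling.registerScalableTarget(scalableTargetParams).promise()
```

A target-tracking scaling policy (e.g. on the `DynamoDBReadCapacityUtilization` metric) is then attached to the target, and DynamoDB adjusts provisioned capacity within these bounds.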

Buffer in SQS

Some AWS customers have implemented a system where, if throughput is exceeded, they store the data in an Amazon SQS queue. A separate process then retrieves the data from the queue and inserts it into the table when throughput demand is lower. This allows the DynamoDB table to be provisioned for average throughput rather than peak throughput.
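A minimal sketch of that buffering pattern, using an in-memory array to stand in for the queue (a production version would use the SQS client's `sendMessage`/`receiveMessage`, and the function names here are assumptions):

```javascript
// In-memory stand-in for an SQS queue; real code would use the AWS SDK's SQS client.
const queue = []

// On a throttling error, enqueue the write instead of dropping it.
function bufferWrite(item) {
  queue.push(JSON.stringify(item))
}

// Drain worker: runs when throughput demand is low, handing each buffered
// item to writeFn (e.g. a DynamoDB put). Returns how many items it drained.
function drain(writeFn, batchSize = 10) {
  const batch = queue.splice(0, batchSize).map(s => JSON.parse(s))
  batch.forEach(writeFn)
  return batch.length
}
```

The drain worker would typically run on a timer or be throttled with the same capacity counter the question already maintains.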

This question originally appeared on Stack Overflow: https://stackoverflow.com/questions/45411171/
