
redis - Microservice architecture for high-frequency data access; an in-memory solution?


Let's define the following use case:

  • A simulation task has to be performed, involving iteration/simulation over [Day 1, Day 2, ..., Day N]. Each step of the iteration depends on the previous one, so the order is predefined.
  • The state of this task is represented by Object1, which changes in every step of the iteration.
  • Each iteration step involves 2 different tasks: Task1 and Task2.
  • To complete Task1, data from Database1 is needed.
  • To complete Task2, external data from a different database, Database2, is also needed.
  • Task2 has to be applied after Task1 has finished.
  • Both Task1 and Task2 need access to Object1.
  • After both tasks have been completed, the state of Object1 has changed and one iteration step is done (see the loop sketch below).
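To make the ordering concrete, here is a minimal Python sketch of one simulation run. The helpers task1/task2, the dict-based Object1, and the db1/db2 lookups are hypothetical stand-ins for illustration, not part of the original question.

```python
def task1(object1, db1_row):
    """Placeholder for Task1: uses Database1 data to update Object1."""
    object1["value"] += db1_row
    return object1

def task2(object1, db2_row):
    """Placeholder for Task2: applied after Task1, uses Database2 data."""
    object1["value"] *= db2_row
    return object1

def run_simulation(object1, db1, db2, n_days):
    """Advance Object1 through N ordered iteration steps (Day 1..Day N)."""
    for day in range(1, n_days + 1):
        object1 = task1(object1, db1[day])   # Task1 first, needs Database1
        object1 = task2(object1, db2[day])   # then Task2, needs Database2
        # After both tasks, Object1 has a new state: one step is complete.
    return object1

# Example: a tiny 3-day run with fake per-day data.
state = run_simulation({"value": 1.0},
                       {1: 0.1, 2: 0.2, 3: 0.3},
                       {1: 1.01, 2: 0.99, 3: 1.02},
                       n_days=3)
```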


On average, one iteration/simulation task involves 10,000 iteration steps, and about 100 such tasks, started by multiple end users, need to run concurrently.

Because the application needs to scale in production, we are now discussing a microservice architecture for this problem. This is also crucial for development purposes, because Task1 and Task2 have recently gained new features/parameters and scale differently during development.

So, to avoid the network bottleneck here, caused by the constant database access in every iteration and also by the data sent between Task1 and Task2, what would be an appropriate system architecture for this problem?

Should there be at least two different services for Task1 and Task2, and maybe even one for the actual iteration/simulation state control? Could someone tell us a bit more about using an in-memory data grid solution like Hazelcast, or an in-memory database like Redis, for this problem?

The main question here is: what are the arguments for a microservice architecture, given the probable communication/network bottleneck? Is the only way to speed this up to load all the data needed for the simulation task into memory and keep it there the whole time, to avoid the network bottleneck?
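For the "keep everything in memory" idea, a hedged sketch of what that could look like with Redis (redis-py) follows. The key names, the local Redis at localhost:6379, and the JSON encoding of Object1 are assumptions for illustration only.

```python
import json
import redis

# Assumes a Redis instance reachable on localhost:6379.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_state(task_id):
    """Fetch Object1 for a simulation task from Redis (hypothetical key scheme)."""
    raw = r.get(f"sim:{task_id}:object1")
    return json.loads(raw) if raw else {"value": 1.0}

def save_state(task_id, object1):
    """Write the updated Object1 back after an iteration step."""
    r.set(f"sim:{task_id}:object1", json.dumps(object1))

# The per-day inputs from Database1/Database2 could likewise be preloaded
# once before the run, so each of the ~10,000 steps reads from memory
# instead of crossing the network to both databases every time.
```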

Thanks in advance for your answers and valuable input.

(This question is not about the inter-service communication style, e.g. messaging vs. REST/HTTP, pub/sub vs. request/response; either could impose a high network load for this task.)

Best Answer

Because the application needs to scale in production, we are now discussing a microservice architecture for this problem. This is also crucial for development purposes, because Task1 and Task2 have recently gained new features/parameters and scale differently during development.

This is exactly what stream processing platforms excel at. I would recommend a system like Apache Kafka or Apache Pulsar for this problem.

Should there be at least two different services for Task1 and Task2, and maybe even one for the actual iteration/simulation state control?

Task1 and Task2 are what are called stream processors: they read from (subscribe to) one topic, perform some operation/transformation, and write to (publish to) another topic.
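As an illustration only, a Task1-style stream processor could look roughly like this with kafka-python; the topic names, broker address, consumer group, and the apply_task1 helper are assumptions, not from the original answer.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Subscribe to the topic carrying pending iteration steps.
consumer = KafkaConsumer(
    "steps-in",
    bootstrap_servers="localhost:9092",
    group_id="task1-processors",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def apply_task1(object1):
    """Placeholder transformation standing in for Task1."""
    object1["value"] += 1
    return object1

for message in consumer:                        # read each incoming step
    object1 = apply_task1(message.value)        # operate on / transform Object1
    producer.send("steps-out", value=object1)   # publish to the next topic (Task2's input)
```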

The main question here is: what are the arguments for a microservice architecture, given the probable communication/network bottleneck? Is the only way to speed this up to load all the data needed for the simulation task into memory and keep it there the whole time, to avoid the network bottleneck?

Again, this is exactly the problem that systems like Apache Kafka or Apache Pulsar solve. To scale writes and reads in a stream processing system, you partition your topics.
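A hedged sketch of how partitioning can be combined with keyed publishing: steps of one simulation keep their order while the ~100 concurrent simulations spread across partitions and consumers. The topic name and key scheme are assumptions for illustration.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Messages with the same key are hashed to the same partition by default,
# so the ordered steps of one simulation stay in one partition, while
# different simulations are spread over partitions and processed in
# parallel by the consumers in the group.
producer.send("steps-in", key="simulation-42", value={"day": 1, "state": {"value": 1.0}})
producer.flush()
```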

Regarding "redis - Microservice architecture for high-frequency data access; an in-memory solution?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/58902946/
