
scala - Akka Streams Reactive Kafka - OutOfMemoryError under heavy load

Reposted. Author: 行者123. Updated: 2023-12-01 09:47:03

I am running an Akka Streams Reactive Kafka application that is supposed to work under heavy load. After running for about 10 minutes, the application stops with an OutOfMemoryError. I tried to debug the heap dump and found that akka.dispatch.Dispatcher is taking up about 5 GB of memory. Below are my configuration files.

Akka version: 2.4.18

Reactive Kafka version: 2.4.18

1. application.conf:

consumer {
  num-consumers = "2"
  c1 {
    bootstrap-servers = "localhost:9092"
    bootstrap-servers = ${?KAFKA_CONSUMER_ENDPOINT1}
    groupId = "testakkagroup1"
    subscription-topic = "test"
    subscription-topic = ${?SUBSCRIPTION_TOPIC1}
    message-type = "UserEventMessage"
    poll-interval = 100ms
    poll-timeout = 50ms
    stop-timeout = 30s
    close-timeout = 20s
    commit-timeout = 15s
    wakeup-timeout = 10s
    use-dispatcher = "akka.kafka.default-dispatcher"
    kafka-clients {
      enable.auto.commit = true
    }
  }
}
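Since the heap dump points at akka.dispatch.Dispatcher and `use-dispatcher` names `akka.kafka.default-dispatcher`, that dispatcher can be tuned in the same application.conf. A minimal sketch, assuming the library's default thread-pool executor (the pool size here is illustrative; check the reactive-kafka reference.conf for the actual defaults):

```
akka.kafka.default-dispatcher {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
}
```

Note that a larger pool does not by itself fix unbounded message buildup; bounding the amount of in-flight work is what keeps the heap from filling.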

2. Launch command (labeled build.sbt in the original post, but this is the java invocation):

java -Xmx6g \
-Dcom.sun.management.jmxremote.port=27019 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=localhost \
-Dzookeeper.host=$ZK_HOST \
-Dzookeeper.port=$ZK_PORT \
-jar ./target/scala-2.11/test-assembly-1.0.jar

3. Source/Sink actor:

import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.pattern.ask
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}
import scala.concurrent.duration._

class EventStream extends Actor with ActorLogging {

  implicit val actorSystem = context.system
  implicit val timeout: Timeout = Timeout(10 seconds)
  implicit val materializer = ActorMaterializer()
  import context.dispatcher // ExecutionContext for future.onComplete

  val settings = Settings(actorSystem).KafkaConsumers

  override def receive: Receive = {
    case StartUserEvent(id) =>
      startStreamConsumer(consumerConfig("EventMessage" + ".c" + id))
  }

  def startStreamConsumer(config: Map[String, String]) = {
    val consumerSource = createConsumerSource(config)
    val consumerSink = createConsumerSink()
    // actorA, actorB, actorC are business-logic actors defined elsewhere
    val messageProcessor = startMessageProcessor(actorA, actorB, actorC)

    log.info("Starting The UserEventStream processing")

    val future = consumerSource.map { message =>
      val m = s"${message.record.value()}"
      messageProcessor ? m
    }.runWith(consumerSink)

    future.onComplete {
      case _ => actorSystem.stop(messageProcessor)
    }
  }

  def startMessageProcessor(actorA: ActorRef, actorB: ActorRef, actorC: ActorRef) = {
    actorSystem.actorOf(Props(classOf[MessageProcessor], actorA, actorB, actorC))
  }

  def createConsumerSource(config: Map[String, String]) = {
    val kafkaMBAddress = config("bootstrap-servers")
    val groupID = config("groupId")
    val topicSubscription = config("subscription-topic").split(',').toList
    println(s"Subscription topics $topicSubscription")

    val consumerSettings = ConsumerSettings(actorSystem, new ByteArrayDeserializer, new StringDeserializer)
      .withBootstrapServers(kafkaMBAddress)
      .withGroupId(groupID)
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")

    Consumer.committableSource(consumerSettings, Subscriptions.topics(topicSubscription: _*))
  }

  def createConsumerSink() = {
    Sink.foreach(println)
  }
}

In this example, actorA, actorB, and actorC do some business-logic processing and database interaction. Am I missing anything in handling the Akka Reactive Kafka consumer, such as commit, error, or throttling configuration? Looking at the heap dump, my guess is that messages are piling up.

Best answer

The first thing I would change is this:

val future = consumerSource.map { message =>
  val m = s"${message.record.value()}"
  messageProcessor ? m
}.runWith(consumerSink)

In the code above, you use ask (`?`) to send messages to the `messageProcessor` actor and expect a response, but for ask to act as a backpressure mechanism you need to use it with `mapAsync` (more information in the documentation). Something like the following:

val future =
  consumerSource
    .mapAsync(parallelism = 5) { message =>
      val m = s"${message.record.value()}"
      messageProcessor ? m
    }
    .runWith(consumerSink)

Adjust the parallelism as needed.
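For completeness, here is a sketch that combines the `mapAsync` fix with explicit offset commits. The names `consumerSource` and `messageProcessor` come from the question; `commitScaladsl` is the commit call in the reactive-kafka 0.x API, so verify it against the version you are using:

```scala
import akka.pattern.ask
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

implicit val timeout: Timeout = Timeout(10.seconds)

val done = consumerSource
  .mapAsync(parallelism = 5) { message =>
    // mapAsync caps the number of outstanding asks at `parallelism`,
    // so the Kafka source is backpressured instead of flooding the heap.
    (messageProcessor ? s"${message.record.value()}").map(_ => message)
  }
  .mapAsync(parallelism = 1) { message =>
    // Commit after processing. If you commit manually here, set
    // enable.auto.commit = false so the two strategies don't overlap.
    message.committableOffset.commitScaladsl()
  }
  .runWith(Sink.ignore)
```

With this shape, a slow messageProcessor slows consumption down rather than letting unacknowledged messages accumulate in memory.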

Regarding "scala - Akka Streams Reactive Kafka - OutOfMemoryError under heavy load", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46437565/
