
multithreading - Play Framework : Async vs Sync performance


I have the following code:

def sync = Action {
  val t0 = System.nanoTime()
  Thread.sleep(100)
  val t1 = System.nanoTime()
  Ok("Elapsed time: " + (t1 - t0) / 1000000.0 + "ms")
}

def async = Action {
  val t0 = System.nanoTime()
  Async {
    Future {
      Thread.sleep(100)
      val t1 = System.nanoTime()
      Ok("Elapsed time: " + (t1 - t0) / 1000000.0 + "ms")
    }
  }
}

The difference between the two is that sync sleeps on the thread that received the request, while async sleeps on a separate thread, so that the thread responsible for receiving requests can keep accepting new requests without blocking. When I profile the threads, I see the number of threads created for the async requests spike, as expected. However, both approaches (4000 concurrent connections with a 20-second ramp-up) produce the same throughput and latency. I expected async to perform better. Why is that?

Best Answer

The short answer is that the two methods are essentially the same.

Actions themselves are always asynchronous (see the documentation on handling asynchronous results).

In both cases, the sleep call happens on the action's thread pool (which is not optimal).
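As an aside, if the goal were simply a delayed response rather than genuinely blocking work, the delay could be scheduled instead of slept, so that no thread in either pool is held for the 100 ms. A minimal sketch, assuming Play 2.2+ (Action.async and play.api.libs.concurrent.Promise.timeout; the action name nonBlockingDelay is illustrative):

import play.api.libs.concurrent.Promise
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.duration._

// Schedules the response after 100 ms via the scheduler instead of
// parking a thread with Thread.sleep, so no pool thread is blocked.
def nonBlockingDelay = Action.async {
  val t0 = System.nanoTime()
  Promise.timeout((), 100.milliseconds).map { _ =>
    val t1 = System.nanoTime()
    Ok("Elapsed time: " + (t1 - t0) / 1000000.0 + "ms")
  }
}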

As described in Understanding Play thread pools:

Play framework is, from the bottom up, an asynchronous web framework. Streams are handled asynchronously using iteratees. Thread pools in Play are tuned to use fewer threads than in traditional web frameworks, since IO in play-core never blocks.

Because of this, if you plan to write blocking IO code, or code that could potentially do a lot of CPU intensive work, you need to know exactly which thread pool is bearing that workload, and you need to tune it accordingly.

For example, this snippet uses a separate thread pool:

Future {
  // Some blocking or expensive code here
}(Contexts.myExecutionContext)
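Contexts.myExecutionContext is not defined in the snippet above; a minimal sketch of how such a dedicated context could be wired up, assuming Play 2.2/2.3 style (the dispatcher name my-context is illustrative and must be declared as a dispatcher in application.conf, with the exact configuration path depending on the Play version):

import play.api.libs.concurrent.Akka
import play.api.Play.current
import scala.concurrent.ExecutionContext

// Looks up a dedicated Akka dispatcher for blocking or expensive work,
// keeping the default action thread pool free to keep accepting requests.
object Contexts {
  implicit val myExecutionContext: ExecutionContext =
    Akka.system.dispatchers.lookup("my-context")
}

Running the sleep (or any blocking IO) on such a pool moves the contention away from the request-handling threads, but the blocked threads still have to live somewhere, which is why simply wrapping the sleep in a Future does not by itself improve throughput.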

As further resources, see this answer and this video for more information on handling asynchronous actions, and this and this forum message for an extended discussion of the topic.

Regarding multithreading - Play Framework: Async vs Sync performance, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/24540544/
