
asynchronous - What is the purpose of async/await in Rust?


In a language like C#, given the following code (I'm intentionally not using the await keyword):

async Task Foo()
{
    var task = LongRunningOperationAsync();

    // Some other non-related operation
    AnotherOperation();

    result = task.Result;
}

On the first line, the long-running operation runs on another thread and a Task is returned (that is, a future). You can then do another operation that runs in parallel with the first one, and at the end you can wait for the operation to finish. I believe this is also the behavior of async/await in Python, JavaScript, etc.

On the other hand, in Rust, I read in the RFC:

A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.



Given that, what is the purpose of async/await in Rust? Coming from other languages, this notation is a convenient way to run parallel operations, but I can't see how it works in Rust if calling an async function does not run anything.

Best Answer

You are conflating a few concepts.

Concurrency is not parallelism, and async and await are tools for concurrency, which may sometimes mean they are also tools for parallelism.

On top of that, whether a future is polled immediately or not is orthogonal to the syntax chosen.
async/await
The keywords async and await exist to make creating and interacting with asynchronous code easier to read and look more like "normal" synchronous code. This is true in all of the languages that have such keywords, as far as I am aware.

Simpler code

Here is code that creates a future that adds two numbers when polled.

Before

use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};

fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}

After

async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}

Note that the "before" code is basically the implementation of today's poll_fn function.
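
As a sketch of that claim, here is the same future written with poll_fn (shown with std::future::poll_fn, which was stabilized later; the futures crate provides an equivalent futures::future::poll_fn):

use std::future::{self, Future};
use std::task::Poll;

// The closure passed to poll_fn plays the role of the hand-written `poll` method.
fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    future::poll_fn(move |_ctx| Poll::Ready(a + b))
}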

See also Peter Hall's answer for a look at how keeping track of many variables can be nicer.

References

One of the potentially surprising things about async/await is that it enables a specific pattern that wasn't possible before: using references in futures. Here's some code that fills up a buffer with a value in an asynchronous manner:

Before
use std::io;

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() { *b = 42 }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}

This fails to compile:

error[E0597]: `data` does not live long enough
  --> src/main.rs:33:17
   |
33 |     fill_up_old(&mut data).map(|_| data)
   |                 ^^^^^^^^^ borrowed value does not live long enough
34 | }
   | - `data` dropped here while still borrowed
   |
   = note: borrowed value must be valid for the static lifetime...

error[E0505]: cannot move out of `data` because it is borrowed
  --> src/main.rs:33:32
   |
33 |     fill_up_old(&mut data).map(|_| data)
   |                 ---------      ^^^ ---- move occurs due to use in closure
   |                 |              |
   |                 |              move out of `data` occurs here
   |                 borrow of `data` occurs here
   |
   = note: borrowed value must be valid for the static lifetime...

After

use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() { *b = 42 }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}

This works!
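
A minimal usage sketch, assuming the futures crate's block_on executor; after awaiting foo, the buffer comes back filled with 42s:

fn main() {
    // Drive the async fn to completion; `fill_up` borrowed `data` across an
    // await point, which is exactly the pattern that async/await enables.
    let data = futures::executor::block_on(foo());
    assert_eq!(data, vec![42; 8]);
}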

Calling an async function doesn't run anything

Separately, the implementation and design of Future, and the whole system around futures, are unrelated to the keywords async and await. Indeed, Rust had a thriving asynchronous ecosystem (such as with Tokio) before the async/await keywords ever existed. The same was true of JavaScript.

Why aren't futures polled immediately on creation?

For the most authoritative answer, check out this comment from withoutboats on the RFC pull request:

A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.

A point about this is that async & await in Rust are not inherently concurrent constructions. If you have a program that only uses async & await and no concurrency primitives, the code in your program will execute in a defined, statically known, linear order. Obviously, most programs will use some kind of concurrency to schedule multiple, concurrent tasks on the event loop, but they don't have to. What this means is that you can - trivially - locally guarantee the ordering of certain events, even if there is nonblocking IO performed in between them that you want to be asynchronous with some larger set of nonlocal events (e.g. you can strictly control ordering of events inside of a request handler, while being concurrent with many other request handlers, even on two sides of an await point).

This property gives Rust's async/await syntax the kind of local reasoning & low-level control that makes Rust what it is. Running up to the first await point would not inherently violate that - you'd still know when the code executed, it would just execute in two different places depending on whether it came before or after an await. However, I think the decision made by other languages to start executing immediately largely stems from their systems which immediately schedule a task concurrently when you call an async fn (for example, that's the impression of the underlying problem I got from the Dart 2.0 document).
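
A minimal sketch of that ordering point, assuming the futures 0.3 crate: without a concurrency primitive the awaits run in a fixed, linear order, and only an explicit combinator such as join! interleaves them:

use futures::{executor::block_on, join}; // 0.3

async fn step(label: &str) {
    println!("{}", label);
}

fn main() {
    block_on(async {
        // Strictly ordered: "a" always completes before "b" starts.
        step("a").await;
        step("b").await;

        // Explicitly concurrent: both futures are polled together,
        // still on this single thread.
        join!(step("c"), step("d"));
    });
}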



This discussion from munificent covers some of the background on Dart 2.0:

Hi, I'm on the Dart team. Dart's async/await was designed mainly by Erik Meijer, who also worked on async/await for C#. In C#, async/await is synchronous to the first await. For Dart, Erik and others felt that C#'s model was too confusing and instead specified that an async function always yields once before executing any code.

At the time, I and another on my team were tasked with being the guinea pigs to try out the new in-progress syntax and semantics in our package manager. Based on that experience, we felt async functions should run synchronously to the first await. Our arguments were mostly:

  1. Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really does. Even in cases where you can live with it, it's a drag to bleed a little perf everywhere.

  2. Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like (pseudo-code here):

    getThingFromNetwork():
      if (downloadAlreadyInProgress):
        return cachedFuture

      cachedFuture = startDownload()
      return cachedFuture

    In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same previously-created pending future. You want to ensure you don't start the operation multiple times. That means you need to synchronously check the cache before starting the operation.

    If async functions are async from the start, the above function can't use async/await.

We pleaded our case, but ultimately the language designers stuck with async-from-the-top. This was several years ago.

That turned out to be the wrong call. The performance cost is real enough that many users developed a mindset that "async functions are slow" and started avoiding using it even in cases where the perf hit was affordable. Worse, we see nasty concurrency bugs where people think they can do some synchronous work at the top of a function and are dismayed to discover they've created race conditions. Overall, it seems users do not naturally assume an async function yields before executing any code.

So, for Dart 2, we are now taking the very painful breaking change to change async functions to be synchronous to the first await and migrating all of our existing code through that transition. I'm glad we're making the change, but I really wish we'd done the right thing on day one.

I don't know if Rust's ownership and performance model place different constraints on you where being async from the top really is better, but from our experience, sync-to-the-first-await is clearly the better trade-off for Dart.
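
As an aside, the caching pattern from the pseudo-code above maps naturally onto Rust precisely because an async fn runs no code until polled: the cache check in a plain fn happens synchronously at call time. A hedged sketch, assuming the futures crate's Shared combinator and a hypothetical Downloader type:

use futures::{
    executor::block_on,
    future::{self, BoxFuture, FutureExt, Shared},
};

// Illustrative type: it caches the pending download future.
struct Downloader {
    cached: Option<Shared<BoxFuture<'static, String>>>,
}

impl Downloader {
    // A plain (non-async) fn, so the cache check runs synchronously at call
    // time; the download body runs only once the returned future is polled.
    fn get_thing_from_network(&mut self) -> Shared<BoxFuture<'static, String>> {
        if let Some(fut) = &self.cached {
            // Download already in progress: reuse the pending future.
            return fut.clone();
        }
        let fut = async {
            // ... start the actual download here ...
            String::from("thing")
        }
        .boxed()
        .shared();
        self.cached = Some(fut.clone());
        fut
    }
}

fn main() {
    let mut d = Downloader { cached: None };
    let first = d.get_thing_from_network();
    let second = d.get_thing_from_network(); // reuses the same pending future
    let (a, b) = block_on(future::join(first, second));
    assert_eq!(a, b);
}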



cramert replies (note that some of this syntax is outdated now):

If you need code to execute immediately when a function is called rather than later on when the future is polled, you can write your function like this:

fn foo() -> impl Future<Item=Thing> {
    println!("prints immediately");
    async_block! {
        println!("prints when the future is first polled");
        await!(bar());
        await!(baz())
    }
}
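
For comparison, a sketch of the same idea in today's stable syntax (an async block inside a plain fn); the Thing, bar, and baz definitions here are stand-ins added only to make the sketch compile:

use std::future::Future;

struct Thing;

async fn bar() {}
async fn baz() -> Thing { Thing }

fn foo() -> impl Future<Output = Thing> {
    println!("prints immediately");
    async {
        println!("prints when the future is first polled");
        bar().await;
        baz().await
    }
}

fn main() {
    // Nothing after the first println! runs until the future is polled.
    let fut = foo();
    futures::executor::block_on(fut);
}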


Code examples

These examples use the async support in Rust 1.39 and the futures crate 0.3.1.

Literal translation of the C# code
use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");

    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");

    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}

If you call foo, the sequence of events in Rust would be:
  • Something implementing Future<Output = u8> is returned.

  • That's it. No "actual" work is done yet. If you take the result of foo and drive it towards completion (by polling it, in this case via futures::executor::block_on), then the next steps are:
  • Calling long_running_operation returns something implementing Future<Output = u8> (it does not start work yet).
  • another_operation does work, as it is synchronous.
  • The .await syntax causes the code in long_running_operation to start. The foo future will continue to return "not ready" until the computation is done.

  • The output would be:

    foo
    another_operation
    long_running_operation
    Result: 3

Note that there are no thread pools here: this is all done on a single thread.
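
A hedged way to see this for yourself, assuming the futures crate: print the current thread ID inside the async code. Every line reports the same ID, because block_on drives the future on the calling thread.

use std::thread;

async fn where_am_i(label: &str) {
    // Both calls report the same ThreadId: no thread pool is involved.
    println!("{} runs on {:?}", label, thread::current().id());
}

fn main() {
    futures::executor::block_on(async {
        where_am_i("first").await;
        where_am_i("second").await;
    });
}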
async blocks

You can also use async blocks:
use futures::{future, FutureExt}; // 0.3.1

fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");

    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");

    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}

Here, we wrap synchronous code in an async block and then wait for both actions to complete before this function will be complete.

Note that wrapping synchronous code like this is not a good idea for anything that will actually take a long time; see What is the best approach to encapsulate blocking I/O in future-rs? for more info.

With a thread pool
// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
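
A minimal usage sketch, assuming the long_running_operation and another_operation definitions from the previous example and the futures crate's thread-pool feature:

fn main() {
    // ThreadPool::new builds a pool of worker threads.
    let mut pool = ThreadPool::new().expect("failed to build thread pool");

    // The spawned operations may now run on pool threads, potentially in
    // parallel, while block_on drives `foo` on the current thread.
    let v = futures::executor::block_on(foo(&mut pool));
    println!("Result: {}", v);
}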

Regarding "asynchronous - What is the purpose of async/await in Rust?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/57151795/
