In a language like C#, given the following code (I am deliberately not using the await keyword):
async Task Foo()
{
    var task = LongRunningOperationAsync();

    // Some other non-related operation
    AnotherOperation();

    result = task.Result;
}
In the first line, the long-running operation is started on another thread and a Task is returned (that is, a future). You can then run another, unrelated operation in parallel with the first one, and at the end you wait for the first operation to finish. I believe this is also the behaviour of async/await in Python, JavaScript, and so on.

On the other hand, I read that in Rust:
A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.
In that case, what is the purpose of async/await in Rust? Coming from other languages, the notation looks like a convenient way to run operations in parallel, but I cannot see how that works in Rust if calling an async function does not run anything.
Best answer
You are conflating a few concepts.
Concurrency is not parallelism, and async and await are tools for concurrency, which may sometimes mean they are also tools for parallelism.
Additionally, whether a future is polled immediately or not is orthogonal to the chosen syntax.

async / await

The keywords async and await exist to make creating and interacting with asynchronous code easier to read, so that it looks more like "normal" synchronous code. As far as I am aware, this is true in every language that has such keywords.
Simpler code
Here is code that creates a future which adds two numbers together when polled.

Before:
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}
After:

async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}
Note that the hand-written "before" version could also be expressed more compactly with the poll_fn function; a sketch of that variant follows.
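A minimal sketch of that poll_fn variant, assuming std::future::poll_fn (stable since Rust 1.64; futures::future::poll_fn in the futures crate works the same way):

use std::future::{poll_fn, Future};
use std::task::{Context, Poll};

// Equivalent to the hand-rolled Value future above: the closure runs on each
// poll and is immediately ready with the sum.
fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    poll_fn(move |_ctx: &mut Context<'_>| Poll::Ready(a + b))
}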
One thing about async/await that may be surprising is that it enables a particular pattern that was not possible before: using references in futures. Here is some code that fills a buffer with a value asynchronously.

Before (this does not compile):

use std::io;
use std::future::Future;
use futures::FutureExt; // 0.3, for `.map`

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() {
            *b = 42
        }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}

This fails to compile:
error[E0597]: `data` does not live long enough
--> src/main.rs:33:17
|
33 | fill_up_old(&mut data).map(|_| data)
| ^^^^^^^^^ borrowed value does not live long enough
34 | }
| - `data` dropped here while still borrowed
|
= note: borrowed value must be valid for the static lifetime...
error[E0505]: cannot move out of `data` because it is borrowed
--> src/main.rs:33:32
|
33 | fill_up_old(&mut data).map(|_| data)
| --------- ^^^ ---- move occurs due to use in closure
| | |
| | move out of `data` occurs here
| borrow of `data` occurs here
|
= note: borrowed value must be valid for the static lifetime...
After, using async/await (this compiles):

use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() {
        *b = 42
    }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}
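A minimal usage sketch, assuming the futures 0.3 executor: nothing in foo runs until block_on polls the returned future, and the buffer comes back filled with 42s.

fn main() {
    // `foo` does no work until the executor polls the future it returns.
    let data = futures::executor::block_on(foo());
    assert_eq!(data, vec![42; 8]);
}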
Calling an async function does not run anything

The implementation and design of a Future, and the whole system around futures, is unrelated to the keywords async and await. Indeed, Rust had a thriving asynchronous ecosystem (for example, Tokio) before the async/await keywords ever existed; the same was true of JavaScript.

Why isn't a Future polled immediately on creation?
A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.
A point about this is that async & await in Rust are not inherently concurrent constructions. If you have a program that only uses async & await and no concurrency primitives, the code in your program will execute in a defined, statically known, linear order. Obviously, most programs will use some kind of concurrency to schedule multiple, concurrent tasks on the event loop, but they don't have to. What this means is that you can - trivially - locally guarantee the ordering of certain events, even if there is nonblocking IO performed in between them that you want to be asynchronous with some larger set of nonlocal events (e.g. you can strictly control ordering of events inside of a request handler, while being concurrent with many other request handlers, even on two sides of an await point).
This property gives Rust's async/await syntax the kind of local reasoning & low-level control that makes Rust what it is. Running up to the first await point would not inherently violate that - you'd still know when the code executed, it would just execute in two different places depending on whether it came before or after an await. However, I think the decision made by other languages to start executing immediately largely stems from their systems which immediately schedule a task concurrently when you call an async fn (for example, that's the impression of the underlying problem I got from the Dart 2.0 document).
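A minimal sketch of that distinction, using two hypothetical async functions and the futures 0.3 join! macro: plain .await keeps the order of events fixed, while concurrency has to be requested explicitly.

use futures::join; // 0.3

async fn step_one() {}
async fn step_two() {}

// With only `.await`, ordering is statically known: step_one always
// completes before step_two starts, even across await points.
async fn handler() {
    step_one().await;
    step_two().await;
}

// Concurrency is an explicit opt-in via a primitive such as `join!`,
// which interleaves the two futures at their await points.
async fn handler_concurrent() {
    join!(step_one(), step_two());
}

The first-hand account below, from a Dart team member, describes how Dart weighed the same start-immediately trade-off.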
Hi, I'm on the Dart team. Dart's async/await was designed mainly by Erik Meijer, who also worked on async/await for C#. In C#, async/await is synchronous to the first await. For Dart, Erik and others felt that C#'s model was too confusing and instead specified that an async function always yields once before executing any code.
At the time, I and another on my team were tasked with being the guinea pigs to try out the new in-progress syntax and semantics in our package manager. Based on that experience, we felt async functions should run synchronously to the first await. Our arguments were mostly:
Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really does. Even in cases where you can live with it, it's a drag to bleed a little perf everywhere.
Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like (pseudo-code here):
getThingFromNetwork():
    if (downloadAlreadyInProgress):
        return cachedFuture
    cachedFuture = startDownload()
    return cachedFuture

In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same previously-created pending future. You want to ensure you don't start the operation multiple times. That means you need to synchronously check the cache before starting the operation.
If async functions are async from the start, the above function can't use async/await.
We pleaded our case, but ultimately the language designers stuck with async-from-the-top. This was several years ago.
That turned out to be the wrong call. The performance cost is real enough that many users developed a mindset that "async functions are slow" and started avoiding using it even in cases where the perf hit was affordable. Worse, we see nasty concurrency bugs where people think they can do some synchronous work at the top of a function and are dismayed to discover they've created race conditions. Overall, it seems users do not naturally assume an async function yields before executing any code.
So, for Dart 2, we are now taking the very painful breaking change to change async functions to be synchronous to the first await and migrating all of our existing code through that transition. I'm glad we're making the change, but I really wish we'd done the right thing on day one.
I don't know if Rust's ownership and performance model place different constraints on you where being async from the top really is better, but from our experience, sync-to-the-first-await is clearly the better trade-off for Dart.
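As a rough Rust sketch of the caching pattern from the quote above (hypothetical names, assuming the futures 0.3 crate): a plain, non-async function can check the cache synchronously at call time and hand every caller a clone of the same pending Shared future.

use futures::future::{BoxFuture, FutureExt, Shared};

// Hypothetical cache of an in-flight download.
struct Downloader {
    cached: Option<Shared<BoxFuture<'static, String>>>,
}

impl Downloader {
    fn get_thing_from_network(&mut self) -> Shared<BoxFuture<'static, String>> {
        // Synchronously check the cache before starting a new download.
        if let Some(fut) = &self.cached {
            return fut.clone();
        }
        // Hypothetical "download": an async block that does no work until polled.
        let fut = async { String::from("thing") }.boxed().shared();
        self.cached = Some(fut.clone());
        fut
    }
}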
If you need code to execute immediately when a function is called rather than later on when the future is polled, you can write your function like this:
fn foo() -> impl Future<Item=Thing> {
    println!("prints immediately");

    async_block! {
        println!("prints when the future is first polled");
        await!(bar());
        await!(baz())
    }
}
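The snippet above uses pre-stabilization syntax (the futures 0.1 Item associated type and the early async_block!/await! macros). A rough equivalent in today's stable syntax, with hypothetical bar and baz functions, looks like this:

use std::future::Future;

async fn bar() {}
async fn baz() {}

fn foo() -> impl Future<Output = ()> {
    println!("prints immediately");

    // The async block is inert until the returned future is polled.
    async {
        println!("prints when the future is first polled");
        bar().await;
        baz().await;
    }
}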
Here is a complete program modelled on the code in the question:

use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}
If you call foo, the sequence of events in Rust is:

1. Something implementing Future<Output = u8> is returned.

That's it; no "actual" work is done yet. If you take the result of foo and drive it to completion (by polling it, in this case via futures::executor::block_on), then the next steps are:

2. Something implementing Future<Output = u8> is returned from calling long_running_operation (it has not started working yet).
3. another_operation does its work, because it is synchronous.
4. The .await syntax causes the code in long_running_operation to run. The foo future keeps returning "not ready" until the computation is done.

The output is:

foo
another_operation
long_running_operation
Result: 3
async blocks

You can also place the operations in async blocks:

use futures::{future, FutureExt}; // 0.3.1
fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
Here, both operations are placed in async blocks, and both must complete before foo itself can complete. To run the operations on separate threads, you can spawn them onto a thread pool:

// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
Regarding "asynchronous - What is the purpose of async/await in Rust?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/52835725/