Async once fixtures #141
Comments
Otherwise, the real problem is that you cannot share an async future across different runtimes (every test has its own runtime). I wrote an example for a friend some time ago showing how you can work around that issue. Note that in this case too the fixtures are not async functions; they just call into async code.

use once_cell::sync::OnceCell;
use rstest::*;
use sqlx::postgres::PgConnectOptions;
use sqlx::{ConnectOptions, Connection, PgConnection};
use std::sync::Mutex;

type Runtime = &'static Mutex<tokio::runtime::Runtime>;

#[fixture]
pub fn singletons_runtime() -> Runtime {
    // A single runtime shared by every fixture that needs to block on async code.
    static RUNTIME: OnceCell<Mutex<tokio::runtime::Runtime>> = OnceCell::new();
    RUNTIME.get_or_init(move || Mutex::new(tokio::runtime::Runtime::new().unwrap()))
}

type MyConnection = &'static Mutex<PgConnection>;

#[fixture]
pub fn connection(singletons_runtime: Runtime) -> MyConnection {
    static CONNECTION: OnceCell<Mutex<PgConnection>> = OnceCell::new();
    CONNECTION.get_or_init(move || {
        // block_in_place lets us block on the shared runtime without
        // stalling the worker thread of the test's own runtime.
        let connection = tokio::task::block_in_place(|| {
            singletons_runtime
                .lock()
                .unwrap()
                .block_on(
                    PgConnectOptions::new()
                        .username("postgres")
                        .host("localhost")
                        .password("password")
                        .database("postgres")
                        .connect(),
                )
                .unwrap()
        });
        Mutex::new(connection)
    })
}

#[rstest]
#[tokio::test(core_threads = 2)]
async fn first_one(connection: MyConnection) {
    connection.lock().unwrap().ping().await.unwrap();
}

#[rstest]
#[tokio::test(core_threads = 2)]
async fn second_one(connection: MyConnection) {
    connection.lock().unwrap().ping().await.unwrap();
}

I wrote that code before introducing the #[once] attribute; now the fixture can be written as:

#[fixture]
#[once]
pub fn connection(singletons_runtime: Runtime) -> Mutex<PgConnection> {
    let connection = tokio::task::block_in_place(|| {
        singletons_runtime
            .lock()
            .unwrap()
            .block_on(
                PgConnectOptions::new()
                    .username("postgres")
                    .host("localhost")
                    .password("password")
                    .database("postgres")
                    .connect(),
            )
            .unwrap()
    });
    Mutex::new(connection)
}

DISCLAIMER: I didn't test it.
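For illustration, a test consuming that once fixture could look roughly like the sketch below (not taken from the original comment; it assumes rstest's #[once] hands each test a &'static reference to the fixture value):

#[rstest]
#[tokio::test(core_threads = 2)]
async fn uses_shared_connection(connection: &Mutex<PgConnection>) {
    // Every test receives the same &'static Mutex<PgConnection>,
    // initialized exactly once by the fixture above.
    connection.lock().unwrap().ping().await.unwrap();
}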
Thanks for the idea! I had a similar hack in mind (starting a runtime inside the fixture), but it didn't work:

#[cfg(test)]
mod tests {
    use rstest::*;

    #[rstest]
    #[tokio::test]
    async fn test_ok(something: u8) {
        assert_eq!(3, something)
    }

    #[fixture]
    fn something() -> u8 {
        tokio_test::block_on(async { 3 })
    }
}

This panics: tokio_test::block_on tries to start a second runtime from inside the runtime that #[tokio::test] already created, which tokio does not allow.

This does work if …
@jennydaman You have to use something like this:

macro_rules! block_on {
    ($async_expr:expr) => {{
        tokio::task::block_in_place(|| {
            let handle = tokio::runtime::Handle::current();
            handle.block_on($async_expr)
        })
    }};
}

#[rstest]
#[tokio::test(flavor = "multi_thread")]
async fn test_ok(something: u8) {
    assert_eq!(3, something)
}

#[fixture]
fn something() -> u8 {
    block_on!(async { 3 })
}

It works like a charm.
Great, @pleshevskiy! The ultimate goal was to have an async fixture which runs only once, so piecing it together:

#[cfg(test)]
mod tests {
    use rstest::*;

    macro_rules! block_on {
        ($async_expr:expr) => {{
            tokio::task::block_in_place(|| {
                let handle = tokio::runtime::Handle::current();
                handle.block_on($async_expr)
            })
        }};
    }

    #[rstest]
    #[tokio::test(flavor = "multi_thread")]
    async fn test_ok(something: &u8) {
        assert_eq!(3, something.clone())
    }

    #[rstest]
    #[tokio::test(flavor = "multi_thread")]
    async fn test_another_ok(something: &u8) {
        assert_eq!(3, something.clone())
    }

    #[fixture]
    #[once]
    fn something() -> u8 {
        // The once fixture runs a single time, so "i am being called!"
        // is printed only once across both tests.
        block_on!(async {
            println!("i am being called!");
            3
        })
    }
}

Working versions:

[dependencies]
rstest = "0.16.0"
tokio = { version = "1.24.0", features = ["full"] }
Instead of a custom-defined macro, you could also use futures::executor::block_on.
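For example (my sketch, not from the comment above): futures::executor::block_on just polls the future on the current thread, so it works for fixtures whose futures don't depend on the tokio reactor (timers, tokio I/O); for those you would still need the Handle::block_on approach shown earlier.

use futures::executor::block_on;
use rstest::*;

#[fixture]
#[once]
fn something() -> u8 {
    // Drives the future to completion on the current thread; no tokio
    // handle is needed because nothing here touches tokio's reactor.
    block_on(async { 3 })
}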
I have a use case where I want to run some async setup code before all tests (e.g. checking that a server is online). Unfortunately, I saw that async once fixtures are forbidden. Is there any workaround?
fwiw, I'm also limited to using tokio...
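One possible workaround, building on the block_in_place trick shown earlier in this thread (a sketch only: check_server_online is a hypothetical placeholder, and every test using the fixture must run on the multi_thread flavor, because block_in_place panics on a current_thread runtime):

use rstest::*;

async fn check_server_online() -> bool {
    // hypothetical placeholder: replace with a real health check
    true
}

#[fixture]
#[once]
fn server_online() -> bool {
    // Runs once for the whole test binary, on first use.
    tokio::task::block_in_place(|| {
        tokio::runtime::Handle::current().block_on(check_server_online())
    })
}

#[rstest]
#[tokio::test(flavor = "multi_thread")]
async fn server_is_reachable(server_online: &bool) {
    assert!(*server_online);
}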