JSON performance in TechEmpower benchmarks #1177

Closed
nuvacore opened this issue Jul 20, 2022 · 15 comments

@nuvacore

The latest TechEmpower benchmarks have been released. Axum is doing extremely well and is in the top ten. Very well done!

https://www.techempower.com/benchmarks/#section=data-r21&test=composite

What surprised me is that the JSON benchmark in particular shows axum way down, at almost 50% of the top performer. I thought there was little overhead in Rust for JSON. "may-minihttp", which is also Rust, has almost double axum's JSON performance.

What's the reason for this? Is this something that can be improved?

@jplatte
Member

jplatte commented Jul 20, 2022

axum's JSON support uses serde_json. I can't think of anything that would make it particularly fast, or slow, compared to others like actix. Do you know what exactly is being benchmarked there for the JSON column?

@jplatte
Member

jplatte commented Jul 20, 2022

Sooo I actually had an idea and looked into it; here's a PR that should improve performance: #1178

@ibraheemdev
Contributor

Most other Rust frameworks use simd-json, and some even re-use a byte buffer per thread, which is probably the difference.
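
A minimal sketch of the per-thread buffer idea (illustrative only, not taken from any of those benchmark implementations): keep a thread-local serialization buffer and clear it between requests, so steady-state requests don't allocate for the JSON body.

use std::cell::RefCell;

thread_local! {
    // One reusable serialization buffer per worker thread.
    static JSON_BUF: RefCell<Vec<u8>> = RefCell::new(Vec::with_capacity(4096));
}

fn to_json_bytes<T: serde::Serialize>(value: &T) -> Vec<u8> {
    JSON_BUF.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf.clear();
        // Serialize into the existing allocation instead of a fresh Vec.
        serde_json::to_writer(&mut *buf, value).expect("JSON serialization failed");
        // The clone keeps the example self-contained; a real implementation
        // would hand the buffer's contents to the response without copying.
        buf.clone()
    })
}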

@davidpdrsn
Member

Which frameworks? simd-json's dependents don't show much.

actix-web uses serde-json but reads into a buffer with some reserved capacity. We could do that as well.
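
For reference, a rough sketch of that reserved-capacity idea (hypothetical code, not actix-web's or axum's actual extractor): reserve the buffer once from the Content-Length hint instead of growing it chunk by chunk, then hand the bytes to serde_json. It is simplified to a synchronous chunk list; a real extractor would poll an async body stream.

use bytes::{Bytes, BytesMut};

fn collect_body(content_length: Option<usize>, chunks: &[Bytes]) -> BytesMut {
    // Reserve up front based on the Content-Length hint (in practice capped
    // by a configured body-size limit).
    let mut buf = BytesMut::with_capacity(content_length.unwrap_or(0));
    for chunk in chunks {
        buf.extend_from_slice(chunk);
    }
    buf
}

fn parse_json<T: serde::de::DeserializeOwned>(buf: &[u8]) -> serde_json::Result<T> {
    serde_json::from_slice(buf)
}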

@jplatte
Member

jplatte commented Jul 21, 2022

Maybe before we tweak our JSON (de)serialization code more, we should start some benchmarking? I'm curious whether my change helped / how much :)

@davidpdrsn
Member

That is a good idea 😅

@davidpdrsn
Member

Generally I don't care much about microbenchmarks, but small tweaks that don't impact the user experience are fine imo.

@ibraheemdev
Contributor

@davidpdrsn You have to look at the benchmark code, not the framework's default JSON responder 🙃. Looking again though, hyper just uses serde-json and performs significantly better than axum. It also does much better on plaintext, and is only worse on fortunes, but it is using an out-of-date postgres dependency... which suggests to me the difference is elsewhere.

@davidpdrsn
Member

You have to look at the benchmark code, not the framework's default JSON responder

🤦

Looking again though, hyper just uses serde-json and performs significantly better than axum

Uh that sounds very weird 🤔

It also does much better on plaintext, and is only worse on fortunes, but it is using an out-of-date postgres dependency... which suggests to me the difference is elsewhere.

"Fortunes"?

@ibraheemdev
Contributor

ibraheemdev commented Jul 21, 2022

Fortunes is the name of a benchmark case that does templating + database queries. Axum only beats hyper in benchmarks involving the database, which is probably because of hyper's database client. Axum losing in all other benchmarks suggests something in the framework is slowing it down significantly (or it could just be necessary overhead 🤷‍♂️).

@davidpdrsn
Member

Note to self: This seems to be the code https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Rust/axum

@jplatte
Member

jplatte commented Jul 21, 2022

Well, it uses default features, which add a small amount of unnecessary overhead. I guess I should send a PR at least disabling unused features.
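
For example, the benchmark's Cargo.toml could opt out of the defaults and enable only what the JSON test needs; the feature names below are just my guess at the minimal set, and axum's Cargo.toml is the source of truth:

axum = { version = "0.5", default-features = false, features = ["http1", "json"] }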

@ishtms

ishtms commented Jul 24, 2022

I had been seeing this low performance in TFB's archives (the unofficial runs) for the last few months, but I thought it was a known issue and didn't raise it. Reporting it earlier could have helped with the official rounds.

You can check their "unofficial" benchmarks, which I believe run on every merge to the TFB master branch, here: https://tfb-status.techempower.com/

@davidpdrsn
Member

I've benchmarked axum's JSON serialization against actix-web's actix_web::web::Json. I used rewrk.

axum code:

use axum::{routing::get, Json, Router};

#[tokio::main]
async fn main() {
    let app = Router::new().route("/json", get(json));

    axum::Server::bind(&"0.0.0.0:8000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn json() -> Json<Message> {
    let message = Message {
        message: "Hello, World!",
    };
    Json(message)
}

#[derive(serde::Serialize)]
pub struct Message {
    pub message: &'static str,
}

axum results:

❯ rewrk -d 10s -h http://localhost:8000/json -c 10 -t 12 --pct
Beginning round 1...
Benchmarking 10 connections @ http://localhost:8000/json for 10 second(s)
  Latencies:
    Avg      Stdev    Min      Max
    0.06ms   0.02ms   0.02ms   0.70ms
  Requests:
    Total: 1724393 Req/Sec: 172472.07
  Transfer:
    Total: 222.01 MB Transfer Rate: 22.21 MB/Sec
+ --------------- + --------------- +
|   Percentile    |   Avg Latency   |
+ --------------- + --------------- +
|      99.9%      |     0.22ms      |
|       99%       |     0.14ms      |
|       95%       |     0.11ms      |
|       90%       |     0.10ms      |
|       75%       |     0.08ms      |
|       50%       |     0.07ms      |
+ --------------- + --------------- +

actix-web code:

use actix_web::{get, App, HttpServer, Responder};

#[get("/json")]
async fn hello() -> impl Responder {
    let message = Message {
        message: "Hello, World!",
    };
    actix_web::web::Json(message)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(hello))
        .bind(("0.0.0.0", 8000))?
        .run()
        .await
}

actix-web results:

❯ rewrk -d 10s -h http://localhost:8000/json -c 10 -t 12 --pct
Beginning round 1...
Benchmarking 10 connections @ http://localhost:8000/json for 10 second(s)
  Latencies:
    Avg      Stdev    Min      Max
    0.06ms   0.11ms   0.01ms   35.41ms
  Requests:
    Total: 1626356 Req/Sec: 162651.35
  Transfer:
    Total: 209.39 MB Transfer Rate: 20.94 MB/Sec
+ --------------- + --------------- +
|   Percentile    |   Avg Latency   |
+ --------------- + --------------- +
|      99.9%      |     2.33ms      |
|       99%       |     0.73ms      |
|       95%       |     0.26ms      |
|       90%       |     0.17ms      |
|       75%       |     0.11ms      |
|       50%       |     0.08ms      |
+ --------------- + --------------- +

I see axum's JSON serialization is about 6% faster, so I'm not sure there is anything wrong with axum's performance.

I guess the difference in the TechEmpower benchmarks comes down to unrealistic optimizations used in actix-web's JSON serialization code. See https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Rust/actix/src/main_server.rs#L46-L58. I don't think that code reflects how people actually use actix-web.

The code for the axum benchmark does reflect what a user would actually write. See https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Rust/axum/src/main.rs#L24-L30

I also tested #1178 and got about a 3% speed up.

So yeah maybe I'm misunderstanding something but everything seems to be working fine 🤷

@jplatte
Member

jplatte commented Jul 25, 2022

I agree, everything seems to be working fine based on these results. I think we should close this.

Also, with TechEmpower/FrameworkBenchmarks#7484 (and later with the next axum release), performance on TechEmpower should improve a bit. Maybe we'll catch up to the actix-web benchmark without its weird hacks 🙂

I also tested #1178 and got about a 3% speed up.

🎉
