Performance regression in 0.23.12 #157

Open
bastewart opened this issue Jan 13, 2023 · 9 comments


@bastewart

We suspect there is a performance regression in v0.23.12. We initially noticed it after updating http4s-jetty to 0.23.12, which pulls in http4s-servlet 0.23.13 transitively.

We've observed a ~15-20% performance hit in a production service (increased CPU to handle the same load). Apologies that this is fairly unscientific in benchmarking terms, and we haven't yet isolated the cause to an exact commit. We are almost entirely certain about the release, though.

We've isolated this to the 0.23.12 release of this library by manually bumping the http4s-servlet version while keeping all other dependencies constant. 0.23.11 appears fine; 0.23.12 and 0.23.13 both show the reduced performance.

In addition, it does not appear that http4s-jetty (or Jetty itself) is the cause: running http4s-jetty 0.23.12 alongside http4s-servlet 0.23.13 shows the same performance as http4s-jetty 0.23.11 with http4s-servlet 0.23.13.
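
For reference, a minimal sketch of how that kind of pin might look in sbt, assuming the standard org.http4s coordinates (the exact artifact names in a given build may differ):

    // build.sbt (sketch): keep http4s-jetty at its existing version and force
    // only the http4s-servlet module to the version under test.
    dependencyOverrides += "org.http4s" %% "http4s-servlet" % "0.23.11" // swap for 0.23.12 / 0.23.13 to compare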

@TimWSpence is currently looking to isolate the commit!

@bastewart changed the title from "Suspected performance regression in 0.23.12" to "Performance regression in 0.23.12" on Jan 13, 2023
@rossabaker
Member

It would make me very happy to find out it's not #50, but I'd look there first.

@bastewart
Author

That's where our suspicion fell internally as well! 😬

@rossabaker
Member

Make sure it's cats-effect-3.4. Transitively, it should be.

We could play with the type of queue. I didn't rigorously performance test it, and apparently should have. I think this implementation is much nicer, so I'd hate to revert, but we could if we can't fix this.
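
For reference, a sketch of the queue constructors that cats-effect 3.4's std module offers, any of which could be what "playing with the type of queue" means in practice (the element type here is purely illustrative):

    import cats.effect.IO
    import cats.effect.std.Queue

    object QueueFlavours {
      // Sketch only: alternative queue flavours that could be benchmarked.
      val unbounded: IO[Queue[IO, Byte]]   = Queue.unbounded[IO, Byte]    // offers never block
      val bounded: IO[Queue[IO, Byte]]     = Queue.bounded[IO, Byte](4)   // offers back-pressure at capacity
      val synchronous: IO[Queue[IO, Byte]] = Queue.synchronous[IO, Byte]  // offer and take rendezvous directly
    }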

@armanbilge
Member

I was chatting with @TimWSpence about this; the current suspicion is that a large number of fiber allocations are at play.

If you are using a "parallel" Dispatcher, then each of these unsafeRunAndForget calls will start a new Fiber.

    def onAllDataRead(): Unit =
      unsafeRunAndForget(q.offer(End))

    def onError(t: Throwable): Unit =
      unsafeRunAndForget(q.offer(Error(t)))

    def unsafeRunAndForget[A](fa: F[A]): Unit =
      dispatcher.unsafeRunAndForget(
        fa.onError { case t => F.delay(logger.error(t)("Error in servlet read listener")) }
      )

However, if you create a "sequential" Dispatcher, then all those effects will be run sequentially on the same fiber. In this case you would want the Dispatcher to be local to the request.
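
A minimal sketch of the request-local sequential arrangement being suggested (names are illustrative, not the library's code):

    import cats.effect.IO
    import cats.effect.std.{Dispatcher, Queue}

    object RequestScopedDispatcher {
      // Sketch: a sequential Dispatcher scoped to one request. Both
      // unsafeRunAndForget calls below are queued onto the same single fiber,
      // instead of spawning a new fiber per callback as a parallel Dispatcher would.
      def withRequestDispatcher(q: Queue[IO, String]): IO[Unit] =
        Dispatcher.sequential[IO].use { dispatcher =>
          IO.delay {
            dispatcher.unsafeRunAndForget(q.offer("chunk"))
            dispatcher.unsafeRunAndForget(q.offer("end"))
          }
        }
    }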

@bastewart
Author

Also just confirming here (for the record) that @TimWSpence did isolate this to #50. He confirmed it on Discord, hence no message here, but everyone in this thread is aware (I think!).

@rossabaker
Member

A request-scoped, sequential dispatcher sounds reasonable to me. I took a quick look and it's going to be a moderate pain to fix in a binary-compatible fashion, but I think it can be done.

@rossabaker
Member

It's late and I'm getting stupid, but, uh, who dispatches the dispatcher?

  override def service(
      servletRequest: HttpServletRequest,
      servletResponse: HttpServletResponse,
  ): Unit =
    Dispatcher.sequential[F].use { dispatcher =>
       ???
    }

I think the servlet needs a parallel one, and each request can get a local, sequential one?

@armanbilge
Member

Heh, that sounds about right. Each request would get its own fiber (from the parallel dispatcher) on which it starts a single-fiber sequential dispatcher.
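
Roughly, the shape being described (illustrative names, not a proposed patch):

    import cats.effect.IO
    import cats.effect.std.Dispatcher

    // Sketch: a long-lived parallel Dispatcher owned by the servlet starts one
    // fiber per request; that fiber allocates a request-scoped sequential
    // Dispatcher, which the read/write listeners then reuse.
    final class ExampleServlet(servletDispatcher: Dispatcher[IO]) {
      def service(handle: Dispatcher[IO] => IO[Unit]): Unit =
        servletDispatcher.unsafeRunAndForget(          // one new fiber per request
          Dispatcher.sequential[IO].use { requestDispatcher =>
            handle(requestDispatcher)                  // callbacks share this single fiber
          }
        )
    }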

@TimWSpence

> It would make me very happy to find out it's not #50, but I'd look there first.

Sorry Ross 😛 I just opened #171. Hope both the description and the changes make sense!
