
Option to use variable length queue to avoid dropping message on burst #431

Open
MichaelMure opened this issue Jul 12, 2021 · 7 comments
Labels
  • effort/days — Estimated to take multiple days, but less than a week
  • P3 — Low: Not priority right now
  • status/ready — Ready to be worked

Comments

@MichaelMure
Contributor

This package has a number of pressure-release mechanisms to avoid breaking or using too much memory when things get clogged under load. The downside is that pubsub can drop some messages under load, even if a message did find its way across the network. This makes sense most of the time.

There are at least three places where that can happen:

  • validation queue (tunable with pubsub.WithValidateQueueSize)
  • outbound queue (tunable with pubsub.WithPeerOutboundQueueSize)
  • topic subscription output queue (soon to be tunable as well)

I believe some applications (like mine) would benefit from being able to tell pubsub to use as much memory as necessary and not drop messages under heavy load. Of course that means the application exposes itself to being OOM-killed, but that can be easier to predict and handle than messages semi-randomly disappearing.
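The drop behavior described above can be illustrated with a plain-Go sketch (not go-libp2p-pubsub's actual code): each bounded queue behaves like a buffered channel written to with a non-blocking send, so once the buffer is full, further messages are silently dropped. The queue-size options listed above only move the threshold; they don't remove it.

```go
package main

import "fmt"

// fillQueue sends n messages into a bounded queue of the given capacity,
// using a non-blocking send: messages that find the queue full are dropped.
func fillQueue(capacity, n int) (queued, dropped int) {
	queue := make(chan int, capacity)
	for msg := 1; msg <= n; msg++ {
		select {
		case queue <- msg: // room left in the queue
			queued++
		default: // queue full: drop the message
			dropped++
		}
	}
	return queued, dropped
}

func main() {
	queued, dropped := fillQueue(2, 5)
	fmt.Println("queued:", queued, "dropped:", dropped) // queued: 2 dropped: 3
}
```

A burst larger than the queue capacity loses the excess, which is exactly the "semi-random disappearance" the issue describes when the consumer can't keep up.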

@aschmahmann added the effort/days, status/ready, and P3 labels and removed the P2 (Medium: Good to have, but can wait until someone steps up) label on Jul 23, 2021
@aschmahmann
Contributor

@vyzo does this seem like a reasonable idea if someone wanted to implement it?

@vyzo
Collaborator

vyzo commented Jul 23, 2021 via email

@zivkovicmilos

Hey @vyzo,

Has there been any effort recently regarding this specific issue?

@vyzo
Collaborator

vyzo commented Mar 1, 2022

No, do you want to take it on? Shouldn't be too hard for subscriptions.
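One common way to make a subscription's output queue unbounded is the "infinite channel" pattern: a pump goroutine that buffers in a growable slice between an input and an output channel, so sends never block or drop and memory grows instead. This is only a sketch of the general technique, not pubsub's actual subscription code:

```go
package main

import "fmt"

// unbounded bridges in to out through a growable slice, so sends on in
// never block or drop; memory use grows instead (the trade-off discussed above).
func unbounded(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		var buf []int
		for in != nil || len(buf) > 0 {
			var send chan<- int // nil channel: send case disabled while buffer is empty
			var next int
			if len(buf) > 0 {
				send = out
				next = buf[0]
			}
			select {
			case v, ok := <-in:
				if !ok {
					in = nil // input closed: keep draining the buffer
					continue
				}
				buf = append(buf, v)
			case send <- next:
				buf = buf[1:]
			}
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	out := unbounded(in)
	go func() {
		for i := 1; i <= 100; i++ { // a burst of 100 messages, none dropped
			in <- i
		}
		close(in)
	}()
	total := 0
	for v := range out {
		total += v
	}
	fmt.Println("received sum:", total) // received sum: 5050
}
```

The nil-channel trick disables the send case in the select whenever the buffer is empty, which keeps the pump goroutine simple and allocation-light.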

@vyzo
Collaborator

vyzo commented Mar 1, 2022

Also note that validation of published messages is now synchronous and cannot be dropped.

@lthibault
Contributor

@vyzo Could this be extended to include the outbound/publishing queue as well? I have an application with similar requirements to @MichaelMure's which is also expected to publish in rather large bursts.

Separately, would it be reasonable to define a Queue interface and allow users to pass in their own implementation? This would make it possible to tune the runtime performance of unbounded queues. For example, one might use an implementation based on VList to reduce allocations in an unbounded queue.

@vyzo
Collaborator

vyzo commented Nov 7, 2022

Probably, yeah.
All very reasonable propositions.
