
Partitions not automatically balanced within the same consumer group? #2765

Open
yezhenli opened this issue Jan 11, 2024 · 2 comments

@yezhenli

When there is a backlog of Kafka messages, the partitions become imbalanced and are not rebalanced automatically. What should I do?

@yezhenli
Author

For example, with 10 pods and 10 partitions, nothing gets consumed.

@dnwe
Collaborator

dnwe commented Feb 11, 2024

@yezhenli I'm not sure what you mean here. If there are 10 partitions and 10 active consumers in the group, then each consumer will be assigned one partition. If one of the consumers shuts down, the 10 partitions will be spread across the remaining 9 consumers: one will be consuming from 2 partitions whilst the rest have 1 each. That is the only automatic balancing of partitions that takes place, an even distribution of partition counts across the group. There's no notion of one client being "more performant" than the others and hence able to handle more partitions.
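
For reference, a minimal sketch (not taken from this issue) of joining a consumer group with Sarama and letting the group protocol spread partitions across members. The broker address, topic name, and group ID are placeholders, and the round-robin strategy shown is just one of the built-in balance strategies (range is the default):

```go
package main

import (
	"context"
	"log"

	"github.com/IBM/sarama"
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	// One ConsumeClaim runs per partition assigned to this group member.
	for msg := range claim.Messages() {
		log.Printf("partition=%d offset=%d", msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	// How partitions are distributed across group members; newer Sarama
	// versions expose this via the Rebalance.GroupStrategies field instead.
	cfg.Consumer.Group.Rebalance.Strategy = sarama.BalanceStrategyRoundRobin

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "example-group", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	ctx := context.Background()
	for {
		// Consume blocks for one session; loop so the member rejoins
		// the group after each rebalance.
		if err := group.Consume(ctx, []string{"example-topic"}, handler{}); err != nil {
			log.Fatal(err)
		}
	}
}
```

With this setup, whichever strategy is configured, the group coordinator only evens out the number of partitions per member; it cannot weight assignments by how fast an individual consumer is.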
