Fix size calculation logic in PooledByteBufAllocator #13229
base: 4.1
Conversation
(I'm not one of the committers.) IMHO, for this issue: 1. I think it is correct behavior. The reason you're getting an error when using a 4MiB page size and the default maxOrder of 9 is that it causes an int32 overflow. To fix this, you can manually set maxOrder to a lower value. By the way, if this issue comes from your use cases, may I ask why you need such large page and chunk sizes? AFAIK, such a huge chunk size will almost certainly degrade performance and make it harder to manage memory efficiently with the current implementation.
I agree with you above, but I raised this issue from a pure code-logic perspective.
It is correct behavior ONLY when the user sets maxOrder explicitly. For PooledByteBufAllocator.DEFAULT, however, the user only sets pageSize, the default maxOrder stays at 9, and so the default configuration itself triggers the failure. The relevant code is PooledByteBufAllocator.validateAndCalculateChunkSize(int pageSize, int maxOrder), which rejects the combination while computing the chunk size.
So, we should solve the overflow problem in the static block initialization stage, where the default values are computed, rather than letting it surface as an exception from the constructor.
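To make the failure mode concrete, here is a self-contained sketch of the doubling loop (the 1 GiB cap and the loop shape mirror what validateAndCalculateChunkSize does, but this is a simplified illustration, not Netty's exact code):

```java
// Simplified sketch (not Netty's exact code) of the chunk-size computation:
// the chunk size is pageSize doubled maxOrder times, and validation rejects
// combinations that would pass the cap.
public class ChunkSizeOverflowDemo {
    // Assumed cap, mirroring Netty's ~1 GiB maximum chunk size.
    static final int MAX_CHUNK_SIZE = 1 << 30;

    static int validateAndCalculateChunkSize(int pageSize, int maxOrder) {
        int chunkSize = pageSize;
        for (int i = maxOrder; i > 0; i--) {
            // Guard before shifting so the int can never silently overflow.
            if (chunkSize > MAX_CHUNK_SIZE / 2) {
                throw new IllegalArgumentException(
                        "pageSize (" + pageSize + ") << maxOrder (" + maxOrder
                                + ") must not exceed " + MAX_CHUNK_SIZE);
            }
            chunkSize <<= 1;
        }
        return chunkSize;
    }

    public static void main(String[] args) {
        // Default 8 KiB page with maxOrder 9: a 4 MiB chunk, accepted.
        System.out.println(validateAndCalculateChunkSize(8192, 9)); // prints 4194304
        try {
            // A 4 MiB page with the same default maxOrder needs a 2 GiB chunk,
            // which is past the cap, so validation throws.
            validateAndCalculateChunkSize(4 * 1024 * 1024, 9);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```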
I'm unsure if the proposed approach is the most suitable one for this situation. It seems that setting a large value for pageSize also affects the default value of maxOrder. Perhaps we should explore alternative solutions that include a fallback to the default page size.
Based on the current PR's committed code, this can be simply done by setting a lower MAX_ORDER_UPPER_BOUNDER. If we set it lower, the calculated default maxOrder, and with it the default chunk size, is capped accordingly.
Cool, thank you for taking the time to reply to my comment. I would like to kindly request that we wait for the core developers to weigh in. Their expertise and input could provide valuable insights into this issue and guide us in making the best decision.
Is there some way to add a test to validate what this is trying to solve?
Hi, sorry for letting this hang. I had some comments.
```java
if (pageSize > MAX_CHUNK_SIZE) {
    throw new IllegalArgumentException("pageSize: " + pageSize + " (expected: " + MAX_CHUNK_SIZE + ')');
}
if (!Pow2.isPowerOfTwo(pageSize)) {
```
I don't think we can use Pow2 directly because it violates OSGi encapsulation. Add an indirection method to PlatformDependent and call it that way.
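A minimal sketch of such an indirection (the class and method names here are assumptions for illustration, not committed Netty API):

```java
// Hypothetical indirection sketch: expose the power-of-two check from a
// platform utility so the allocator does not reach into the shaded Pow2
// class, which would violate OSGi package encapsulation.
public final class PlatformDependentSketch {
    private PlatformDependentSketch() {
    }

    // Same bit trick Pow2 relies on: a power of two has a single set bit, so
    // value & (value - 1) clears it to zero. Zero also passes, which suits
    // the allocator's "alignment == 0 means no alignment" convention;
    // callers validate non-negativity separately.
    public static boolean isPowerOfTwo(int value) {
        return (value & (value - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(8192));  // prints true
        System.out.println(isPowerOfTwo(12345)); // prints false
    }
}
```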
```java
if (pageSize < alignment) {
    throw new IllegalArgumentException("Alignment cannot be greater than page size. " +
            "Alignment: " + alignment + ", page size: " + pageSize + '.');
}

checkPositiveOrZero(alignment, "alignment");
if (!Pow2.isPowerOfTwo(alignment)) {
```
Same here.
```diff
@@ -369,6 +380,18 @@ private static int validateAndCalculateChunkSize(int pageSize, int maxOrder) {
        return chunkSize;
    }

    private static int calculateDefaultMaxOrder(int pageSize) {
```
If I read this right, it will find the largest chunk size that passes validation. That could end up using a lot of memory. I think we should have more reserved heuristics beyond, say, 16 MiB chunks or thereabouts, such that we try to keep memory usage down beyond that point.
@chrisvest If we limit the default max CHUNK size within 16 MiB, then should we also limit the default max PAGE size within a similar reasonable size?
I wasn't thinking to place a limit beyond what we already do, but rather have the algorithm put fewer pages in the chunks as the page size gets bigger.
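One way to read that suggestion, as a hedged sketch (the 16 MiB ceiling is the figure floated in this review; the shape of the heuristic is an assumption, not committed code): keep today's default maxOrder of 9, but shrink it while the resulting chunk would exceed the ceiling, so bigger pages automatically get fewer pages per chunk.

```java
// Illustrative heuristic, not Netty's actual implementation: start from the
// current default maxOrder of 9 and reduce it while pageSize << maxOrder
// would exceed an assumed 16 MiB chunk ceiling.
public class DefaultMaxOrderSketch {
    static int calculateDefaultMaxOrder(int pageSize) {
        final int defaultMaxOrder = 9;
        final long chunkCeiling = 16L * 1024 * 1024; // assumed 16 MiB ceiling
        int maxOrder = defaultMaxOrder;
        // Probe in long arithmetic so the check itself cannot overflow int.
        while (maxOrder > 0 && ((long) pageSize << maxOrder) > chunkCeiling) {
            maxOrder--;
        }
        // If one page alone is bigger than the ceiling, a chunk is one page.
        return maxOrder;
    }

    public static void main(String[] args) {
        System.out.println(calculateDefaultMaxOrder(8192));             // prints 9 (4 MiB chunk, unchanged)
        System.out.println(calculateDefaultMaxOrder(4 * 1024 * 1024));  // prints 2 (16 MiB chunk)
        System.out.println(calculateDefaultMaxOrder(64 * 1024 * 1024)); // prints 0 (chunk == one page)
    }
}
```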
@chrisvest, to approach this, I think the logic in the method PooledByteBufAllocator.validateAndCalculateChunkSize(int pageSize, int maxOrder) also needs to be modified?
Yes, so we'd basically change the heuristic to find a balance between the two variables.
Talking about heuristics and reading through the comments, I'm inclined to come down on the side of being more strict and less clever.
In my opinion, the assertion that PooledByteBufAllocator.DEFAULT should just work only holds for systems where the default properties have not been modified. Those who go about modifying the default page size or max-order need to do so with enough understanding not to break the allocator.
What we should do, is place sufficient validation early in initialization, and produce useful error messages for validation failures.
If we paper over bad configurations with heuristics, we'd be pretending to know what people intended with the settings. I think it's better to fail early and explicitly, as an exception from a static initializer would.
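The fail-early idea could look roughly like this (the two property names are Netty's real system properties; the class structure and cap value are simplified assumptions for illustration):

```java
// Sketch of strict, early validation: read the properties once in a static
// initializer, compute the chunk size in long arithmetic so nothing
// overflows silently, and name the offending properties in the message.
public final class AllocatorDefaultsSketch {
    static final int MAX_CHUNK_SIZE = 1 << 30; // assumed ~1 GiB cap
    static final int DEFAULT_PAGE_SIZE;
    static final int DEFAULT_MAX_ORDER;

    static {
        int pageSize = Integer.getInteger("io.netty.allocator.pageSize", 8192);
        int maxOrder = Integer.getInteger("io.netty.allocator.maxOrder", 9);
        long chunkSize = (long) pageSize << maxOrder;
        if (chunkSize > MAX_CHUNK_SIZE) {
            // Failing during class initialization surfaces the bad
            // combination immediately, with an actionable message.
            throw new ExceptionInInitializerError(
                    "io.netty.allocator.pageSize (" + pageSize
                            + ") << io.netty.allocator.maxOrder (" + maxOrder + ") = " + chunkSize
                            + " exceeds " + MAX_CHUNK_SIZE + "; lower one of the two properties");
        }
        DEFAULT_PAGE_SIZE = pageSize;
        DEFAULT_MAX_ORDER = maxOrder;
    }

    public static void main(String[] args) {
        // With unmodified properties, the defaults load cleanly.
        System.out.println(DEFAULT_PAGE_SIZE + " / " + DEFAULT_MAX_ORDER);
    }
}
```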
```diff
@@ -57,6 +58,14 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements

    private static final int CACHE_NOT_USED = 0;

    private static final int MAX_ORDER_UPPER_BOUNDER = 14;
```
Suggested change:
```diff
- private static final int MAX_ORDER_UPPER_BOUNDER = 14;
+ private static final int MAX_ORDER_UPPER_BOUND = 14;
```
The "ER" ending looks like a typo.
> Yes, so we'd basically change the heuristic to find a balance between the two variables.

Agree, then we can remove the MAX_ORDER_UPPER_BOUNDER constant.
Motivation:

Setting the system property "io.netty.allocator.pageSize" to 4MiB or larger makes the PooledByteBufAllocator.DEFAULT instance throw java.lang.IllegalArgumentException in the construction stage.

The pageSize for PooledByteBufAllocator.DEFAULT is initialized and validated in the static block, and the overflow problem is supposed to be resolved during that static-block initialization. After the static block initialization has finished, PooledByteBufAllocator.DEFAULT should not throw a pageSize overflow exception in the construction stage.

PooledByteBufAllocator's constructor and static block can also be optimized. For example, some validation code in the constructor should move into the method validateAndCalculatePageShifts().
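For illustration, here is a sketch of the kind of consolidation meant above (simplified: real Netty code also enforces a minimum page size): all pageSize checks plus the pageShifts derivation live in one method, so both the static block and the constructor can call a single entry point.

```java
// Sketch of consolidated pageSize validation (simplified, illustrative):
// validate pageSize once, then derive pageShifts = log2(pageSize).
public class PageShiftsSketch {
    static int validateAndCalculatePageShifts(int pageSize, int alignment) {
        if ((pageSize & (pageSize - 1)) != 0) {
            throw new IllegalArgumentException("pageSize: " + pageSize + " (expected: power of 2)");
        }
        if (pageSize < alignment) {
            throw new IllegalArgumentException("Alignment cannot be greater than page size. "
                    + "Alignment: " + alignment + ", page size: " + pageSize + '.');
        }
        // log2 of a power of two: the index of its single set bit.
        return Integer.SIZE - 1 - Integer.numberOfLeadingZeros(pageSize);
    }

    public static void main(String[] args) {
        System.out.println(validateAndCalculatePageShifts(8192, 0)); // prints 13
    }
}
```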
.Modification:
Correct the size validation logic.
Result:
Fix the problem above, and optimize size-validation related code.