
Clean up NioEventLoop #9799

Merged
merged 2 commits into netty:4.1 on Nov 26, 2019

Conversation

@njhill (Member) commented Nov 23, 2019

Motivation

The event loop implementations had become somewhat tangled over time and work was done recently to streamline EpollEventLoop. NioEventLoop would benefit from the same treatment and it is more straightforward now that we can follow the same structure as was done for epoll.

Modifications

Untangle NioEventLoop logic and mirror what's now done in EpollEventLoop w.r.t. the volatile selector wake-up guard and scheduled task deadline handling.

Some common refinements to EpollEventLoop have also been included: using constants for the "special" deadline/wakeup volatile values and avoiding some unnecessary calls to System.nanoTime() on task-only iterations.
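The wake-up guard and "special" volatile values described above can be sketched as follows. This is a hypothetical simplification, not Netty's actual code: the constant names AWAKE and NONE mirror those mentioned in the PR, but the class and method names here are illustrative and the surrounding event-loop machinery is omitted.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the volatile selector wake-up guard pattern (illustrative only).
class WakeupGuardSketch {
    static final long AWAKE = -1L;           // event loop is awake (not blocked in select)
    static final long NONE = Long.MAX_VALUE; // blocked with no wakeup scheduled

    private final AtomicLong nextWakeupNanos = new AtomicLong(AWAKE);

    // Event loop calls this just before blocking: record the scheduled-task
    // deadline (or NONE) so producers know a real wakeup would be required.
    void beforeWait(long deadlineNanos) {
        nextWakeupNanos.set(deadlineNanos);
    }

    // Event loop calls this after the blocking call returns.
    void afterWait() {
        nextWakeupNanos.lazySet(AWAKE);
    }

    // A producer adding a task calls this: swap in AWAKE and only issue the
    // (relatively expensive) selector wakeup if the loop was actually waiting.
    boolean producerNeedsWakeup() {
        return nextWakeupNanos.getAndSet(AWAKE) != AWAKE;
    }
}
```

The getAndSet ensures that when many producers race, at most one of them observes a non-AWAKE value and pays for the wakeup syscall.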

Result

A hopefully cleaner, more efficient, and less fragile NIO transport implementation.

@netty-bot

Can one of the admins verify this patch?

@njhill (Member, Author) commented Nov 23, 2019

This should address #9112, #9782 and #8644, and probably #4543 and #8698.

@normanmaurer (Member)

@netty-bot test this please


@normanmaurer normanmaurer added this to the 4.1.44.Final milestone Nov 25, 2019
  // other value T when EL is waiting with wakeup scheduled at time T
- private final AtomicLong nextWakeupNanos = new AtomicLong(-1L);
+ private final AtomicLong nextWakeupNanos = new AtomicLong(AWAKE);
@franz1981 (Contributor) commented Nov 25, 2019

This is outside the scope of this PR, but why not an AtomicLongFieldUpdater? It would allow controlling the field's layout if profiling shows it's highly contended (and that seems to be the case to me, with many producers).

Member

Sounds like a good follow-up PR (which is also true for EpollEventLoop).

Member Author

Sure I can do it in a follow-on. I used AtomicLong here to be consistent with EpollEventLoop. That was originally an atomic updater but @normanmaurer suggested changing it to AtomicLong in a prior PR here: #9605 (comment)... I am fine with either though!
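For context, the AtomicLongFieldUpdater alternative being discussed might look roughly like this. This is a hypothetical sketch, not code from the PR: the updater operates on a plain volatile long field of the owning class, which avoids a separate AtomicLong object and leaves the class free to control the field's memory layout (e.g. with padding) if it turns out to be contended.

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

// Illustrative sketch of the AtomicLongFieldUpdater approach (names are hypothetical).
class UpdaterSketch {
    private static final AtomicLongFieldUpdater<UpdaterSketch> NEXT_WAKEUP_UPDATER =
            AtomicLongFieldUpdater.newUpdater(UpdaterSketch.class, "nextWakeupNanos");

    static final long AWAKE = -1L;

    // Plain volatile field instead of a boxed AtomicLong.
    private volatile long nextWakeupNanos = AWAKE;

    boolean producerNeedsWakeup() {
        // Same getAndSet semantics as the AtomicLong version, via the updater.
        return NEXT_WAKEUP_UPDATER.getAndSet(this, AWAKE) != AWAKE;
    }

    void beforeWait(long deadlineNanos) {
        nextWakeupNanos = deadlineNanos; // volatile write
    }
}
```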

- if (strategy > 0 && processReady(events, strategy)) {
-     prevDeadlineNanos = Long.MAX_VALUE;
+ if (processReady(events, strategy)) {
+     prevDeadlineNanos = NONE;
  }
} finally {
    // Ensure we always run tasks.
    final long ioTime = System.nanoTime() - ioStartTime;
    runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
Contributor

Does this (100 - ioRatio) / ioRatio have to be computed on each iteration? Division seems quite heavy.

Member Author

@franz1981 maybe also one for a separate PR, since it's always been like this AFAIK... though I think if we want to calculate an accurate percentage of the measured time then a division will have to be done at some point, unless we want to do a floating-point multiply (and conversion to/from fp).

Also we had been thinking of getting rid of this altogether, and always process all tasks (with some limit on number of outer iterations) - see #9590 (comment). After more thought I feel like that makes more sense actually... thoughts?
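To make the arithmetic under discussion concrete, here is the ioRatio budget calculation from the diff, extracted into a standalone helper (the class and method names are hypothetical; the formula is the one in the snippet). With the default ioRatio of 50 the task budget equals the measured I/O time, and with ioRatio 80 tasks get a quarter of it.

```java
// Illustration of the ioRatio arithmetic questioned above (illustrative names).
final class IoRatioBudget {
    // Returns the time budget for running tasks after ioTimeNanos of I/O work.
    static long taskBudgetNanos(long ioTimeNanos, int ioRatio) {
        // One integer multiply and one integer divide per iteration, as in the diff.
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }

    private IoRatioBudget() { }
}
```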

// AWAKE when EL is awake
// NONE when EL is waiting with no wakeup scheduled
// other value T when EL is waiting with wakeup scheduled at time T
private final AtomicLong nextWakeupNanos = new AtomicLong(AWAKE);
Contributor

AtomicLongFieldUpdater as well.

  final long ioStartTime = System.nanoTime();
  try {
      processSelectedKeys();
  } finally {
      // Ensure we always run tasks.
      final long ioTime = System.nanoTime() - ioStartTime;
-     runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
+     ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
Contributor

Should (100 - ioRatio) / ioRatio be computed on each iteration?

@normanmaurer (Member) commented Nov 25, 2019 via email

@franz1981 (Contributor)

@njhill @normanmaurer Agree about the follow-up: it seems already ok as it is (it's a port indeed)!

@njhill (Member, Author) commented Nov 25, 2019

Thanks @normanmaurer @franz1981, I also agree about the follow-up... my question in the comment above was kind of off-topic, not meant for this PR! I will open another later to discuss eliminating ioRatio.

@normanmaurer normanmaurer merged commit e208e96 into netty:4.1 Nov 26, 2019
@normanmaurer (Member)

@njhill thanks a lot... Please also back port to master.
