
features = [ "mio-support" ] causes 100% CPU usage #16

Closed
ghost opened this issue Jul 12, 2019 · 3 comments

Comments

@ghost

ghost commented Jul 12, 2019

  1. cargo new somesample
  2. add this to Cargo.toml:
[dependencies]
#signal-hook = { version = "0.1.9", features = [ "mio-support" ] }  # 100% CPU usage
signal-hook = { version = "0.1.9", features = [] }  # no CPU usage!
libc = "0.2.59"
  3. use this src/main.rs:
extern crate libc;
extern crate signal_hook;

use std::io::Error;
use std::thread;

use signal_hook::iterator::Signals;

fn main() -> Result<(), Error> {
    let signals = Signals::new(&[
        signal_hook::SIGUSR1,
        signal_hook::SIGUSR2,
        signal_hook::SIGHUP,
    ])?;
    thread::spawn(move || {
        for signal in signals.forever() {
            match signal {
                signal_hook::SIGUSR1 => {
                    println!("exiting");
                    break;
                }
                signal_hook::SIGUSR2 => {
                    println!("ignoring {}", signal);
                }
                signal_hook::SIGHUP => {
                    println!("ignoring {}", signal);
                }
                _ => unreachable!(),
            }
        }
    })
    .join()
    .unwrap();
    Ok(())
}
  4. cargo run

With features = [ "mio-support" ] in Cargo.toml, the program uses 100% CPU the whole time it runs; without the feature it idles as expected.

@ghost
Author

ghost commented Jul 12, 2019

For another (more complicated, but working) example, where signals and new server connections are handled in the same match, see:
https://gist.github.com/howaboutsynergy/4f1e79437929075aa40f981a18c1ab64
That one actually uses mio, so the 100% CPU usage only starts after an ignored signal is sent (the unignored ones just exit the program).

On another note, since I'm just learning, I've turned the example in the OP into one that uses its own Cargo feature(s) to showcase this issue:
https://github.com/howaboutsynergy/reflo/blob/5bf0538d8db0841d0b611fc028caae708b82f2f1/others/sighooksample1/Cargo.toml#L21
This is so cool! :D

vorner added a commit that referenced this issue Jul 12, 2019
When the mio-support feature is enabled, it doesn't block on .wait :-(.

Reproducer for #16.
@vorner
Owner

vorner commented Jul 12, 2019

I've figured out what the problem is; I still need to implement the solution. Anyway, thanks for discovering this.

But, by the way, in https://github.com/howaboutsynergy/reflo/blob/4ee633c89a984ce4a4664abf6fed1ba79bd1a041/book/l_20_1_hello/src/bin/main.rs#L83 you should be using for signal in signals.pending(). Once this bug is fixed, your code will no longer take 100% CPU, but it will never exit from that branch, because for s in &signals is equivalent to for s in signals.forever(), which provides an infinite stream of incoming signals, blocking until the next one arrives.

vorner added a commit that referenced this issue Jul 13, 2019
When the mio-support feature is enabled, it doesn't block on .wait :-(.

Reproducer for #16.
vorner added a commit that referenced this issue Jul 13, 2019
The problem was that, when the feature was enabled, the wakeup socket got
switched to non-blocking mode (no matter whether anything mio-related was
actually used). So reads that should have blocked didn't, and the read
from the pipe was retried in a tight loop.

Now we don't switch to non-blocking, but use the MSG_DONTWAIT flag when
we need non-blocking read.

Closes #16
@ghost
Author

ghost commented Jul 13, 2019

Thanks for pointing out that for s in signals.forever() was wrong there. Funnily enough, if I hadn't used that construct (copied from an example, no doubt), I would never have stumbled upon this #16 issue, because the loop would just have exited after each check :D

Now I'm using for signal in signals.pending() and I see why it makes sense! Previously, no server connections would have been handled after receiving any of the expected signals (the unexpected ones would just coredump (e.g. SIGBUS) or exit (e.g. SIGUSR1)).

@vorner vorner closed this as completed in b4fd16c Jul 13, 2019