
Possible overflow when adding duration to instant in libp2p_mdns::behaviour::Behaviour #1974

Closed
helixstreet opened this issue Oct 22, 2023 · 5 comments
Labels
I10-unconfirmed Issue might be valid, but it's not yet known.

Comments

@helixstreet

2023-10-19 22:53:30 ✨ Imported #17795322 (0xeb24…427a)
2023-10-19 22:53:30 ✨ Imported #17795322 (0x9740…8c23)
2023-10-19 22:53:30 ✨ Imported #17795322 (0xf182…b6cf)
2023-10-19 22:53:34 💤 Idle (22 peers), best: #17795322 (0xeb24…427a), finalized #17795318 (0x0a5a…2347), ⬇ 529.3kiB/s ⬆ 623.8kiB/s
2023-10-19 22:53:36 ✨ Imported #17795323 (0xf5a5…50a7)

====================

Version: 1.2.0-72c45356393

0: backtrace::capture::Backtrace::new
1: sp_panic_handler::set::{{closure}}
2: std::panicking::rust_panic_with_hook
3: std::panicking::begin_panic_handler::{{closure}}
4: std::sys_common::backtrace::__rust_end_short_backtrace
5: _rust_begin_unwind
6: core::panicking::panic_fmt
7: core::option::expect_failed
8: tokio::time::interval::Interval::poll_tick
9: <libp2p_mdns::behaviour::Behaviour as libp2p_swarm::behaviour::NetworkBehaviour>::poll
10: <sc_network::behaviour::Behaviour as libp2p_swarm::behaviour::NetworkBehaviour>::poll
11: libp2p_swarm::Swarm::poll_next_event
12: sc_network::service::NetworkWorker<B,H>::next_action::{{closure}}::{{closure}}::{{closure}}
13: sc_service::build_network_future::{{closure}}::{{closure}}::{{closure}}
14: sc_service::builder::build_network::{{closure}}
15: tokio::runtime::task::raw::poll
16: std::sys_common::backtrace::__rust_begin_short_backtrace
17: core::ops::function::FnOnce::call_once{{vtable.shim}}
18: std::sys::unix::thread::Thread::new::thread_start
19: __pthread_joiner_wake

Thread 'tokio-runtime-worker' panicked at 'overflow when adding duration to instant', library/std/src/time.rs:408

This is a bug. Please report it at:

    https://github.com/paritytech/polkadot/issues/new
@github-actions github-actions bot added the I10-unconfirmed Issue might be valid, but it's not yet known. label Oct 22, 2023
@helixstreet helixstreet changed the title from “runtimer-worker panicked” to “runtime-worker panicked” Oct 22, 2023
@bkchr
Member

bkchr commented Oct 23, 2023

@helixstreet what is your node version?

@bkchr
Member

bkchr commented Oct 23, 2023

Ahh:
Version: 1.2.0-72c45356393 🤦

CC @paritytech/networking

@dmitry-markin dmitry-markin changed the title from “runtime-worker panicked” to “Possible overflow when adding duration to instant in libp2p_mdns::behaviour::Behaviour” Oct 24, 2023
@MOZGIII

MOZGIII commented Apr 5, 2024

We seem to be seeing this at Humanode with Substrate v0.9.41 (pre-1.0.0 that is).

Details

Version: 996aef730e6a2be14df1a017dec34dfcb705e0df
0: sp_panic_handler::set::{{closure}}
1: <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/alloc/src/boxed.rs:2021:9
std::panicking::rust_panic_with_hook
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/panicking.rs:735:13
2: std::panicking::begin_panic_handler::{{closure}}
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/panicking.rs:609:13
3: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/sys_common/backtrace.rs:170:18
4: rust_begin_unwind
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/panicking.rs:597:5
5: core::panicking::panic_fmt
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/core/src/panicking.rs:72:14
6: core::panicking::panic_display
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/core/src/panicking.rs:178:5
core::panicking::panic_str
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/core/src/panicking.rs:152:5
core::option::expect_failed
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/core/src/option.rs:1978:5
7: core::option::Option<T>::expect
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/core/src/option.rs:888:21
<std::time::Instant as core::ops::arith::Add<core::time::Duration>>::add
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/time.rs:419:33
8: tokio::time::interval::Interval::poll_tick
9: libp2p_mdns::behaviour::timer::tokio::<impl futures_core::stream::Stream for libp2p_mdns::behaviour::timer::Timer<tokio::time::interval::Interval>>::poll_next
10: <libp2p_mdns::behaviour::Behaviour<P> as libp2p_swarm::behaviour::NetworkBehaviour>::poll
11: <sc_network::discovery::DiscoveryBehaviour as libp2p_swarm::behaviour::NetworkBehaviour>::poll
12: libp2p_swarm::Swarm<TBehaviour>::poll_next_event
13: sc_network::service::NetworkWorker<B,H>::next_action::{{closure}}::{{closure}}::{{closure}}
14: <futures_util::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
15: sc_network::service::NetworkWorker<B,H>::next_action::{{closure}}
16: <futures_util::future::future::fuse::Fuse<Fut> as core::future::future::Future>::poll
17: sc_service::build_network_future::{{closure}}::{{closure}}::{{closure}}
18: <futures_util::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
19: <sc_service::task_manager::prometheus_future::PrometheusFuture<T> as core::future::future::Future>::poll
20: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
21: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
22: tokio::runtime::context::BlockingRegionGuard::block_on
23: tokio::runtime::handle::Handle::block_on
24: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
25: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
26: tokio::runtime::task::core::Core<T,S>::poll
27: tokio::runtime::task::harness::Harness<T,S>::poll
28: std::sys_common::backtrace::__rust_begin_short_backtrace
29: core::ops::function::FnOnce::call_once{{vtable.shim}}
30: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/alloc/src/boxed.rs:2007:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/alloc/src/boxed.rs:2007:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/97c81e1b537088f1881c8894ee8579812ed9b6d1/library/std/src/sys/unix/thread.rs:108:17
31: start_thread
32: clone

Thread 'tokio-runtime-worker' panicked at 'overflow when adding duration to instant', library/std/src/time.rs:419
This is a bug. Please report it at:
https://link.humanode.io/bug-report

@MOZGIII

MOZGIII commented Apr 5, 2024

I think what happens is that the node runs on an overloaded CPU and the interval implementation misses a tick, i.e. it is delayed long enough that the deadline-plus-period sum overflows.
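The overflow can be shown in isolation: `Instant + Duration` in std is implemented as `checked_add(...).expect("overflow when adding duration to instant")`, so adding a period close to `Duration::MAX` (the period the mDNS timer uses, see below) to any deadline must panic. A minimal sketch:

```rust
use std::time::{Duration, Instant};

fn main() {
    let now = Instant::now();

    // The period the mDNS timer passes to `tokio::time::interval_at`:
    // effectively Duration::MAX (u64::MAX seconds plus 999_999_999 ns).
    let period = Duration::new(u64::MAX, 1_000_000_000 - 1);

    // `Instant + Duration` panics on overflow; `checked_add` returns None
    // instead. This None is exactly the overflow that fires when tokio
    // computes the next tick deadline as `deadline + period`.
    assert!(now.checked_add(period).is_none());

    // Ordinary periods are unaffected.
    assert!(now.checked_add(Duration::from_secs(3600)).is_some());

    println!("ok");
}
```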

This code looks odd to me:

        fn at(instant: Instant) -> Self {
            // Taken from: https://docs.rs/async-io/1.7.0/src/async_io/lib.rs.html#91
            let mut inner = time::interval_at(
                TokioInstant::from_std(instant),
                Duration::new(std::u64::MAX, 1_000_000_000 - 1),
            );
            inner.set_missed_tick_behavior(MissedTickBehavior::Skip);
            Self { inner }
        }

https://github.com/libp2p/rust-libp2p/blob/47e19f7175edb6954bf69bd2a61edaaf4f521ea9/protocols/mdns/src/behaviour/timer.rs#L95-L118

Issue seems to be originating from here: libp2p/rust-libp2p#2748
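One possible mitigation (a sketch only, not the upstream fix; the helper name `far_future_period` is made up here) is to cap the "never fires again" period far below `Duration::MAX`, similar to the ~30-year `far_future` convention used inside tokio's time code, so that even a delayed tick leaves the next deadline representable:

```rust
use std::time::{Duration, Instant};

// Hypothetical replacement for the near-Duration::MAX period: 30 years is
// "never" for any practical timer, yet leaves ample headroom before an
// Instant + Duration addition can overflow.
fn far_future_period() -> Duration {
    Duration::from_secs(86_400 * 365 * 30)
}

fn main() {
    let now = Instant::now();
    let period = far_future_period();

    // Deadline arithmetic stays representable even after a missed tick,
    // i.e. when `deadline + period` is computed a second time.
    let next = now.checked_add(period).expect("first deadline fits");
    assert!(next.checked_add(period).is_some());

    println!("ok");
}
```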

@bkchr
Copy link
Member

bkchr commented Apr 6, 2024

Ty @MOZGIII, so this is indeed an upstream bug. There is not much we can do about it here, so I will close this issue.

@bkchr bkchr closed this as completed Apr 6, 2024
Projects
Status: Blocked ⛔️
Development

No branches or pull requests

3 participants