Support intercepting onion messages for offline peers #2973

Merged: 9 commits, May 9, 2024
31 changes: 31 additions & 0 deletions lightning/src/events/mod.rs
@@ -1041,6 +1041,18 @@ pub enum Event {
///
/// [`ChannelHandshakeConfig::negotiate_anchors_zero_fee_htlc_tx`]: crate::util::config::ChannelHandshakeConfig::negotiate_anchors_zero_fee_htlc_tx
BumpTransaction(BumpTransactionEvent),
/// We received an onion message that is intended to be forwarded to a peer
/// that is currently offline. This event will only be generated if the
/// `OnionMessenger` was initialized with
/// [`OnionMessenger::new_with_offline_peer_interception`], see its docs.
///
/// [`OnionMessenger::new_with_offline_peer_interception`]: crate::onion_message::messenger::OnionMessenger::new_with_offline_peer_interception
OnionMessageIntercepted {
/// The node id of the offline peer.
peer_node_id: PublicKey,
/// The onion message intended to be forwarded to `peer_node_id`.
message: msgs::OnionMessage,
},
}

impl Writeable for Event {
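
For context, a minimal sketch of how an application might consume this event: it buffers the intercepted message per peer until that peer reconnects. The `InterceptedMessageStore` type here is hypothetical application code, not part of LDK.

```rust
use std::collections::HashMap;

use bitcoin::secp256k1::PublicKey;
use lightning::events::Event;
use lightning::ln::msgs::OnionMessage;

// Hypothetical application-side store (not an LDK type): buffers intercepted
// onion messages keyed by the offline peer they were addressed to.
#[derive(Default)]
struct InterceptedMessageStore {
	by_peer: HashMap<PublicKey, Vec<OnionMessage>>,
}

impl InterceptedMessageStore {
	// Buffer the message until `peer_node_id` reconnects, at which point the
	// application can hand it back for forwarding.
	fn handle_event(&mut self, event: Event) {
		if let Event::OnionMessageIntercepted { peer_node_id, message } = event {
			self.by_peer.entry(peer_node_id).or_default().push(message);
		}
	}
}
```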
@@ -1286,6 +1298,13 @@ impl Writeable for Event {
35u8.write(writer)?;
// Never write ConnectionNeeded events as buffered onion messages aren't serialized.
},
&Event::OnionMessageIntercepted { ref peer_node_id, ref message } => {
37u8.write(writer)?;
write_tlv_fields!(writer, {
(0, peer_node_id, required),
(2, message, required),
});
}
// Note that, going forward, all new events must only write data inside of
// `write_tlv_fields`. Versions 0.0.101+ will ignore odd-numbered events that write
// data via `write_tlv_fields`.
@@ -1697,6 +1716,18 @@ impl MaybeReadable for Event {
},
// Note that we do not write a length-prefixed TLV for ConnectionNeeded events.
35u8 => Ok(None),
37u8 => {
let mut f = || {
_init_and_read_len_prefixed_tlv_fields!(reader, {
(0, peer_node_id, required),
(2, message, required),
});
Ok(Some(Event::OnionMessageIntercepted {
peer_node_id: peer_node_id.0.unwrap(), message: message.0.unwrap()
}))
};
f()
},
// Versions prior to 0.0.100 did not ignore odd types, instead returning InvalidValue.
// Version 0.0.100 failed to properly ignore odd types, possibly resulting in corrupt
// reads.
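
Because 37 is an odd type whose payload lives entirely inside `write_tlv_fields`, readers that do not understand the event can skip it, while current readers rebuild it from TLV fields 0 and 2. A rough round-trip sketch under those assumptions; the event is taken as a parameter since constructing an `OnionMessage` by hand is out of scope here.

```rust
use lightning::events::Event;
use lightning::util::ser::{MaybeReadable, Writeable};

// A sketch of the serialization round trip for the new event; `event` is assumed
// to be an `Event::OnionMessageIntercepted` handed to us by the messenger.
fn roundtrip(event: &Event) -> Option<Event> {
	// `encode` writes the 37u8 type byte followed by the length-prefixed TLV
	// fields (0: peer_node_id, 2: message).
	let buf = event.encode();
	// `MaybeReadable` returns `None` for event types the reader chooses to skip,
	// and `Some(..)` for types it understands, as it does for 37 here.
	<Event as MaybeReadable>::read(&mut &buf[..]).expect("valid encoding")
}
```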
38 changes: 38 additions & 0 deletions lightning/src/onion_message/messenger.rs
@@ -175,6 +175,8 @@ where
message_router: MR,
offers_handler: OMH,
custom_handler: CMH,
intercept_messages_for_offline_peers: bool,
pending_events: Mutex<Vec<Event>>,
Contributor:

Why use a separate queue if these aren't persisted? If we instead use message_recipients for buffering -- with a new variant -- then we can re-use outbound_buffer_full and generate events in the same manner as ConnectionNeeded. Otherwise, we are left with two different systems for buffering messages and generating events.

Contributor Author:

Hmm, I believe with this approach an attacker may be able to fragment our heap by continually sending bogus forwards to a new fake peer each time (where each fake forward results in a VecDeque allocation). I think my other reasoning is that because the events generated may all be bogus, they shouldn't be allowed to count towards outbound_buffer_full, since currently all of that traffic is known to be more legitimate (either an outbound OM initiated by the LDK user or a forward to a peer that is connected).

Contributor:

> Hmm, I believe with this approach an attacker may be able to fragment our heap by continually sending bogus forwards to a new fake peer each time (where each fake forward results in a VecDeque allocation).

Yeah, though this seems to be an issue already with onion messages, FWIW. Processing an OnionMessage involves allocating a Vec<u8> for Packet::hop_data.

> I think my other reasoning is that because the events generated may all be bogus, they shouldn't be allowed to count towards outbound_buffer_full, since currently all of that traffic is known to be more legitimate (either an outbound OM initiated by the LDK user or a forward to a peer that is connected).

Right, though this makes me wonder if the user should specify which peers it supports forwarding to. See other comment about the race condition.

Contributor:

Discussed offline. Yeah, I suppose specifying which peers support forwarding adds a bit of burden to the user. They are already doing this when processing events, so no sense forcing them to duplicate it here and risk inconsistencies.
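
To make the trade-off above concrete, a schematic comparison using plain `std` collections (not the PR's actual types): per-recipient buffering allocates a fresh `VecDeque` for every previously unseen node id, while a single flat queue only grows one allocation.

```rust
use std::collections::{HashMap, VecDeque};

fn main() {
	// Re-using per-recipient buffers: each bogus forward addressed to a
	// never-before-seen fake node id allocates a new VecDeque, which is the
	// heap-fragmentation concern raised above.
	let mut per_peer: HashMap<[u8; 33], VecDeque<Vec<u8>>> = HashMap::new();
	per_peer.entry([2u8; 33]).or_default().push_back(vec![0u8; 1300]);

	// A single flat queue: bogus forwards only grow one Vec, which the user
	// drains (and can bound) when processing events.
	let mut pending: Vec<([u8; 33], Vec<u8>)> = Vec::new();
	pending.push(([2u8; 33], vec![0u8; 1300]));
}
```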

}

/// [`OnionMessage`]s buffered to be sent.
@@ -796,6 +798,28 @@
pub fn new(
entropy_source: ES, node_signer: NS, logger: L, node_id_lookup: NL, message_router: MR,
offers_handler: OMH, custom_handler: CMH
) -> Self {
Self::new_inner(
entropy_source, node_signer, logger, node_id_lookup, message_router,
offers_handler, custom_handler, false
)
}

/// Similar to [`Self::new`], but additionally generates [`Event::OnionMessageIntercepted`]s
/// for onion messages intended for peers that are currently offline. See the event's docs
/// for more info.
pub fn new_with_offline_peer_interception(
entropy_source: ES, node_signer: NS, logger: L, node_id_lookup: NL,
message_router: MR, offers_handler: OMH, custom_handler: CMH
) -> Self {
Self::new_inner(
entropy_source, node_signer, logger, node_id_lookup, message_router,
offers_handler, custom_handler, true
)
}

fn new_inner(
entropy_source: ES, node_signer: NS, logger: L, node_id_lookup: NL,
message_router: MR, offers_handler: OMH, custom_handler: CMH,
intercept_messages_for_offline_peers: bool
) -> Self {
let mut secp_ctx = Secp256k1::new();
secp_ctx.seeded_randomize(&entropy_source.get_secure_random_bytes());
@@ -809,6 +833,8 @@
message_router,
offers_handler,
custom_handler,
intercept_messages_for_offline_peers,
pending_events: Mutex::new(Vec::new()),
}
}

Expand Down Expand Up @@ -1004,6 +1030,11 @@ where
}
}
}
let mut events = Vec::new();
core::mem::swap(&mut *self.pending_events.lock().unwrap(), &mut events);
for ev in events {
handler.handle_event(ev);
}
}
}
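
On the consumer side, intercepted-message events are handed to the same event handler that the messenger already uses for `ConnectionNeeded`. A sketch of draining them via `EventsProvider::process_pending_events`, assuming LDK's blanket `EventHandler` impl for `Fn(Event)` closures applies; `store_for_offline_peer` is a hypothetical application callback, not an LDK API.

```rust
use lightning::events::{Event, EventsProvider};

// Drain the messenger's pending events and hand intercepted onion messages to a
// hypothetical application callback for later forwarding.
fn drain_intercepts<P: EventsProvider>(messenger: &P, store_for_offline_peer: &dyn Fn(Event)) {
	messenger.process_pending_events(&|event: Event| {
		if let Event::OnionMessageIntercepted { .. } = event {
			store_for_offline_peer(event);
		}
		// Other events (e.g. ConnectionNeeded) would be handled here as well.
	});
}
```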

@@ -1081,6 +1112,13 @@
e.get_mut().enqueue_message(onion_message);
log_trace!(logger, "Forwarding an onion message to peer {}", next_node_id);
},
_ if self.intercept_messages_for_offline_peers => {
self.pending_events.lock().unwrap().push(
Event::OnionMessageIntercepted {
peer_node_id: next_node_id, message: onion_message
}
);
},
Contributor:

Could we log_trace here like the other arms?

Contributor Author:

I omitted it because of our past discussions of avoiding adding logs that would allow someone to adversarially blow up our logs on disk (there are some pre-existing ones in this method, ofc). So I'm not sure it's a good idea?

Contributor:

Hmm... I thought that was only applicable for higher-level logging like log_error. Will let @TheBlueMatt provide guidance.

Contributor Author:

Sure, happy to add it in follow-up too, no reason to hold the PR up.

_ => {
log_trace!(
logger,