Correct MIN_FINAL_CLTV_EXPIRY to match our enforced requirements #911

Merged 3 commits on May 6, 2021
23 changes: 11 additions & 12 deletions fuzz/src/full_stack.rs

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion lightning-invoice/src/utils.rs
@@ -92,6 +92,7 @@ where
mod test {
use {Currency, Description, InvoiceDescription};
use lightning::ln::PaymentHash;
use lightning::ln::channelmanager::MIN_FINAL_CLTV_EXPIRY;
use lightning::ln::functional_test_utils::*;
use lightning::ln::features::InitFeatures;
use lightning::ln::msgs::ChannelMessageHandler;
@@ -107,7 +108,7 @@ mod test {
let _chan = create_announced_chan_between_nodes(&nodes, 0, 1, InitFeatures::known(), InitFeatures::known());
let invoice = ::utils::create_invoice_from_channelmanager(&nodes[1].node, nodes[1].keys_manager, Currency::BitcoinTestnet, Some(10_000), "test".to_string()).unwrap();
assert_eq!(invoice.amount_pico_btc(), Some(100_000));
assert_eq!(invoice.min_final_cltv_expiry(), 9);
assert_eq!(invoice.min_final_cltv_expiry(), MIN_FINAL_CLTV_EXPIRY as u64);
assert_eq!(invoice.description(), InvoiceDescription::Direct(&Description("test".to_string())));

let mut route_hints = invoice.routes().clone();
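As a side note on the assertion above, the payer is the consumer of this field. A minimal sketch (not part of this PR; the helper name and the value 24 are illustrative) of how `min_final_cltv_expiry` feeds into the final hop's absolute CLTV:

```rust
// Illustrative only: the payer adds the invoice's min_final_cltv_expiry to the
// current block height to get the absolute CLTV expiry for the final hop, which is
// what keeps it above the recipient's MIN_FINAL_CLTV_EXPIRY requirement.
fn final_hop_cltv_expiry(current_height: u32, invoice_min_final_cltv_expiry: u64) -> u32 {
    current_height + invoice_min_final_cltv_expiry as u32
}

fn main() {
    // e.g. an invoice advertising a minimum of 24 blocks, paid at height 700_000
    assert_eq!(final_hop_cltv_expiry(700_000, 24), 700_024);
}
```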
2 changes: 1 addition & 1 deletion lightning/src/chain/channelmonitor.rs
@@ -206,7 +206,7 @@ pub(crate) const CLTV_SHARED_CLAIM_BUFFER: u32 = 12;
/// HTLC-Success transaction.
/// In other words, this is an upper bound on how many blocks we think it can take us to get a
/// transaction confirmed (and we use it in a few more, equivalent, places).
pub(crate) const CLTV_CLAIM_BUFFER: u32 = 6;
pub(crate) const CLTV_CLAIM_BUFFER: u32 = 18;
/// Number of blocks by which point we expect our counterparty to have seen new blocks on the
/// network and done a full update_fail_htlc/commitment_signed dance (+ we've updated all our
/// copies of ChannelMonitors, including watchtowers). We could enforce the contract by failing
11 changes: 7 additions & 4 deletions lightning/src/ln/channelmanager.rs
@@ -563,7 +563,7 @@ pub const BREAKDOWN_TIMEOUT: u16 = 6 * 24;
pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;

/// The minimum number of blocks between an inbound HTLC's CLTV and the corresponding outbound
/// HTLC's CLTV. The current default represents roughly six hours of blocks at six blocks/hour.
/// HTLC's CLTV. The current default represents roughly seven hours of blocks at six blocks/hour.
///
/// This can be increased (but not decreased) through [`ChannelConfig::cltv_expiry_delta`]
///
@@ -572,13 +572,16 @@ pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;
// i.e. the node we forwarded the payment on to should always have enough room to reliably time out
// the HTLC via a full update_fail_htlc/commitment_signed dance before we hit the
// CLTV_CLAIM_BUFFER point (we static assert that it's at least 3 blocks more).
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 6;
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6*7;

Should we modify the default config value, which is 72? What's the rationale for having a default config value that differs from the implementation lower bound - cautiousness?

Collaborator Author

The rationale is that I forgot to update the config :).

Collaborator Author

Oh, I take that back, the config looks good - the config default is 12 hours, not the 6-7 hours here, so I don't think we need to change that.


I'm good with the cautiousness argument to justify config default of 12 hours :)
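For context on how MIN_CLTV_EXPIRY_DELTA and the config default interact, here is a toy forwarding check - a sketch only, not the actual ChannelManager code, with made-up function and variable names:

```rust
// Toy model: a forwarding node requires the inbound HTLC's CLTV to exceed the
// outbound HTLC's CLTV by at least its configured cltv_expiry_delta, which is
// itself clamped to be no lower than MIN_CLTV_EXPIRY_DELTA.
const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 7;

fn can_forward(inbound_cltv: u32, outbound_cltv: u32, configured_delta: u16) -> bool {
    let delta = configured_delta.max(MIN_CLTV_EXPIRY_DELTA) as u32;
    inbound_cltv >= outbound_cltv + delta
}

fn main() {
    // With the default config delta of 72 (roughly 12 hours of blocks):
    assert!(can_forward(700_172, 700_100, 72));
    assert!(!can_forward(700_140, 700_100, 72));
}
```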

pub(super) const CLTV_FAR_FAR_AWAY: u32 = 6 * 24 * 7; //TODO?

/// Minimum CLTV difference between the current block height and received inbound payments.
/// Invoices generated for payment to us must set their `min_final_cltv_expiry` field to at least
/// this value.
pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER;
// Note that we fail if exactly HTLC_FAIL_BACK_BUFFER + 1 was used, so we need to add one for
// any payments to succeed. Further, we don't want payments to fail if a block was found while
// a payment was being routed, so we add an extra block to be safe.
pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

Do you think you could use LATENCY_GRACE_PERIOD_BLOCKS here instead? It's used quite a bit for the same purpose in other places.

Collaborator Author

Hmm, LATENCY_GRACE_PERIOD_BLOCKS is a bit different in meaning - it's "the time it may take our counterparty to download a new block, process it, and then do a full CS, RAA, CS, RAA dance with us". This, on the other hand, is a function of the math - we need HTLC_FAIL_BACK_BUFFER + 2 for any payments to succeed, and we just add one more as headroom.


Yep, not a strong opinion there, though the extra block of headroom does blur the distinction.
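To make the "+ 3" arithmetic from the thread above concrete, here is a small standalone sketch - not the real acceptance code; HTLC_FAIL_BACK_BUFFER's value is a placeholder and the fail-back rule is paraphrased from the comment in the diff:

```rust
// Placeholder value for illustration; the real constant is derived from other buffers.
const HTLC_FAIL_BACK_BUFFER: u32 = 21;
// The new definition from this PR: the minimum needed (+2) plus one block of headroom.
const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

// Paraphrased rule: an inbound payment is failed back if its CLTV expiry is at or
// below current_height + HTLC_FAIL_BACK_BUFFER + 1.
fn accepted(current_height: u32, cltv_expiry: u32) -> bool {
    cltv_expiry > current_height + HTLC_FAIL_BACK_BUFFER + 1
}

fn main() {
    let invoice_height = 700_000;
    // The sender uses exactly the invoice's minimum...
    let cltv_expiry = invoice_height + MIN_FINAL_CLTV_EXPIRY;
    // ...and the payment still succeeds even if one block is found while it is routed.
    assert!(accepted(invoice_height + 1, cltv_expiry));
    // With only "+ 2", the same in-flight block would leave it exactly at the
    // fail-back threshold and it would be rejected.
    assert!(!accepted(invoice_height + 1, invoice_height + HTLC_FAIL_BACK_BUFFER + 2));
}
```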


// Check that our CLTV_EXPIRY is at least CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS,
// ie that if the next-hop peer fails the HTLC within

I think the comment about CHECK_CLTV_EXPIRY_SANITY_2 is slightly inaccurate. An adversary delaying an inbound claim (a.k.a. an offered HTLC from the counterparty's viewpoint) is giving us more time to come up with a preimage. I believe it should say outbound claim, and this interpretation seems to match the would_broadcast_at_height comment.

Collaborator Author

Ah, I think it was referring to "inbound from the attacker's PoV". I've updated the comments to not say inbound or outbound.

@@ -590,7 +593,7 @@ pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER;
#[allow(dead_code)]
const CHECK_CLTV_EXPIRY_SANITY: u32 = MIN_CLTV_EXPIRY_DELTA as u32 - LATENCY_GRACE_PERIOD_BLOCKS - CLTV_CLAIM_BUFFER - ANTI_REORG_DELAY - LATENCY_GRACE_PERIOD_BLOCKS;

// Check for ability of an attacker to make us fail on-chain by delaying inbound claim. See
// Check for ability of an attacker to make us fail on-chain by delaying an HTLC claim. See
// ChannelMontior::would_broadcast_at_height for a description of why this is needed.
#[deny(const_err)]
#[allow(dead_code)]
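The CHECK_CLTV_EXPIRY_SANITY constants above use const-evaluation underflow as a compile-time assertion. A minimal standalone sketch of the same trick, with placeholder constant names and values:

```rust
// If SMALLER_A + SMALLER_B ever exceeded BIGGER, this subtraction would underflow
// during const evaluation and the build would fail (here via #[deny(const_err)],
// matching the pattern used in channelmanager.rs).
const BIGGER: u32 = 42;
const SMALLER_A: u32 = 3;
const SMALLER_B: u32 = 18;

#[deny(const_err)]
#[allow(dead_code)]
const STATIC_ASSERT_BIGGER_COVERS_BOTH: u32 = BIGGER - SMALLER_A - SMALLER_B;

fn main() {}
```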
2 changes: 1 addition & 1 deletion lightning/src/ln/functional_test_utils.rs
@@ -1144,7 +1144,7 @@ pub fn claim_payment<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_route:
claim_payment_along_route(origin_node, &[expected_route], false, our_payment_preimage);
}

pub const TEST_FINAL_CLTV: u32 = 50;
pub const TEST_FINAL_CLTV: u32 = 70;

pub fn route_payment<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_route: &[&Node<'a, 'b, 'c>], recv_value: u64) -> (PaymentPreimage, PaymentHash, PaymentSecret) {
let net_graph_msg_handler = &origin_node.net_graph_msg_handler;
23 changes: 14 additions & 9 deletions lightning/src/ln/functional_tests.rs
@@ -4038,7 +4038,8 @@ fn do_test_htlc_timeout(send_partial_mpp: bool) {
};
connect_block(&nodes[0], &block);
connect_block(&nodes[1], &block);
for _ in CHAN_CONFIRM_DEPTH + 2 ..TEST_FINAL_CLTV + CHAN_CONFIRM_DEPTH + 2 - CLTV_CLAIM_BUFFER - LATENCY_GRACE_PERIOD_BLOCKS {
let block_count = TEST_FINAL_CLTV + CHAN_CONFIRM_DEPTH + 2 - CLTV_CLAIM_BUFFER - LATENCY_GRACE_PERIOD_BLOCKS;
for _ in CHAN_CONFIRM_DEPTH + 2..block_count {
block.header.prev_blockhash = block.block_hash();
connect_block(&nodes[0], &block);
connect_block(&nodes[1], &block);
@@ -4055,9 +4056,9 @@ fn do_test_htlc_timeout(send_partial_mpp: bool) {

nodes[0].node.handle_update_fail_htlc(&nodes[1].node.get_our_node_id(), &htlc_timeout_updates.update_fail_htlcs[0]);
commitment_signed_dance!(nodes[0], nodes[1], htlc_timeout_updates.commitment_signed, false);
// 100_000 msat as u64, followed by a height of TEST_FINAL_CLTV + 2 as u32
// 100_000 msat as u64, followed by the height at which we failed back above
let mut expected_failure_data = byte_utils::be64_to_array(100_000).to_vec();
expected_failure_data.extend_from_slice(&byte_utils::be32_to_array(TEST_FINAL_CLTV + 2));
expected_failure_data.extend_from_slice(&byte_utils::be32_to_array(block_count - 1));
expect_payment_failed!(nodes[0], our_payment_hash, true, 0x4000 | 15, &expected_failure_data[..]);
}
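For reference, the expected_failure_data built above follows the BOLT 4 incorrect_or_unknown_payment_details (0x4000 | 15) layout: an 8-byte big-endian amount followed by a 4-byte big-endian block height. A small sketch of the same construction using standard library methods (the helper name is made up):

```rust
// Builds the same byte layout the test expects: HTLC amount (u64, big-endian)
// followed by the block height at which the payment was failed back (u32, big-endian).
fn expected_failure_data(htlc_msat: u64, fail_height: u32) -> Vec<u8> {
    let mut data = htlc_msat.to_be_bytes().to_vec();
    data.extend_from_slice(&fail_height.to_be_bytes());
    data
}

fn main() {
    let data = expected_failure_data(100_000, 4242);
    assert_eq!(data.len(), 12);
}
```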

@@ -5182,7 +5183,7 @@ fn test_duplicate_payment_hash_one_failure_one_success() {
let htlc_updates = get_htlc_update_msgs!(nodes[1], nodes[0].node.get_our_node_id());
assert!(htlc_updates.update_add_htlcs.is_empty());
assert_eq!(htlc_updates.update_fail_htlcs.len(), 1);
assert_eq!(htlc_updates.update_fail_htlcs[0].htlc_id, 1);
let first_htlc_id = htlc_updates.update_fail_htlcs[0].htlc_id;
assert!(htlc_updates.update_fulfill_htlcs.is_empty());
assert!(htlc_updates.update_fail_malformed_htlcs.is_empty());
check_added_monitors!(nodes[1], 1);
@@ -5207,7 +5208,7 @@ fn test_duplicate_payment_hash_one_failure_one_success() {
assert!(updates.update_add_htlcs.is_empty());
assert!(updates.update_fail_htlcs.is_empty());
assert_eq!(updates.update_fulfill_htlcs.len(), 1);
assert_eq!(updates.update_fulfill_htlcs[0].htlc_id, 0);
assert_ne!(updates.update_fulfill_htlcs[0].htlc_id, first_htlc_id);
assert!(updates.update_fail_malformed_htlcs.is_empty());
check_added_monitors!(nodes[1], 1);

@@ -7743,9 +7744,13 @@ fn test_bump_penalty_txn_on_revoked_htlcs() {
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);

let chan = create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1000000, 59000000, InitFeatures::known(), InitFeatures::known());
// Lock HTLC in both directions
let payment_preimage = route_payment(&nodes[0], &vec!(&nodes[1])[..], 3_000_000).0;
route_payment(&nodes[1], &vec!(&nodes[0])[..], 3_000_000).0;
// Lock HTLC in both directions (using a slightly lower CLTV delay to provide timely RBF bumps)
let route = get_route(&nodes[0].node.get_our_node_id(), &nodes[0].net_graph_msg_handler.network_graph.read().unwrap(),
&nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
let payment_preimage = send_along_route(&nodes[0], route, &[&nodes[1]], 3_000_000).0;
let route = get_route(&nodes[1].node.get_our_node_id(), &nodes[1].net_graph_msg_handler.network_graph.read().unwrap(),
&nodes[0].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
Comment on lines +7748 to +7752
Contributor

Could you use an expression involving TEST_FINAL_CLTV in place of the 50 magic number so that the calculation is clear?

Collaborator Author

No - this test relies on the CLTV diff being 50, as it assumes the RBF bump will happen at certain block heights. When TEST_FINAL_CLTV was increased, the test suddenly failed because RBF bumps didn't happen when they were "supposed" to.

Contributor

Not sure I follow. I'm just suggesting replacing 50 with TEST_FINAL_CLTV - 20 or TEST_FINAL_CLTV - SOME_CONST_DESCRIBING_THE_DIFF. Otherwise, it isn't clear where 50 is coming from.

Collaborator Author

Ah, could define a new constant for it, but using TEST_FINAL_CLTV would be incorrect here as, were TEST_FINAL_CLTV to change again, this test would then be buggy/wrong. The test relies on exactly 50, irrespective of what other CLTV settings/constants are set.

Contributor

Gotcha. So how was 50 calculated?

Collaborator Author

I think it's more the other way around - the CLTV delay was 50, and the test was written assuming the CLTV diff is 50 and expecting RBF bumps on the timeline of an HTLC that times out in 50 blocks. It could be anything, but changing it requires rewriting the test.

Contributor

Ah, I guess my confusion came from the fact that route_payment uses TEST_FINAL_CLTV and the updated comment in this test says, "(using a slightly lower CLTV delay to provide timely RBF bumps)". So I figured you just wanted route_payment with a different CLTV.

send_along_route(&nodes[1], route, &[&nodes[0]], 3_000_000);

let revoked_local_txn = get_local_commitment_txn!(nodes[1], chan.2);
assert_eq!(revoked_local_txn[0].input.len(), 1);
@@ -8058,7 +8063,7 @@ fn test_bump_txn_sanitize_tracking_maps() {
claim_payment(&nodes[0], &vec!(&nodes[1])[..], payment_preimage);

// Broadcast set of revoked txn on A
connect_blocks(&nodes[0], 52 - CHAN_CONFIRM_DEPTH);
connect_blocks(&nodes[0], TEST_FINAL_CLTV + 2 - CHAN_CONFIRM_DEPTH);
expect_pending_htlcs_forwardable_ignore!(nodes[0]);
assert_eq!(nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 0);
