Correct MIN_FINAL_CLTV_EXPIRY to match our enforced requirements #911
Changes from 2 commits
@@ -563,7 +563,7 @@ pub const BREAKDOWN_TIMEOUT: u16 = 6 * 24;
 pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;
 
 /// The minimum number of blocks between an inbound HTLC's CLTV and the corresponding outbound
-/// HTLC's CLTV. The current default represents roughly six hours of blocks at six blocks/hour.
+/// HTLC's CLTV. The current default represents roughly seven hours of blocks at six blocks/hour.
 ///
 /// This can be increased (but not decreased) through [`ChannelConfig::cltv_expiry_delta`]
 ///
@@ -572,13 +572,16 @@ pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;
 // i.e. the node we forwarded the payment on to should always have enough room to reliably time out
 // the HTLC via a full update_fail_htlc/commitment_signed dance before we hit the
 // CLTV_CLAIM_BUFFER point (we static assert that it's at least 3 blocks more).
-pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 6;
+pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6*7;
 pub(super) const CLTV_FAR_FAR_AWAY: u32 = 6 * 24 * 7; //TODO?
 
 /// Minimum CLTV difference between the current block height and received inbound payments.
 /// Invoices generated for payment to us must set their `min_final_cltv_expiry` field to at least
 /// this value.
-pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER;
+// Note that we fail if exactly HTLC_FAIL_BACK_BUFFER + 1 was used, so we need to add one for
+// any payments to succeed. Further, we don't want payments to fail if a block was found while
+// a payment was being routed, so we add an extra block to be safe.
+pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;
Comment: Do you think you could use LATENCY_GRACE_PERIOD_BLOCKS, which is used for the same purpose in other places?

Reply: Hmm, […]

Reply: Yep, not a strong opinion there, though the one more for headroom is blurring the distinction.
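The `HTLC_FAIL_BACK_BUFFER + 3` arithmetic above can be sketched numerically. The concrete values below (CLTV_CLAIM_BUFFER, LATENCY_GRACE_PERIOD_BLOCKS, and HTLC_FAIL_BACK_BUFFER as their sum) are illustrative assumptions, not taken from this diff:

```rust
// Illustrative sketch of the MIN_FINAL_CLTV_EXPIRY arithmetic. The values of
// CLTV_CLAIM_BUFFER and LATENCY_GRACE_PERIOD_BLOCKS here are assumptions.
const CLTV_CLAIM_BUFFER: u32 = 6; // assumed: blocks needed to claim an HTLC on-chain
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3; // assumed: grace period for peer latency
const HTLC_FAIL_BACK_BUFFER: u32 = CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS;

// We fail back an HTLC whose expiry is within HTLC_FAIL_BACK_BUFFER + 1 blocks
// of the current height, so a payment needs at least one block more than that
// to be accepted, plus extra headroom for blocks found while it is in flight.
const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

/// Simplified model of the fail-back check: returns whether an inbound HTLC
/// with the given expiry would be failed back rather than accepted at `height`.
fn would_fail_back(cltv_expiry: u32, height: u32) -> bool {
    cltv_expiry <= height + HTLC_FAIL_BACK_BUFFER + 1
}
```

Under this model, an invoice using exactly `HTLC_FAIL_BACK_BUFFER + 1` is rejected, while one using `MIN_FINAL_CLTV_EXPIRY` survives even if one block is found while the payment is routed.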
 // Check that our CLTV_EXPIRY is at least CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS,
 // ie that if the next-hop peer fails the HTLC within
Comment: I think the comment about CHECK_CLTV_EXPIRY_SANITY_2 is slightly inaccurate. An adversary delaying an inbound claim (a.k.a. an offered HTLC from the counterparty's viewpoint) is giving us more time to come up with a preimage. I believe it should say outbound claim, and this interpretation seems to match […]

Reply: Ah, I think it was referring to "inbound from the attacker's PoV". I've updated the comments to not say inbound or outbound.
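The "static assert" referenced in the comments above can be expressed as a compile-time check. This is only a sketch: the buffer values below are assumed, and the real crate may structure its CHECK_CLTV_EXPIRY_SANITY constants differently:

```rust
// Sketch of a compile-time sanity check in the spirit of the
// CHECK_CLTV_EXPIRY_SANITY constants discussed above.
// All numeric values below are illustrative assumptions.
const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 7;
const CLTV_CLAIM_BUFFER: u32 = 6; // assumed
const ANTI_REORG_DELAY: u32 = 6; // assumed
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3; // assumed

// If MIN_CLTV_EXPIRY_DELTA were ever lowered below the sum of the buffers,
// this constant evaluation would panic at compile time (Rust 1.57+).
const _: () = assert!(
    MIN_CLTV_EXPIRY_DELTA as u32
        >= CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS
);
```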
@@ -4038,7 +4038,8 @@ fn do_test_htlc_timeout(send_partial_mpp: bool) {
 	};
 	connect_block(&nodes[0], &block);
 	connect_block(&nodes[1], &block);
-	for _ in CHAN_CONFIRM_DEPTH + 2 ..TEST_FINAL_CLTV + CHAN_CONFIRM_DEPTH + 2 - CLTV_CLAIM_BUFFER - LATENCY_GRACE_PERIOD_BLOCKS {
+	let block_count = TEST_FINAL_CLTV + CHAN_CONFIRM_DEPTH + 2 - CLTV_CLAIM_BUFFER - LATENCY_GRACE_PERIOD_BLOCKS;
+	for _ in CHAN_CONFIRM_DEPTH + 2..block_count {
 		block.header.prev_blockhash = block.block_hash();
 		connect_block(&nodes[0], &block);
 		connect_block(&nodes[1], &block);
@@ -4055,9 +4056,9 @@ fn do_test_htlc_timeout(send_partial_mpp: bool) {
 
 	nodes[0].node.handle_update_fail_htlc(&nodes[1].node.get_our_node_id(), &htlc_timeout_updates.update_fail_htlcs[0]);
 	commitment_signed_dance!(nodes[0], nodes[1], htlc_timeout_updates.commitment_signed, false);
-	// 100_000 msat as u64, followed by a height of TEST_FINAL_CLTV + 2 as u32
+	// 100_000 msat as u64, followed by the height at which we failed back above
 	let mut expected_failure_data = byte_utils::be64_to_array(100_000).to_vec();
-	expected_failure_data.extend_from_slice(&byte_utils::be32_to_array(TEST_FINAL_CLTV + 2));
+	expected_failure_data.extend_from_slice(&byte_utils::be32_to_array(block_count - 1));
 	expect_payment_failed!(nodes[0], our_payment_hash, true, 0x4000 | 15, &expected_failure_data[..]);
 }
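The `expected_failure_data` buffer built in the hunk above is simply a big-endian `u64` msat amount followed by a big-endian `u32` block height. A small sketch of that layout, using std's `to_be_bytes` in place of the crate's `byte_utils` helpers (an assumed-equivalent substitution):

```rust
// Sketch of the failure-data layout checked by the test: 8 bytes of
// big-endian msat amount, then 4 bytes of big-endian block height.
// `byte_utils::be64_to_array`/`be32_to_array` are modeled with `to_be_bytes`.
fn expected_failure_data(amount_msat: u64, height: u32) -> Vec<u8> {
    let mut data = amount_msat.to_be_bytes().to_vec();
    data.extend_from_slice(&height.to_be_bytes());
    data
}
```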
@@ -5182,7 +5183,7 @@ fn test_duplicate_payment_hash_one_failure_one_success() {
 	let htlc_updates = get_htlc_update_msgs!(nodes[1], nodes[0].node.get_our_node_id());
 	assert!(htlc_updates.update_add_htlcs.is_empty());
 	assert_eq!(htlc_updates.update_fail_htlcs.len(), 1);
-	assert_eq!(htlc_updates.update_fail_htlcs[0].htlc_id, 1);
+	let first_htlc_id = htlc_updates.update_fail_htlcs[0].htlc_id;
 	assert!(htlc_updates.update_fulfill_htlcs.is_empty());
 	assert!(htlc_updates.update_fail_malformed_htlcs.is_empty());
 	check_added_monitors!(nodes[1], 1);
@@ -5207,7 +5208,7 @@ fn test_duplicate_payment_hash_one_failure_one_success() {
 	assert!(updates.update_add_htlcs.is_empty());
 	assert!(updates.update_fail_htlcs.is_empty());
 	assert_eq!(updates.update_fulfill_htlcs.len(), 1);
-	assert_eq!(updates.update_fulfill_htlcs[0].htlc_id, 0);
+	assert_ne!(updates.update_fulfill_htlcs[0].htlc_id, first_htlc_id);
 	assert!(updates.update_fail_malformed_htlcs.is_empty());
 	check_added_monitors!(nodes[1], 1);
@@ -7743,9 +7744,13 @@ fn test_bump_penalty_txn_on_revoked_htlcs() {
 	let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
 
 	let chan = create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1000000, 59000000, InitFeatures::known(), InitFeatures::known());
-	// Lock HTLC in both directions
-	let payment_preimage = route_payment(&nodes[0], &vec!(&nodes[1])[..], 3_000_000).0;
-	route_payment(&nodes[1], &vec!(&nodes[0])[..], 3_000_000).0;
+	// Lock HTLC in both directions (using a slightly lower CLTV delay to provide timely RBF bumps)
+	let route = get_route(&nodes[0].node.get_our_node_id(), &nodes[0].net_graph_msg_handler.network_graph.read().unwrap(),
+		&nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
+	let payment_preimage = send_along_route(&nodes[0], route, &[&nodes[1]], 3_000_000).0;
+	let route = get_route(&nodes[1].node.get_our_node_id(), &nodes[1].net_graph_msg_handler.network_graph.read().unwrap(),
+		&nodes[0].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
Comment on lines +7748 to +7752:

Comment: Could you use an expression using […]?

Reply: No - this test relies on the CLTV diff being 50, as it assumes the RBF bump will happen at certain block heights. When […]

Reply: Not sure I follow. I'm just suggesting replacing […]

Reply: Ah, could define a new constant for it, but using […]

Reply: Gotcha. So how was 50 calculated?

Reply: I think it's more the other way around - the CLTV delay was 50, and the test was written assuming the CLTV diff is 50 and expecting RBF bumps on the timeline of an HTLC that times out in 50 blocks. It could be anything, but it requires rewriting the test.

Reply: Ah, I guess my confusion came from the fact that […]
+	send_along_route(&nodes[1], route, &[&nodes[0]], 3_000_000);
 	let revoked_local_txn = get_local_commitment_txn!(nodes[1], chan.2);
 	assert_eq!(revoked_local_txn[0].input.len(), 1);
@@ -8058,7 +8063,7 @@ fn test_bump_txn_sanitize_tracking_maps() {
 	claim_payment(&nodes[0], &vec!(&nodes[1])[..], payment_preimage);
 
 	// Broadcast set of revoked txn on A
-	connect_blocks(&nodes[0], 52 - CHAN_CONFIRM_DEPTH);
+	connect_blocks(&nodes[0], TEST_FINAL_CLTV + 2 - CHAN_CONFIRM_DEPTH);
 	expect_pending_htlcs_forwardable_ignore!(nodes[0]);
 	assert_eq!(nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().len(), 0);
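The hunk above replaces a magic `52` with `TEST_FINAL_CLTV + 2`. Assuming `TEST_FINAL_CLTV` is 50 (an assumption here, as is the `CHAN_CONFIRM_DEPTH` value), the two expressions are equal:

```rust
// Illustrative constants - both values are assumptions, not taken from this diff.
const TEST_FINAL_CLTV: u32 = 50;
const CHAN_CONFIRM_DEPTH: u32 = 10;

// Number of blocks the test connects before broadcasting the revoked txn,
// written with the named constant rather than the old magic number.
fn blocks_to_connect() -> u32 {
    TEST_FINAL_CLTV + 2 - CHAN_CONFIRM_DEPTH
}
```

Naming the constant keeps the test in sync if `TEST_FINAL_CLTV` ever changes, which the old literal `52` would silently break.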
Comment: Should we modify the default config value, which is 72? What's the rationale for having a default config value that differs from the implementation's lower bound - cautiousness?

Reply: The rationale is that I forgot to update the config :).

Reply: Oh, I take that back, the config looks good? The config default is 12 hours, not the 6/7 hours here; I don't think we need to change that.

Reply: I'm good with the cautiousness argument to justify the config default of 12 hours :)