
Correct MIN_FINAL_CLTV_EXPIRY to match our enforced requirements #911

Merged
merged 3 commits into lightningdevkit:main from 2021-05-fix-cltv-diff on May 6, 2021

Conversation

TheBlueMatt (Collaborator)

Otherwise invoices generated with the new lightning-invoice utility will be unpayable.


codecov bot commented May 5, 2021

Codecov Report

Merging #911 (0ba727a) into main (c60543c) will increase coverage by 0.26%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #911      +/-   ##
==========================================
+ Coverage   90.57%   90.84%   +0.26%     
==========================================
  Files          59       59              
  Lines       29738    30798    +1060     
==========================================
+ Hits        26936    27977    +1041     
- Misses       2802     2821      +19     
Impacted Files Coverage Δ
lightning/src/chain/channelmonitor.rs 95.45% <ø> (ø)
lightning/src/ln/channelmanager.rs 87.71% <ø> (+3.75%) ⬆️
lightning/src/ln/functional_test_utils.rs 95.06% <ø> (ø)
lightning-invoice/src/utils.rs 83.69% <100.00%> (ø)
lightning/src/ln/functional_tests.rs 97.02% <100.00%> (+<0.01%) ⬆️
lightning/src/ln/onion_utils.rs 94.90% <0.00%> (-0.27%) ⬇️
lightning/src/ln/msgs.rs 89.70% <0.00%> (+0.94%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Comment on lines +7748 to +7752
let route = get_route(&nodes[0].node.get_our_node_id(), &nodes[0].net_graph_msg_handler.network_graph.read().unwrap(),
&nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
let payment_preimage = send_along_route(&nodes[0], route, &[&nodes[1]], 3_000_000).0;
let route = get_route(&nodes[1].node.get_our_node_id(), &nodes[1].net_graph_msg_handler.network_graph.read().unwrap(),
&nodes[0].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 3_000_000, 50, nodes[0].logger).unwrap();
Contributor

Could you use an expression using TEST_FINAL_CLTV in replace of the 50 magic number so that the calculation is clear?

Collaborator Author

No - this test relies on the CLTV diff being 50, as it assumes the RBF bump will happen at certain block heights. When TEST_FINAL_CLTV was increased, the test suddenly failed because RBF bumps didn't happen when they were "supposed" to.

Contributor

Not sure I follow. I'm just suggesting replacing 50 with TEST_FINAL_CLTV - 20 or TEST_FINAL_CLTV - SOME_CONST_DESCRIBING_THE_DIFF. Otherwise, it isn't clear where 50 is coming from.

Collaborator Author

Ah, we could define a new constant for it, but using TEST_FINAL_CLTV would be incorrect here: were TEST_FINAL_CLTV to change again, this test would become buggy/wrong. The test relies on exactly 50, irrespective of what other CLTV settings/constants are set to.

Contributor

Gotcha. So how was 50 calculated?

Collaborator Author

I think it's more the other way around: the CLTV delay was 50, and the test was written assuming the CLTV diff is 50, expecting RBF bumps on the timeline of an HTLC that times out in 50 blocks. It could be any value, but changing it would require rewriting the test.
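The reviewer's suggestion of naming the magic number could be sketched as below. The constant name is hypothetical (not from the PR); the point is that it deliberately does not derive from TEST_FINAL_CLTV, so later changes to that constant cannot silently break the RBF-bump timeline this test depends on.

```rust
// Hypothetical constant (not in the actual PR): the test's CLTV diff,
// kept independent of TEST_FINAL_CLTV on purpose.
const TEST_RBF_CLTV_DELTA: u32 = 50; // must stay exactly 50

fn main() {
    // The test schedules RBF bumps against an HTLC that times out in
    // exactly 50 blocks, so this value is fixed by construction.
    assert_eq!(TEST_RBF_CLTV_DELTA, 50);
}
```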

Contributor

Ah, I guess my confusion came from the fact that route_payment uses TEST_FINAL_CLTV and the updated comment in this test says, "(using a slightly lower CLTV delay to provide timely RBF bumps)". So I figured you just wanted route_payment with a different CLTV.

lightning/src/ln/channelmanager.rs (outdated, resolved)
lightning-invoice/src/utils.rs (outdated, resolved)
@TheBlueMatt force-pushed the 2021-05-fix-cltv-diff branch 2 times, most recently from eb7f00b to 2803794 on May 5, 2021 at 18:30
This increases the CLTV_CLAIM_BUFFER constant to 18, much better
capturing how long it takes to go on chain to claim payments.
This is also more in line with other clients, and the spec, which
sets the default CLTV delay in invoices to 18.

As a side effect, we have to increase MIN_CLTV_EXPIRY_DELTA, as
otherwise we are subject to an attack where someone can hold an
HTLC being forwarded long enough that we *also* close the channel
on which we received the HTLC.
Our enforced requirement for HTLC acceptance is that we have at
least HTLC_FAIL_BACK_BUFFER blocks before the HTLC expires. When we
receive an HTLC, the HTLC would be "already expired" if its
`cltv_expiry` is current-block + 1 (ie the next block could
broadcast the commitment transaction and time out the HTLC). From
there, we want an extra HTLC_FAIL_BACK_BUFFER in blocks, plus an
extra block or two to account for any differences in the view of
the current height before send or while the HTLC is transiting the
network.
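The arithmetic in the commit message can be sketched as follows. The relationship of HTLC_FAIL_BACK_BUFFER to the other constants is assumed here from the surrounding rust-lightning code, and the value of LATENCY_GRACE_PERIOD_BLOCKS is an assumption; CLTV_CLAIM_BUFFER reflects the value this PR sets.

```rust
// Values per this PR; LATENCY_GRACE_PERIOD_BLOCKS and the definition of
// HTLC_FAIL_BACK_BUFFER are assumed from the surrounding code.
const CLTV_CLAIM_BUFFER: u32 = 18;          // raised to 18 by this PR
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3; // assumed value
const HTLC_FAIL_BACK_BUFFER: u32 = CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS;

// An HTLC with cltv_expiry = current_height + 1 is "already expired";
// on top of that we require HTLC_FAIL_BACK_BUFFER blocks, plus a block
// or two of headroom for differing views of the current chain tip.
const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

fn main() {
    assert_eq!(HTLC_FAIL_BACK_BUFFER, 21);
    assert_eq!(MIN_FINAL_CLTV_EXPIRY, 24);
}
```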
@TheBlueMatt (Collaborator Author)

Squashed with no changes.

@ariard left a comment

Minor comments otherwise look good to me.

// Note that we fail if exactly HTLC_FAIL_BACK_BUFFER + 1 was used, so we need to add one for
// any payments to succeed. Further, we don't want payments to fail if a block was found while
// a payment was being routed, so we add an extra block to be safe.
pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

Do you think you could replace LATENCY_GRACE_PERIOD_BLOCKS, quite used with same purpose in other places?

Collaborator Author

Hmm, LATENCY_GRACE_PERIOD_BLOCKS is a bit different in meaning: it's the "time it may take our counterparty to download a new block, process it, and then do a full CS, RAA, CS, RAA dance with us". This, on the other hand, is a function of the math: we need HTLC_FAIL_BACK_BUFFER + 2 for any payments to succeed; we just add one more as headroom.


Yep, not a strong opinion there though the one more for headroom is blurring the distinction.

// Note that we fail if exactly HTLC_FAIL_BACK_BUFFER + 1 was used, so we need to add one for
// any payments to succeed. Further, we don't want payments to fail if a block was found while
// a payment was being routed, so we add an extra block to be safe.
pub const MIN_FINAL_CLTV_EXPIRY: u32 = HTLC_FAIL_BACK_BUFFER + 3;

// Check that our CLTV_EXPIRY is at least CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS,
// ie that if the next-hop peer fails the HTLC within
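A hedged sketch of the sanity bound the quoted comment describes, using constant values assumed from rust-lightning at the time (ANTI_REORG_DELAY = 6 and LATENCY_GRACE_PERIOD_BLOCKS = 3 are assumptions, not taken from this diff):

```rust
// Assumed constant values for illustration only.
const CLTV_CLAIM_BUFFER: u32 = 18;
const ANTI_REORG_DELAY: u32 = 6;
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;
const MIN_CLTV_EXPIRY_DELTA: u32 = 6 * 7;

fn main() {
    // The forwarding delta must leave enough blocks to claim on chain,
    // wait out a reorg, and absorb peer latency.
    assert!(MIN_CLTV_EXPIRY_DELTA
        >= CLTV_CLAIM_BUFFER + ANTI_REORG_DELAY + LATENCY_GRACE_PERIOD_BLOCKS);
}
```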

I think the comment about CHECK_CLTV_EXPIRY_SANITY_2 is slightly inaccurate. An adversary delaying an inbound claim (a.k.a. an offered HTLC from the counterparty's viewpoint) is giving us more time to come up with a preimage. I believe it should say outbound claim, and this interpretation seems to match the would_broadcast_at_height comment.

Collaborator Author

Ah, I think it was referring to "inbound from the attacker's PoV". I've updated the comments to not say inbound or outbound.

@@ -572,13 +572,16 @@ pub(crate) const MAX_LOCAL_BREAKDOWN_TIMEOUT: u16 = 2 * 6 * 24 * 7;
// i.e. the node we forwarded the payment on to should always have enough room to reliably time out
// the HTLC via a full update_fail_htlc/commitment_signed dance before we hit the
// CLTV_CLAIM_BUFFER point (we static assert that it's at least 3 blocks more).
-pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 6;
+pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6*7;
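A quick check of the new lower bound: 6*7 = 42 blocks, roughly seven hours at ~10-minute block intervals (the old 6*6 was 36 blocks).

```rust
const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 7;

fn main() {
    assert_eq!(MIN_CLTV_EXPIRY_DELTA, 42);
    // The 72-block (~12-hour) config default discussed in the review
    // stays comfortably above this implementation lower bound.
    assert!(72 >= MIN_CLTV_EXPIRY_DELTA);
}
```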

Should we modify the default config value, which is 72? What's the rationale for having a default config value that differs from the implementation lower bound, cautiousness?

Collaborator Author

The rationale is that I forgot to update the config :).

Collaborator Author

Oh, I take that back, the config looks good? The config default is 12 hours, not the 6*7 here; I don't think we need to change that.


I'm good with the cautiousness argument to justify config default of 12 hours :)

@ariard

ariard commented May 6, 2021

Code Review ACK 0ba727a

@TheBlueMatt TheBlueMatt merged commit d2955be into lightningdevkit:main May 6, 2021