
Splice draft (feature 62/63) #863

Draft · wants to merge 32 commits into master
Conversation

@rustyrussell (Collaborator) commented Apr 20, 2021

Based on #862 #869:

We use the interactive tx construction protocol to make a splice:

  1. Initiator pays for input and output, sets fees.
  2. You can do more than one, but you have to increase the feerate by >= 25% each time (see the sketch after this list).
  3. We use quiescence to pause the channel in a known state while negotiating.
  4. A simple channel_update tells people to ignore the apparent channel close, since we're splicing.
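
A minimal sketch of the feerate rule in item 2, assuming integer feerates (e.g. sat/kw); the function name and the exactly-25% boundary behaviour are my own illustration, not spec text:

    def splice_rbf_feerate_ok(previous_feerate: int, new_feerate: int) -> bool:
        """Accept a follow-up splice attempt only if it bumps the feerate by >= 25%."""
        # Equivalent to new_feerate >= previous_feerate * 1.25, kept in integers.
        return new_feerate * 4 >= previous_feerate * 5

    assert splice_rbf_feerate_ok(1000, 1250)        # exactly +25% passes
    assert not splice_rbf_feerate_ok(1000, 1249)    # anything less is rejected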

@rustyrussell (Collaborator, Author)

OK, many tweaks to wording, but in particular:

  • No more minimum_depth; after 6 confirmations we ack the splice. That fits well with gossip, since we only have 10 blocks before the old one is forgotten.
  • No reserve requirements if you don't pull funds out of channel.

@rustyrussell (Collaborator, Author)

OK, I reworked this on top of quiescence, and dropped the deterministic points which I am no longer convinced by.

@Kixunil commented Jun 7, 2021

I don't see a way to merge two existing channels into one. Is it being considered?

@t-bast (Collaborator) commented Jun 7, 2021

I don't see a way to merge two existing channels into one. Is it being considered?

AFAIK there's no "magic" trick but you can easily merge multiple channels. If you have N channels, just do a mutual close on N-1 of these channels and use the resulting utxos to splice into your last remaining channel. It does have an on-chain cost but there's no way around it (and once it's done, you have a single channel and you're good to go forever).
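
A rough sketch of that procedure, assuming a hypothetical node API with mutual_close, wait_for_confirmation and splice_in helpers (placeholders for illustration, not actual CLN/eclair/lnd RPCs):

    def merge_channels(node, channel_ids):
        """Merge N channels into the first one: close the other N-1 and splice
        the resulting closing outputs into the channel we keep."""
        keep = channel_ids[0]
        closing_utxos = []
        for cid in channel_ids[1:]:
            close_tx = node.mutual_close(cid)                    # cooperative close
            closing_utxos.append(node.wait_for_confirmation(close_tx))
        # One splice-in adds all the closing outputs to the remaining channel.
        return node.splice_in(keep, inputs=closing_utxos)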

@Kixunil commented Jun 7, 2021

It would be really nice to be able to do it in a single transaction. It also wouldn't need confirmation, because cheating is impossible in that case.

@t-bast (Collaborator) commented Jun 8, 2021

It won't be possible in a single transaction, every existing channel needs one transaction to close and they're completely independent of each other.

02-peer-protocol.md (outdated review thread)
Jump to message number 80 to avoid conflicts with `stfu` etc.
@ddustin (Contributor) commented Feb 13, 2024

The channel_reestablish logic isn't there either (because we'd need to rebase on top of the dual funding PR to be able to extend it).

I believe the only update we need is logic for re-sending splice_locked: 79bf5ae

Some of the wording still refers to funding_locked instead of channel_ready, which shows we'll need quite a rebase (but we should IMO wait for dual funding to be merged to avoid rebasing too frequently).

I believe this gets rid of the last reference: 10ee7df

Our todo list is now narrowed to:

@t-bast (Collaborator) commented Feb 19, 2024

I believe the batch size should be deterministic at each stage
The implied batch size logic is how we implemented it in CLN and works / passes the tests.

I think you're missing some tests then 😉
How do you handle concurrency between commit_sig in one direction and splice_locked in the other direction?
Specifically, this scenario (extracted from my old gist without the obsolete funding_txid field):

Initial active commitments:

   +------------+        +------------+
   | FundingTx1 |------->| FundingTx2 |
   +------------+        +------------+

   Alice                           Bob
     |                              |
     | splice_locked                | funding_txid = FundingTx2
     |----------------------------->|
     | update_add_htlc              |
     |----------------------------->|
     | commit_sig                   |
     |------------>                 |
     |                splice_locked | funding_txid = FundingTx2
     |              <---------------|
     |                   commit_sig | Bob doesn't know if Alice received splice_locked before or after sending commit_sig
     |                  ----------->| Without batch_size, Bob doesn't know to which funding transaction this commitment applies 
     | splice_locked                |
     |<--------------               |

Without batch_size, Bob doesn't know if he will receive:

  • two commit_sig messages (one for FundingTx1 and one for FundingTx2) indicating that Alice had not received splice_locked yet (in that case Bob can simply ignore the commit_sig for FundingTx1)
  • one commit_sig message (for FundingTx2) indicating that Alice had received splice_locked

Also, batch_size has been invaluable in identifying bugs in a lot of subtle edge cases.
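
To make the ambiguity above concrete, here is a small sketch (my own illustration, not spec text) of how a receiver could use batch_size to decide how many commit_sig messages to expect and which funding transactions they cover:

    def interpret_commit_sig_batch(active_fundings, batch_size):
        """active_fundings is ordered oldest-first, e.g. ["FundingTx1", "FundingTx2"].
        Returns the funding transactions the incoming commit_sig batch covers."""
        if batch_size == len(active_fundings):
            # Peer had not yet processed our splice_locked: one commit_sig per active
            # commitment; the ones for superseded fundings can simply be ignored.
            return list(active_fundings)
        if batch_size == 1:
            # Peer already considers the splice locked: the single commit_sig
            # applies to the latest funding transaction only.
            return [active_fundings[-1]]
        raise ValueError("batch_size inconsistent with the commitments we track")

    # Bob's two possible outcomes in the scenario above:
    assert interpret_commit_sig_batch(["FundingTx1", "FundingTx2"], 2) == ["FundingTx1", "FundingTx2"]
    assert interpret_commit_sig_batch(["FundingTx1", "FundingTx2"], 1) == ["FundingTx2"]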

Using method (1) would mean the reserve requirement is ignored until it is first met, as occurs with a traditional channel open.

The issue with this is how it composes with splice-out: if we go with option 1), do we allow our peer to splice out below their reserve? This is something that can be gamed to work around the reserve requirements.

That's why @morehouse suggested doing something that looks more like option 2), even though I'm not a fan of it at all (I tried implementing it, and it's really messy).

(Using funding id makes us more APO ready -> #863 (comment))

I don't think APO-readiness is a relevant argument: whenever we move to Eltoo there are so many things that will change that it's impossible to know how it will impact lightning sub-protocols like splicing. I don't think that should be something we consider at all, especially given the uncertainty around which kind of soft-fork will be added to bitcoin.

@t-bast (Collaborator) left a review comment:

I believe there is quite a bit of clean-up to do on this PR to remove references to old, obsolete parts of the proposal and to tidy everything up. We should also:

  • rebase the quiescence PR on top of master and squash it to a single commit
  • rebase splice on top of that quiescence PR and squash it to a single commit after cleaning it up to match the latest state of splice
  • then we can iterate to converge

It will make it much easier for newcomers to jump in and waste less time explaining obsolete details of earlier prototypes.

@@ -32,6 +32,8 @@ operation, and closing.
* [The `commitment_signed` Message](#the-commitment_signed-message)
* [Sharing funding signatures: `tx_signatures`](#sharing-funding-signatures-tx_signatures)
* [Fee bumping: `tx_init_rbf` and `tx_ack_rbf`](#fee-bumping-tx_init_rbf-and-tx_ack_rbf)
* [The `funding_locked` Message](#the-funding_locked-message)
Collaborator review comment:

nit: rebase leftover?

@@ -203,6 +205,7 @@ This message contains a transaction input.

The sending node:
- MUST add all sent inputs to the transaction
- MUST only send confirmed inputs
Collaborator review comment:

Why? This should be gated by the require_confirmed_inputs TLV on splice_init / splice_ack.

@@ -32,6 +32,8 @@ operation, and closing.
* [The `commitment_signed` Message](#the-commitment_signed-message)
* [Sharing funding signatures: `tx_signatures`](#sharing-funding-signatures-tx_signatures)
* [Fee bumping: `tx_init_rbf` and `tx_ack_rbf`](#fee-bumping-tx_init_rbf-and-tx_ack_rbf)
* [The `funding_locked` Message](#the-funding_locked-message)
* [Channel Quiescence](#channel-quiescence)
Collaborator review comment:

Can you add a link to the splicing section?

1. type: 80 (`splice`)
2. data:
* [`channel_id`:`channel_id`]
* [`chain_hash`:`chain_hash`]
Collaborator review comment:

Why a chain_hash parameter? This applies to an existing channel, so it applies to its chain?

Contributor review comment:

Makes sense to me, would love to get @rustyrussell's opinion on perhaps the next spec call.

Contributor review comment:

@rustyrussell "Yes that is redundant now"


### The `splice` Message

1. type: 80 (`splice`)
Collaborator review comment:

Can you add the require_confirmed_inputs TLV to splice and splice_ack?

Upon receipt of consecutive `tx_complete`s, each node:
- MUST fail negotiation if there is not exactly one input spending the current funding transaction.
- MUST fail negotiation if there is not exactly one output with zero value paying to the two funding keys (a.k.a. the new channel funding output)
- MUST calculate the channel capacity for each side:
Collaborator review comment:

That's not how it works anymore since we started using a signed funding contribution (that is, a delta of what each peer is adding or removing), so that section should be re-worked.

Comment on lines +579 to +1580
- MUST calculate the channel capacity for each side:
- Start with the previous balance
- Add that side's new inputs (excluding the one spending the current funding transaction)
- Subtracting each sides new outputs (except the zero-value one paying to the funding keys)
- Subtract the total fee that side is paying for the splice transaction.
- MUST replace the zero-value funding output amount with the total channel capacity.
Collaborator review comment:

This should be removed as well.

Comment on lines +1584 to +1585
- If either side has added an output other than the new channel funding output:
- MUST fail the negotiation if the balance for that side is less than 1% of the total channel capacity.
Collaborator review comment:

I don't think this is accurate either: what we want to capture here is that nodes cannot splice-out below their reserve, right? This should be based on the relative_amount in splice / splice_ack.

2. types:
1. type: 0 (`splice_info`)
2. data:
* [`channel_id`:`splice_channel_id`]
Collaborator review comment:

As previously discussed, I think we should rather use a batch_size here, because otherwise you cannot properly deal with concurrency issues between splice_locked and commit_sig and may deadlock.

We need to specify in which order commit_sigs are sent in that "batch": eclair currently sends the commit_sig matching the latest splice first, and the ones for older commitments afterwards.
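
A small sketch of that ordering, assuming each commitment is tracked locally with a funding index; only batch_size would go on the wire, the funding identifiers here are just bookkeeping for the illustration:

    def commit_sig_sending_order(commitments):
        """Return the commitments in the order the commit_sig messages are sent
        (latest splice first), each paired with the batch_size to attach."""
        batch_size = len(commitments)
        newest_first = sorted(commitments, key=lambda c: c["funding_index"], reverse=True)
        return [(c["funding_txid"], batch_size) for c in newest_first]

    assert commit_sig_sending_order([
        {"funding_txid": "FundingTx1", "funding_index": 0},
        {"funding_txid": "FundingTx2", "funding_index": 1},
    ]) == [("FundingTx2", 2), ("FundingTx1", 2)]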

@@ -1032,7 +1270,15 @@ A receiving node:
- MUST fail the channel.
- if any `htlc_signature` is not valid for the corresponding HTLC transaction OR non-compliant with LOW-S-standard rule <sup>[LOWS](https://github.com/bitcoin/bitcoin/pull/6769)</sup>:
- MUST fail the channel.
- MUST respond with a `revoke_and_ack` message.
- if there is not exactly one `commitsigs` for each splice in progress:
Collaborator review comment:

That should just be:

Suggested change:
  old: - if there is not exactly one `commitsigs` for each splice in progress:
  new: - if there is not exactly one `commit_sig` message for each splice in progress:

This is much clearer?

@ddustin (Contributor) commented Feb 21, 2024

I believe the batch size should be deterministic at each stage
The implied batch size logic is how we implemented it in CLN and works / passes the tests.

I think you're missing some tests then 😉 How do you handle concurrency between commit_sig in one direction and splice_locked in the other direction? Specifically, this scenario (extracted from my old gist without the obsolete funding_txid field):

Initial active commitments:

   +------------+        +------------+
   | FundingTx1 |------->| FundingTx2 |
   +------------+        +------------+

   Alice                           Bob
     |                              |
     | splice_locked                | funding_txid = FundingTx2
     |----------------------------->|
     | update_add_htlc              |
     |----------------------------->|
     | commit_sig                   |
     |------------>                 |
     |                splice_locked | funding_txid = FundingTx2
     |              <---------------|
     |                   commit_sig | Bob doesn't know if Alice received splice_locked before or after sending commit_sig
     |                  ----------->| Without batch_size, Bob doesn't know to which funding transaction this commitment applies 
     | splice_locked                |
     |<--------------               |

Without batch_size, Bob doesn't know if he will receive:

  • two commit_sig messages (one for FundingTx1 and one for FundingTx2) indicating that Alice had not received splice_locked yet (in that case Bob can simply ignore the commit_sig for FundingTx1)
  • one commit_sig message (for FundingTx2) indicating that Alice had received splice_locked

Ah, yes, you're absolutely right. Adding batch_size to the todo list.

Also, batch_size has been invaluable in identifying bugs in a lot of subtle edge cases.

Using method (1) would mean the reserve requirement is ignored until it is first met, as occurs with a traditional channel open.

The issue with this is how it composes with splice-out: if we go with option 1), do we allow our peer to splice out below their reserve? This is something that can be gamed to work around the reserve requirements.

That's why @morehouse suggested doing something that looks more like option 2), even though I'm not a fan of it at all (I tried implementing it, and it's really messy).

Sounds like a good thing to bring up on the spec call 📞

(Using funding id makes us more APO ready -> #863 (comment))

I don't think APO-readiness is a relevant argument: whenever we move to Eltoo there are so many things that will change that it's impossible to know how it will impact lightning sub-protocols like splicing. I don't think that should be something we consider at all, especially given the uncertainty around which kind of soft-fork will be added to bitcoin.

batch_size sounds good 👍

@ddustin (Contributor) commented Feb 21, 2024

I wonder if batch_size should only be attached to the first commit_sig message or be something that decrements.

It's not essential, but it feels messy that, say, the last commit_sig message could indicate batch_size 3 even though it is the final one.

@t-bast (Collaborator) commented Feb 22, 2024

I wonder if batch_size should only be attached to the first commit_sig message or be something that decrements.
It's not essential, but it feels messy that, say, the last commit_sig message could indicate batch_size 3 even though it is the final one.

You mean that we would instead use something like remaining_sigs, which we would decrement in each commit_sig? That would also work, but I don't see the point: it achieves exactly the same result with slightly more logic, not less.
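
For illustration (my own sketch, not from the thread), the two variants side by side; both let the receiver tell when the batch ends, the decrementing one just carries extra state:

    def batch_size_fields(n):
        return [n] * n                # constant: e.g. [3, 3, 3]

    def remaining_sigs_fields(n):
        return list(range(n, 0, -1))  # decrementing: e.g. [3, 2, 1]

    assert batch_size_fields(3) == [3, 3, 3]
    assert remaining_sigs_fields(3) == [3, 2, 1]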

Sounds like a good thing to bring up on the spec call 📞

Agreed, I think the reserve requirements and the edge cases they may create around splice-out (and combined splice-in and splice-out) are the main question that is still open for feedback, as we haven't really explored all of those edge cases yet! I won't be able to attend the next spec call, but I'll read the meeting notes to see the outcome of that brainstorm!

@ddustin (Contributor) commented Feb 26, 2024

Came up in the spec meeting: Should we be adding a channel_type upgrade path to the splice?

@t-bast (Collaborator) commented Feb 27, 2024

Came up in the spec meeting: Should we be adding a channel_type upgrade path to the splice?

This is somewhat orthogonal to splicing: channel_type upgrades can indeed leverage splicing, and it would be a very welcome addition, but it may conflict with other mechanisms for upgrading channel parameters. It would be nice to have one consistent mechanism for those upgrades instead of a mess of various upgrade paths.

But on the other hand, it makes splice consistent with open_channel, which makes a lot of sense, and would be very useful...I think I'm leaning towards adding the channel_type to splice, and leaving the upgrade of other parameters to a different protocol (for now, but that protocol could decide to build on top of splicing).

@ProofOfKeags (Contributor):

There are some key design considerations when it comes to changing the channel_type as part of the splicing process. At first glance it makes a lot of sense, but it does cause some thorny issues if you consider it in context with all of the other stuff going on in LN today.

The Dynamic Commitments proposal's main goal is to upgrade the channel_type parameter, specifically from pre-taproot channels to taproot channels. Like splicing, successfully doing this requires reanchoring the funding output.

The trouble is that if we were to just negotiate a taproot upgrade and then immediately broadcast the transaction that converts the funding output to the new segwit v1 output, we run into the problem wherein the new funding output is incompatible with our current gossip system. We are choosing to sidestep this by making it such that the output conversion transaction (referred to in the proposal as the "kickoff transaction") is not immediately broadcast. Like with splicing we can continue to use the channel when the kickoff hasn't confirmed yet, which in the extreme case doesn't have to confirm until the channel is closed!

Splices can't do this because a splice-in has other transaction inputs which may be double-spent to invalidate the splice transaction. (Note that if you are only splicing out, this is not the case!)

Dynamic commitment channel_type conversions can get away with this because there are no actions that can be taken that would invalidate them, making the signed but unconfirmed kickoff as secure as the commitment transactions themselves.

If you want to upgrade channel_type during the splicing process, there is no issue with that, but if we use the splicing protocol as the sole means of upgrading the channel_type and we also want that conversion to include taproot channels, we have to accept that the newly upgraded channels become unusable for routing until taproot gossip is widely deployed. For this reason, Dynamic Commitments cannot be based upon splicing without vastly complicating the codepath.

During my research into the different mechanisms that had to be balanced I did notice that the "ideal" protocol would be capable of renegotiating all channel parameters that were originally negotiated in the open/accept process. Splicing covers the situation where we want to change the amount of money in the channel. Dynamic commitments explicitly desires to exclude this to take advantage of the fact that it allows us to defer broadcast of the kickoff (the corollary of the splice transaction) indefinitely.

I do think that it is unfortunate that we can't really unify the renegotiation of all of open/accept under a single protocol but the reality is that the optimizations afforded by excluding the ability to add funds to the channel are so consequential that we can't really ignore them for the sake of symmetry in the protocol design.

In summary, if you want to add channel_type conversion to splicing, that would possibly provide some modest optimizations if you wanted to do bundled parameter changes, but it will not obviate the need for Dynamic Commitments.

@t-bast (Collaborator) commented Apr 10, 2024

The trouble is that if we were to just negotiate a taproot upgrade and then immediately broadcast the transaction that converts the funding output to the new segwit v1 output, we run into the problem wherein the new funding output is incompatible with our current gossip system.

Indeed, we would only use this after adding gossip support. FWIW I would have only enabled upgrading to taproot channels after adding support for taproot gossip, which makes this point moot. I think we should go ahead with taproot gossip as soon as possible to provide a clean path for channel upgrades.

making the signed but unconfirmed kickoff as secure as the commitment transactions themselves.

Note that this "kick-off" transaction would thus need anchor outputs, or some other way of mitigating pinning.

Dynamic commitments explicitly desires to exclude this to take advantage of the fact that it allows us to defer broadcast of the kickoff (the corollary of the splice transaction) indefinitely.

Won't we run into issues where dynamic commitments and splicing create incompatibilities between each other? If you have an unconfirmed / unlocked kick-off transaction, how do you subsequently splice on top of this?

The way I see it, splicing requires broadcasting a transaction that spends the current funding output. It makes it suitable for adding and removing funds into the channel, and upgrading the channel_type when it requires a transaction (which actually is the case for upgrading to taproot channels). Splicing is thus not a great tool for updating channel parameters that don't need a transaction, for example updating max_htlc_value_in_flight or doing a channel_type upgrade from static_remotekey to anchor_outputs. This is where I believe another mechanism would be useful, but it seems to me that this should be a mechanism that does not rely on spending the funding transaction.

To be honest, it seems to me that dynamic commitment upgrades as it is proposed today is rather a hack to enable updating channels to taproot before taproot gossip, instead of being a generally useful complement to splicing. That may be fine if it doesn't interfere negatively with splicing, but if it makes those upgraded channels incompatible with splicing, that is IMO an issue.

@ProofOfKeags (Contributor):

To be honest, it seems to me that dynamic commitment upgrades as it is proposed today is rather a hack to enable updating channels to taproot before taproot gossip.

Yes. That is its most important goal. I think it's a good one since gossip upgrades require the whole network to adopt the new code, rather than dynamic commitments which only requires the two nodes on either side of the taproot edge to adopt it to keep it in sync with the rest of the network.

instead of being a generally useful complement to splicing

It was never supposed to be a useful complement to splicing to my knowledge. It was always a separate goal of making upgrading to taproot channels possible in a broadly usable way prior to a widely deployed new gossip system.

how do you subsequently splice on top of this?

You have a few choices but they all have various sacrifices. One way or another you have to build the splice tx on top of the dyncomm kickoff, which would require a kickoff broadcast since the splice requires broadcast, or you rebuild the kickoff on top of the splice transaction which makes things a bit more complicated as you have to remember some extra context when doing chain monitoring and splicing.

but if it makes those upgraded channels incompatible with splicing

It isn't fundamentally incompatible by my estimation, but mixing them is a thorny engineering issue with some discussions to be had and it would be incompatible until those engineering issues are solved.

@t-bast (Collaborator) commented Apr 12, 2024

It was never supposed to be a useful complement to splicing to my knowledge. It was always a separate goal of making upgrading to taproot channels possible in a broadly usable way prior to a widely deployed new gossip system.

But then if that's the case, dynamic commitment upgrades doesn't seem like a good candidate for inclusion in the BOLTs, because once taproot gossip is deployed, upgrading to taproot channels will be more cleanly done using splicing? I don't think the BOLTs should contain complex protocols whose only goal is enabling something in the short term to work around a temporary limitation (lack of taproot gossip), because that creates technical debt. A cleaner path towards upgrading to taproot channels is to work more aggressively on taproot gossip and upgrade after that.

In my opinion, the BOLTs should contain:

  • one protocol for channel upgrades that require spending the current funding output: that should be splicing, because if you create a new funding transaction, you should also have the opportunity of taking funds in and out of the channel
  • one protocol for channel upgrades that don't require spending the current funding output: this would be used for upgrading commit txs to a new format that doesn't need to change the funding output (e.g. static_remotekey -> anchor_outputs) or for upgrading channel parameters (max_in_flight, dust_limit, etc)

Those protocols would complement each other nicely without creating technical debt.

That being said, I understand that lnd wants to be able to upgrade to taproot channels before taproot gossip is widely supported. But I believe lnd is the only implementation with that requirement, so it would make sense to have an lnd-only mechanism for that in the short term, and support for doing that upgrade using splicing in the longer term for compatibility with other implementations.

@Roasbeef (Collaborator) commented Apr 12, 2024

But then if that's the case, dynamic commitment upgrades doesn't seem like a good candidate for inclusion in the BOLTs, because once taproot gossip is deployed, upgrading to taproot channels will be more cleanly done using splicing?

It depends on your goal. If you don't care about forcing all users to do an on-chain transaction to migrate to the new output type, then ofc you can just have them all close their channels and re-open them. Today we have ~50k public channels; our goal is to be able to migrate the vast majority of them without having some flag day resulting in 100k transactions on chain.

Commitment upgrades that require an off-chain kick-off are just one of the upgrade types that dynamic commitments will enable.

Even if everyone had already implemented taproot gossip, I still think we'd pursue this path to avoid forcing pretty much every publicly routable mainnet channel to close. If we can allow the channels that have been open for years to pretty much never close, then why wouldn't we pursue that path? The interaction of splicing and the greater gossip network is still incomplete: the only way to handle the change over in lock step is by using an on-chain hint, but not everyone seems to be a fan of that approach.

The kick off upgrade mechanism is also the most direct path to enabling PTLCs across the network as well. Once upgraded a new channel update bit can be used to signal that the underlying channel actually supports the new payment type.

That being said, I understand that lnd wants to be able to upgrade to taproot channels before taproot gossip is widely supported.

As I mentioned above, I don't think those two events are intertwined. Even if we already had taproot gossip widely deployed, we wouldn't want to force the entire public network to close and re-open channels.

But I believe lnd is the only implementation with that requirement, so it would make sense to have an lnd-only mechanism for that in the short term, and support for doing that upgrade using splicing in the longer term for compatibility with other implementations

Nothing inherently makes the proposal lnd only. Though I understand we all have varying priorities, resources, and development road maps we're all concurrently pursuing.

I also don't think it's accurate to convey splicing, as designed today, as a more generic mechanism than it actually is. If you only want to change the funding output on a channel, then you really don't need the ability to dynamically add multiple inputs, multiple in-flight splices, the RBF mechanism, the ambiguity of settled balances, etc. Not to mention the pinning concerns that the current approach deems to be out of scope.

@Roasbeef (Collaborator) commented Apr 12, 2024

Splicing is thus not a great tool for updating channel parameters that don't need a transaction, for example updating max_htlc_value_in_flight or doing a channel_type upgrade from static_remotekey to anchor_outputs. This is where I believe another mechanism would be useful, but it seems to me that this should be a mechanism that does not rely on spending the funding transaction.

There's no requirement to spend the funding transaction to update those channel parameters. I think you might be misunderstanding some key concepts w.r.t. the proposal. Negotiation and execution are distinct. If I negotiate a channel type that needs to change the funding output, then during execution we'd need to handle creation + signing of that kick-off. If we negotiate a change to the dust param, then we'd use the existing HTLC update log semantics, and the next signature would cover a state that applies that new update.

Pure param updates would be similar to the way update_fee works today. By putting these messages into the commitment update log, we can also re-use all the existing retransmission semantics.
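
A hedged sketch of that idea (my own illustration, not the dynamic commitments spec): a parameter-change message rides the existing update log, like update_fee, and takes effect with the next commitment signature. UpdateDustLimit is a hypothetical message name:

    from dataclasses import dataclass

    @dataclass
    class UpdateFee:
        feerate_per_kw: int

    @dataclass
    class UpdateDustLimit:               # hypothetical parameter-update message
        dust_limit_satoshis: int

    def params_for_next_commitment(params, pending_updates):
        """Fold pending update-log entries into the channel parameters; the
        result is what the next commit_sig would sign."""
        new = dict(params)
        for msg in pending_updates:
            if isinstance(msg, UpdateFee):
                new["feerate_per_kw"] = msg.feerate_per_kw
            elif isinstance(msg, UpdateDustLimit):
                new["dust_limit_satoshis"] = msg.dust_limit_satoshis
        return new

    assert params_for_next_commitment(
        {"feerate_per_kw": 1000, "dust_limit_satoshis": 546},
        [UpdateDustLimit(354)],
    )["dust_limit_satoshis"] == 354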

@t-bast (Collaborator) commented Apr 15, 2024

Today we have ~50k public channels, our goal is to be able to migrate the vast majority of them without having some flag day resulting in 100k transactions on chain.

This will never happen as a flag day anyway, since people won't upgrade their lightning implementation and decide to migrate all of their channels at the same time. Implementations should provide tools to upgrade channels when the feerate is low enough, to smooth out this transition. Using an unconfirmed kick-off transaction just hides the problem in the short term, but doesn't fix it at all?

I also don't think it's accurate to convey splicing as designed today, as a more generic mechanism than it actually is. If you only want to change the funding output on a channel, then you really don't need: dynamic ability to add multiple inputs, have multiple in flight splices, the RBF mechanism, ambiguity of settled balances, etc, etc. Not to mention the pinning concerns that the current approach deems to be out of scope.

I don't see what you are referring to exactly in this vague comment: dynamic commitments without publishing the kick-off transaction create a much greater security risk regarding pinning than splicing...

The kick off upgrade mechanism is also the most direct path to enabling PTLCs across the network as well. Once upgraded a new channel update bit can be used to signal that the underlying channel actually supports the new payment type.

Not really, we can do something much simpler than that, there's no reason for a kick-off transaction here. We can modify commitment transactions without spending the current funding output.

There's no requirement to spend the funding transaction to update those channel parameters. I think you might be misunderstanding some key concepts w.r.t. the proposal. Negotiation and execution are distinct. If I negotiate a channel type that needs to change the funding output, then during execution we'd need to handle creation + signing of that kick-off. If we negotiate a change to the dust param, then we'd use the existing HTLC update log semantics, and the next signature would cover a state that applies that new update.

I agree, and that's exactly what I'm saying in my comment. Those kinds of upgrades are useful, and cannot be done cleanly with splicing. But what I'm arguing is that the protocol that enables such upgrades should never require spending the funding output, as anything that spends the funding output should instead be done on top of splicing to avoid the technical debt of two competing protocols.

@Roasbeef (Collaborator):

But what I'm arguing is that the protocol that enables such upgrades should never require spending the funding output, as anything that spends the funding output should instead be done on top of splicing to avoid the technical debt of two competing protocols.

Sure, that isn't how dynamic commitments is specified today. Negotiation and execution are distinct. You can update your dust limit without needing to spend the funding output.
