swarm: Deprecate ConnectionHandler::{In,Out}boundOpenInfo #3268

Open · 8 of 9 tasks · Tracked by #2863

thomaseizinger opened this issue Dec 20, 2022 · 13 comments · May be fixed by #5242

Comments

@thomaseizinger
Contributor

thomaseizinger commented Dec 20, 2022

Currently, a user can specify an arbitrary payload type that is passed along with the opening of an inbound or outbound stream. In general, being able to pass user data along with an asynchronous operation can be helpful because one doesn't need to track this information locally. Furthermore, for some asynchronous operations (like connection dialing in the case of the relay and DCUtR protocols), it is very important that one can associate the initiated action 1-to-1 with the result of the asynchronous operation.

Streams, however, are completely interchangeable in libp2p once the protocol to be spoken on top of them has been negotiated. Thus, we should be able to remove `ConnectionHandler::InboundOpenInfo` and `ConnectionHandler::OutboundOpenInfo` entirely in favor of `ConnectionHandler`s tracking the user data in a FIFO queue.
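For illustration, a minimal sketch of that FIFO pattern, assuming a hypothetical handler and request type (this is not the actual `libp2p-swarm` API):

```rust
use std::collections::VecDeque;

// Hypothetical user data that would otherwise travel as `OutboundOpenInfo`.
struct PendingRequest {
    payload: Vec<u8>,
}

struct MyHandler {
    // User data for streams we have requested but that have not been
    // negotiated yet, in request order.
    pending: VecDeque<PendingRequest>,
}

impl MyHandler {
    fn request_stream(&mut self, req: PendingRequest) {
        // Queue the user data; elsewhere, ask the connection for a new stream.
        self.pending.push_back(req);
    }

    fn on_fully_negotiated_outbound(&mut self) {
        // Streams are interchangeable once the protocol is negotiated, so
        // pairing the new stream with the oldest pending request is sound.
        if let Some(req) = self.pending.pop_front() {
            let _ = req.payload; // drive the protocol on the new stream
        }
    }
}
```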

In fact, that is what we are doing within `swarm::Connection` anyway. Upon opening a new stream, we just associate it with the next upgrade from the list:

```rust
match muxing.poll_outbound_unpin(cx)? {
    Poll::Pending => {}
    Poll::Ready(substream) => {
        let (user_data, timeout, upgrade) = requested_substream.extract();

        negotiating_out.push(SubstreamUpgrade::new_outbound(
            substream,
            user_data,
            timeout,
            upgrade,
            *substream_upgrade_protocol_override,
        ));

        continue; // Go back to the top, handler can potentially make progress again.
    }
}
```

Tasks

1. send-it
2. send-it, trivial
3. send-it, trivial
4. difficulty:moderate, help wanted (dgarus)
5. send-it
6. decision-pending, difficulty:hard (thomaseizinger)
7. send-it
8. send-it
@thomaseizinger
Contributor Author

With the 👍 from @mxinden and @elenaf9, I am removing the decision-pending label and putting this up for grabs.

@thomaseizinger added the help wanted and priority:nicetohave labels and removed the decision-pending label Jan 18, 2023
@thomaseizinger self-assigned this Jan 20, 2023
@thomaseizinger
Contributor Author

Linking the original discussion here: #1709

The motivation for adding this was to properly deal with upgrade errors. Hence, I'd suggest that we proceed here once #2863 is implemented. Depending on the outcome of the discussion in #3353 (comment), this requirement will go away completely and we should be able to remove these associated types without harming existing use cases.

#1710 also mentions that there is no guarantee that a call to `listen_protocol` is followed by what is now `ConnectionEvent::FullyNegotiatedInbound` or `ConnectionEvent::ListenUpgradeError`. The only scenario I can think of where that happens is when the connection gets shut down in the meantime. In every other case, we pair these calls up.

Long-term, I'd like us to only expose a list of protocols from a `ConnectionHandler`. That will result in a much simpler interface, especially once #2831 is shipped as well.

I wonder what a good migration plan is from one interface to the other. It is quite a big change, so I'd want to offer a smooth migration path to our users.

@thomaseizinger
Contributor Author

One thing that just came to my mind that is easier with the current design:

Being able to pass state as part of creating a new stream allows `ConnectionHandler`s to simplify their state because you don't have to keep track of "how many streams have I requested and how many do I need?"

With the current design, you just drain a list of commands and open a new stream per command, passing along the data they need.
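To illustrate, a hedged sketch of that command-draining pattern; `Command` and `StreamRequest` are made-up names, and the event shape is simplified compared to the real `ConnectionHandlerEvent::OutboundSubstreamRequest`:

```rust
use std::collections::VecDeque;

enum Command {
    SendMessage(Vec<u8>),
}

// Stand-in for an outbound stream request that carries the command along
// as its open-info payload.
struct StreamRequest {
    open_info: Command,
}

struct Handler {
    commands: VecDeque<Command>,
}

impl Handler {
    // Called from the handler's `poll`: drain one command per call and turn
    // it into a stream request. No extra bookkeeping is needed because the
    // command is handed back once the stream is fully negotiated.
    fn poll_stream_request(&mut self) -> Option<StreamRequest> {
        self.commands
            .pop_front()
            .map(|cmd| StreamRequest { open_info: cmd })
    }
}
```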

@mxinden
Member

mxinden commented Mar 13, 2023

> because you don't have to keep track of "how many streams have I requested and how many do I need?"

Long term, `ConnectionHandler`s need to be able to receive backpressure on stream creation, i.e. they should limit the number of in-flight stream requests. Thus I would argue that they need this state anyway.
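A minimal sketch of that backpressure argument, assuming a hypothetical cap on in-flight requests (all names are illustrative):

```rust
struct Handler {
    in_flight: usize,
    max_in_flight: usize,
}

impl Handler {
    // The handler refuses to request more streams once the cap is reached,
    // which is exactly the per-request state a FIFO queue needs anyway.
    fn can_request_stream(&self) -> bool {
        self.in_flight < self.max_in_flight
    }

    fn on_stream_requested(&mut self) {
        self.in_flight += 1;
    }

    fn on_stream_negotiated_or_failed(&mut self) {
        // The request completed (successfully or not); free a slot.
        self.in_flight -= 1;
    }
}
```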

@thomaseizinger
Contributor Author

#3763 is a pretty classic refactoring that users will have to do. The new version is more verbose, but I think that for newcomers it is easier to understand because there is a smaller API surface to learn.

mergify bot pushed a commit that referenced this issue Apr 28, 2023
In a previous refactoring, we actually removed the usage of this open-info, so this is just removing dead code.

Related: #3268.

Pull-Request: #3762.
mergify bot pushed a commit that referenced this issue Apr 28, 2023
Instead of passing the command along, we store it in a buffer and retrieve it once the stream is upgraded.

Related: #3268.

Pull-Request: #3763.
mergify bot pushed a commit that referenced this issue Apr 28, 2023
As part of pushing #3268 forward, remove the use of `OutboundOpenInfo` from `libp2p-kad`.

Related: #3268.

Pull-Request: #3760.

@thomaseizinger added this to the Simplify ConnectionHandler trait milestone Sep 18, 2023
@thomaseizinger removed this from the Simplify ConnectionHandler trait milestone Sep 19, 2023
mergify bot pushed a commit that referenced this issue Oct 26, 2023
This patch refactors `libp2p-request-response` to not use the "upgrade infrastructure" provided by `libp2p-swarm`. Instead, we directly convert the negotiated streams into futures that read and write the messages.

Related: #3268.
Related: #2863.

Pull-Request: #3914.

Co-authored-by: Yiannis Marangos <yiannis@eiger.co>
mergify bot pushed a commit that referenced this issue Nov 22, 2023
This refactoring addresses several aspects of the current handler implementation:

- Remove the manual state machine for outbound streams in favor of using `async-await`.
- Use `oneshot`s to track the number of requested outbound streams.
- Use `futures_bounded::FuturesMap` to track the execution of a stream, thus applying a timeout to the entire request.

Resolves: #3130.
Related: #3268.
Related: #4510.

Pull-Request: #4901.
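For illustration, a hedged sketch of the `oneshot`-based tracking mentioned in the commit above, using `futures::channel::oneshot` (the handler and method names are hypothetical, and in the real handler the channel would carry the negotiated stream rather than `()`):

```rust
use std::collections::VecDeque;

use futures::channel::oneshot;

struct Handler {
    // One sender per requested-but-not-yet-negotiated outbound stream, so
    // the queue length doubles as the in-flight request count.
    pending_streams: VecDeque<oneshot::Sender<()>>,
}

impl Handler {
    fn request_stream(&mut self) -> oneshot::Receiver<()> {
        let (tx, rx) = oneshot::channel();
        self.pending_streams.push_back(tx);
        rx // the requester awaits this to learn that the stream is ready
    }

    fn on_fully_negotiated_outbound(&mut self) {
        // Complete the oldest pending request; FIFO pairing is sound because
        // negotiated streams are interchangeable.
        if let Some(tx) = self.pending_streams.pop_front() {
            let _ = tx.send(());
        }
    }
}
```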
umgefahren pushed a commit to umgefahren/rust-libp2p that referenced this issue Mar 8, 2024 (same patch as Pull-Request #3914 above).
@drHuangMHT
Contributor

A brief search around the codebase showed that there is no use case for the open info. Should this be added to the 0.54 milestone?

@thomaseizinger
Contributor Author

> A brief search around the codebase showed that there is no use case for the open info. Should this be added to the 0.54 milestone?

Yes, we could. We would have to issue a deprecation first to make upstream users aware that these associated types are going away.
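For context, a hedged sketch of what such a deprecation could look like, assuming `#[deprecated]` attributes on the associated types (the actual wording and mechanism in #5242 may differ):

```rust
// Trait body trimmed to the relevant items. Whether rustc emits deprecation
// warnings at every use site of an associated type is an assumption here.
trait ConnectionHandler {
    #[deprecated(note = "Track per-stream user data inside the handler, e.g. in a FIFO queue.")]
    type InboundOpenInfo;
    #[deprecated(note = "Track per-stream user data inside the handler, e.g. in a FIFO queue.")]
    type OutboundOpenInfo;
}
```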

@drHuangMHT linked a pull request Mar 18, 2024 that will close this issue