
Add distinct connections over Io #18

Merged: 3 commits merged into main from conns on Sep 6, 2022

Conversation

@mcches (Contributor) commented Sep 2, 2022

Real-world servers often use concurrency to process requests. Protocols
also lean on per-connection message ordering, e.g. TCP. Neither of these
features is available with the current `Io` implementation.

This change overlays connection support on top of `Io` by multiplexing
through an actor. A simple TCP-like state machine is used to initiate and
accept connections.
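
For illustration, here is a minimal sketch of the kind of TCP-like handshake state machine the description refers to; the state names and transitions are hypothetical and only show the shape of the idea, not the crate's actual implementation.

```rust
// Hypothetical sketch of a TCP-like connection state machine: an initiator
// sends a SYN-style open request, the acceptor acknowledges it, and both
// sides consider the connection established. Names are illustrative only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ConnState {
    Closed,
    SynSent,     // initiator has sent its open request
    SynReceived, // acceptor has seen the open request
    Established,
}

impl ConnState {
    /// Acceptor side: an incoming open request moves Closed -> SynReceived.
    fn on_syn(self) -> ConnState {
        match self {
            ConnState::Closed => ConnState::SynReceived,
            other => other,
        }
    }

    /// Initiator side: the acceptor's acknowledgement completes the handshake.
    fn on_syn_ack(self) -> ConnState {
        match self {
            ConnState::SynSent => ConnState::Established,
            other => other,
        }
    }
}
```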
@mcches (Contributor, Author) commented Sep 2, 2022

I'm looking for specific feedback on:

  • Idiomatic state management and error handling within the actor.
  • Connection API: mainly, is `io::Result` what we want?
  • How to apply "turmoil" to connections. Currently, delay and drop violate the connection semantics, so they have to be disabled. We likely want a toggle switch, and eventually a way to apply "turmoil" while respecting connections.

@LucioFranco (Member) left a comment

LGTM, left a few questions as I am still learning the codebase. My biggest nit is the number of `M: Message` bounds; I think you only need it in the impl for `Segment`, so a lot of them can be removed.

use tokio_stream::wrappers::ReceiverStream;

/// A connection between two hosts.
pub struct Connection<M: Message> {
Member:

Suggested change:

- pub struct Connection<M: Message> {
+ pub struct Connection<M> {

You don't need anything from the trait here; only the generic type parameter is needed on the struct.

Contributor Author:

TIL. Applying this suggestion throughout.
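
For reference, a small sketch of the pattern being suggested, using a hypothetical `Message` trait: the struct takes a bare generic parameter, and the trait bound lives only on the impl blocks that actually rely on it.

```rust
use tokio::sync::mpsc;

// Hypothetical stand-in for the crate's `Message` trait.
pub trait Message: Send + 'static {}

// No bound on the struct definition: the fields only need to name `M`.
pub struct Connection<M> {
    tx: mpsc::Sender<M>,
    rx: mpsc::Receiver<M>,
}

// The bound appears only where `Message` behavior is actually required.
impl<M: Message> Connection<M> {
    pub async fn send(&self, message: M) -> Result<(), mpsc::error::SendError<M>> {
        self.tx.send(message).await
    }

    pub async fn recv(&mut self) -> Option<M> {
        self.rx.recv().await
    }
}
```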


async fn event_loop(mut inner: Inner<M>) {
    loop {
        futures::select_biased! {
Member:

why are we using the futures one?

Contributor Author:

Purely cosmetic, to avoid writing the inline `biased;` before the branches.

I can link a TODO once something like this lands: tokio-rs/tokio#4910
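
For context, a minimal sketch of the `tokio::select!` form being weighed against `futures::select_biased!`; the channels and handlers here are hypothetical, and the point is only where the `biased;` keyword ends up.

```rust
use tokio::sync::mpsc;

// Hypothetical event loop: commands are always drained before inbound
// messages because `biased;` makes the branches poll top-to-bottom.
async fn event_loop(mut commands: mpsc::Receiver<String>, mut messages: mpsc::Receiver<String>) {
    loop {
        tokio::select! {
            biased;

            Some(cmd) = commands.recv() => {
                println!("command: {cmd}");
            }
            Some(msg) = messages.recv() => {
                println!("message: {msg}");
            }
            // Both channels closed: no branch matched, exit the loop.
            else => break,
        }
    }
}
```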

Contributor Author:

I swapped to `tokio::select!` with `biased`, but I end up busy-looping due to something with the fused nature of the unicycle component. I need to investigate this further.

by_idx: IndexMap<usize, ConnectionInfo>,

/// Connection message receivers
receivers: unicycle::IndexedStreamsUnordered<ReceiverStream<(ConnectionInfo, M)>>,
Member:

Could we use tokio's `StreamMap` here?

Contributor Author:

I looked at this earlier, but I can't recall why I didn't choose it. I'll swap it in and see how it looks... it could simplify the idx lookup bit.
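
As a rough sketch of what the `StreamMap` variant could look like (with made-up keys and message types): each yielded item is tagged with the key of the stream it came from, which is what could simplify the index lookup.

```rust
use tokio::sync::mpsc;
use tokio_stream::{wrappers::ReceiverStream, StreamExt, StreamMap};

#[tokio::main]
async fn main() {
    // Keyed by a connection id instead of a positional index.
    let mut receivers: StreamMap<u64, ReceiverStream<&'static str>> = StreamMap::new();

    let (tx, rx) = mpsc::channel(1);
    receivers.insert(7, ReceiverStream::new(rx));

    tx.send("hello").await.unwrap();
    drop(tx); // closing the sender ends the stream for key 7

    // Items arrive as (key, message); exhausted streams are removed
    // from the map automatically.
    while let Some((conn_id, msg)) = receivers.next().await {
        println!("connection {conn_id}: {msg}");
    }
}
```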

fn connect(&mut self, id: u64, dst: SocketAddr, notify: oneshot::Sender<Connection<M>>) {
    let info = ConnectionInfo { id, peer: dst };

    let (tx, rx) = mpsc::channel(1);
Member:

why size 1?

Contributor Author:

I always default to no buffering for consistent behavior. There is an interesting question here in that a slow consumer can back up the whole host. The tests I generally write don't have behavior like that; rather, we might see a slow host due to conditions we created in the network topology. Thoughts?
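
A tiny sketch of the behavior being traded off here, using standalone channels rather than the actor itself: with a capacity-1 `mpsc` channel, a slow consumer paces the producer, which is how a slow connection could back up the host.

```rust
use tokio::sync::mpsc;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Capacity 1 is the smallest buffer tokio's mpsc allows; the second
    // send waits until the consumer has drained the first message.
    let (tx, mut rx) = mpsc::channel::<u32>(1);

    let producer = tokio::spawn(async move {
        for i in 0..3 {
            tx.send(i).await.unwrap();
            println!("sent {i}");
        }
    });

    // A deliberately slow consumer: the producer is paced by these sleeps.
    while let Some(i) = rx.recv().await {
        sleep(Duration::from_millis(100)).await;
        println!("received {i}");
    }

    producer.await.unwrap();
}
```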

src/lib.rs (outdated)
mod connection;
pub use connection::Connection;
pub use connection::ConnectionIo;
pub use connection::Segment;
Member:

Not a blocker, but `pub use connection::{Connection, ConnectionIo, Segment};` is more idiomatic here.

Contributor Author:

sg

@carllerche (Member) left a comment

If it works, merge it 👍

@mcches merged commit aaaf069 into main on Sep 6, 2022
@mcches deleted the conns branch on September 6, 2022 at 21:25