
Persistent peer identity across restarts makes node unable to publish messages with gossipsub #5097

Open
gnattu opened this issue Jan 16, 2024 · 14 comments

@gnattu

gnattu commented Jan 16, 2024

Summary

When using the QUIC transport exclusively with a persistent identity, restarting one node leaves that node unable to publish messages: it gets an InsufficientPeers error, even though both the swarm and gossipsub report a connected peer.

For example:

PeerA and PeerB each have a static identity (and a peer ID derived from that identity), are connected to each other using the QUIC transport exclusively, and gossipsub can send messages in both directions.

Now PeerA restarts and connects to PeerB with the same identity. PeerA is unable to publish any message and always gets InsufficientPeers, even though swarm.connected_peers() and gossipsub.all_peers() both return PeerB as a connected peer. PeerB can still publish messages to PeerA.

To reproduce this, we can simply modify the chat example:

First we add ed25519_dalek as a direct dependency (its pkcs8 feature is needed for DecodePrivateKey), then add the following to the chat example:

// in header
use ed25519_dalek::{pkcs8::DecodePrivateKey, SigningKey};
...
// in main function:
let pem = std::fs::read_to_string("private_key.pem")?;
let mut signing_key_bytes = SigningKey::from_pkcs8_pem(&pem).unwrap().to_bytes();
let keypair = libp2p::identity::Keypair::ed25519_from_bytes(&mut signing_key_bytes)?;
...
// use keypair as the existing identity:
let mut swarm = libp2p::SwarmBuilder::with_existing_identity(keypair)
...
// comment out the TCP listener so we use QUIC exclusively:
//swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;

Then give PeerA and PeerB different private_key.pem files. We can use openssl to generate the pem:

openssl genpkey -algorithm Ed25519 -out private_key.pem

Make sure PeerA and PeerB use different private keys, then restart one of the clients; you will see Publish error: InsufficientPeers on the restarted node.
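The two per-peer keys can be generated up front and sanity-checked; peer_a.pem and peer_b.pem are hypothetical filenames here (each node would load its own copy as private_key.pem):

```shell
# Generate one Ed25519 key per peer so the derived peer IDs differ.
openssl genpkey -algorithm Ed25519 -out peer_a.pem
openssl genpkey -algorithm Ed25519 -out peer_b.pem

# Sanity check: the two keys must not be identical.
cmp -s peer_a.pem peer_b.pem && echo "same key" || echo "keys differ"
# expected: keys differ
```

Accidentally copying one pem to both peers would make them share a peer ID, which is a different failure mode from the one described in this issue.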

Expected behavior

The peer should be able to publish messages even after a restart.

Actual behavior

The restarted peer reports InsufficientPeers; the peer that was not restarted can still publish messages to the restarted peer.

Relevant log output

Enter messages via STDIN and they will be sent to connected peers using Gossipsub
Local node is listening on /ip4/127.0.0.1/udp/62392/quic-v1
Local node is listening on /ip4/192.168.125.145/udp/62392/quic-v1
mDNS discovered a new peer: 12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm
mDNS discovered a new peer: 12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm
test
Publish error: InsufficientPeers

Possible Solution

Current workarounds include:

  • Use TCP-based transports as a fallback.
  • Don't restart too quickly: shut down the node and wait ~10 seconds until the remote node has closed the connection, which can be detected via a swarm ConnectionError::IO event, then start the node again. Starting the node after the remote has closed the connection does not trigger InsufficientPeers.
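The second workaround can be sketched as an event-drain loop before shutdown. This is a stdlib-only illustration, not libp2p code: Event is a hypothetical stand-in for libp2p's SwarmEvent, and in a real node you would match on SwarmEvent::ConnectionClosed { .. } instead.

```rust
// Toy sketch of the "wait for the remote to close" workaround.
// `Event` is a hypothetical stand-in for libp2p's `SwarmEvent`.
#[derive(Debug)]
enum Event {
    Message(String),
    ConnectionClosed { peer: &'static str },
}

// Drain events until the given peer's connection is reported closed.
// Only after this returns true is it safe to restart: the remote will
// then perform a fresh handshake instead of reusing stale QUIC state.
fn wait_for_close(events: impl Iterator<Item = Event>, peer: &str) -> bool {
    for ev in events {
        if let Event::ConnectionClosed { peer: p } = ev {
            if p == peer {
                return true;
            }
        }
    }
    false
}

fn main() {
    let events = vec![
        Event::Message("test".into()),
        Event::ConnectionClosed { peer: "PeerB" },
    ];
    println!("remote closed: {}", wait_for_close(events.into_iter(), "PeerB"));
}
```

In a real node the loop would also need a timeout, since the ConnectionClosed event may never arrive if the remote is gone.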

Version

I tried "0.53.2" as well as running the chat example directly on master with 3e57cf494792031459d7f831408af24dbedc1a0b.

Would you like to work on fixing this bug?

Yes

@thomaseizinger
Contributor

This is very interesting. I don't have any resources to debug this but it is definitely something that shouldn't happen! :)

@gnattu
Author

gnattu commented Jan 17, 2024

From a little bit more tracing I found that handle_received_message in gossipsub::behaviour is never called, which means the peer is never added to its subscriptions. I also found that after the restart there are no Connection::poll: multistream_select::listener_select: logs handling the /meshsub/1.1.0 protocol.

Logs of a successful connection:

2024-01-17T03:22:18.122303Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/60591/quic-v1/p2p/12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP id=1 peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP}:Connection::poll: multistream_select::dialer_select: Dialer: Received confirmation for protocol protocol=/meshsub/1.1.0
2024-01-17T03:22:18.122434Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: New peer type found for peer peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP peer_type=Gossipsub v1.1
2024-01-17T03:22:18.122477Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/60591/quic-v1/p2p/12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP id=1 peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP}:Connection::poll: multistream_select::listener_select: Listener: confirming protocol protocol=/meshsub/1.1.0
2024-01-17T03:22:18.122499Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/60591/quic-v1/p2p/12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP id=1 peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP}:Connection::poll: multistream_select::listener_select: Listener: sent confirmed protocol protocol=/meshsub/1.1.0
2024-01-17T03:22:18.123424Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Handling subscriptions: [Subscription { action: Subscribe, topic_hash: TopicHash { hash: "test-net" } }] source=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP
2024-01-17T03:22:18.123535Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: SUBSCRIPTION: Adding gossip peer to topic peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP topic=test-net
2024-01-17T03:22:18.125323Z DEBUG Swarm::poll: libp2p_swarm: Connection established peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP endpoint=Listener { local_addr: "/ip4/0.0.0.0/udp/54087/quic-v1", send_back_addr: "/ip4/192.168.125.155/udp/60591/quic-v1" } total_peers=2

After the restart:

2024-01-17T03:23:16.564884Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/60591/quic-v1/p2p/12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP id=1 peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP}:Connection::poll: multistream_select::dialer_select: Dialer: Received confirmation for protocol protocol=/meshsub/1.1.0
2024-01-17T03:23:16.565016Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: New peer type found for peer peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP peer_type=Gossipsub v1.1
(no subscription is ever handled)

@thomaseizinger
Contributor

Does PeerA subscribe to the topic on startup? Are these the logs of PeerB?

@gnattu
Author

gnattu commented Jan 17, 2024

Both peers subscribe to the topic on startup, and these are the logs of the peer that restarted. I'm just using the chat example, which calls swarm.behaviour_mut().gossipsub.subscribe(&topic)?; right before swarm.listen_on("/ip4/0.0.0.0/udp/0/quic-v1".parse()?)?;

The node that did not restart handles the subscription normally (note the timestamps, which correspond to the restarted connection):

2024-01-17T03:23:16.562371Z DEBUG Swarm::poll: libp2p_swarm: Connection established peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm endpoint=Listener { local_addr: "/ip4/0.0.0.0/udp/60591/quic-v1", send_back_addr: "/ip4/192.168.125.155/udp/56322/quic-v1" } total_peers=3
2024-01-17T03:23:16.563669Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/56322/quic-v1 id=3 peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm}:Connection::poll: multistream_select::listener_select: Listener: confirming protocol protocol=/meshsub/1.1.0
2024-01-17T03:23:16.563960Z DEBUG new_established_connection{remote_addr=/ip4/192.168.125.155/udp/56322/quic-v1 id=3 peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm}:Connection::poll: multistream_select::listener_select: Listener: sent confirmed protocol protocol=/meshsub/1.1.0
2024-01-17T03:23:16.565055Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: New peer type found for peer peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm peer_type=Gossipsub v1.1
2024-01-17T03:23:16.566437Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Handling subscriptions: [Subscription { action: Subscribe, topic_hash: TopicHash { hash: "test-net" } }] source=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm

To ease understanding: in my setup, PeerA is the one that restarts, and PeerB is the one that does not.

@gnattu
Author

gnattu commented Jan 17, 2024

A few more observations:

PeerB, the one that did not restart, gets a timeout error a few seconds after handling the subscription:

2024-01-17T06:49:53.166552Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Handling subscriptions: [Subscription { action: Subscribe, topic_hash: TopicHash { hash: "test-net" } }] source=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm
2024-01-17T06:50:01.386577Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Starting heartbeat
2024-01-17T06:50:01.386842Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: HEARTBEAT: Mesh low. Topic contains: 0 needs: 5 topic=test-net
2024-01-17T06:50:01.386966Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: RANDOM PEERS: Got 0 peers
2024-01-17T06:50:01.387018Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Updating mesh, new mesh: {}
2024-01-17T06:50:01.387176Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: RANDOM PEERS: Got 0 peers
2024-01-17T06:50:01.387220Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Gossiping IHAVE to 0 peers
2024-01-17T06:50:01.387300Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Completed Heartbeat
2024-01-17T06:50:02.711905Z DEBUG Swarm::poll: libp2p_swarm: Connection closed with error IO(Custom { kind: Other, error: Connection(ConnectionError(TimedOut)) }): Connected { endpoint: Dialer { address: "/ip4/192.168.155.154/udp/49454/quic-v1/p2p/12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm", role_override: Dialer }, peer_id: PeerId("12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm") } total_peers=2
2024-01-17T06:50:03.905434Z DEBUG new_established_connection{remote_addr=/ip4/192.168.155.154/udp/49454/quic-v1 id=2 peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm}:Connection::poll: libp2p_gossipsub::handler: Failed to read from inbound stream: Failed to encode/decode message
2024-01-17T06:50:03.905531Z DEBUG new_established_connection{remote_addr=/ip4/192.168.155.154/udp/49454/quic-v1 id=2 peer=12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm}:Connection::poll: libp2p_gossipsub::handler: Inbound substream error while closing: Failed to encode/decode message
2024-01-17T06:50:03.905802Z DEBUG Swarm::poll: libp2p_swarm: Connection closed with error IO(Custom { kind: Other, error: Connection(ConnectionError(TimedOut)) }): Connected { endpoint: Listener { local_addr: "/ip4/0.0.0.0/udp/51851/quic-v1", send_back_addr: "/ip4/192.168.155.154/udp/49454/quic-v1" }, peer_id: PeerId("12D3KooWC1tB5J6W7NWwzXVFHZBggNLU1T5etuTtmpnf9WBQaJCm") } total_peers=1

But even after this, PeerB is still able to send messages to PeerA. PeerA, as usual, is not able to send messages to PeerB.

@gnattu
Author

gnattu commented Jan 17, 2024

I noticed that the errored endpoint is PeerA's listen port from before the restart, so I made PeerA's listen port static; now the log is spammed with stateless resets:

2024-01-17T06:56:25.806103Z DEBUG quinn_proto::endpoint: sending stateless reset for a4fe4561d700a2e4 to 192.168.155.154:51851
2024-01-17T06:56:25.806239Z DEBUG quinn_proto::endpoint: sending stateless reset for 14475ee2d7a9298e to 192.168.155.154:51851
2024-01-17T06:56:25.806366Z DEBUG quinn_proto::endpoint: sending stateless reset for b1a79fa094a21c6e to 192.168.155.154:51851
2024-01-17T06:56:25.806469Z DEBUG quinn_proto::endpoint: sending stateless reset for 9753551be2c2de5c to 192.168.155.154:51851
2024-01-17T06:56:25.806513Z DEBUG quinn_proto::endpoint: sending stateless reset for 2e0d8e1217041df1 to 192.168.155.154:51851
2024-01-17T06:56:25.880620Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:25.880714Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:25.880931Z DEBUG quinn_proto::endpoint: sending stateless reset for 5425e3c3f1af7f81 to 192.168.155.154:51851
2024-01-17T06:56:25.880971Z DEBUG quinn_proto::endpoint: sending stateless reset for 127452f6b4d4513d to 192.168.155.154:51851
2024-01-17T06:56:25.881124Z DEBUG quinn_proto::endpoint: sending stateless reset for e0afc6113b4ddf96 to 192.168.155.154:51851
2024-01-17T06:56:25.881240Z DEBUG quinn_proto::endpoint: sending stateless reset for 2bbbea7f0265a8b6 to 192.168.155.154:51851
2024-01-17T06:56:25.881285Z DEBUG quinn_proto::endpoint: sending stateless reset for 9d5ee50e895024bc to 192.168.155.154:51851
2024-01-17T06:56:25.881418Z DEBUG quinn_proto::endpoint: sending stateless reset for 17dba4402bc13c9e to 192.168.155.154:51851
2024-01-17T06:56:25.881688Z DEBUG quinn_proto::endpoint: sending stateless reset for d29d1471a9ec5473 to 192.168.155.154:51851
2024-01-17T06:56:25.882038Z DEBUG quinn_proto::endpoint: sending stateless reset for bc0929ae56886a4d to 192.168.155.154:51851
2024-01-17T06:56:26.028876Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:26.029063Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:26.031431Z DEBUG quinn_proto::endpoint: sending stateless reset for 215fccb219093ec7 to 192.168.155.154:51851
2024-01-17T06:56:26.031519Z DEBUG quinn_proto::endpoint: sending stateless reset for 0bf5005379e33a48 to 192.168.155.154:51851
2024-01-17T06:56:26.031868Z DEBUG quinn_proto::endpoint: sending stateless reset for a5a0f1f3c7a8f6b6 to 192.168.155.154:51851
2024-01-17T06:56:26.032090Z DEBUG quinn_proto::endpoint: sending stateless reset for 883f622e78cea9d9 to 192.168.155.154:51851
2024-01-17T06:56:26.032227Z DEBUG quinn_proto::endpoint: sending stateless reset for ead4057d16ec95dc to 192.168.155.154:51851
2024-01-17T06:56:26.032488Z DEBUG quinn_proto::endpoint: sending stateless reset for 50b65f3ed43b11d2 to 192.168.155.154:51851
2024-01-17T06:56:26.032592Z DEBUG quinn_proto::endpoint: sending stateless reset for c574dfe26139d6a9 to 192.168.155.154:51851
2024-01-17T06:56:26.032711Z DEBUG quinn_proto::endpoint: sending stateless reset for c6708282dd724b78 to 192.168.155.154:51851
2024-01-17T06:56:26.324990Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:26.325440Z DEBUG quinn_proto::endpoint: sending stateless reset for 124986f11f065992 to 192.168.155.154:51851
2024-01-17T06:56:26.326241Z DEBUG quinn_proto::endpoint: sending stateless reset for 6e2bbf27ce891daa to 192.168.155.154:51851
2024-01-17T06:56:26.326795Z DEBUG quinn_proto::endpoint: sending stateless reset for 412daa5e98b02778 to 192.168.155.154:51851

At this point, PeerA still cannot send messages to PeerB.

But a few seconds later, a GRAFT message arrives:

2024-01-17T06:59:11.387382Z DEBUG Swarm::poll: libp2p_gossipsub::behaviour: Handling GRAFT message for peer peer=12D3KooWRfa762rJY2GRwcckteCdB8hQBGYkiA1S9bxeERwpuejP

After that, PeerA is able to send messages back to PeerB again.

So with a static listen port and address, the connection eventually resumes because we receive a GRAFT; with random port assignment, the GRAFT never arrives. In neither case does the connection resume immediately on restart the way it does with TCP-based transports.

@gnattu
Author

gnattu commented Jan 17, 2024

Tracing it further, my educated guess is that when the old QUIC connection is not closed on PeerB, PeerA receives no new inbound substream even after it restarts, although QUIC can still negotiate and establish the connection. Because no new inbound substream arrives, ConnectionHandler::on_connection_event is never called, so gossipsub cannot handle the subscription (it relies on this event for its initial setup). It therefore has to wait for a GRAFT, which also never reaches PeerA if the listen address changed during the restart.
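As an illustration of that guess, here is a stdlib-only toy model, not gossipsub's actual code: if subscription bookkeeping only happens in the inbound-stream event handler, a peer that never sees that event stays out of the mesh and publishing fails, even though the connection itself is up.

```rust
use std::collections::HashSet;

// Toy model of the suspected failure mode. `Mesh` stands in for
// gossipsub's per-topic peer set; peers are only added when the
// inbound-stream event fires, mirroring how the subscription is
// handled via on_connection_event.
struct Mesh {
    peers: HashSet<&'static str>,
}

impl Mesh {
    fn new() -> Self {
        Mesh { peers: HashSet::new() }
    }

    // Called when a new inbound substream is fully negotiated.
    fn on_inbound_stream(&mut self, peer: &'static str) {
        self.peers.insert(peer);
    }

    // Publishing fails with "InsufficientPeers" while the mesh is
    // empty, even if the transport-level connection is established.
    fn publish(&self) -> Result<(), &'static str> {
        if self.peers.is_empty() {
            Err("InsufficientPeers")
        } else {
            Ok(())
        }
    }
}

fn main() {
    let mut fresh = Mesh::new();
    fresh.on_inbound_stream("PeerB"); // initial start: event fires once
    assert!(fresh.publish().is_ok());

    // After the restart the event never fires again, so the mesh stays
    // empty although the QUIC connection was re-established.
    let restarted = Mesh::new();
    assert_eq!(restarted.publish(), Err("InsufficientPeers"));
    println!("model matches observed behaviour");
}
```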

I observed that ConnectionEvent::FullyNegotiatedInbound is not fired on PeerA after a restart (it fires exactly once on the initial start). I don't know how to dig deeper from here. How is FullyNegotiatedInbound fired when using QUIC? Does the TCP handling do anything differently?

@dariusc93
Contributor

dariusc93 commented Jan 18, 2024

Sounds closely related to #4917 (the timeout was reduced in #4965) but seems to go deeper. I noticed something similar back then while connected over just QUIC, but assumed it was because the connection hadn't timed out on the other end.

EDIT: I can reproduce this easily as well, which is an odd behaviour.

@thomaseizinger
Contributor

To further isolate the issue, could somebody try reproducing this with just quinn itself? We don't have a lot of code on top of quinn to provide the QUIC transport, so it might be an upstream bug.

@dariusc93
Contributor

Has this been reproduced upstream yet? It would be interesting to know whether this is something within libp2p-quic or quinn.

@julienmalard

I have had what I believe may be the same issue when connecting a webrtc peer (A) to a rust-libp2p relay node (B). In my case, once A restarts, it is unable to receive new gossipsub messages from B, even though B appears in A's list of connected peers.
I am not very versed in Rust but would be willing to help debug this if possible.

@dariusc93
Contributor

@julienmalard could you provide some logs?

@julienmalard

Here's an example log. I could also rerun with debug logging enabled, but the log size becomes very large.

rust-peer % cargo run
   Compiling rust-libp2p-webrtc-peer v0.1.1 (/Users/julienmalard/atom/constl/cnct-rouille/rust-peer)
    Finished dev [unoptimized + debuginfo] target(s) in 17.58s
     Running `target/debug/rust-libp2p-webrtc-peer`
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Using existing identity from ./local_key
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Using existing certificate from ./cert.pem
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/192.168.0.100/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/127.0.0.1/udp/9091/quic-v1/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/192.168.0.100/udp/9091/quic-v1/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:20Z INFO  rust_libp2p_webrtc_peer] Listening on /ip4/192.168.0.100/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu
[2024-03-31T05:23:22Z INFO  webrtc::peer_connection] signaling state changed to have-remote-offer
[2024-03-31T05:23:22Z INFO  webrtc_ice::mdns] mDNS is using 0.0.0.0:5353 as dest_addr
[2024-03-31T05:23:22Z INFO  webrtc_mdns::conn] Looping and listening Ok(0.0.0.0:5353)
[2024-03-31T05:23:22Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Checking
[2024-03-31T05:23:22Z INFO  webrtc::peer_connection] ICE connection state changed: checking
[2024-03-31T05:23:22Z INFO  webrtc::peer_connection] signaling state changed to stable
[2024-03-31T05:23:22Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Connected
[2024-03-31T05:23:22Z INFO  webrtc::peer_connection] ICE connection state changed: connected
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:22Z INFO  webrtc::peer_connection] peer connection state changed: connected
[2024-03-31T05:23:22Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf
[2024-03-31T05:23:22Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf") }
[2024-03-31T05:23:22Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): f3668c7bf19999f91bb23ebab128bc24ced9eedbe59237bde11a4c152c255b6) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/123.0.6312.4 Safari/537.36", listen_addrs: [], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:22Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Pushed { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 920c4a99fc0b05e335ce06ce2ba3ef1225218155a4345bb6cf9af7d5cd7bd4) }, protocol_version: "/ipfs/0.1.0", agent_version: "rust-libp2p/0.44.2", listen_addrs: ["/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu", "/ip4/192.168.0.100/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg", "/ip4/127.0.0.1/udp/9091/quic-v1", "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg", "/ip4/192.168.0.100/udp/9091/quic-v1", "/ip4/127.0.0.1/tcp/9092", "/ip4/192.168.0.100/tcp/9092"], protocols: ["/libp2p/circuit/relay/0.2.0/hop", "/ipfs/ping/1.0.0", "/meshsub/1.0.0", "/universal-connectivity/lan/kad/1.0.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/meshsub/1.1.0"], observed_addr: "/ip4/127.0.0.1/udp/59732/webrtc-direct" } }
[2024-03-31T05:23:23Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr
[2024-03-31T05:23:23Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr") }
[2024-03-31T05:23:23Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 43ac4da8ae7eb852daa777750535b3ed71bd389108da8c24bf741ff18f0d) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49570/ws", "/ip4/192.168.0.100/tcp/49570/ws"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/49570/ws/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/192.168.0.100/tcp/49570/ws/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:24Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:24Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 43ac4da8ae7eb852daa777750535b3ed71bd389108da8c24bf741ff18f0d) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49570/ws", "/ip4/192.168.0.100/tcp/49570/ws", "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit", "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq") }
[2024-03-31T05:23:24Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 8e7f8283a9beb096fae2e9d23f362c49cb75e1709c5cf08a7e72859c76e4dc) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49574/ws", "/ip4/192.168.0.100/tcp/49574/ws"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:25Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/49574/ws/p2p/12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"
[2024-03-31T05:23:25Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/192.168.0.100/tcp/49574/ws/p2p/12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"
[2024-03-31T05:23:25Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:25Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:25Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"
[2024-03-31T05:23:25Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"
[2024-03-31T05:23:26Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 8e7f8283a9beb096fae2e9d23f362c49cb75e1709c5cf08a7e72859c76e4dc) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49574/ws", "/ip4/192.168.0.100/tcp/49574/ws", "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit", "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:26Z WARN  rust_libp2p_webrtc_peer] Connection to 12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq closed: Some(IO(Custom { kind: Other, error: Error(Right(Closed)) }))
[2024-03-31T05:23:26Z INFO  rust_libp2p_webrtc_peer] Removed 12D3KooWKQcqDrjszA3aXwhL6JX27z3CnWuu1uPemGwubJ87LtMq from the routing table (if it was in there).
[2024-03-31T05:23:27Z WARN  rust_libp2p_webrtc_peer] Connection to 12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr closed: Some(IO(Custom { kind: Other, error: Error(Right(Closed)) }))
[2024-03-31T05:23:27Z INFO  rust_libp2p_webrtc_peer] Removed 12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr from the routing table (if it was in there).
[2024-03-31T05:23:32Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Disconnected
[2024-03-31T05:23:32Z INFO  webrtc::peer_connection] ICE connection state changed: disconnected
[2024-03-31T05:23:32Z INFO  webrtc::peer_connection] peer connection state changed: disconnected
[2024-03-31T05:23:42Z INFO  webrtc::peer_connection] signaling state changed to have-remote-offer
[2024-03-31T05:23:42Z INFO  webrtc_ice::mdns] mDNS is using 0.0.0.0:5353 as dest_addr
[2024-03-31T05:23:42Z INFO  webrtc_mdns::conn] Looping and listening Ok(0.0.0.0:5353)
[2024-03-31T05:23:42Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Checking
[2024-03-31T05:23:42Z INFO  webrtc::peer_connection] ICE connection state changed: checking
[2024-03-31T05:23:42Z INFO  webrtc::peer_connection] signaling state changed to stable
[2024-03-31T05:23:42Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Connected
[2024-03-31T05:23:42Z INFO  webrtc::peer_connection] ICE connection state changed: connected
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:42Z INFO  webrtc::peer_connection] peer connection state changed: connected
[2024-03-31T05:23:42Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf
[2024-03-31T05:23:42Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf") }
[2024-03-31T05:23:42Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): f3668c7bf19999f91bb23ebab128bc24ced9eedbe59237bde11a4c152c255b6) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/123.0.6312.4 Safari/537.36", listen_addrs: [], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:43Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr
[2024-03-31T05:23:43Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr") }
[2024-03-31T05:23:43Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 43ac4da8ae7eb852daa777750535b3ed71bd389108da8c24bf741ff18f0d) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49585/ws", "/ip4/192.168.0.100/tcp/49585/ws"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:44Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/49585/ws/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:44Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/192.168.0.100/tcp/49585/ws/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:44Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:44Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:23:44Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:44Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"
[2024-03-31T05:23:44Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWENXsSgmKXse4hi77cmCeyKtpLiQWedkcgYeFsiQPnJRr"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 43ac4da8ae7eb852daa777750535b3ed71bd389108da8c24bf741ff18f0d) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=v21.1.0", listen_addrs: ["/ip4/127.0.0.1/tcp/49585/ws", "/ip4/192.168.0.100/tcp/49585/ws", "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit", "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:48Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/164.90.222.145/tcp/9092/p2p/12D3KooWJ7P1yeoxB5mq3TwQh8YgmVhankjtT4rsVGZPUyf617aR"
[2024-03-31T05:23:53Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWSCVw8HCc4hrkzfkEeJmVW2xfQRkxEreLzoc1NDTfzYFf"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): f3668c7bf19999f91bb23ebab128bc24ced9eedbe59237bde11a4c152c255b6) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/123.0.6312.4 Safari/537.36", listen_addrs: ["/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit", "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:23:57Z WARN  webrtc_ice::agent::agent_internal] [controlled]: Failed to close candidate udp4 prflx 127.0.0.1:59732 related :0: the agent is closed
[2024-03-31T05:23:57Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Failed
[2024-03-31T05:23:57Z INFO  webrtc::peer_connection] ICE connection state changed: failed
[2024-03-31T05:23:57Z INFO  webrtc::peer_connection] peer connection state changed: failed
[2024-03-31T05:23:59Z INFO  webrtc::peer_connection] signaling state changed to have-remote-offer
[2024-03-31T05:23:59Z INFO  webrtc_ice::mdns] mDNS is using 0.0.0.0:5353 as dest_addr
[2024-03-31T05:23:59Z INFO  webrtc_mdns::conn] Looping and listening Ok(0.0.0.0:5353)
[2024-03-31T05:23:59Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Checking
[2024-03-31T05:23:59Z INFO  webrtc::peer_connection] ICE connection state changed: checking
[2024-03-31T05:23:59Z INFO  webrtc::peer_connection] signaling state changed to stable
[2024-03-31T05:23:59Z INFO  webrtc_ice::agent::agent_internal] [controlled]: Setting new connection state: Connected
[2024-03-31T05:23:59Z INFO  webrtc::peer_connection] ICE connection state changed: connected
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z WARN  webrtc_dtls::handshake::handshake_message_client_hello] Unsupported Extension Type 0 35
[2024-03-31T05:23:59Z INFO  webrtc::peer_connection] peer connection state changed: connected
[2024-03-31T05:23:59Z INFO  rust_libp2p_webrtc_peer] Connected to 12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ
[2024-03-31T05:23:59Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Sent { peer_id: PeerId("12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ") }
[2024-03-31T05:23:59Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): fdbf7ee67948a1dce658a23734325245f4e463cb22a1da4a3242e36f224770) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/121.0.6167.57 Safari/537.36", listen_addrs: [], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }
[2024-03-31T05:24:10Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:24:10Z ERROR rust_libp2p_webrtc_peer] Failed to parse multiaddress
[2024-03-31T05:24:10Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ"
[2024-03-31T05:24:10Z INFO  rust_libp2p_webrtc_peer] Received address: "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit/p2p/12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ"
[2024-03-31T05:24:10Z INFO  rust_libp2p_webrtc_peer] BehaviourEvent::Identify Received { peer_id: PeerId("12D3KooWSttgAgjpP52LyhdTMJSMTJc69e6quN6BFrxxc2ekZnoZ"), info: Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): fdbf7ee67948a1dce658a23734325245f4e463cb22a1da4a3242e36f224770) }, protocol_version: "ipfs/0.1.0", agent_version: "libp2p/1.2.4 UserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/121.0.6167.57 Safari/537.36", listen_addrs: ["/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit", "/ip4/127.0.0.1/tcp/9092/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu/p2p-circuit"], protocols: ["/floodsub/1.0.0", "/ipfs/bitswap/1.0.0", "/ipfs/bitswap/1.1.0", "/ipfs/bitswap/1.2.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/libp2p/autonat/1.0.0", "/libp2p/circuit/relay/0.2.0/stop", "/libp2p/dcutr", "/meshsub/1.0.0", "/meshsub/1.1.0", "/webrtc-signaling/0.0.1"], observed_addr: "/ip4/127.0.0.1/udp/9090/webrtc-direct/certhash/uEiAWCRG30KUzLUuWYZ9HZNcFBHt8uLjLRB2xlLW_L_8lQg/p2p/12D3KooWKeJNXytogRbQPvYrFxXAaDGaJUfysrPeGbbEBDM78iTu" } }

@dariusc93
Contributor

dariusc93 commented May 8, 2024

I think the solution for now (at least for me and my use case) would be to use the following settings:

quic_config.max_idle_timeout = Duration::from_millis(300);
quic_config.keep_alive_interval = Duration::from_millis(100);

With these, when PeerB disconnects, PeerA tears down the stale connection within a short time span, so when PeerB comes back online and connects with the same identity (and even the same address), both peers are able to send messages to each other again. Not ideal, but it's the workaround I think I'll have to use as a default in my project for now (after first testing for any ill effects of such a short duration; so far there don't seem to be any, though I haven't tried it at a larger scale yet). Long term, this should be looked into: PeerA does drop the old connection when PeerB comes back online before the timeout kicks in, and even sends an event to the new handler to join the mesh (after the old connection drops due to the timeout), yet PeerB refuses to receive any RPC messages from PeerA. PeerB does receive packets from PeerA, so the transport itself works; I plan on testing outside the gossipsub scope. Using quinn directly, I wasn't able to reproduce this odd behaviour (maybe somebody else will have better luck than me).
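For context, a minimal sketch of how these settings might be wired in through libp2p's `SwarmBuilder` (assuming a recent rust-libp2p where `with_quic_config` takes a closure over `quic::Config`; the `keypair` and behaviour here are placeholders, and exact field types may differ between versions — check your version's docs):

```rust
use std::time::Duration;

// Hypothetical setup mirroring the workaround above: shrink the QUIC idle
// timeout and keep-alive interval so a stale connection from a restarted
// peer (same identity) is dropped quickly instead of lingering.
let mut swarm = libp2p::SwarmBuilder::with_existing_identity(keypair)
    .with_tokio()
    .with_quic_config(|mut cfg| {
        // Values from the comment above; tune after testing for ill effects.
        cfg.max_idle_timeout = 300; // milliseconds in some versions; may be a Duration in others
        cfg.keep_alive_interval = Duration::from_millis(100);
        cfg
    })
    .with_behaviour(|_key| MyBehaviour::default())?
    .build();
```

The trade-off is more keep-alive traffic and less tolerance for brief network stalls, so this is a mitigation rather than a fix for the underlying stale-connection handling.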
