
Gossipsub Backoffs not stored in GossipSubRouter::sendPrune #367

Closed
blacktemplar opened this issue Jul 30, 2020 · 12 comments · Fixed by #473

Labels: kind/bug (A bug in existing code, including security flaws), status/ready (Ready to be worked)

Comments

@blacktemplar

Every time we send a PRUNE via GossipSubRouter::sendPrune we include the backoff specified in the config here. But we never actually add this backoff for the given peer and topic to our own data structure (we do add it in many other situations, like here, but not when we send a PRUNE).

The result of this is that after we PRUNE a peer we might try to re-GRAFT it immediately (or shortly after), which will result in a punishment by the receiving peer because of the backoff it received. Therefore I assume this is a bug.

A scenario where this bug might occur heavily is if we leave a topic (this calls sendPrune on all peers of this topic) and then join it again shortly after.
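
For illustration, here is a minimal, self-contained Go sketch of the bookkeeping problem. The `router` type, the `backoff` map, `sendPrune`, and `canGraft` are simplified stand-ins chosen for this example, not go-libp2p-pubsub's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// router models only the backoff bookkeeping relevant to this issue.
type router struct {
	pruneBackoff time.Duration
	// backoff[topic][peer] = time until which we must not GRAFT that peer.
	backoff map[string]map[string]time.Time
}

// sendPrune as described above: the backoff is advertised to the remote
// peer in the PRUNE message, but nothing is recorded locally.
func (r *router) sendPrune(peer, topic string) {
	fmt.Printf("PRUNE -> %s (topic %s, backoff %s advertised, nothing stored)\n",
		peer, topic, r.pruneBackoff)
}

// canGraft consults only the locally stored backoff.
func (r *router) canGraft(peer, topic string) bool {
	until, ok := r.backoff[topic][peer]
	return !ok || time.Now().After(until)
}

func main() {
	r := &router{pruneBackoff: time.Minute, backoff: map[string]map[string]time.Time{}}

	// Leave the topic: a PRUNE goes out, but no backoff is stored locally.
	r.sendPrune("peerA", "blocks")

	// Rejoin immediately: the local check passes, so we GRAFT right away,
	// while peerA (which recorded the advertised backoff) penalizes us.
	if r.canGraft("peerA", "blocks") {
		fmt.Println("GRAFT -> peerA (too early: the remote peer still has us backed off)")
	}
}
```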

@vyzo
Collaborator

vyzo commented Jul 30, 2020

That's not correct; we add the backoff before we invoke sendPrune, so it is always stored.

@blacktemplar
Author

Ok, so for the calls in GossipsubRouter::Leave, where does the backoff get stored?

My scenario is: we leave a topic and then immediately join it again. This will lead to many GRAFTs that are all invalid and get punished because of the backoff.

@vyzo
Collaborator

vyzo commented Jul 30, 2020

Hrm, if we do leave the topic then it is not stored; the scenario of an immediate rejoin (within a minute) would indeed be problematic.
Do you see this occurring in practice, or is it a theoretical concern?

@blacktemplar
Author

I noticed this issue because I am currently implementing Gossipsub v1.1 in Rust for rust-libp2p and was thinking about how to handle this edge case. Since the spec is not detailed enough for such concerns, I am using this repository as a reference.

@vyzo
Collaborator

vyzo commented Jul 30, 2020

So mostly theoretical; still a valid concern, however.
We could store the backoffs when we leave the topic as a solution for this -- it would be straightforward to implement.
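
A rough sketch of that direction, extending the simplified `router` model from the first comment above (illustrative only, not the library's actual Leave implementation): record the backoff locally for every peer we PRUNE when leaving, so an immediate rejoin cannot GRAFT them too early.

```go
// leave PRUNEs every mesh peer for the topic and, per the suggestion above,
// records the backoff locally so an immediate re-Join will not GRAFT too soon.
func (r *router) leave(topic string, meshPeers []string) {
	for _, p := range meshPeers {
		r.sendPrune(p, topic)
		if r.backoff[topic] == nil {
			r.backoff[topic] = map[string]time.Time{}
		}
		r.backoff[topic][p] = time.Now().Add(r.pruneBackoff)
	}
}
```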

@jacobheun added the kind/bug and status/ready labels on Aug 14, 2020
@blacktemplar
Author

Similar to the GossipSubRouter::Leave problem: when joining a new topic, the backoffs for the peers we want to GRAFT are not checked (relevant if we unsubscribed from this topic a short time before).
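
Continuing the same simplified sketch, the join path would then need to consult the stored backoffs before grafting (again illustrative, not the library's actual Join code):

```go
// join selects peers to GRAFT for a topic, skipping any peer whose locally
// stored backoff has not yet expired (e.g. because we left the topic moments ago).
func (r *router) join(topic string, candidates []string) []string {
	var grafted []string
	for _, p := range candidates {
		if !r.canGraft(p, topic) {
			continue // still backed off from a recent PRUNE / Leave
		}
		grafted = append(grafted, p)
		fmt.Printf("GRAFT -> %s (topic %s)\n", p, topic)
	}
	return grafted
}
```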

@nisdas
Contributor

nisdas commented Oct 9, 2021

@vyzo Carrying on from #456, would there be any downside to introducing a backoff for pruned peers?

@vyzo
Collaborator

vyzo commented Oct 9, 2021 via email

@nisdas
Contributor

nisdas commented Oct 13, 2021

> The one thing to be careful with would be excessive prune times, eg an attacker launching a gazillion peers to be pruned and stuffing the backoff tracking.

True, although that would require the attacker to be able to connect and fill up the local peer table unimpeded. If this is fine, I can open a PR to implement this.
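
One way to keep that tracking bounded, sketched against the same simplified `router` model (illustrative, not the library's actual heartbeat code), is to drop expired entries periodically so pruned peers cannot grow the map without limit:

```go
// clearExpiredBackoffs drops backoff entries that have expired, so the map
// cannot grow without bound when many peers are pruned. Something like this
// would typically run from a periodic heartbeat.
func (r *router) clearExpiredBackoffs(now time.Time) {
	for topic, peers := range r.backoff {
		for p, until := range peers {
			if now.After(until) {
				delete(peers, p)
			}
		}
		if len(peers) == 0 {
			delete(r.backoff, topic)
		}
	}
}
```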

@vyzo
Collaborator

vyzo commented Oct 14, 2021 via email

@nisdas mentioned this issue on Feb 7, 2022
@nisdas
Contributor

nisdas commented Feb 7, 2022

Hey @vyzo, opened up #473 to fix this case.

@BigLep linked a pull request on Apr 26, 2022 that will close this issue
@BigLep

BigLep commented Apr 26, 2022

Resolving since #473 was merged.

@BigLep closed this as completed on Apr 26, 2022
Maintenance Priorities - Go automation moved this from Backlog to Done on Apr 26, 2022