Gossipsub Backoffs not stored in GossipSubRouter::sendPrune #367
Comments
That's not correct: we add the backoff before we invoke sendPrune, so it is always stored.
Ok, so for the calls in … My scenario is: we leave a topic and then immediately join it again. This will lead to many GRAFTs that are all invalid and get penalized because of the backoff.
Hrm, if we do leave the topic then it is not stored; the scenario of an immediate rejoin (within a minute) would indeed be problematic.
The issue occurred to me because I am currently implementing Gossipsub v1.1 in Rust for rust-libp2p and was thinking about how to handle this edge case. Since the spec is not really detailed enough for such concerns, I am looking at this repository for reference.
So mostly theoretical; still a valid concern, however.
Similar to the …
We do have a backoff for pruned peers, but it is not persisted if the peer leaves. I don't think there is much of a downside to persisting it, as it will be cleared when it expires. The one thing to be careful with would be excessive prune times, e.g. an attacker launching a gazillion peers to be pruned in order to stuff the backoff tracking.
On Sat, Oct 9, 2021, 06:59 Nishant Das wrote:
@vyzo Carrying on from #456, would there be any downside to introducing a backoff for pruned peers?
True, although that would require the attacker to be able to connect and fill up the local peer table unimpeded. If this is fine, I can open a PR to implement this.
Yeah, go for it. For attack mitigation I would just set a max backoff interval to observe, so that entries are always cleaned up reasonably soon.
Resolving since #473 was merged.
Every time we send a prune via `GossipSubRouter::sendPrune`, we include the backoff specified in our config. But we never actually add this backoff for the given peer and topic to our own data structure (we do add it in many other situations, but not in the situation where we send a `PRUNE`).

The result of this is that if we `PRUNE` a peer, we might try to re-`GRAFT` immediately (or shortly after), which will result in a penalty from the receiving peer because of the backoff it received. Therefore I assume this is a bug.

A scenario where this bug might occur heavily: we leave a topic (this calls sendPrune on all mesh peers of that topic) and then join it again shortly after.