
Reduce MAX_INV_SZ so wtx invs fit within a network message #6643

Open · teor2345 wants to merge 1 commit into master

Conversation

@teor2345 (Contributor) commented May 7, 2023

MAX_INV_SZ is too large for a network message filled with wide transaction IDs:
https://zips.z.cash/zip-0239#deployment

50000 * 68 = 3400000 (3.4 MB)

But this is unlikely to happen with the default mempool config, because ZIP-401 limits the number of transactions in the mempool to 8,000.

@nuttycom nuttycom added A-networking Area: Networking code safe-to-build Used to send PR to prod CI environment labels May 9, 2023
@nuttycom (Contributor) left a comment

utACK

@daira daira self-requested a review May 9, 2023 19:47
@daira (Contributor) commented May 9, 2023

I don't understand the rationale. Why does this need to be changed, more precisely?

@teor2345 (Contributor, Author) commented May 9, 2023

> I don't understand the rationale. Why does this need to be changed, more precisely?

With the larger wtxid inv entries in ZIP-239, can zcashd send an inv message that is larger than the network message size limit?

In Zebra it was possible (but unlikely) if we got around 30,000 new v5 transactions in the mempool.

@daira (Contributor) left a comment

NACK. A node that sends an inv with more than MAX_INV_SZ entries will be treated as misbehaving:

zcash/src/main.cpp

Lines 6899 to 6904 in 7dc1f63

if (vInv.size() > MAX_INV_SZ)
{
LOCK(cs_main);
Misbehaving(pfrom->GetId(), 20);
return error("message inv size() = %u", vInv.size());
}

Therefore reducing MAX_INV_SZ is an incompatible p2p protocol change that could cause nodes unaware of the change to get banned unjustly.

(The "Changes requested" is to block merging for now, until we've discussed this.)

@daira (Contributor) commented May 9, 2023

> With the larger wtxid inv entries in ZIP-239, can zcashd send an inv message that is larger than the network message size limit?

If it can, this isn't the correct fix.

@ECC-CI ECC-CI removed the safe-to-build Used to send PR to prod CI environment label May 9, 2023
@daira (Contributor) commented May 9, 2023

We had a close look at this. It appears that it might technically be possible for zcashd to send a protocol message larger than the limit if there is a lot of mempool churn without pto->setInventoryTxToSend being flushed, but it's very unlikely in practice. This is a bug, but to fix it I think we need to introduce a separate constant on the sending side, rather than changing MAX_INV_SZ (i.e. follow Postel's "be conservative in what you send"). The main reason that's necessary is that MAX_INV_SZ is used for inv messages in general, and we can't infer that the ZIP-401 limit of 8,000 mempool transactions bounds all of them (not to mention that a node operator could increase -mempooltxcostlimit).

@teor2345 (Contributor, Author) commented May 9, 2023

> With the larger wtxid inv entries in ZIP-239, can zcashd send an inv message that is larger than the network message size limit?
>
> If it can, this isn't the correct fix.

I think an over-sized network message can happen if:

  • a user configures their nodes with increased ZIP-401 limits
  • there are over 30,000 transactions in the mempool (I haven't calculated the exact figure, it could be slightly lower)
  • another node sends a mempool request

It might also be possible for transaction gossips to reach this size, if the user increases the mempool limits. But I don't know how this is implemented in zcashd.

I'm happy to close this PR and leave the proper fix to you. I don't think I understand zcashd's internals well enough to come up with a correct fix.

For now, Zebra is going to impose a lower limit so we don't send over-sized messages, and that also means we'll temporarily disconnect from nodes that send over this limit. Happy to negotiate the exact limit and how it's implemented, but Zebra needs to get an initial fix in soon.

@teor2345 (Contributor, Author) commented May 9, 2023

> and that also means we'll temporarily disconnect from nodes that send over this limit

Actually, it turns out we need to split the inbound and outbound limits, so that's done. I think it's still possible we might disconnect from nodes that send a lot of inv entries due to our preallocation checks. I'll have a look.

@daira (Contributor) commented May 9, 2023

I propose setting the outbound limit to 30000. Then we have 68 * 30000 = 2040000 which is well within the limit of 2 * 1024 * 1024 = 2097152.

@str4d (Contributor) commented May 9, 2023

Just to make sure we're on the same page, the current effective limits for regular network requests are:

  • 50,000 MSG_BLOCK entries, which serialize to less than 2 MiB.
  • min(50,000 MSG_TX / MSG_WTX entries, the 2 MiB network message size limit)

An inv message can contain an arbitrary mix of CInv kinds, but that will just fall into one of the above two cases, depending on whether the inv message contains more than around 4,300 MSG_WTX entries (and the relative mix of entry kinds).

The problem with reducing MAX_INV_SZ is that it changes the effective network behaviour for the first case (advertising block hash inventory), and potentially for the second case as well (if reduced below around 30,800).
