
Refactor trait Decodable (sketch) #1019

Closed
wants to merge 1 commit into from

Conversation

dpc
Contributor

@dpc dpc commented May 28, 2022

Adjust code afterwards. This is to address #997.

Edit: Eh, this is backwards. 🤦 I started with somewhat different details, then reverted that idea and didn't realize the implications until I woke up today.

consensus_decode_from_finite_reader should just forward to consensus_decode, because decoding without a finite-length requirement is more general than decoding with it.

Most types won't care, because they are finite and non-recursive. Only Vecs and some other containers will override both consensus_decode_from_finite_reader and consensus_decode.
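
A minimal sketch of that shape (simplified signatures, with io::Error standing in for the crate's encode::Error):

use std::io;

pub trait Decodable: Sized {
    /// Decode from an arbitrary reader. Implementations that allocate based on
    /// untrusted lengths are expected to bound `d` themselves (e.g. via `Read::take`).
    fn consensus_decode<D: io::Read>(d: D) -> Result<Self, io::Error>;

    /// Decode from a reader already known to return a bounded amount of data.
    /// Most types are finite and non-recursive, so the default just forwards.
    fn consensus_decode_from_finite_reader<D: io::Read>(d: D) -> Result<Self, io::Error> {
        Self::consensus_decode(d)
    }
}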

Yeah. I'll redo later today if I find time.

@dpc
Contributor Author

dpc commented May 28, 2022

I wanted to check how much effort implementing #997 (comment) would be.

I haven't even reviewed the changes myself yet, just changed the trait and then went one by one and fixed all the errors.

Cons:

  • well, there's quite a lot of churn involved
  • how this trait works becomes more subtle
    • decodables that need OOM protection have to remember to override fn consensus_decode to wrap the reader in a Take (though this is not much more of a problem than the previous length checking); see the sketch below

Pros:

  • I guess it kind of works
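
To make that last con concrete, here is a minimal sketch (stand-in type, simplified signatures, and an illustrative constant rather than the crate's real ones) of the override a top-level message type has to remember; it mirrors the pattern the diff below applies:

use std::io::Read;

// Illustrative bound for the sketch, not the crate's actual constant.
const MAX_MSG_SIZE: u64 = 4_000_000;

// Stand-in type; real field decoding is elided.
struct RawNetworkMessage;

impl RawNetworkMessage {
    fn consensus_decode_from_finite_reader<D: Read>(_d: D) -> std::io::Result<Self> {
        // ... actual decoding elided ...
        Ok(RawNetworkMessage)
    }

    // The easy-to-forget part: bound the reader before delegating,
    // otherwise the OOM protection silently disappears.
    fn consensus_decode<D: Read>(d: D) -> std::io::Result<Self> {
        Self::consensus_decode_from_finite_reader(d.take(MAX_MSG_SIZE))
    }
}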

Adjust code afterwards. This is to address #997.

#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
    Self::consensus_decode_from_finite_reader(d.take(MAX_MSG_SIZE as u64))
}
Contributor Author

Oh, now I see that the previous code was using MAX_VEC_SIZE. I was wondering whether messages on the wire are bound by the same limit as a block.

// Do not allocate upfront more items than if the sequence of this type
// occupied roughly a quarter of a block. This should never be the case
// for normal data, but even if that's not true - `push` will just
// reallocate. This should prevent even malicious decoding requests
Contributor Author

This justification is kind of a stretch, but it is true that we can default to something lower now.
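
For reference, the allocation strategy the comment describes, as a minimal sketch (the constant and the helper are illustrative, not the crate's actual code):

const MAX_BLOCK_SIZE: usize = 4_000_000;

fn decode_vec_capped<T, R: std::io::Read>(
    mut reader: R,
    claimed_len: usize,
    mut decode_one: impl FnMut(&mut R) -> std::io::Result<T>,
) -> std::io::Result<Vec<T>> {
    // Never pre-allocate more than roughly a quarter of a block's worth of items;
    // a lying length prefix can only cost this much upfront, `push` grows the rest.
    let cap = (MAX_BLOCK_SIZE / 4) / std::mem::size_of::<T>().max(1);
    let mut out = Vec::with_capacity(claimed_len.min(cap));
    for _ in 0..claimed_len {
        // A short or malicious reader errors out here long before memory is exhausted.
        out.push(decode_one(&mut reader)?);
    }
    Ok(out)
}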

/// This function relies on `reader` being bound in the amount of data
/// it returns for OOM protection. See [`Decodable::consensus_decode_from_finite_reader`].
fn consensus_decode_bytes_from_finite_reader<D: io::Read>(mut d: D, mut len: usize) -> Result<Vec<u8>, Error> {
    // there should never be any byte vectors longer than this,
Contributor Author

Is this true?
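
Whether or not it is, reading in bounded chunks keeps the upfront allocation small either way; a minimal sketch (illustrative chunk size and helper name, not the PR's code):

use std::io::Read;

fn read_bytes_chunked<D: Read>(mut d: D, mut len: usize) -> std::io::Result<Vec<u8>> {
    const CHUNK: usize = 64 * 1024;
    // Even a wildly lying `len` only costs one chunk of pre-allocation.
    let mut out = Vec::with_capacity(len.min(CHUNK));
    while len > 0 {
        let take = len.min(CHUNK);
        let start = out.len();
        out.resize(start + take, 0);
        // A finite reader runs dry here instead of letting the vector grow unboundedly.
        d.read_exact(&mut out[start..])?;
        len -= take;
    }
    Ok(out)
}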

@@ -412,6 +414,11 @@ impl Decodable for RawNetworkMessage {
            payload,
        })
    }

    #[inline]
    fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
Contributor Author

@dpc dpc May 28, 2022

Unnecessary. Since we're using consensus_decode on all inner types, they will enforce the limits there.

            }
        })
    }

    #[inline]
    fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
Contributor Author

Same here? Get rid of this, and use consensus_decode inside Self::consensus_decode_from_finite_reader for the limit?

@dpc
Contributor Author

dpc commented May 29, 2022

Replaced by #1023

@dpc dpc closed this May 29, 2022