Refactor trait Decodable (sketch) #1019

Conversation
I wanted to check how much effort implementing #997 (comment) would take. I haven't even reviewed the changes myself yet; I just changed the trait and then went one by one and fixed all the errors. Cons:
Pros:
Adjust code afterwards. This is to address #997.
```rust
#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
    Self::consensus_decode_from_finite_reader(d.take(MAX_MSG_SIZE as u64))
```
Oh, now I see that the previous code was using `MAX_VEC_SIZE`. I was wondering if messages on the wire are bound by the same limit as blocks.
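The bounding idea in the snippet above is the standard `Read::take` pattern: wrap an untrusted reader so that at most a fixed number of bytes can ever be pulled from it. A minimal sketch, where the `MAX_MSG_SIZE` constant is illustrative (rust-bitcoin defines its own limits such as `MAX_VEC_SIZE`):

```rust
use std::io::{self, Read};

// Hypothetical limit for illustration only; the crate's real constants
// (MAX_VEC_SIZE, maximum message size) differ.
const MAX_MSG_SIZE: u64 = 32;

// Read at most MAX_MSG_SIZE bytes from an untrusted reader; bytes beyond
// the limit are simply never pulled off the wire, so a malicious peer
// cannot force an unbounded read.
fn read_bounded<R: Read>(r: R) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    r.take(MAX_MSG_SIZE).read_to_end(&mut buf)?;
    Ok(buf)
}
```

The design choice being discussed is where this `take` call lives: in one outer entry point (`consensus_decode`) rather than scattered through every inner decoder.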
```rust
// Do not allocate upfront more items than if the sequence of type
// occupied roughly a quarter of a block. This should never be the case
// for normal data, but even if that's not true - `push` will just
// reallocate. This should prevent even malicious decoding requests
```
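The capped-preallocation idea in that comment can be sketched as follows; the cap value and function name here are illustrative, not the crate's:

```rust
// Hypothetical sanity cap for illustration; the PR derives its cap from
// a fraction of the block size.
const ALLOC_CAP: usize = 1024;

// Never reserve more capacity upfront than the cap, even if the
// (attacker-controlled) length prefix claims more. If the data really
// is that long, `push` reallocates as bytes actually arrive, so a lying
// length prefix alone cannot trigger a huge allocation.
fn decode_u8_vec(data: &[u8], claimed_len: usize) -> Vec<u8> {
    let mut v = Vec::with_capacity(claimed_len.min(ALLOC_CAP));
    for &b in data.iter().take(claimed_len) {
        v.push(b);
    }
    v
}
```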
This justification is kind of a stretch. But it is true that we can default to something lower now.
```rust
/// This function relies on `reader` being bound in amount of data
/// it returns for OOM protection. See [`Decodable::consensus_decode_from_finite_reader`].
fn consensus_decode_bytes_from_finite_reader<D: io::Read>(mut d: D, mut len: usize) -> Result<Vec<u8>, Error> {
    // there should never be any byte vectors longer than this,
```
Is this true?
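The chunked-read technique behind a function like this can be sketched as follows (chunk size and names are illustrative, not the crate's): allocation grows only as data actually arrives, so a lying length prefix against a short reader yields an error, not an OOM.

```rust
use std::io::{self, Read};

// Hypothetical chunk size for illustration.
const CHUNK: usize = 16;

// Read `len` bytes in bounded chunks. Capacity is reserved per chunk,
// never for the full claimed `len`, so an attacker claiming a huge
// length can at most waste one chunk's worth of allocation before
// `read_exact` fails on the exhausted reader.
fn read_bytes_chunked<R: Read>(mut r: R, len: usize) -> io::Result<Vec<u8>> {
    let mut buf = Vec::with_capacity(len.min(CHUNK));
    let mut remaining = len;
    while remaining > 0 {
        let take = remaining.min(CHUNK);
        let start = buf.len();
        buf.resize(start + take, 0);
        r.read_exact(&mut buf[start..])?;
        remaining -= take;
    }
    Ok(buf)
}
```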
```diff
@@ -412,6 +414,11 @@ impl Decodable for RawNetworkMessage {
            payload,
        })
    }

    #[inline]
    fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
```
Unnecessary. Since we're using `consensus_decode` on all inner types, they will enforce the limits there.
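The trait shape this review comment assumes can be sketched with a provided default method: the bounded entry point comes for free, so types whose fields are decoded via the finite-reader method need no `consensus_decode` override of their own. Names follow the PR; the `u8` impl and the `MAX` bound are illustrative only.

```rust
use std::io::{self, Read};

// Hypothetical bound for illustration.
const MAX: u64 = 4;

trait Decodable: Sized {
    // Implementors decode from a reader already known to be bounded...
    fn consensus_decode_from_finite_reader<D: Read>(d: D) -> io::Result<Self>;

    // ...and inherit the bounded entry point, which applies the limit
    // exactly once at the outermost call.
    fn consensus_decode<D: Read>(d: D) -> io::Result<Self> {
        Self::consensus_decode_from_finite_reader(d.take(MAX))
    }
}

impl Decodable for u8 {
    fn consensus_decode_from_finite_reader<D: Read>(mut d: D) -> io::Result<Self> {
        let mut b = [0u8; 1];
        d.read_exact(&mut b)?;
        Ok(b[0])
    }
}
```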
```rust
        }
    })
}

#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
```
Same here? Get rid of this, and use `consensus_decode` in `Self::consensus_decode_from_finite_reader` for the limit?
Replaced by #1023
Edit: Eh, this is backwards. 🤦 I started with somewhat different details, then reverted that idea and didn't realize the implications until I woke up today.

`consensus_decode_from_finite_reader` should just forward to `consensus_decode`, because in the general case decoding without a finite-length requirement is more general than decoding with it. Most types won't care, because they are finite and non-recursive. Only vecs and some other containers will override `consensus_decode_from_finite_reader` + `consensus_decode`.

Yeah. I'll redo it later today if I find time.
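The reversed default described here can be sketched as follows: `consensus_decode` is the required method, the finite-reader variant forwards to it by default, and only allocation-sensitive containers would override both. Names follow the PR; the `u8` impl is illustrative.

```rust
use std::io::{self, Read};

trait Decodable: Sized {
    // Most types implement only this.
    fn consensus_decode<D: Read>(d: D) -> io::Result<Self>;

    // Default: a decoder that works on any reader trivially works on a
    // finite one too, so finite and non-recursive types get this for free.
    // Only vecs and similar containers would override it to cap allocations.
    fn consensus_decode_from_finite_reader<D: Read>(d: D) -> io::Result<Self> {
        Self::consensus_decode(d)
    }
}

impl Decodable for u8 {
    fn consensus_decode<D: Read>(mut d: D) -> io::Result<Self> {
        let mut b = [0u8; 1];
        d.read_exact(&mut b)?;
        Ok(b[0])
    }
}
```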