`consensus_decode` should take `&mut D`? #1020

Comments
I could be convinced. I dislike […] But you may be correct that the tendency to create giant […] In summary, I have no strong opinions here.

Indeed.
Actually, no. Scratch that. Wrong way to think about it. We need to think separately about the trait bound and the function signature. A trait bound […] However, `fn foo_ref(f: &mut Foo);` is more general from the caller's perspective than `fn foo_owned(f: Foo);`. It is sometimes impossible to use (call) […] And we know that we are only ever going to need […]

Because of the current needlessly strict calling convention, we are sometimes unable to call […] The above point might be very important, because even if the compiler can optimize away the runtime cost of […]

As for the looks and the amount of noise: apart from death and taxes, the one thing that's certain in this life is appeasing the Rust compiler, so we just have to accept it. :D
I remember some Rust devs saying […]
I'm also very convinced by the monomorphization argument.
Doesn't hurt to get a second opinion: https://users.rust-lang.org/t/taking-r-mut-r-vs-mut-r-r-where-r-io-read/76780
1fea098 Support unsized `R` and `W` in consensus encode/decode (Dawid Ciężarkiewicz)
a24a3b0 Forward `consensus_decode` to `consensus_decode_from_finite_reader` (Dawid Ciężarkiewicz)
9c754ca Take Writer/Reader by `&mut` in consensus en/decoding (Dawid Ciężarkiewicz)

Pull request description:

Fix #1020 (see more relevant discussion there).

This definitely reduces the amount of generic code the compiler has to generate, by avoiding generating the same functions for `R`, `&mut R`, `&mut &mut R` and so on.

old:

```
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 9947832 Jun  2 22:42 target/release/deps/bitcoin-07a9dabf1f3e0266
> strip target/release/deps/bitcoin-07a9dabf1f3e0266
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 4463024 Jun  2 22:46 target/release/deps/bitcoin-07a9dabf1f3e0266
```

new:

```
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 9866800 Jun  2 22:44 target/release/deps/bitcoin-07a9dabf1f3e0266
> strip target/release/deps/bitcoin-07a9dabf1f3e0266
> ls -al target/release/deps/bitcoin-07a9dabf1f3e0266
-rwxrwxr-x 1 dpc dpc 4393392 Jun  2 22:45 target/release/deps/bitcoin-07a9dabf1f3e0266
```

In the unit-test binary itself, it saves ~100KB of data.

I did not expect much performance gain, but it turns out I was wrong(*):

old:

```
test blockdata::block::benches::bench_block_deserialize            ... bench:   1,072,710 ns/iter (+/- 21,871)
test blockdata::block::benches::bench_block_serialize              ... bench:     191,223 ns/iter (+/- 5,833)
test blockdata::block::benches::bench_block_serialize_logic        ... bench:      37,543 ns/iter (+/- 732)
test blockdata::block::benches::bench_stream_reader                ... bench:   1,872,455 ns/iter (+/- 149,519)
test blockdata::transaction::benches::bench_transaction_deserialize ... bench:        136 ns/iter (+/- 3)
test blockdata::transaction::benches::bench_transaction_serialize   ... bench:         51 ns/iter (+/- 8)
test blockdata::transaction::benches::bench_transaction_serialize_logic ... bench:     5 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_size        ... bench:          3 ns/iter (+/- 0)
```

new:

```
test blockdata::block::benches::bench_block_deserialize            ... bench:   1,028,574 ns/iter (+/- 10,910)
test blockdata::block::benches::bench_block_serialize              ... bench:     162,143 ns/iter (+/- 3,363)
test blockdata::block::benches::bench_block_serialize_logic        ... bench:      30,725 ns/iter (+/- 695)
test blockdata::block::benches::bench_stream_reader                ... bench:   1,437,071 ns/iter (+/- 53,694)
test blockdata::transaction::benches::bench_transaction_deserialize ... bench:         92 ns/iter (+/- 2)
test blockdata::transaction::benches::bench_transaction_serialize   ... bench:         17 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_serialize_logic ... bench:     5 ns/iter (+/- 0)
test blockdata::transaction::benches::bench_transaction_size        ... bench:          4 ns/iter (+/- 0)
```

(*) I'm benchmarking on a noisy laptop, so take this with a grain of salt. But I think at least it doesn't make anything slower.

While doing all this manual labor, which will probably generate conflicts, I took the liberty of changing the generic type names and variable names to `r` and `R` (reader) and `w` and `W` (writer).

ACKs for top commit:
  RCasatta: ACK 1fea098 tested in downstream lib, space saving in compiled code confirmed
  apoelstra: ACK 1fea098

Tree-SHA512: bc11994791dc97cc468dc9d411b9abf52ad475f23adf5c43d563f323bae0da180c8f57f2f17c1bb7b9bdcf523584b0943763742b81362880206779872ad7489f
After taking a stab at #1019, it seems to me that `consensus_decode` should take `d: &mut D`, and not `d: D` as it currently does. The reason is that it is basically a wrapper around `Read::read`, which takes `&mut self`.

Instead of passing `&mut d` when descending into sub-decoding, the code could just pass `d` and it would work. Internally it would avoid creating `&mut &mut &mut &mut T`, which seems kind of undesirable, even if the compiler can probably get rid of these needless references.