
Miri support #937

Merged · 17 commits · Jun 30, 2020

Conversation

@divergentdave (Collaborator) commented Jan 24, 2020

This PR adds enough conditional compilation to support running the tests under Miri. So far, I have disabled use of the process's PID, turned off the body of try_lock(), and ignored the quickcheck tests, since they would take far too long to run. Miri is slow as tar, so it is only useful for spot-checking for unsoundness, not for a CI job or anything.
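
For a sense of what that gating looks like, here is a minimal sketch; the names are illustrative, not sled's actual items. Miri builds code with --cfg miri set, which is also why the cargo expand command further below passes it via RUSTFLAGS:

    // Illustrative sketch of cfg(miri) gating (not sled's actual code).

    // Gate real-PID usage off when running under Miri:
    #[cfg(not(miri))]
    fn current_pid() -> u32 {
        std::process::id()
    }

    #[cfg(miri)]
    fn current_pid() -> u32 {
        // Miri initially had no getpid shim; a fixed value is fine for tests.
        0
    }

    // Skip quickcheck-style tests under the interpreter entirely:
    #[cfg_attr(miri, ignore)]
    #[test]
    fn prop_tree_matches_reference_model() {
        /* ... */
    }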

Miri requires additional shims for mkdir and rmdir before this will work; I have changes in progress for that as well.

The first issue I've discovered so far is that crossbeam-epoch has an unsound use of std::mem::uninitialized, creating a std::mem::ManuallyDrop<T>. There is a PR at crossbeam-rs/crossbeam#458 that will fix this soon.

Relevant commands:

  • Run a test: cargo +nightly miri test --features=testing -- -Zmiri-disable-isolation -- bug_22
  • Install miri from source: cargo +nightly install --path . --force --locked --offline

Edit - updated commands:

  • Install miri from source (can use my merged-for-sled branch now): ./rustup-toolchain one time, then ./miri install
  • Run a test: cargo +miri miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks -- bug_22
  • Expanding macros, as seen by miri: RUSTFLAGS="--cfg miri" cargo +nightly expand --lib --tests serialization::qc

More commands:

  • Default test suite: cargo +miri miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks -- (set ulimit -v first or this may eat all your swap space)
  • Default test suite, with racing stripes: cargo +miri miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks -Zmiri-disable-stacked-borrows -Zmiri-disable-validation -- (takes ~1hr on my computer)
  • Try running a longer-running test: cargo +miri miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks -- --ignored tree_bug_01

@spacejam (Owner):

very cool work!!! I love this :)

@divergentdave (Collaborator, Author) commented Jan 26, 2020

I wrote many new shims for Miri to handle file system operations and other bits. See my merged-for-sled branch on my fork of miri for all the updates so far. I'm working on cleaning them up in parallel and submitting PRs to upstream.

Currently, when using these two branches and running the tree_bug_01 test case, I get the message "error: the evaluated program leaked memory", and it dumps a long list of crossbeam_epoch::deferred::Deferred allocations. Presumably these are all still hanging around because Miri is single-threaded and nothing has explicitly run to clean them up yet. Things look promising!

Edit: I re-ran the one test and captured the output before the leak report:

running 1 test
test tree_bug_01 ... ok

Success!

@divergentdave (Collaborator, Author):

Miri is very strict about leaks, and leak reports may be a fact of life with real-world programs. I think we're seeing reports from some combination of 'static variables, thread_local! variables, and lazy_static!s. With that said, I dialed down the number of reported leaks due to crossbeam structures by temporarily using a feature flag that makes its Bags smaller.
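
As a hedged illustration of why such statics show up (not sled's actual code): anything owned by a 'static value is never dropped, so its heap allocations are still live when the interpreter exits, and Miri's leak check reports them unless -Zmiri-ignore-leaks is passed.

    // Illustrative only; assumes lazy_static = "1" in Cargo.toml.
    use lazy_static::lazy_static;

    lazy_static! {
        // The Vec's heap buffer is owned by a static and never freed.
        // That is benign, but it is still allocated at process exit,
        // so Miri's leak check flags it.
        static ref CACHE: Vec<u8> = vec![0u8; 1024];
    }

    fn main() {
        println!("cache len: {}", CACHE.len());
    }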

For comparison, running cargo +nightly miri test --features=crossbeam-epoch/sanitize -- --verbose -- epoch from the MaybeUninit branch of crossbeam yields the following leak report items:

  • Two objects that have Bags, each with four pointers going out
  • Eight function closures from inside Deferred, held by the Bags
  • Three other items, at least one of which is the Collector and anything it holds in a Box or Arc

Since memory leaks are already covered by running LSan in CI, I won't pay too much attention to Miri leak reports going forward.

@spacejam (Owner):

In the experiments/epoch directory I've implemented a GC system that acts similarly to crossbeam-epoch but uses less unsafe code (basically all around Arc::into_raw/from_raw). If this seems like a good approach to you, I wonder whether dropping that in during tests would allow more of our codebase to be analyzed by Miri.
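
(For readers unfamiliar with that pattern: the unsafe surface is essentially the Arc::into_raw/Arc::from_raw round trip, which Miri can check precisely. A generic sketch of that round trip, not the actual experiments/epoch code, and ignoring the reader-protection problem a real epoch GC exists to solve:

    use std::sync::atomic::{AtomicPtr, Ordering};
    use std::sync::Arc;

    // Publish an Arc through an atomic pointer; the only unsafe code is
    // reconstructing the Arc from the raw pointer, exactly once.
    fn publish(slot: &AtomicPtr<u64>, value: Arc<u64>) {
        let raw = Arc::into_raw(value) as *mut u64;
        let old = slot.swap(raw, Ordering::AcqRel);
        if !old.is_null() {
            // Safety: `old` came from Arc::into_raw and nothing else will
            // reclaim it. Real concurrent readers would need epoch or
            // hazard-pointer protection before this drop.
            unsafe { drop(Arc::from_raw(old)) };
        }
    }

    fn main() {
        let slot = AtomicPtr::new(std::ptr::null_mut());
        publish(&slot, Arc::new(1));
        publish(&slot, Arc::new(2)); // drops the Arc holding 1
        let last = slot.swap(std::ptr::null_mut(), Ordering::AcqRel);
        if !last.is_null() {
            unsafe { drop(Arc::from_raw(last)) };
        }
    }

)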

@divergentdave (Collaborator, Author):

At this point, crossbeam isn't causing any problems during run time, and it's only contributing a small slice of static variables in the leak report at the end, but I'll keep that in mind.

@jonhoo commented Jan 29, 2020

@divergentdave Very cool work -- love it! I found this right after filing crossbeam-rs/crossbeam#464. Do you think the leak reports are actually misleading then?

jonhoo added a commit to jonhoo/flurry that referenced this pull request on Jan 29, 2020:

These are probably spurious; see crossbeam-rs/crossbeam#464 and spacejam/sled#937 (comment).

And besides, we're now running both the leak sanitizer and the address sanitizer.

@divergentdave (Collaborator, Author):

Just commented over there, but I think I'm seeing objects owned by containers with 'static lifetimes in the leak report. With that in mind, I don't think it's an issue for sled.

@RalfJung:

I love this, this is awesome. :) Once this works, I also know how to answer when people ask me "what would be a good benchmark for finding bottlenecks in Miri performance". :D

Let me know if there is any way I can help to make progress here.

@divergentdave (Collaborator, Author):

I got a backtrace out of this, from uninitialized data in an IoBufs buffer being written to the log. Re-running this test case did not reproduce the report, but I bet another hour of runtime could shake it out again.

Backtrace
error: Undefined Behavior: reading uninitialized memory at alloc35487946+e2, but this operation requires initialized memory
   --> /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fd.rs:103:13
    |
103 |             libc::write(self.fd, buf.as_ptr() as *const c_void, cmp::min(buf.len(), max_len()))
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ reading uninitialized memory at alloc35487946+e2, but this operation requires initialized memory
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: inside call to `std::sys::unix::fd::FileDesc::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fs.rs:814:9
    = note: inside call to `std::sys::unix::fs::File::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/fs.rs:670:9
    = note: inside call to `<std::fs::File as std::io::Write>::write` at /home/david/Code/sled/src/pagecache/parallel_io_polyfill.rs:88:15
    = note: inside call to `sled::pagecache::parallel_io_polyfill::pwrite_all` at /home/david/Code/sled/src/pagecache/iobuf.rs:664:13
    = note: inside call to `sled::pagecache::iobuf::IoBufs::write_to_log` at /home/david/Code/sled/src/pagecache/iobuf.rs:1096:29
    = note: inside call to closure at /home/david/Code/sled/src/lib.rs:224:29
    = note: inside call to `sled::threadpool::spawn::<[closure@DefId(34:815 ~ sled[c98a]::pagecache[0]::iobuf[0]::maybe_seal_and_write_iobuf[0]::{{closure}}[3]) 0:std::sync::Arc<sled::pagecache::iobuf::IoBufs>, 1:std::sync::Arc<sled::pagecache::iobuf::IoBuf>, 2:i64], ()>` at /home/david/Code/sled/src/pagecache/iobuf.rs:1095:23
    = note: inside call to `sled::pagecache::iobuf::maybe_seal_and_write_iobuf` at /home/david/Code/sled/src/pagecache/logger.rs:285:17
    = note: inside call to `sled::pagecache::logger::Log::reserve_inner::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/logger.rs:143:9
    = note: inside call to `sled::pagecache::logger::Log::reserve::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/mod.rs:1308:21
    = note: inside call to `sled::pagecache::PageCache::cas_page` at /home/david/Code/sled/src/pagecache/mod.rs:1042:13
    = note: inside call to `sled::pagecache::PageCache::replace` at /home/david/Code/sled/src/tree.rs:1440:23
    = note: inside call to `sled::tree::Tree::split_node` at /home/david/Code/sled/src/tree.rs:1647:17
    = note: inside call to `sled::tree::Tree::view_for_key::<&[u8]>` at /home/david/Code/sled/src/tree.rs:433:17
    = note: inside call to `sled::tree::Tree::remove_inner::<&[u8]>` at /home/david/Code/sled/src/tree.rs:415:9
note: inside call to `sled::tree::Tree::remove::<&[u8]>` at tests/test_tree_failpoints.rs:300:27
   --> tests/test_tree_failpoints.rs:300:27
    |
300 |                 fp_crash!(tree.remove(&*vec![0, k]));
    |                           ^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside call to `run_tree_crashes_nicely` at tests/test_tree_failpoints.rs:104:9
   --> tests/test_tree_failpoints.rs:104:9
    |
104 |         run_tree_crashes_nicely(ops.clone(), flusher)
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<[closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool], bool>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<bool, [closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
note: inside call to `std::panic::catch_unwind::<[closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool], bool>` at tests/test_tree_failpoints.rs:103:15
   --> tests/test_tree_failpoints.rs:103:15
    |
103 |       let res = std::panic::catch_unwind(|| {
    |  _______________^
104 | |         run_tree_crashes_nicely(ops.clone(), flusher)
105 | |     });
    | |______^
note: inside call to `prop_tree_crashes_nicely` at tests/test_tree_failpoints.rs:569:13
   --> tests/test_tree_failpoints.rs:569:13
    |
569 |       assert!(prop_tree_crashes_nicely(
    |  _____________^
570 | |         vec![
571 | |             Del(175),
572 | |             Del(19),
...   |
772 | |         false,
773 | |     ))
    | |_____^
note: inside call to `failpoints_bug_10` at tests/test_tree_failpoints.rs:566:1
   --> tests/test_tree_failpoints.rs:566:1
    |
566 | / fn failpoints_bug_10() {
567 | |     // expected to iterate over 50 but got 49 instead
568 | |     // postmortem 1:
569 | |     assert!(prop_tree_crashes_nicely(
...   |
773 | |     ))
774 | | }
    | |_^
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<[closure@tests/test_tree_failpoints.rs:566:1: 774:2] as std::ops::FnOnce<()>>::call_once - shim` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<fn() as std::ops::FnOnce<()>>::call_once - shim(fn())` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:517:5
    = note: inside call to `test::__rust_begin_short_backtrace::<fn()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:508:30
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<[closure@DefId(50:631 ~ test[81d1]::run_test[0]::{{closure}}[2]) 0:fn()] as std::ops::FnOnce<()>>::call_once - shim(vtable)` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/liballoc/boxed.rs:1017:9
    = note: inside call to `<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send> as std::ops::FnOnce<()>>::call_once` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:318:9
    = note: inside call to `<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>> as std::ops::FnOnce<()>>::call_once` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>, ()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<(), std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
    = note: inside call to `std::panic::catch_unwind::<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>, ()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:541:18
    = note: inside call to `test::run_test_in_process` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:450:39
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:475:13
    = note: inside call to `test::run_test::run_test_inner` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:505:28
    = note: inside call to `test::run_test` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:284:13
    = note: inside call to `test::run_tests::<[closure@DefId(50:230 ~ test[81d1]::console[0]::run_tests_console[0]::{{closure}}[2]) 0:&mut test::console::ConsoleTestState, 1:&mut std::boxed::Box<dyn test::formatters::OutputFormatter>]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/console.rs:280:5
    = note: inside call to `test::run_tests_console` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:120:15
    = note: inside call to `test::test_main` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:139:5
    = note: inside call to `test::test_main_static`
    = note: inside call to `main` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:67:34
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:52:73
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys_common/backtrace.rs:130:5
    = note: inside call to `std::sys_common::backtrace::__rust_begin_short_backtrace::<[closure@DefId(1:6034 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]::{{closure}}[0]) 0:&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:52:13
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<[closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<i32, [closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
    = note: inside call to `std::panic::catch_unwind::<[closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:51:25
    = note: inside call to `std::rt::lang_start_internal` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:67:5
    = note: inside call to `std::rt::lang_start::<()>`
    = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

error: aborting due to previous error

test failpoints_bug_10 ... 
error: could not compile `sled`.
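
The shape of this error is easy to reproduce in isolation. A minimal sketch of the bug class (not sled's actual code): a Vec whose length is raised past its initialized contents, followed by any read of those bytes.

    fn main() {
        let mut buf: Vec<u8> = Vec::with_capacity(64);
        // BUG (intentional, for illustration): the first 64 bytes were
        // never written, so the Vec now exposes uninitialized memory.
        unsafe { buf.set_len(64) };
        // Any read of those bytes is undefined behavior. Here it is a
        // checksum; in the backtrace above it is libc::write on the log
        // file. Miri aborts with "reading uninitialized memory ... but
        // this operation requires initialized memory".
        let checksum = buf.iter().fold(0u8, |acc, b| acc.wrapping_add(*b));
        println!("{}", checksum);
    }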

@divergentdave (Collaborator, Author) commented Mar 30, 2020

Here's a similar backtrace, this time with logging on.

Log and backtrace
TRACE main                 context    starting context
TRACE main                 pagecache  starting pagecache
DEBUG main                 snapshot   no previous snapshot found
TRACE main                 iterator   file len: 0 segment len 256 segments: 0
DEBUG main                 iterator   ordering before clearing tears: {}, max_header_stable_lsn: 0
DEBUG main                 iterator   in clean_tail_tears, found missing item in tail: None and we'll scan segments [] above lowest lsn 0
TRACE main                 iterator   no segments remaining to iterate over
DEBUG main                 iterator   filtering out segments after detected tear at lsn 0 lid 0
TRACE main                 iterator   trying to find the max stable tip for bounding batch manifests with segment iter IntoIter { inner: Item { opt: None } } of segments >= max_header_stable_lsn 0
TRACE main                 iterator   no segments remaining to iterate over
TRACE main                 iterator   found max stable tip: 0
TRACE main                 iterator   generated iterator over segments {} with lsn >= 0
TRACE main                 snapshot   building on top of old snapshot: Snapshot { last_lsn: 0, last_lid: 0, max_header_stable_lsn: 0, pt: [] }
TRACE main                 iterator   no segments remaining to iterate over
TRACE main                 snapshot   generated new snapshot: Snapshot { last_lsn: 0, last_lid: 0, max_header_stable_lsn: 0, pt: [] }
TRACE main                 segment    initialized self.segments to []
TRACE main                 segment    initialized self.ordering to {}
TRACE main                 iobuf      starting IoBufs with next_lsn: 0 next_lid: 0
TRACE main                 segment    evaluating free list {} in SA::next
TRACE main                 segment    advancing file tip from 0 to 256
TRACE main                 segment    setting Segment to Active with new lsn 0
DEBUG main                 segment    segment accountant returning offset: 0 on deck: {}
DEBUG main                 iobuf      starting log at clean offset 0, recovered lsn 0
DEBUG main                 iobuf      storing lsn 0 in beginning of buffer
DEBUG main                 blob_io    gc_blobs removing any blob with an lsn above -1
DEBUG main                 pagecache  load_snapshot loading pages from 0..0
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  allocating pid 0 for the first time
TRACE main                 pagecache  cas_page called on pid 0 to Meta(Meta { inner: {} }) with old ts 0
TRACE main                 pagecache  cas_page on pid 0 has log kind: Replace
TRACE main                 logger     reserving buf of len 8
TRACE main                 logger     reserved 8 bytes at lsn 20 lid 20
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 pagecache  cas_page succeeded on pid 0
TRACE main                 segment    mark_link pid 0 at cache info CacheInfo { ts: 1, lsn: 20, pointer: Inline(20), log_size: 8 }
TRACE main                 segment    mark_replace pid 0 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 20, pointer: Inline(20), log_size: 8 } with lsn 0
TRACE main                 pagecache  accessed pid 0 -> paging out pids []
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  allocating pid 1 for the first time
TRACE main                 pagecache  cas_page called on pid 1 to Counter(0) with old ts 0
TRACE main                 pagecache  cas_page on pid 1 has log kind: Replace
TRACE main                 logger     reserving buf of len 9
TRACE main                 logger     reserved 9 bytes at lsn 28 lid 28
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 pagecache  cas_page succeeded on pid 1
TRACE main                 segment    mark_link pid 1 at cache info CacheInfo { ts: 1, lsn: 28, pointer: Inline(28), log_size: 9 }
TRACE main                 segment    mark_replace pid 1 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 28, pointer: Inline(28), log_size: 9 } with lsn 0
TRACE main                 pagecache  accessed pid 1 -> paging out pids []
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  pagecache started
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  allocating pid 2 for the first time
TRACE main                 pagecache  cas_page called on pid 2 to Node(Node { next: None, lo: [], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Leaf(Leaf { keys: [], values: [] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 2 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 37 lid 37
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 pagecache  cas_page succeeded on pid 2
TRACE main                 segment    mark_link pid 2 at cache info CacheInfo { ts: 1, lsn: 37, pointer: Inline(37), log_size: 16 }
TRACE main                 segment    mark_replace pid 2 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 37, pointer: Inline(37), log_size: 16 } with lsn 0
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 meta       allocated pid 2 for leaf in new_tree for namespace [95, 95, 115, 108, 101, 100, 95, 95, 100, 101, 102, 97, 117, 108, 116]
TRACE main                 pagecache  allocating pid 3 for the first time
TRACE main                 pagecache  cas_page called on pid 3 to Node(Node { next: None, lo: [], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Index(Index { keys: [[]], pointers: [2] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 3 has log kind: Replace
TRACE main                 logger     reserving buf of len 18
TRACE main                 logger     reserved 18 bytes at lsn 53 lid 53
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 pagecache  cas_page succeeded on pid 3
TRACE main                 segment    mark_link pid 3 at cache info CacheInfo { ts: 1, lsn: 53, pointer: Inline(53), log_size: 18 }
TRACE main                 segment    mark_replace pid 3 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 53, pointer: Inline(53), log_size: 18 } with lsn 0
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
DEBUG main                 meta       allocated pid 3 for root of new_tree [95, 95, 115, 108, 101, 100, 95, 95, 100, 101, 102, 97, 117, 108, 116]
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  cas_page called on pid 0 to Meta(Meta { inner: {[95, 95, 115, 108, 101, 100, 95, 95, 100, 101, 102, 97, 117, 108, 116]: 3} }) with old ts 1
TRACE main                 pagecache  cas_page on pid 0 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 71 lid 71
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 blob_io    successfully wrote blob at "/dev/shm/pagecache.tmp.446285931939910791967784868773889/blobs/71"
TRACE main                 pagecache  cas_page succeeded on pid 0
TRACE main                 segment    mark_link pid 0 at cache info CacheInfo { ts: 2, lsn: 71, pointer: Blob(71, 71), log_size: 16 }
TRACE main                 segment    mark_replace pid 0 from cache infos [CacheInfo { ts: 1, lsn: 20, pointer: Inline(20), log_size: 8 }] to cache info CacheInfo { ts: 2, lsn: 71, pointer: Blob(71, 71), log_size: 16 } with lsn 0
TRACE main                 segment    removing pid 0 at lsn 0 from segment Active(Active { lsn: 0, rss: 67, deferred_replaced_rss: 0, deferred_replaced_pids: {}, pids: {0, 1, 2, 3}, latest_replacement_lsn: 0, can_free_upon_deactivation: {}, deferred_rm_blob: {} })
TRACE main                 pagecache  accessed pid 0 -> paging out pids []
TRACE main                 pagecache  getting page iter for META
TRACE main                 iobuf      sealed iobuf with lsn 0
DEBUG main                 iobuf      advancing offset within the current segment from 0 to 87
TRACE main                 iobuf      asynchronously writing iobuf with lsn 0 to log from maybe_seal
TRACE main                 iobuf      write_to_log log_offset 0 lsn 0 len 87
DEBUG main                 iobuf      wrote lsns 0-86 to disk at offsets 0-86, maxed false complete_len 87
DEBUG main                 iobuf      mark_interval(0, 87)
DEBUG main                 iobuf      new highest interval: 0 - 86
TRACE main                 iobuf      bumping atomic header lsn to -1
TRACE main                 segment    stabilize(0), normalized: -256, last: -1
TRACE main                 segment    expected stabilization lsn -256 to be greater than the previous value of -1
TRACE main                 tree       removing key [0, 175]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       removing key [0, 19]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  dropping pagecache
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  pagecache dropped
DEBUG main                 logger     IoBufs dropped
TRACE main                 context    starting context
TRACE main                 pagecache  starting pagecache
DEBUG main                 snapshot   no previous snapshot found
TRACE main                 iterator   file len: 87 segment len 256 segments: 1
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   SA scanned header at lid 0 during startup: SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
DEBUG main                 iterator   ordering before clearing tears: {0: 0}, max_header_stable_lsn: 0
DEBUG main                 iterator   in clean_tail_tears, found missing item in tail: None and we'll scan segments [(0, 0)] above lowest lsn 0
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 87 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 87 lsn 87
DEBUG main                 iterator   filtering out segments after detected tear at lsn 71 lid 71
TRACE main                 iterator   trying to find the max stable tip for bounding batch manifests with segment iter IntoIter { inner: Item { opt: Some((0, 0)) } } of segments >= max_header_stable_lsn 0
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 87 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 87 lsn 87
TRACE main                 iterator   found max stable tip: 87
TRACE main                 iterator   generated iterator over segments {0: 0} with lsn >= 0
TRACE main                 snapshot   building on top of old snapshot: Snapshot { last_lsn: 0, last_lid: 0, max_header_stable_lsn: 0, pt: [] }
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 87 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 20 ptr Inline(20)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(20) lsn 20
TRACE main                 snapshot   compact of pid 0 at ptr Inline(20) lsn 20
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 28 ptr Inline(28)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(28) lsn 28
TRACE main                 snapshot   compact of pid 1 at ptr Inline(28) lsn 28
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 37 ptr Inline(37)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(37) lsn 37
TRACE main                 snapshot   compact of pid 2 at ptr Inline(37) lsn 37
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 53 ptr Inline(53)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(53) lsn 53
TRACE main                 snapshot   compact of pid 3 at ptr Inline(53) lsn 53
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 71 ptr Blob(71, 71)
TRACE main                 snapshot   trying to deserialize buf for ptr Blob(71, 71) lsn 71
TRACE main                 snapshot   compact of pid 0 at ptr Blob(71, 71) lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 87 lsn 87
TRACE main                 snapshot   wrote snapshot to /dev/shm/pagecache.tmp.446285931939910791967784868773889/snap.0000000000000047.generating
TRACE main                 snapshot   renamed snapshot to /dev/shm/pagecache.tmp.446285931939910791967784868773889/snap.0000000000000047
TRACE main                 snapshot   generated new snapshot: Snapshot { last_lsn: 71, last_lid: 71, max_header_stable_lsn: 0, pt: [Present([(71, Blob(71, 71), 16)]), Present([(28, Inline(28), 9)]), Present([(37, Inline(37), 16)]), Present([(53, Inline(53), 18)])] }
TRACE main                 segment    adding lsn: 71 lid: 71 sz: 16 for pid 0 to segment 0 during SA recovery
TRACE main                 segment    (snapshot) recovering segment with base lsn 0
TRACE main                 segment    setting Segment to Active with new lsn 0
TRACE main                 segment    adding lsn: 28 lid: 28 sz: 9 for pid 1 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 37 lid: 37 sz: 16 for pid 2 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 53 lid: 53 sz: 18 for pid 3 to segment 0 during SA recovery
TRACE main                 segment    raised self.tip to 256 during SA initialization
TRACE main                 segment    initialized self.segments to [Active(Active { lsn: 0, rss: 59, deferred_replaced_rss: 0, deferred_replaced_pids: {}, pids: {0, 1, 2, 3}, latest_replacement_lsn: 0, can_free_upon_deactivation: {}, deferred_rm_blob: {} })]
TRACE main                 segment    initialized self.ordering to {0: 0}
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iobuf      starting IoBufs with next_lsn: 87 next_lid: 87
DEBUG main                 iobuf      starting log at split offset 87, recovered lsn 87
DEBUG main                 blob_io    gc_blobs removing any blob with an lsn above 86
DEBUG main                 pagecache  load_snapshot loading pages from 0..4
TRACE main                 pagecache  load_snapshot pid 0 Present([(71, Blob(71, 71), 16)])
TRACE main                 pagecache  installing page for pid 0
TRACE main                 pagecache  pulling pid 0 lsn 71 pointer Blob(71, 71) from disk
TRACE main                 logger     reading log lsn 71 ptr Blob(71, 71)
TRACE main                 pagecache  load_snapshot pid 1 Present([(28, Inline(28), 9)])
TRACE main                 pagecache  installing page for pid 1
TRACE main                 pagecache  pulling pid 1 lsn 28 pointer Inline(28) from disk
TRACE main                 logger     reading log lsn 28 ptr Inline(28)
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  load_snapshot pid 2 Present([(37, Inline(37), 16)])
TRACE main                 pagecache  installing page for pid 2
TRACE main                 pagecache  load_snapshot pid 3 Present([(53, Inline(53), 18)])
TRACE main                 pagecache  installing page for pid 3
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  pagecache started
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  pulling pid 3 lsn 53 pointer Inline(53) from disk
TRACE main                 logger     reading log lsn 53 ptr Inline(53)
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  fix-up for pid 3 succeeded
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  pulling pid 2 lsn 37 pointer Inline(37) from disk
TRACE main                 logger     reading log lsn 37 ptr Inline(37)
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  fix-up for pid 2 succeeded
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       removing key [0, 155]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       removing key [0, 111]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       setting key [0, 0]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  linking pid 2 with Set([0, 0], [0, 0])
TRACE main                 logger     reserving buf of len 15
TRACE main                 logger     reserved 15 bytes at lsn 87 lid 87
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 pagecache  link of pid 2 succeeded
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       removing key [0, 4]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       setting key [0, 1]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  linking pid 2 with Set([0, 1], [0, 1])
TRACE main                 logger     reserving buf of len 15
TRACE main                 logger     reserved 15 bytes at lsn 102 lid 102
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 pagecache  link of pid 2 succeeded
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       setting key [0, 2]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  linking pid 2 with Set([0, 2], [0, 2])
TRACE main                 logger     reserving buf of len 15
TRACE main                 logger     reserved 15 bytes at lsn 117 lid 117
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 pagecache  link of pid 2 succeeded
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       setting key [0, 3]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 tree       splitting node 2
TRACE main                 pagecache  allocating pid 4 for the first time
TRACE main                 pagecache  cas_page called on pid 4 to Node(Node { next: None, lo: [0, 2], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Leaf(Leaf { keys: [[0, 2]], values: [[0, 2]] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 4 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 132 lid 132
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: BlobNode, segment_number: SegmentNumber(0), pid: 4, len: 8 }
TRACE main                 blob_io    successfully wrote blob at "/dev/shm/pagecache.tmp.446285931939910791967784868773889/blobs/132"
TRACE main                 pagecache  cas_page succeeded on pid 4
TRACE main                 segment    mark_link pid 4 at cache info CacheInfo { ts: 1, lsn: 132, pointer: Blob(132, 132), log_size: 16 }
TRACE main                 segment    mark_replace pid 4 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 132, pointer: Blob(132, 132), log_size: 16 } with lsn 0
TRACE main                 segment    mark_link pid 2 at cache info CacheInfo { ts: 3, lsn: 117, pointer: Inline(117), log_size: 15 }
TRACE main                 segment    mark_link pid 2 at cache info CacheInfo { ts: 2, lsn: 102, pointer: Inline(102), log_size: 15 }
TRACE main                 segment    mark_link pid 2 at cache info CacheInfo { ts: 1, lsn: 87, pointer: Inline(87), log_size: 15 }
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 pagecache  replacing pid 2 with Node { next: Some(4), lo: [], hi: [0, 2], merging_child: None, merging: false, prefix_len: 0, data: Leaf(Leaf { keys: [[0, 0], [0, 1]], values: [[0, 0], [0, 1]] }) }
TRACE main                 pagecache  cas_page called on pid 2 to Node(Node { next: Some(4), lo: [], hi: [0, 2], merging_child: None, merging: false, prefix_len: 0, data: Leaf(Leaf { keys: [[0, 0], [0, 1]], values: [[0, 0], [0, 1]] }) }) with old ts 3
TRACE main                 pagecache  cas_page on pid 2 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 148 lid 148
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: BlobNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 blob_io    successfully wrote blob at "/dev/shm/pagecache.tmp.446285931939910791967784868773889/blobs/148"
TRACE main                 pagecache  cas_page succeeded on pid 2
TRACE main                 segment    mark_link pid 2 at cache info CacheInfo { ts: 4, lsn: 148, pointer: Blob(148, 148), log_size: 16 }
TRACE main                 segment    mark_replace pid 2 from cache infos [CacheInfo { ts: 0, lsn: 37, pointer: Inline(37), log_size: 16 }, CacheInfo { ts: 1, lsn: 87, pointer: Inline(87), log_size: 15 }, CacheInfo { ts: 2, lsn: 102, pointer: Inline(102), log_size: 15 }, CacheInfo { ts: 3, lsn: 117, pointer: Inline(117), log_size: 15 }] to cache info CacheInfo { ts: 4, lsn: 148, pointer: Blob(148, 148), log_size: 16 } with lsn 0
TRACE main                 segment    removing pid 2 at lsn 0 from segment Active(Active { lsn: 0, rss: 136, deferred_replaced_rss: 0, deferred_replaced_pids: {}, pids: {0, 1, 2, 3, 4}, latest_replacement_lsn: 0, can_free_upon_deactivation: {}, deferred_rm_blob: {} })
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  replacing pid 3 with Node { next: None, lo: [], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Index(Index { keys: [[], [0, 2]], pointers: [2, 4] }) }
TRACE main                 pagecache  cas_page called on pid 3 to Node(Node { next: None, lo: [], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Index(Index { keys: [[], [0, 2]], pointers: [2, 4] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 3 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 164 lid 164
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: BlobNode, segment_number: SegmentNumber(0), pid: 3, len: 8 }
TRACE main                 blob_io    successfully wrote blob at "/dev/shm/pagecache.tmp.446285931939910791967784868773889/blobs/164"
TRACE main                 pagecache  cas_page succeeded on pid 3
TRACE main                 segment    mark_link pid 3 at cache info CacheInfo { ts: 1, lsn: 164, pointer: Blob(164, 164), log_size: 16 }
TRACE main                 segment    mark_replace pid 3 from cache infos [CacheInfo { ts: 0, lsn: 53, pointer: Inline(53), log_size: 18 }] to cache info CacheInfo { ts: 1, lsn: 164, pointer: Blob(164, 164), log_size: 16 } with lsn 0
TRACE main                 segment    removing pid 3 at lsn 0 from segment Active(Active { lsn: 0, rss: 152, deferred_replaced_rss: 61, deferred_replaced_pids: {}, pids: {0, 1, 2, 3, 4}, latest_replacement_lsn: 0, can_free_upon_deactivation: {}, deferred_rm_blob: {} })
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 tree       retrying at line 1648 when cursor was 2
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 4
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 pagecache  linking pid 4 with Set([0, 3], [0, 3])
TRACE main                 logger     reserving buf of len 15
TRACE main                 logger     reserved 15 bytes at lsn 180 lid 180
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 pagecache  link of pid 4 succeeded
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 tree       setting key [0, 4]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 4
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 pagecache  linking pid 4 with Set([0, 4], [0, 4])
TRACE main                 logger     reserving buf of len 15
TRACE main                 logger     reserved 15 bytes at lsn 195 lid 195
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 pagecache  link of pid 4 succeeded
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 iobuf      sealed iobuf with lsn 87
DEBUG main                 iobuf      advancing offset within the current segment from 87 to 210
TRACE main                 iobuf      asynchronously writing iobuf with lsn 87 to log from maybe_seal
TRACE main                 iobuf      write_to_log log_offset 87 lsn 87 len 123
DEBUG main                 iobuf      wrote lsns 87-209 to disk at offsets 87-209, maxed false complete_len 123
DEBUG main                 iobuf      mark_interval(87, 123)
DEBUG main                 iobuf      new highest interval: 87 - 209
TRACE main                 iobuf      bumping atomic header lsn to -1
TRACE main                 segment    mark_link pid 4 at cache info CacheInfo { ts: 3, lsn: 195, pointer: Inline(195), log_size: 15 }
TRACE main                 segment    mark_link pid 4 at cache info CacheInfo { ts: 2, lsn: 180, pointer: Inline(180), log_size: 15 }
TRACE main                 segment    stabilize(0), normalized: -256, last: -1
TRACE main                 segment    expected stabilization lsn -256 to be greater than the previous value of -1
TRACE main                 pagecache  dropping pagecache
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  pagecache dropped
DEBUG main                 logger     IoBufs dropped
TRACE main                 context    starting context
TRACE main                 pagecache  starting pagecache
TRACE main                 iterator   file len: 210 segment len 256 segments: 1
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   SA scanned header at lid 0 during startup: SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
DEBUG main                 iterator   ordering before clearing tears: {0: 0}, max_header_stable_lsn: 0
DEBUG main                 iterator   in clean_tail_tears, found missing item in tail: None and we'll scan segments [(0, 0)] above lowest lsn 0
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 210 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 102
TRACE main                 logger     read message header at lid 102: MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 117
TRACE main                 logger     read message header at lid 117: MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 132
TRACE main                 logger     read message header at lid 132: MessageHeader { crc32: 2476586788, kind: BlobNode, segment_number: SegmentNumber(0), pid: 4, len: 8 }
TRACE main                 logger     read a successful blob message for blob 132 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 148
TRACE main                 logger     read message header at lid 148: MessageHeader { crc32: 1994850285, kind: BlobNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful blob message for blob 148 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 164
TRACE main                 logger     read message header at lid 164: MessageHeader { crc32: 365492607, kind: BlobNode, segment_number: SegmentNumber(0), pid: 3, len: 8 }
TRACE main                 logger     read a successful blob message for blob 164 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 180
TRACE main                 logger     read message header at lid 180: MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 195
TRACE main                 logger     read message header at lid 195: MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 210
TRACE main                 logger     read message header at lid 210: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 210 lsn 210
DEBUG main                 iterator   filtering out segments after detected tear at lsn 195 lid 195
TRACE main                 iterator   trying to find the max stable tip for bounding batch manifests with segment iter IntoIter { inner: Item { opt: Some((0, 0)) } } of segments >= max_header_stable_lsn 0
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 210 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 102
TRACE main                 logger     read message header at lid 102: MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 117
TRACE main                 logger     read message header at lid 117: MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 132
TRACE main                 logger     read message header at lid 132: MessageHeader { crc32: 2476586788, kind: BlobNode, segment_number: SegmentNumber(0), pid: 4, len: 8 }
TRACE main                 logger     read a successful blob message for blob 132 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 148
TRACE main                 logger     read message header at lid 148: MessageHeader { crc32: 1994850285, kind: BlobNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful blob message for blob 148 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 164
TRACE main                 logger     read message header at lid 164: MessageHeader { crc32: 365492607, kind: BlobNode, segment_number: SegmentNumber(0), pid: 3, len: 8 }
TRACE main                 logger     read a successful blob message for blob 164 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 180
TRACE main                 logger     read message header at lid 180: MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 195
TRACE main                 logger     read message header at lid 195: MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 logger     reading message from segment: 0 at lid: 210
TRACE main                 logger     read message header at lid 210: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 210 lsn 210
TRACE main                 iterator   found max stable tip: 210
TRACE main                 iterator   generated iterator over segments {0: 0} with lsn >= 0
TRACE main                 snapshot   building on top of old snapshot: Snapshot { last_lsn: 71, last_lid: 71, max_header_stable_lsn: 0, pt: [Present([(71, Blob(71, 71), 16)]), Present([(28, Inline(28), 9)]), Present([(37, Inline(37), 16)]), Present([(53, Inline(53), 18)])] }
TRACE main                 iterator   LogIter::read_segment lsn: 0 cur_lsn: 0
TRACE main                 logger     reading segment header at 0
TRACE main                 iterator   read segment header SegmentHeader { lsn: 0, max_stable_lsn: -1, ok: true }
TRACE main                 iterator   setting stored segment buffer length to 210 after read
TRACE main                 logger     reading message from segment: 0 at lid: 20
TRACE main                 logger     read message header at lid 20: MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 4224745279, kind: InlineMeta, segment_number: SegmentNumber(0), pid: 0, len: 0 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 20 ptr Inline(20)
TRACE main                 snapshot   continuing in advance_snapshot, lsn 20 ptr Inline(20) last_lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 28 ptr Inline(28)
TRACE main                 snapshot   continuing in advance_snapshot, lsn 28 ptr Inline(28) last_lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 37
TRACE main                 logger     read message header at lid 37: MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2709725675, kind: InlineNode, segment_number: SegmentNumber(0), pid: 2, len: 8 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 37 ptr Inline(37)
TRACE main                 snapshot   continuing in advance_snapshot, lsn 37 ptr Inline(37) last_lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 53
TRACE main                 logger     read message header at lid 53: MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1869816561, kind: InlineNode, segment_number: SegmentNumber(0), pid: 3, len: 10 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 53 ptr Inline(53)
TRACE main                 snapshot   continuing in advance_snapshot, lsn 53 ptr Inline(53) last_lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 71
TRACE main                 logger     read message header at lid 71: MessageHeader { crc32: 1393889006, kind: BlobMeta, segment_number: SegmentNumber(0), pid: 0, len: 8 }
TRACE main                 logger     read a successful blob message for blob 71 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 71 ptr Blob(71, 71)
TRACE main                 snapshot   continuing in advance_snapshot, lsn 71 ptr Blob(71, 71) last_lsn 71
TRACE main                 logger     reading message from segment: 0 at lid: 87
TRACE main                 logger     read message header at lid 87: MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2213419210, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 87 ptr Inline(87)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(87) lsn 87
TRACE main                 snapshot   append of pid 2 at lid Inline(87) lsn 87
TRACE main                 logger     reading message from segment: 0 at lid: 102
TRACE main                 logger     read message header at lid 102: MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 1914974692, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 102 ptr Inline(102)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(102) lsn 102
TRACE main                 snapshot   append of pid 2 at lid Inline(102) lsn 102
TRACE main                 logger     reading message from segment: 0 at lid: 117
TRACE main                 logger     read message header at lid 117: MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 3138079959, kind: InlineLink, segment_number: SegmentNumber(0), pid: 2, len: 7 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 117 ptr Inline(117)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(117) lsn 117
TRACE main                 snapshot   append of pid 2 at lid Inline(117) lsn 117
TRACE main                 logger     reading message from segment: 0 at lid: 132
TRACE main                 logger     read message header at lid 132: MessageHeader { crc32: 2476586788, kind: BlobNode, segment_number: SegmentNumber(0), pid: 4, len: 8 }
TRACE main                 logger     read a successful blob message for blob 132 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 132 ptr Blob(132, 132)
TRACE main                 snapshot   trying to deserialize buf for ptr Blob(132, 132) lsn 132
TRACE main                 snapshot   compact of pid 4 at ptr Blob(132, 132) lsn 132
TRACE main                 logger     reading message from segment: 0 at lid: 148
TRACE main                 logger     read message header at lid 148: MessageHeader { crc32: 1994850285, kind: BlobNode, segment_number: SegmentNumber(0), pid: 2, len: 8 }
TRACE main                 logger     read a successful blob message for blob 148 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 148 ptr Blob(148, 148)
TRACE main                 snapshot   trying to deserialize buf for ptr Blob(148, 148) lsn 148
TRACE main                 snapshot   compact of pid 2 at ptr Blob(148, 148) lsn 148
TRACE main                 logger     reading message from segment: 0 at lid: 164
TRACE main                 logger     read message header at lid 164: MessageHeader { crc32: 365492607, kind: BlobNode, segment_number: SegmentNumber(0), pid: 3, len: 8 }
TRACE main                 logger     read a successful blob message for blob 164 in segment number SegmentNumber(0)
TRACE main                 iterator   read blob flush in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 164 ptr Blob(164, 164)
TRACE main                 snapshot   trying to deserialize buf for ptr Blob(164, 164) lsn 164
TRACE main                 snapshot   compact of pid 3 at ptr Blob(164, 164) lsn 164
TRACE main                 logger     reading message from segment: 0 at lid: 180
TRACE main                 logger     read message header at lid 180: MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 180 ptr Inline(180)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(180) lsn 180
TRACE main                 snapshot   append of pid 4 at lid Inline(180) lsn 180
TRACE main                 logger     reading message from segment: 0 at lid: 195
TRACE main                 logger     read message header at lid 195: MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iterator   read inline flush with header MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 } in LogIter::next
TRACE main                 snapshot   in advance_snapshot looking at item with lsn 195 ptr Inline(195)
TRACE main                 snapshot   trying to deserialize buf for ptr Inline(195) lsn 195
TRACE main                 snapshot   append of pid 4 at lid Inline(195) lsn 195
TRACE main                 logger     reading message from segment: 0 at lid: 210
TRACE main                 logger     read message header at lid 210: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 logger     read a corrupted message with Corrupted MessageKind: MessageHeader { crc32: 0, kind: Corrupted, segment_number: SegmentNumber(0), pid: 0, len: 0 }
TRACE main                 iterator   read corrupted msg in LogIter::next as lid 210 lsn 210
TRACE main                 snapshot   wrote snapshot to /dev/shm/pagecache.tmp.446285931939910791967784868773889/snap.00000000000000C3.generating
TRACE main                 snapshot   renamed snapshot to /dev/shm/pagecache.tmp.446285931939910791967784868773889/snap.00000000000000C3
DEBUG main                 snapshot   removing old snapshot file "/dev/shm/pagecache.tmp.446285931939910791967784868773889/snap.0000000000000047"
TRACE main                 snapshot   generated new snapshot: Snapshot { last_lsn: 195, last_lid: 195, max_header_stable_lsn: 0, pt: [Present([(71, Blob(71, 71), 16)]), Present([(28, Inline(28), 9)]), Present([(148, Blob(148, 148), 16)]), Present([(164, Blob(164, 164), 16)]), Present([(132, Blob(132, 132), 16), (180, Inline(180), 15), (195, Inline(195), 15)])] }
TRACE main                 segment    adding lsn: 71 lid: 71 sz: 16 for pid 0 to segment 0 during SA recovery
TRACE main                 segment    (snapshot) recovering segment with base lsn 0
TRACE main                 segment    setting Segment to Active with new lsn 0
TRACE main                 segment    adding lsn: 28 lid: 28 sz: 9 for pid 1 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 148 lid: 148 sz: 16 for pid 2 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 164 lid: 164 sz: 16 for pid 3 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 132 lid: 132 sz: 16 for pid 4 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 180 lid: 180 sz: 15 for pid 4 to segment 0 during SA recovery
TRACE main                 segment    adding lsn: 195 lid: 195 sz: 15 for pid 4 to segment 0 during SA recovery
TRACE main                 segment    raised self.tip to 256 during SA initialization
TRACE main                 segment    initialized self.segments to [Active(Active { lsn: 0, rss: 103, deferred_replaced_rss: 0, deferred_replaced_pids: {}, pids: {0, 1, 2, 3, 4}, latest_replacement_lsn: 0, can_free_upon_deactivation: {}, deferred_rm_blob: {} })]
TRACE main                 segment    initialized self.ordering to {0: 0}
TRACE main                 logger     reading message from segment: 0 at lid: 195
TRACE main                 logger     read message header at lid 195: MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 iobuf      starting IoBufs with next_lsn: 210 next_lid: 210
DEBUG main                 iobuf      starting log at split offset 210, recovered lsn 210
DEBUG main                 blob_io    gc_blobs removing any blob with an lsn above 209
DEBUG main                 pagecache  load_snapshot loading pages from 0..5
TRACE main                 pagecache  load_snapshot pid 0 Present([(71, Blob(71, 71), 16)])
TRACE main                 pagecache  installing page for pid 0
TRACE main                 pagecache  pulling pid 0 lsn 71 pointer Blob(71, 71) from disk
TRACE main                 logger     reading log lsn 71 ptr Blob(71, 71)
TRACE main                 pagecache  load_snapshot pid 1 Present([(28, Inline(28), 9)])
TRACE main                 pagecache  installing page for pid 1
TRACE main                 pagecache  pulling pid 1 lsn 28 pointer Inline(28) from disk
TRACE main                 logger     reading log lsn 28 ptr Inline(28)
TRACE main                 logger     reading message from segment: 0 at lid: 28
TRACE main                 logger     read message header at lid 28: MessageHeader { crc32: 2026283633, kind: Counter, segment_number: SegmentNumber(0), pid: 1, len: 1 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  load_snapshot pid 2 Present([(148, Blob(148, 148), 16)])
TRACE main                 pagecache  installing page for pid 2
TRACE main                 pagecache  load_snapshot pid 3 Present([(164, Blob(164, 164), 16)])
TRACE main                 pagecache  installing page for pid 3
TRACE main                 pagecache  load_snapshot pid 4 Present([(132, Blob(132, 132), 16), (180, Inline(180), 15), (195, Inline(195), 15)])
TRACE main                 pagecache  installing page for pid 4
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  getting page iter for idgen
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  pagecache started
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iter for META
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  pulling pid 3 lsn 164 pointer Blob(164, 164) from disk
TRACE main                 logger     reading log lsn 164 ptr Blob(164, 164)
TRACE main                 pagecache  fix-up for pid 3 succeeded
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 2
TRACE main                 pagecache  pulling pid 2 lsn 148 pointer Blob(148, 148) from disk
TRACE main                 logger     reading log lsn 148 ptr Blob(148, 148)
TRACE main                 pagecache  fix-up for pid 2 succeeded
TRACE main                 pagecache  accessed pid 2 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 4
TRACE main                 pagecache  pulling pid 4 lsn 132 pointer Blob(132, 132) from disk
TRACE main                 logger     reading log lsn 132 ptr Blob(132, 132)
TRACE main                 pagecache  pulling pid 4 lsn 180 pointer Inline(180) from disk
TRACE main                 logger     reading log lsn 180 ptr Inline(180)
TRACE main                 logger     reading message from segment: 0 at lid: 180
TRACE main                 logger     read message header at lid 180: MessageHeader { crc32: 2745352396, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  pulling pid 4 lsn 195 pointer Inline(195) from disk
TRACE main                 logger     reading log lsn 195 ptr Inline(195)
TRACE main                 logger     reading message from segment: 0 at lid: 195
TRACE main                 logger     read message header at lid 195: MessageHeader { crc32: 457643461, kind: InlineLink, segment_number: SegmentNumber(0), pid: 4, len: 7 }
TRACE main                 logger     read a successful inline message
TRACE main                 pagecache  fix-up for pid 4 succeeded
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 tree       removing key [0, 94]
TRACE main                 pagecache  getting page iterator for pid 3
TRACE main                 pagecache  accessed pid 3 -> paging out pids []
TRACE main                 pagecache  getting page iterator for pid 4
TRACE main                 pagecache  accessed pid 4 -> paging out pids []
TRACE main                 tree       splitting node 4
TRACE main                 pagecache  allocating pid 5 for the first time
TRACE main                 pagecache  cas_page called on pid 5 to Node(Node { next: None, lo: [0, 4], hi: [], merging_child: None, merging: false, prefix_len: 0, data: Leaf(Leaf { keys: [[0, 4]], values: [[0, 4]] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 5 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     reserved 16 bytes at lsn 210 lid 210
TRACE main                 serialization serializing MessageHeader { crc32: 0, kind: BlobNode, segment_number: SegmentNumber(0), pid: 5, len: 8 }
TRACE main                 blob_io    successfully wrote blob at "/dev/shm/pagecache.tmp.446285931939910791967784868773889/blobs/210"
TRACE main                 pagecache  cas_page succeeded on pid 5
TRACE main                 segment    mark_link pid 5 at cache info CacheInfo { ts: 1, lsn: 210, pointer: Blob(210, 210), log_size: 16 }
TRACE main                 segment    mark_replace pid 5 from cache infos [] to cache info CacheInfo { ts: 1, lsn: 210, pointer: Blob(210, 210), log_size: 16 } with lsn 0
TRACE main                 pagecache  accessed pid 5 -> paging out pids []
TRACE main                 pagecache  replacing pid 4 with Node { next: Some(5), lo: [0, 2], hi: [0, 4], merging_child: None, merging: false, prefix_len: 1, data: Leaf(Leaf { keys: [[2], [3]], values: [[0, 2], [0, 3]] }) }
TRACE main                 pagecache  cas_page called on pid 4 to Node(Node { next: Some(5), lo: [0, 2], hi: [0, 4], merging_child: None, merging: false, prefix_len: 1, data: Leaf(Leaf { keys: [[2], [3]], values: [[0, 2], [0, 3]] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 4 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     io buffer too full, spinning
TRACE main                 iobuf      sealed iobuf with lsn 210
TRACE main                 iobuf      setting maxed to true for iobuf with lsn 210
DEBUG main                 iobuf      rolling to new segment after clearing 210-226
TRACE main                 segment    evaluating free list {} in SA::next
TRACE main                 segment    advancing file tip from 256 to 512
TRACE main                 segment    setting Segment to Active with new lsn 256
DEBUG main                 segment    segment accountant returning offset: 256 on deck: {}
DEBUG main                 iobuf      storing lsn 256 in beginning of buffer
TRACE main                 iobuf      asynchronously writing iobuf with lsn 210 to log from maybe_seal
TRACE main                 iobuf      write_to_log log_offset 210 lsn 210 len 16
error: Undefined Behavior: reading uninitialized memory at alloc38892809+e2, but this operation requires initialized memory
   --> /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fd.rs:103:13
    |
103 |             libc::write(self.fd, buf.as_ptr() as *const c_void, cmp::min(buf.len(), max_len()))
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ reading uninitialized memory at alloc38892809+e2, but this operation requires initialized memory
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: inside call to `std::sys::unix::fd::FileDesc::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fs.rs:814:9
    = note: inside call to `std::sys::unix::fs::File::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/fs.rs:670:9
    = note: inside call to `<std::fs::File as std::io::Write>::write` at /home/david/Code/sled/src/pagecache/parallel_io_polyfill.rs:88:15
    = note: inside call to `sled::pagecache::parallel_io_polyfill::pwrite_all` at /home/david/Code/sled/src/pagecache/iobuf.rs:664:13
    = note: inside call to `sled::pagecache::iobuf::IoBufs::write_to_log` at /home/david/Code/sled/src/pagecache/iobuf.rs:1096:29
    = note: inside call to closure at /home/david/Code/sled/src/lib.rs:224:29
    = note: inside call to `sled::threadpool::spawn::<[closure@DefId(34:815 ~ sled[c98a]::pagecache[0]::iobuf[0]::maybe_seal_and_write_iobuf[0]::{{closure}}[3]) 0:std::sync::Arc<sled::pagecache::iobuf::IoBufs>, 1:std::sync::Arc<sled::pagecache::iobuf::IoBuf>, 2:i64], ()>` at /home/david/Code/sled/src/pagecache/iobuf.rs:1095:23
    = note: inside call to `sled::pagecache::iobuf::maybe_seal_and_write_iobuf` at /home/david/Code/sled/src/pagecache/logger.rs:285:17
    = note: inside call to `sled::pagecache::logger::Log::reserve_inner::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/logger.rs:143:9
    = note: inside call to `sled::pagecache::logger::Log::reserve::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/mod.rs:1308:21
    = note: inside call to `sled::pagecache::PageCache::cas_page` at /home/david/Code/sled/src/pagecache/mod.rs:1042:13
    = note: inside call to `sled::pagecache::PageCache::replace` at /home/david/Code/sled/src/tree.rs:1440:23
    = note: inside call to `sled::tree::Tree::split_node` at /home/david/Code/sled/src/tree.rs:1647:17
    = note: inside call to `sled::tree::Tree::view_for_key::<&[u8]>` at /home/david/Code/sled/src/tree.rs:433:17
    = note: inside call to `sled::tree::Tree::remove_inner::<&[u8]>` at /home/david/Code/sled/src/tree.rs:415:9
note: inside call to `sled::tree::Tree::remove::<&[u8]>` at tests/test_tree_failpoints.rs:300:27
   --> tests/test_tree_failpoints.rs:300:27
    |
300 |                 fp_crash!(tree.remove(&*vec![0, k]));
    |                           ^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside call to `run_tree_crashes_nicely` at tests/test_tree_failpoints.rs:104:9
   --> tests/test_tree_failpoints.rs:104:9
    |
104 |         run_tree_crashes_nicely(ops.clone(), flusher)
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<[closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool], bool>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<bool, [closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
note: inside call to `std::panic::catch_unwind::<[closure@tests/test_tree_failpoints.rs:103:40: 105:6 ops:&std::vec::Vec<Op>, flusher:&bool], bool>` at tests/test_tree_failpoints.rs:103:15
   --> tests/test_tree_failpoints.rs:103:15
    |
103 |       let res = std::panic::catch_unwind(|| {
    |  _______________^
104 | |         run_tree_crashes_nicely(ops.clone(), flusher)
105 | |     });
    | |______^
note: inside call to `prop_tree_crashes_nicely` at tests/test_tree_failpoints.rs:569:13
   --> tests/test_tree_failpoints.rs:569:13
    |
569 |       assert!(prop_tree_crashes_nicely(
    |  _____________^
570 | |         vec![
571 | |             Del(175),
572 | |             Del(19),
...   |
772 | |         false,
773 | |     ))
    | |_____^
note: inside call to `failpoints_bug_10` at tests/test_tree_failpoints.rs:566:1
   --> tests/test_tree_failpoints.rs:566:1
    |
566 | / fn failpoints_bug_10() {
567 | |     // expected to iterate over 50 but got 49 instead
568 | |     // postmortem 1:
569 | |     assert!(prop_tree_crashes_nicely(
...   |
773 | |     ))
774 | | }
    | |_^
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<[closure@tests/test_tree_failpoints.rs:566:1: 774:2] as std::ops::FnOnce<()>>::call_once - shim` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<fn() as std::ops::FnOnce<()>>::call_once - shim(fn())` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:517:5
    = note: inside call to `test::__rust_begin_short_backtrace::<fn()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:508:30
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libcore/ops/function.rs:232:5
    = note: inside call to `<[closure@DefId(50:631 ~ test[81d1]::run_test[0]::{{closure}}[2]) 0:fn()] as std::ops::FnOnce<()>>::call_once - shim(vtable)` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/liballoc/boxed.rs:1017:9
    = note: inside call to `<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send> as std::ops::FnOnce<()>>::call_once` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:318:9
    = note: inside call to `<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>> as std::ops::FnOnce<()>>::call_once` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>, ()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<(), std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
    = note: inside call to `std::panic::catch_unwind::<std::panic::AssertUnwindSafe<std::boxed::Box<dyn std::ops::FnOnce() + std::marker::Send>>, ()>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:541:18
    = note: inside call to `test::run_test_in_process` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:450:39
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:475:13
    = note: inside call to `test::run_test::run_test_inner` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:505:28
    = note: inside call to `test::run_test` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:284:13
    = note: inside call to `test::run_tests::<[closure@DefId(50:230 ~ test[81d1]::console[0]::run_tests_console[0]::{{closure}}[2]) 0:&mut test::console::ConsoleTestState, 1:&mut std::boxed::Box<dyn test::formatters::OutputFormatter>]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/console.rs:280:5
    = note: inside call to `test::run_tests_console` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:120:15
    = note: inside call to `test::test_main` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libtest/lib.rs:139:5
    = note: inside call to `test::test_main_static`
    = note: inside call to `main` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:67:34
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:52:73
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys_common/backtrace.rs:130:5
    = note: inside call to `std::sys_common::backtrace::__rust_begin_short_backtrace::<[closure@DefId(1:6034 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]::{{closure}}[0]) 0:&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:52:13
    = note: inside call to closure at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:331:40
    = note: inside call to `std::panicking::r#try::do_call::<[closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panicking.rs:274:15
    = note: inside call to `std::panicking::r#try::<i32, [closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe]>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/panic.rs:394:14
    = note: inside call to `std::panic::catch_unwind::<[closure@DefId(1:6033 ~ std[2cf1]::rt[0]::lang_start_internal[0]::{{closure}}[0]) 0:&&dyn std::ops::Fn() -> i32 + std::marker::Sync + std::panic::RefUnwindSafe], i32>` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:51:25
    = note: inside call to `std::rt::lang_start_internal` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/rt.rs:67:5
    = note: inside call to `std::rt::lang_start::<()>`
    = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

Edit: here's a new run, this time with a dump of the relevant allocation

Log, memory dump, and backtrace
TRACE main                 pagecache  accessed pid 5 -> paging out pids []
TRACE main                 pagecache  replacing pid 4 with Node { next: Some(5), lo: [0, 2], hi: [0, 4], merging_child: None, merging: false, prefix_len: 1, data: Leaf(Leaf { keys: [[2], [3]], values: [[0, 2], [0, 3]] }) }
TRACE main                 pagecache  cas_page called on pid 4 to Node(Node { next: Some(5), lo: [0, 2], hi: [0, 4], merging_child: None, merging: false, prefix_len: 1, data: Leaf(Leaf { keys: [[2], [3]], values: [[0, 2], [0, 3]] }) }) with old ts 0
TRACE main                 pagecache  cas_page on pid 4 has log kind: Replace
TRACE main                 logger     reserving buf of len 16
TRACE main                 logger     io buffer too full, spinning
TRACE main                 iobuf      sealed iobuf with lsn 210
TRACE main                 iobuf      setting maxed to true for iobuf with lsn 210
DEBUG main                 iobuf      rolling to new segment after clearing 210-226
TRACE main                 segment    evaluating free list {} in SA::next
TRACE main                 segment    advancing file tip from 256 to 512
TRACE main                 segment    setting Segment to Active with new lsn 256
DEBUG main                 segment    segment accountant returning offset: 256 on deck: {}
DEBUG main                 iobuf      storing lsn 256 in beginning of buffer
TRACE main                 iobuf      asynchronously writing iobuf with lsn 210 to log from maybe_seal
TRACE main                 iobuf      write_to_log log_offset 210 lsn 210 len 16

reading uninitialized memory at alloc38892242+e2, but this operation requires initialized memory

Alloc alloc38892242: (256 bytes, alignment 8192) (Rust)
0x000 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x010 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x020 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x030 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x040 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x050 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x060 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x070 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x080 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x090 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x0a0 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x0b0 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x0c0 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░
0x0d0 │ __ __ 5e 78 dc 37 09 08 00 05 d2 00 00 00 00 00 │ ░░^x.7..........
0x0e0 │ 00 00 __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ..░░░░░░░░░░░░░░
0x0f0 │ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ │ ░░░░░░░░░░░░░░░░

error: Undefined Behavior: reading uninitialized memory at alloc38892242+e2, but this operation requires initialized memory
   --> /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fd.rs:103:13
    |
103 |             libc::write(self.fd, buf.as_ptr() as *const c_void, cmp::min(buf.len(), max_len()))
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ reading uninitialized memory at alloc38892242+e2, but this operation requires initialized memory
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: inside call to `std::sys::unix::fd::FileDesc::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/sys/unix/fs.rs:814:9
    = note: inside call to `std::sys::unix::fs::File::write` at /home/david/.rustup/toolchains/miri/lib/rustlib/src/rust/src/libstd/fs.rs:670:9
    = note: inside call to `<std::fs::File as std::io::Write>::write` at /home/david/Code/sled/src/pagecache/parallel_io_polyfill.rs:88:15
    = note: inside call to `sled::pagecache::parallel_io_polyfill::pwrite_all` at /home/david/Code/sled/src/pagecache/iobuf.rs:664:13
    = note: inside call to `sled::pagecache::iobuf::IoBufs::write_to_log` at /home/david/Code/sled/src/pagecache/iobuf.rs:1096:29
    = note: inside call to closure at /home/david/Code/sled/src/lib.rs:224:29
    = note: inside call to `sled::threadpool::spawn::<[closure@DefId(34:815 ~ sled[c98a]::pagecache[0]::iobuf[0]::maybe_seal_and_write_iobuf[0]::{{closure}}[3]) 0:std::sync::Arc<sled::pagecache::iobuf::IoBufs>, 1:std::sync::Arc<sled::pagecache::iobuf::IoBuf>, 2:i64], ()>` at /home/david/Code/sled/src/pagecache/iobuf.rs:1095:23
    = note: inside call to `sled::pagecache::iobuf::maybe_seal_and_write_iobuf` at /home/david/Code/sled/src/pagecache/logger.rs:285:17
    = note: inside call to `sled::pagecache::logger::Log::reserve_inner::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/logger.rs:143:9
    = note: inside call to `sled::pagecache::logger::Log::reserve::<sled::node::Node>` at /home/david/Code/sled/src/pagecache/mod.rs:1308:21
    = note: inside call to `sled::pagecache::PageCache::cas_page` at /home/david/Code/sled/src/pagecache/mod.rs:1042:13
    = note: inside call to `sled::pagecache::PageCache::replace` at /home/david/Code/sled/src/tree.rs:1440:23
    = note: inside call to `sled::tree::Tree::split_node` at /home/david/Code/sled/src/tree.rs:1647:17
    = note: inside call to `sled::tree::Tree::view_for_key::<&[u8]>` at /home/david/Code/sled/src/tree.rs:433:17
    = note: inside call to `sled::tree::Tree::remove_inner::<&[u8]>` at /home/david/Code/sled/src/tree.rs:415:9
note: inside call to `sled::tree::Tree::remove::<&[u8]>` at tests/test_tree_failpoints.rs:300:27
   --> tests/test_tree_failpoints.rs:300:27
    |
300 |                 fp_crash!(tree.remove(&*vec![0, k]));
    |                           ^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside call to `run_tree_crashes_nicely` at tests/test_tree_failpoints.rs:104:9
   --> tests/test_tree_failpoints.rs:104:9
    |
104 |         run_tree_crashes_nicely(ops.clone(), flusher)
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@divergentdave
Collaborator Author

I think I have identified what is going on with the above uninitialized memory read. IoBufs::write_to_log() is called with an iobuf whose maxed flag is true, and whose buffer only holds data from 0xd2 through 0xe2 of a 0x100-byte segment. In this case, should_pad is set to false, because the segment only has 0x1e bytes left while MAX_MSG_HEADER_LEN is 0x20. The if should_pad block would have written a MessageHeader of kind Cap and then zero-filled the rest of the buffer, but since there is no room for that MessageHeader, the last 0x1e bytes remain uninitialized. Then, because maxed is true, total_len gets set to capacity, even though only bytes_to_write bytes of the iobuf have actually been written, so the last capacity - bytes_to_write bytes (in this case, 0x1e) remain uninitialized. total_len is used to slice the iobuf, the slice is passed to pwrite_all(), and the uninitialized bytes eventually reach libc's write(), at which point Miri raises the UB error.
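
For illustration, here's a minimal model of the decision as I understand it. This is a sketch, not the actual iobuf.rs code; the constant value and the shape of the should_pad check are taken from the description above:

```rust
/// Minimal model: `capacity` is the iobuf's room in the segment,
/// `bytes_to_write` is how much of it holds real data.
fn total_len(capacity: usize, bytes_to_write: usize, maxed: bool) -> usize {
    const MAX_MSG_HEADER_LEN: usize = 0x20;
    let should_pad = capacity - bytes_to_write >= MAX_MSG_HEADER_LEN;
    if should_pad {
        // the Cap header plus zero fill initializes every byte up to capacity
        capacity
    } else if maxed {
        // BUG: nothing initialized the tail past bytes_to_write, yet the
        // whole capacity is sliced out and handed to pwrite_all -> libc::write
        capacity
    } else {
        bytes_to_write
    }
}

fn main() {
    // the failing run: a 0x2e-byte tail of the segment holds a 0x10-byte
    // reservation; the 0x1e bytes of slack are too few for a Cap header,
    // but the maxed flag still forces a full-capacity write
    assert_eq!(total_len(0x2e, 0x10, true), 0x2e);
}
```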

I think there are two ways this could be solved. First, we could make the following change, so that only the initialized bytes are written to the file (potentially creating a small "file hole", or leaving bytes from an earlier write to this segment untouched):

- let total_len = if maxed { capacity } else { bytes_to_write };
+ let total_len = if should_pad { capacity } else { bytes_to_write };

Alternatively, we could zero out the remaining bytes of the iobuf when maxed && !should_pad holds. This seems reasonable, because LogIter won't try to read messages starting this close to the end of a segment.
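
A sketch of what that zero-fill could look like; the helper and its parameter names are hypothetical, not the actual iobuf code:

```rust
/// Hypothetical helper: make the unreadable tail of a maxed iobuf
/// well-defined before the full-capacity write.
fn zero_unused_tail(buf: &mut [u8], bytes_to_write: usize, maxed: bool, should_pad: bool) {
    if maxed && !should_pad {
        // this region is smaller than MAX_MSG_HEADER_LEN, so LogIter will
        // never parse a message out of it; its bytes only need to be defined
        for byte in &mut buf[bytes_to_write..] {
            *byte = 0;
        }
    }
}
```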

@divergentdave
Collaborator Author

Some of the bigger tests can eat up all available RAM after a few hours and trigger the oom-killer if left unchecked, or fail to allocate memory sooner if I set ulimit -v lower. I'm thinking I might add a conditional block in src/sys_limits.rs to compensate for the memory amplification incurred by running under Miri; I have DHAT output to look at to determine a factor empirically (total memory usage is dominated by stacked borrows metadata). I'm also going to add #[cfg_attr(miri, ignore)] to more tests, on the theory that any test that takes well over an hour should be run on its own with --ignored and a filter, as sketched below.
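
The mechanism is just the standard test-harness attribute, e.g.:

```rust
// Skipped by default under Miri, but still runnable on demand with
// `cargo +miri miri test --features=testing -- ... -- --ignored size_leak`
#[test]
#[cfg_attr(miri, ignore)]
fn size_leak() {
    // hours-long under the interpreter, fine natively
}
```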

@RalfJung

> Total memory usage is dominated by stacked borrows metadata

Yeah, stacked borrows accumulates a lot of metadata. I tried at least deduplicating adjacent identical stacks, but there's still a lot of room for improvement -- and probably also a lot of inherent state that simply needs to be stored.

-Zmiri-disable-stacked-borrows is always an option for such extreme cases. ;) Together with -Zmiri-disable-validation, I have seen more than 10x speedups.

@divergentdave divergentdave force-pushed the david_miri_support branch 2 times, most recently from d3745f3 to f2ff764 Compare May 17, 2020 22:32
@divergentdave divergentdave changed the title [WIP] Miri support Miri support May 17, 2020
@divergentdave
Copy link
Collaborator Author

This PR is at a good juncture to merge now. Over the last few days, I rewrote and rebased the commit history to get rid of merges and combine like changes, and added a few small things to help with memory consumption under Miri. f2ff764 cuts the memory limit when #[cfg(miri)] is on, just in case, to keep the cache from growing too large. 2d4da73 adds a Cargo feature flag that controls two changes: first, it swaps in an unsafe block in the Histogram constructor that uses vec![0usize; BUCKETS], a pointer cast, and from_raw_parts() instead of Vec::resize_with(); second, it always returns early from debug_delay().

I found that the Metrics constructor caused an early memory allocation peak of ~900MB, because each Vec::resize_with() call would build many stacked borrows stacks with as many items in them as there were elements in the vector, by way of the Retag MIR statements in SetLenOnDrop::increment_len. The Miri-optimized code path sidesteps this, which speeds up test initialization and gets rid of the early memory spike.
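
A minimal sketch of that construction, using the names from the description above (the exact Histogram layout is not shown, and the BUCKETS value is illustrative):

use std::mem;
use std::sync::atomic::AtomicUsize;

const BUCKETS: usize = 256; // assumed value for illustration

// Allocate every bucket with a single zeroed allocation, then reinterpret
// the storage as AtomicUsize, which is documented to have the same size
// and bit validity as usize. This avoids the per-element retags that
// Vec::resize_with() incurs under Miri.
fn zeroed_buckets() -> Vec<AtomicUsize> {
    let mut raw: Vec<usize> = vec![0usize; BUCKETS];
    let ptr = raw.as_mut_ptr() as *mut AtomicUsize;
    let (len, cap) = (raw.len(), raw.capacity());
    mem::forget(raw); // ownership moves to the Vec built below
    unsafe { Vec::from_raw_parts(ptr, len, cap) }
}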

Without modification, debug_delay() induced stacked borrows to consume more memory each time it was called. Because it goes through TLS and RefCells, it made the stacked borrows stacks grow ever bigger, without limit; my hunch is that this costs linear memory and polynomial execution time per debug_delay() call. Since tests run with only one thread to begin with, returning early from this function is an easy win, and it lets us get significantly further before memory exhaustion.
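
A minimal sketch of the early return; the feature name miri_optimizations is an assumption, since the flag isn't named above:

pub fn debug_delay() {
    // Under the Miri-oriented feature flag, skip the delay machinery
    // entirely: tests run single-threaded, so nothing is lost, and the
    // TLS/RefCell accesses that bloat stacked borrows stacks are avoided.
    if cfg!(feature = "miri_optimizations") {
        return;
    }
    // ... original randomized delay/yield logic ...
}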

I decided to use a feature flag instead of cfg(miri) because I want to leave the door open to running tests under Miri without these changes, in cases where we can absorb the additional overhead on a smaller test, and want to ensure maximum applicability to real-world use.

Lastly, here are some further notes on memory overhead:

Of the three Histogram tests, it_works passes without issue under Miri, multithreaded isn't applicable, and high_percentiles got bogged down, consuming ~12GB before I stopped it. Presumably something in measure() is causing a lot of bookkeeping-intensive retags. Since the no_metrics feature is on by default, this doesn't require further action, but it's nice to know the shorter test passed.

The interpreter keeps a dead_alloc_map for its memory model, which grows over time; it accounted for ~0.7% of memory during one of my earlier test runs. Linearly-growing overheads like this are a good argument for splitting the tests across many separate miri invocations, so we can start with a clean slate for each one.

With my latest changes, Miri's allocations were dominated by short-lived vectors used for sorting struct members inside a visitor function for the Retag statement, while the peak memory usage, after a slow and steady rise, came from the stacked borrows stacks. Over 8 hours of runtime on the size_leak test, it churned through 146GB of allocations and hit a maximum size of 9GB before I stopped it. Individual stacked borrows stacks were much shorter, with an average length of around 256 Items (they use Vec, so almost all must have had the same capacity). As such, my tracing didn't identify any particular part of sled as inducing disproportionate memory consumption.

As mentioned above, -Zmiri-disable-stacked-borrows provides a steep discount on memory and CPU overhead, in exchange for losing some aliasing analysis. For example, the high_percentiles test passes within about a minute with this flag.

@spacejam spacejam added this to blocking reliability in v1 May 28, 2020
@divergentdave divergentdave force-pushed the david_miri_support branch 2 times, most recently from 348909e to f57eb28 Compare June 4, 2020 00:49
The pointer `self.current_block` is separately turned into a shared
reference and a Box in the same method, and moreover, the reference is
created before the Box and used after the Box. This results in a unique
reference from the Box and a shared reference from a separate
pointer-to-reference conversion coexisting, with different tags.

To fix this issue, the conversion from a raw pointer to a Box is put off
until the shared reference is no longer in use, so that the two
different tags won't pop each other off the stack.
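
A minimal sketch of the fixed ordering, with hypothetical names standing in for sled's actual types:

struct Block {
    next: *mut Block,
}

// Sketch only. Every use of the shared reference derived from the raw
// pointer finishes before Box::from_raw() mints a unique tag for the same
// memory; interleaving them the other way round pops the shared tag off
// the stacked borrows stack and trips Miri.
unsafe fn reclaim(current_block: *mut Block) {
    let shared: &Block = &*current_block;
    let _next = shared.next; // last use of the shared reference
    let owned: Box<Block> = Box::from_raw(current_block);
    drop(owned);
}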

Can't use compression because zstd relies on calling a native C library.

DirEntry::metadata() is built upon dirfd() and openat(), which aren't
yet shimmed in Miri. Instead, get the PathBuf for the DirEntry and call
std::fs::metadata() on that when using Miri.
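
A minimal sketch of that workaround; the helper name is illustrative, not sled's exact code:

use std::fs;
use std::io;

// Under Miri, avoid DirEntry::metadata() (which would need dirfd() and
// openat() shims) and stat the entry's full path instead.
fn entry_metadata(entry: &fs::DirEntry) -> io::Result<fs::Metadata> {
    if cfg!(miri) {
        fs::metadata(entry.path())
    } else {
        entry.metadata()
    }
}
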
Early return from debug_delay() and type-punning construction of
Histograms, to reduce the amount of Stacked Borrows overhead
@divergentdave
Copy link
Collaborator Author

All the necessary shims for this are now in the latest nightly builds of Miri! These commands now work.

cargo +nightly miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks --
cargo +nightly miri test --features=testing -- -Zmiri-disable-isolation -Zmiri-ignore-leaks -Zmiri-disable-stacked-borrows -Zmiri-disable-validation --

@spacejam spacejam merged commit c40e0d9 into spacejam:master Jun 30, 2020
@spacejam
Copy link
Owner

Great work, @divergentdave :)

@divergentdave divergentdave deleted the david_miri_support branch November 15, 2020 19:31