
pack-generation MVP #67

Open
33 of 43 tasks
Byron opened this issue Apr 9, 2021 · 9 comments
Labels
C-tracking-issue An issue to track the progress of multiple PRs or issues

Comments

@Byron
Owner

Byron commented Apr 9, 2021

gen-pack-like plumbing command

Generate a pack using some selection of commits or possibly objects. Drives different kinds of iteration as well as ways of building a pack.

Progress

  • sort out todos in test
  • counting performance
    • things to try are intrusive collections and maybe the caches-rs frequency-based LRU
  • gixp pack-create
    • use Easy*
    • allow creating thin packs (really just passing on a flag from options)
    • make cache sizes configurable for easier performance tuning/testing
    • See if ahash can speed up single-threaded counting significantly enough (no, maybe 10%)
    • What's so slow about counting? Tree traversal and the object look-ups it requires. However, what's really slow is accessing the hashmap - it needs its own key generator similar to this (see the hasher sketch after this list). Even that doesn't bring more than, say, 15%. The object set git collects is way bigger than our current one, which may help us when trying to focus on copying packs without creating new deltas. So delta-handling might be an entirely different story for reasons of memory consumption. However, how can this get faster? Also note that git has a pack cache, but it's really simple, without frequency analysis or anything fancy.
    • non-deterministic counting somehow ends up with three times more objects than there are
    • a separate argument to control the number of threads used while counting. I saw only a 4x speedup when using 10x the cores; dashmap doesn't scale perfectly, unfortunately. There seems to be no way to reasonably speed this up across cores, so getting the best single-core performance seems key, but even there it's not getting faster.
    • try hash_hasher for single-threaded counting to see how it performs compared to the standard hash-set with hasher override.
      • see if their build-hasher implementation can be used in dashmap for the same purpose.
    • Avoid Dashmap in favor of a shared threadsafe tree implementation prodash#11 - prodash would have to get faster though, as with these numbers it will cause a slowdown.
    • try leapfrog as dashmap replacement for more performance, once it's running on stable.
    • ⚠️ Make use of reachability bitmaps
    • deltification
      • research: check whether git-fastimport builds quick packs and see how deltification works there - delta.c is just 500 lines
      • build diff-deltas based on the result when comparing two buffers
      • see how deltification fits into the machinery to find/try comparing two buffers and find good ones
    • validate that object replacement is correctly implemented - right now it's forced to be ignored during pack-gen, see ignore_replacements. It's correct to ignore replacements during reachability traversal, i.e. pack generation.
    • obtain a sort key to help with determining bases
    • create an index file from the pack, for now just by validating it aka pack-index-from-data.
    • journey tests
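
As a sketch of the hasher idea from the counting items above - illustrative only, not the actual gitoxide code, and `ObjectId` is a stand-in for the real id type: since object ids are already uniformly distributed SHA-1 hashes, a `Hasher` can pass their leading 8 bytes through instead of re-hashing them, which is the idea behind crates like hash_hasher.

```rust
use std::collections::HashSet;
use std::hash::{BuildHasherDefault, Hasher};

// Pass-through hasher: keeps the last 8 bytes written as the hash value,
// good enough when keys are already cryptographic hashes.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        let mut buf = [0u8; 8];
        let n = bytes.len().min(8);
        buf[..n].copy_from_slice(&bytes[..n]);
        self.0 = u64::from_ne_bytes(buf);
    }
}

type ObjectId = [u8; 20]; // stand-in for the real object id type
type SeenSet = HashSet<ObjectId, BuildHasherDefault<IdentityHasher>>;

fn main() {
    let mut seen: SeenSet = HashSet::default();
    assert!(seen.insert([0xab; 20]));
    assert!(seen.contains(&[0xab; 20]));
}
```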

Command-lines

  • create a full pack fast, like clone
cargo build  --release --no-default-features --features max,cache-efficiency-debug --bin gix && /usr/bin/time -lp ./target/release/gix -v free pack create -r  tests/fixtures/repos/rust.git --statistics  --thin -e tree-traversal --pack-cache-size-mb 200 --object-cache-size-mb 100  HEAD --nondeterministic-count
  • create a partial pack for fetches
cargo build  --release --no-default-features --features max,cache-efficiency-debug --bin gix && /usr/bin/time -lp ./target/release/gix -v free pack create -r  tests/fixtures/repos/rust.git --statistics  --thin -e tree-diff --pack-cache-size-mb 400 --object-cache-size-mb 100  < tests/fixtures/repos/rust.git/first-thousand.commits
  • create a pack with git
echo HEAD | /usr/bin/time -lp  git -C .  pack-objects --all-progress --stdout --revs >/dev/null

Out of scope

  • a prototype of a server side sending a pack with only the objects the other side doesn't have.
  • reuse existing deltas - doing so would save a lot of time and avoid the need to implement actual delta compression. One would only have to get the order of entries right to assure consistency. Without that it's not really usable.
  • a way to write an index file at the same time, ideally kept entirely separate as there isn't always a need for one
  • A gixp subcommand to make the functionality available for stress testing and performance testing
  • Machinery to produce object deltas.

User Stories

I have a system image, which contains potentially 10k to 100k blobs taking up a total of 100M-1G, many of them binary (such as executables and libraries), and I don't know whether the server has any of those blobs or not. I want to turn that into a standalone commit (typically with no parents), push that commit as a compressed pack while (easier) transferring no blobs or trees the server already has, (medium) delta-compressing blobs reasonably against each other, and (hard) delta-compressing any blobs or trees vs the most similar ones the server does have to make a thin-pack. If the server doesn't have anything useful I want to recognize that quickly and just push a reasonably compressed full pack. Metrics I care about: the server spending as little memory as possible incorporating the pack into its repository (to be immediately usable), the transfer being limited only by bandwidth on either fast (100Mbps) or slow (1Mbps) connections to the server, and getting decent compression and delta-compression to help with the slow-connection case.

The user is working in a git repository, such as the Linux kernel repository. As fast as possible, I want to figure out the changes from their most recent commit (ignoring the index), create a new commit with their current commit as a parent, and then push that commit to the server (along with whatever the server doesn't have). Same easy/medium/hard as above, same metrics as above.

Interesting

Archive

Progress

  • stream pack entries (base objects only) from an iterator of input objects

  • write pack entries to a V2 pack

  • clone-like - the other side has no objects

    • write all objects based on unique objects visible in trees using tree-traversal.
  • fetch-like - the other side has some objects

    • write only changed objects based on another party which has some of our objects, but not all, employing tree-diff capabilities.
  • Handle tag objects - their pointee must be added to the set as well.

  • Make it so that the total number of objects to be written is known in advance (see the header sketch after this list)

    • if the pack header didn't have to contain the number of objects, or if a bogus count were acceptable, this wouldn't be necessary and would allow a much faster start of the operation. Does git allow invalid counts?
  • Statistics about re-used objects and repacked ones, maybe more. This is critical to eventually avoid repacking most objects.

    • for counts (note: diffs decode about 4x more objects than traversals)
    • for entries
    • pack-create program writes properly named pack files and outputs statistics with --statistics
  • restore chunk-ordering to allow multi-threaded generators to yield the same pack all the time (chunks may be out of order)

  • figure out why the pack hash in parallel mode is still not reproducible

  • write object entries directly, without forcing their compressed data to be copied into an output::Entry

    • Let's keep copying it as it simplifies the writer and avoids it having to do pack lookups on a single thread. This also means that more memory is used while chunks of objects are in flight and that more memory allocation is done. Probably worth it.
  • re-use delta objects as well and write their offsets correctly.

  • gixp pack-create

    • figure out why git with one thread is faster when enumerating objects than us with 4 :D (with the Rust repo in particular)
      • dashmap::insert() costs 41% of the runtime, apparently. A single-threaded counter might be beneficial as it could use non-threadsafe types.
      • Using a much bigger pack cache helps a lot, cutting counting time by 60s and making us just 20s slower than git itself.
      • pack-locations are looked up in single-threaded mode, but that's fast and not doing so doesn't significantly speed things up
      • ⚠️ Note that thus far we were unable to improve counting performance
    • can't pack the Rust repository as it can't find an object that it apparently traverses and that does not exist (221483ebaf45df5c956adffee2d4024e7c3b96b8, 6c4f4e1990b76be8a07bde1956d2e3452fd55ee4, 7bda1161a37ff51f254ff0a7862abe6dc54fdb36) - it's many objects actually with non-deterministic counting
    • find a way to avoid collecting input objects beforehand; it's about error handling and types here rather than a technical requirement.
    • a list of objects from stdin (like created by git rev-list) (AsIs traversal is default here)
    • a set of tips for internal rev-list traversal of the commit graph with settings for traversal or diff based expansion, with AsIs being an option too
    • tui support
    • A way to place and control object data caches for 4x and more speedups of tree-diffs
    • Add support for returning displaced entries for re-use. servo/uluru#22 - when landed, can bring back 8d49976 for reduction of allocation pressure.
  • Count objects performance improvement #167

  • Improving counting performance #170
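
As context for the object-count item above, a minimal sketch of the pack v2 header layout (this is the documented git pack format; the function name is illustrative): the entry count is the third header field, written before any entry, which is why it must be known in advance unless a bogus count were acceptable.

```rust
use std::io::{self, Write};

// Pack v2 header: 4-byte signature, 4-byte version, 4-byte object count,
// all before the first entry - so the count must be known up front.
fn write_pack_header(mut out: impl Write, num_objects: u32) -> io::Result<()> {
    out.write_all(b"PACK")?; // signature
    out.write_all(&2u32.to_be_bytes())?; // version 2, network byte order
    out.write_all(&num_objects.to_be_bytes())?; // entry count
    Ok(())
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    write_pack_header(&mut buf, 42)?;
    assert_eq!(buf.len(), 12);
    assert_eq!(&buf[..4], b"PACK");
    Ok(())
}
```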

Performance Opportunities

  • Need to decompress entries to find their length - it would save a lot of time if we knew where the next entry begins.
    • The index has that information but only if we build a sorted vec of all offsets. How would that fit in? Maybe a high-performance trait with more complex call signatures to allow building caches?
    • No, interestingly this makes hardly any difference - maybe 1.5s out of 48 at most (see the sketch below for the offset idea). Implemented in ad6d007 as
      .decompress_entry(&entry, buf)
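
A sketch of the offset idea above - hypothetical names, assuming the index's entry offsets and the pack's total length are at hand (SHA-1 packs end in a 20-byte trailing hash): sorting the offsets yields each compressed entry's on-disk size without decompressing anything.

```rust
// Given entry offsets, each entry ends where the next begins
// (or where the 20-byte pack trailer starts).
fn entry_sizes(mut offsets: Vec<u64>, pack_len: u64) -> Vec<(u64, u64)> {
    offsets.sort_unstable();
    let mut sizes = Vec::with_capacity(offsets.len());
    for (i, &start) in offsets.iter().enumerate() {
        let end = offsets.get(i + 1).copied().unwrap_or(pack_len - 20);
        sizes.push((start, end - start)); // (offset, compressed size)
    }
    sizes
}

fn main() {
    // 12-byte header, three entries, 20-byte trailer: 120 bytes in total.
    assert_eq!(
        entry_sizes(vec![12, 40, 70], 120),
        vec![(12, 28), (40, 30), (70, 30)]
    );
}
```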

Notes

packs in general

Pack generation

  • They use a list of objects to handle along with workstealing among threads.
    • These threads regularly pause their work to allow the stealing to happen safely, which might be why that doesn't scale? (Actually, that part does seem to scale at least with the number of cores I have; it's pack resolution that doesn't scale.)
  • probably a good idea to not try to find deltas for 'big' objects and make the threshold configurable.
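
A sketch of the threshold idea above - names are illustrative, and the default mirrors git's core.bigFileThreshold (512 MiB), not an existing gitoxide option:

```rust
// Skip delta-search for objects beyond a configurable size: huge blobs
// rarely delta well and dominate memory use during deltification.
struct DeltaConfig {
    big_file_threshold: u64,
}

impl Default for DeltaConfig {
    fn default() -> Self {
        DeltaConfig { big_file_threshold: 512 * 1024 * 1024 }
    }
}

fn is_delta_candidate(object_size: u64, cfg: &DeltaConfig) -> bool {
    object_size < cfg.big_file_threshold
}

fn main() {
    let cfg = DeltaConfig::default();
    assert!(is_delta_candidate(64 * 1024, &cfg));
    assert!(!is_delta_candidate(1 << 30, &cfg));
}
```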
@Byron
Owner Author

Byron commented Apr 9, 2021

Related to #53

@Byron
Owner Author

Byron commented Apr 18, 2021

In pursuit of better control over pack generation, and to pave the way for improved async integration, I figured that having an Iterator interface would be a good idea.

Now it's possible to step through parallel computations.

However, the respective implementation has to be marked unsafe due to the use of a scoped thread whose join handle is exposed and can thus be leaked.

pub unsafe fn new<Input, ThreadStateFn, ConsumeFn, I, O, S>(

Depending on where this is exposed, unsafe might bubble up even further - after all, anything that holds the SteppedReduce can also leak it.

My intuition is to stop bubbling this up beyond git-features just to keep it practical, even though technically it's incorrect. What do you think, @joshtriplett?
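
To illustrate the hazard (a hand-rolled sketch, not the actual SteppedReduce code): if a value owns a guard for a thread that borrows stack data, `std::mem::forget` skips its `Drop`, the thread is never joined, and it would keep reading a stack frame that no longer exists - which is why the constructor must be `unsafe`.

```rust
// Stand-in for a computation whose worker thread borrows `data`.
struct BorrowingComputation<'a> {
    data: &'a [u32], // imagine: join handle of a thread reading this
}

impl Drop for BorrowingComputation<'_> {
    fn drop(&mut self) {
        // a real implementation would join the worker here, ending the borrow
    }
}

fn main() {
    let data = vec![1, 2, 3];
    let c = BorrowingComputation { data: &data };
    std::mem::forget(c); // Drop never runs; a leaked worker would outlive `data`
    drop(data); // the buffer is gone while the "thread" might still be reading it
}
```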

@joshtriplett
Sponsor Contributor

It's a CPU-intensive operation; my first instinct would be to run it normally and use unblock or similar to run it on a blocking thread.

Trying to structure the computation so that it happens incrementally seems incredibly painful. And in particular, trying to adapt an operation that happens in a thread to happen incrementally seems like it's incurring all the pain of async without any language support for async.

I would suggest building the initial MVP in a synchronous fashion, on the theory that it can still be run in a background thread and controlled via an async mechanism.

I definitely don't think it's OK to use a scoped thread and hide the unsafe, if the unsafe isn't truly encapsulated to the point that you can't do anything unsound with the interface.
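
A minimal sketch of that suggestion - `generate_pack_entries` is a hypothetical stand-in for the synchronous generator: run it on a plain thread and stream results through a bounded channel, which an async executor could wrap via `unblock`/`spawn_blocking`-style helpers.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the synchronous pack generator.
fn generate_pack_entries(send: mpsc::SyncSender<Vec<u8>>) {
    for id in 0u8..3 {
        // blocks when the channel is full: backpressure for free
        send.send(vec![id]).expect("receiver alive");
    }
}

fn main() {
    let (tx, rx) = mpsc::sync_channel(16); // bounded queue
    let worker = thread::spawn(move || generate_pack_entries(tx));
    for entry in rx {
        println!("got entry of {} byte(s)", entry.len());
    }
    worker.join().unwrap();
}
```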

@joshtriplett
Sponsor Contributor

One other thought on that front: compared to the cost of generating a pack, one or two allocations to set up things like channels or Arc will not be an issue.

@Byron
Owner Author

Byron commented Apr 19, 2021

Thanks for sharing. The main motivator for using scoped threads is to allow standard stack-based operation without any wrapping - it's not at all about allocations, merely about usability and the least surprising behaviour. Truth be told, I cannot currently imagine how traversal will play into 'static threads when Arcs are involved, especially along with traits representing an Object (an attempt to allow things like traversing directory trees).

What I take away is the following:

  • let's not hide unsafe unless it's encapsulated
  • let's not make step-wise computation or extreme async friendliness a requirement for the MVP if something like blocking would work, too.

I hope to overcome my 'writer's block' and just write the missing bits to be able to see through the whole operation and play with the parts more until I find a version of the API that feels right.

@Byron
Owner Author

Byron commented Apr 19, 2021

The unsafe is now legitimately gone thanks to the use of standard 'static threads. I could assure myself that the mechanism still works even with Arcs involved, despite being a little more difficult to use at the call site. Callers will need to prepare a little more to start the procedure, which is probably acceptable given how long it runs and how 'important' it is.

let's not make step-wise computation or extreme async friendliness a requirement for the MVP if something like blocking would work, too.

This capability probably doesn't have to be removed just yet, as the machinery itself is exactly the same as the one already used in in_parallel(), except that now there is more control at the call site. This comes at the cost of having to deal with Arc for the object database, and of course the API now has yet another way to be called. Those who don't need fine-grained control will not get the best experience that way.

However, it's possible to eventually provide a non-static variant of pack generation too, which would work similarly to pack verification (it uses the non-static version of the machinery), by factoring out the parts that are shared.

Another argument for trying hard to make pack generation play well in an async context is certainly that it commonly happens as part of network interactions, like uploading a pack. Right now much of this is a little hypothetical, as code to prove it works nicely doesn't exist yet, but I am confident it will work as envisioned.

Finally, since both machineries, static and non-static, are the same at their core, it should always be possible to return to the non-static one at very low cost should everything else fail.
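
For illustration, the Arc trade-off in a nutshell (`Odb` is a hypothetical stand-in for the object database): a 'static worker cannot borrow from the caller's stack, so the database has to be shared via `Arc` and callers prepare a clone up front.

```rust
use std::sync::Arc;
use std::thread;

struct Odb; // stand-in for the object database
impl Odb {
    fn find(&self, _id: u64) {}
}

fn main() {
    let odb = Arc::new(Odb);
    let handle = {
        let odb = Arc::clone(&odb); // the extra preparation at the call site
        thread::spawn(move || odb.find(42)) // 'static thread: no stack borrows
    };
    handle.join().unwrap();
    odb.find(7); // the original Arc remains usable on the calling thread
}
```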

@Byron
Owner Author

Byron commented Apr 19, 2021

On another note: I am also thinking about backpressure and back-communication. Backpressure is already present, as threads will block once the results channel is full. Back-communication should also be possible if the handed-in closures get access to another synchronized channel of sorts that tells them when to deliver the pack entries they have been working on. Such an algorithm would work continuously (probably until it can't meaningfully improve the deltas) until it is told to deliver what's there right now before continuing. Such a message could then be delivered the moment somebody actually calls .next() on the iterator, which in turn will be based on how fast data can be written to the output (sync or async).

Even though the MVP will not do back-communication I don't see why it shouldn't be possible to implement it. What's neat is that no matter how the machinery operates, in the moment the iterator is dropped it will (potentially with some delay), stop working automatically.
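
A hypothetical sketch of that back-communication scheme (none of these names exist in the codebase): workers keep improving their chunk until a Flush message - what the iterator's next() would send - tells them to deliver what they have, and dropping the control channel stops them.

```rust
use std::sync::mpsc;

enum Control {
    Flush, // deliver what's there right now, then continue
    Stop,
}

fn worker(ctrl: mpsc::Receiver<Control>, out: mpsc::SyncSender<Vec<u8>>) {
    let mut pending = Vec::new();
    loop {
        pending.push(0u8); // stand-in for "keep improving deltas"
        match ctrl.try_recv() {
            Ok(Control::Flush) => {
                out.send(std::mem::take(&mut pending)).ok();
            }
            Ok(Control::Stop) | Err(mpsc::TryRecvError::Disconnected) => return,
            Err(mpsc::TryRecvError::Empty) => {}
        }
    }
}

fn main() {
    let (ctrl_tx, ctrl_rx) = mpsc::channel();
    let (out_tx, out_rx) = mpsc::sync_channel(1);
    let t = std::thread::spawn(move || worker(ctrl_rx, out_tx));
    ctrl_tx.send(Control::Flush).unwrap(); // what .next() would do
    println!("delivered {} byte(s)", out_rx.recv().unwrap().len());
    drop(ctrl_tx); // dropping the "iterator" stops the worker automatically
    t.join().unwrap();
}
```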

@Byron
Owner Author

Byron commented Apr 25, 2021

The first break-through: pack files (base objects only) can now be written from object ids.

Byron added a commit that referenced this issue Sep 21, 2021
…#67)

…which is fair as it contains borrowed RefCells, which really would have
to be owned to work for this, which would in turn require the Ancestor's
struct to be kind of self-referential
Byron added a commit that referenced this issue Sep 21, 2021
…and it's nice to see that overall, it's very flexible even though it
may require acrobatics to get it in and out of Easy mode.
Byron added a commit that referenced this issue Sep 21, 2021
…we could also double-check the produced crc32 values, but
assigning consecutive numbers seems like it's doing the job.
Byron added a commit that referenced this issue Sep 21, 2021
…ing… (#67)

…which is what happens when counting objects for fetches where only
changed objects should be sent.
Byron added a commit that referenced this issue Sep 22, 2021
…to avoid dealing with missing objects.

It's still a good idea to handle these gracefully though, git itself
seems to ignore them.
Byron added a commit that referenced this issue Sep 22, 2021
…for about 10% of performance, speeding up these lookups just a little
bit.
Byron added a commit that referenced this issue Sep 22, 2021
…by implementing the few bits that are needed ourselves.
This might open up future optimizations, if they matter.

Controlling the nom parsers on a token by token basis via iterators
probably already saves a lot of time, and these are used everywhere.
Byron added a commit that referenced this issue Sep 22, 2021
Byron added a commit that referenced this issue Dec 6, 2021
Instead, share it by reference; it's `Sync` after all.

This issue was introduced when switching to a `Send + Clone` model,
instead of `Send + Sync`, to allow thread-local caches in database
handles of all kinds.
Byron added a commit that referenced this issue Feb 15, 2022
…nting (#67)

The efficiency of multi-threaded counting is low per core, and while some speedup might be desirable, one might not want to commit all cores to this amount of waste.
@Byron
Owner Author

Byron commented Nov 28, 2022

About opportunities for performance improvements

@pascalkuthe I have created a quick profile by running cargo build --release --no-default-features --features max,cache-efficiency-debug --bin gix && /usr/bin/time -lp ./target/release/gix -v free pack create -r ../../../git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux --statistics --thin -e tree-traversal --pack-cache-size-mb 200 --object-cache-size-mb 100 HEAD (single-threaded), and here is the result:

(screenshot: profile of the single-threaded counting run, 2022-11-28)

My takeaways are as follows:

  • indeed, in single-threaded mode the hashset performance doesn't seem to be the issue
  • a little less than half the time of the counting phase is spent with getting objects from the object database
  • a lot of time is spent parsing trees, and memchr seems particularly hot. It's all about finding a null-byte here, and I wonder whether this can get any faster.
  • when digging deeper, it shows that many of these ~5s buckets end up spending most of their time in the object database
  • and we have 3 seconds in hashbrown::set::HashSet::insert()

This is just a quick summary; right now I am missing a dataset comparing git with gix across various repos of different sizes to understand the size of the performance gap in single-threaded mode. From there it might be possible to figure out what to focus on.

While at it, profiling git might also be useful, which (I think) I did in the past as well. Unfortunately my memory (as well as the notes about this here) is spotty.
