Qbvh implementation assumes dense keyspace for indices; QbvhStorage is not generic for arbitrary keyspace #117

Open
Dessix opened this issue Jan 6, 2023 · 0 comments

Dessix commented Jan 6, 2023

The current Qbvh implementation requires that the leaf content type be aware of its own index within the tree, which means that any consumer needs to manage these indices themselves and correlate them to application state directly.
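For context, the leaf-data trait in parry looks roughly like the following (paraphrased; the exact definition may differ between versions):

```rust
// Paraphrased from parry's partitioning module: leaf data must be Copy
// and must report the index the application assigned to it.
pub trait IndexedData: Copy {
    /// Placeholder value used for uninitialized slots.
    fn default() -> Self;
    /// The index this datum occupies in the consumer's key space.
    fn index(&self) -> usize;
}
```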

An alternate perspective on this problem is that GenericQbvh isn't generic enough to meaningfully allow custom storage implementations beyond "it's an array on (device)".

Problem

If I naively map application keys to arbitrary points in the usize space of IndexedData, memory usage explodes: the current implementation creates a dense vector in which the usize index is the literal offset into that vector, so mapping keys into (say) the full 32-bit range produces roughly 4 giga-elements, all default-initialized.
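A minimal sketch of that failure mode, assuming the current parry3d API (Qbvh::new, clear_and_rebuild, Aabb); HashedKey is a hypothetical key type standing in for a naive application-key-to-usize mapping:

```rust
use parry3d::bounding_volume::Aabb;
use parry3d::math::Point;
use parry3d::partitioning::{IndexedData, Qbvh};

// Hypothetical key: an application hash crammed into the usize space.
#[derive(Copy, Clone)]
struct HashedKey(usize);

impl IndexedData for HashedKey {
    fn default() -> Self {
        HashedKey(usize::MAX)
    }
    fn index(&self) -> usize {
        self.0
    }
}

fn main() {
    let mut qbvh: Qbvh<HashedKey> = Qbvh::new();
    // One leaf whose index lands near 2^32 forces the internal dense
    // proxy array (per this issue's observation) to cover that offset,
    // default-initializing on the order of 4 giga-elements for a
    // single real entry.
    let leaf = (
        HashedKey(4_000_000_000),
        Aabb::new(Point::origin(), Point::new(1.0, 1.0, 1.0)),
    );
    qbvh.clear_and_rebuild([leaf].into_iter(), 0.0);
}
```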

With the new "update" semantics from #113, it's possible to update the Qbvh instance on the fly, but any gap left in the index space produces corresponding runs of invalid entries in the array. That in itself isn't a problem, and is probably necessary for maintainable yet effective CUDA and SIMD implementations. It also isn't new: simply inserting values with non-contiguous indices produced the same gaps prior to #113.

The problem is the correlation with application state: when the set of application-level entities is not fixed, it becomes very difficult to assign and manage keys efficiently and to keep them in sync as that set changes.

Workaround

I've taken to using a SlabMap to correlate my application-specific keys with slab IDs, and using those IDs as the actual IndexedData key. Unfortunately, this means I then need to write custom visitors that resolve those IDs back to references to my actual key type, for further correlation back to application-level entities.
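A sketch of that workaround, using the `slab` crate as a stand-in for SlabMap; `EntityId` and `KeyBridge` are hypothetical names for illustration:

```rust
use slab::Slab;
use std::collections::HashMap;

// Hypothetical application-level key type.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
struct EntityId(u64);

/// Correlates sparse application keys with dense slab indices that are
/// safe to hand to the Qbvh as IndexedData indices.
struct KeyBridge {
    slots: Slab<EntityId>,             // dense index -> application key
    reverse: HashMap<EntityId, usize>, // application key -> dense index
}

impl KeyBridge {
    fn new() -> Self {
        Self { slots: Slab::new(), reverse: HashMap::new() }
    }

    /// Returns the dense index for `key`, allocating a slot if needed.
    fn index_for(&mut self, key: EntityId) -> usize {
        let slots = &mut self.slots;
        *self.reverse.entry(key).or_insert_with(|| slots.insert(key))
    }

    /// Resolves a dense index (e.g. from a Qbvh visitor hit) back to
    /// the application key for further correlation.
    fn key_for(&self, index: usize) -> Option<EntityId> {
        self.slots.get(index).copied()
    }

    /// Frees a key's slot so its dense index can be reused.
    fn remove(&mut self, key: &EntityId) {
        if let Some(index) = self.reverse.remove(key) {
            self.slots.remove(index);
        }
    }
}
```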

I've attempted to write a wrapper for this, but the generic functions needed to cross-correlate IDs end up duplicating a large number of the signatures already present on GenericQbvh.

Possible solution

If GenericQbvh were made to work in terms of a trait QbvhStorage<LeafData, LeafId> { ... } capable of defining the mapping from LeafId to LeafData references (or even to literal usize offsets into a vector of LeafData), then the dense representation could either be avoided entirely for truly arbitrary index values, or wrapped with stateful index assignment that translates sparse keys into dense storage.
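One possible shape for such a trait, purely illustrative (all names and signatures here are invented, not a proposed final API):

```rust
/// Illustrative only: one possible storage abstraction. `LeafId` is
/// whatever key type the application wants, and the storage decides
/// how (or whether) to densify it internally.
pub trait QbvhStorage<LeafData, LeafId> {
    /// Translates an application key into the dense offset used by the
    /// SIMD/CUDA-friendly internal layout, if the key is present.
    fn dense_offset(&self, id: &LeafId) -> Option<usize>;

    /// Borrows the leaf data stored under an application key.
    fn get(&self, id: &LeafId) -> Option<&LeafData>;

    /// Inserts or replaces a leaf, returning the dense offset the tree
    /// should record in its nodes.
    fn insert(&mut self, id: LeafId, data: LeafData) -> usize;

    /// Removes a leaf, freeing its dense offset for reuse.
    fn remove(&mut self, id: &LeafId) -> Option<LeafData>;
}
```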

In other words, one of the following two options:

  1. Allow arbitrary keys by providing a means of binding a HashMap representation or similar (may make SIMD implementation difficult)

  2. Make storage generic enough to allow a definition that automatically manages the index space given an arbitrary Eq + Ord + Hash key type, while still providing the current mode, which "cares" which index is which and assumes the application will manage those values.

I believe option 2 will be the best one to implement, as it maximizes functionality with minimal increase in maintenance cost. This assumes it is possible to perform the trait-level key mapping during insert/removal/update operations so that dense storage is maintained efficiently, without much impact on the SIMD and CUDA implementations.
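As a sketch of what option 2's automatic index management could look like internally (std-only, hypothetical names; a real version would live behind the storage trait and would need to notify the tree whenever a dense offset moves):

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Illustrative dense-backed storage for sparse keys: leaves live in a
/// contiguous Vec (SIMD/CUDA-friendly), while a HashMap maintains the
/// key -> dense-offset mapping across insert/remove.
pub struct DenseMappedStorage<K: Eq + Hash + Clone, LeafData> {
    leaves: Vec<LeafData>,      // dense, contiguous leaf storage
    keys: Vec<K>,               // dense offset -> key (for swap_remove fixups)
    offsets: HashMap<K, usize>, // key -> dense offset
}

impl<K: Eq + Hash + Clone, LeafData> DenseMappedStorage<K, LeafData> {
    pub fn new() -> Self {
        Self { leaves: Vec::new(), keys: Vec::new(), offsets: HashMap::new() }
    }

    /// Inserts or replaces a leaf, returning its dense offset.
    pub fn insert(&mut self, key: K, data: LeafData) -> usize {
        if let Some(&i) = self.offsets.get(&key) {
            self.leaves[i] = data;
            i
        } else {
            let i = self.leaves.len();
            self.leaves.push(data);
            self.keys.push(key.clone());
            self.offsets.insert(key, i);
            i
        }
    }

    /// Removes a leaf while keeping the storage dense.
    pub fn remove(&mut self, key: &K) -> Option<LeafData> {
        let i = self.offsets.remove(key)?;
        let data = self.leaves.swap_remove(i);
        self.keys.swap_remove(i);
        if i < self.leaves.len() {
            // The former last element moved into slot `i`; the tree's
            // nodes would also need to learn about this relocation,
            // which is where the SIMD/CUDA maintenance cost lives.
            self.offsets.insert(self.keys[i].clone(), i);
        }
        Some(data)
    }
}
```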
