[WIP] Build custom argsort for GPU quantile sketching. #9194
This PR optimizes GPU memory usage for GPU input (cupy, cudf) with QuantileDMatrix and RMM. The idea is to use an argsort instead of a value sort for quantile sketching. In XGBoost's GPU-based GK sketching, we need to sort the input data by feature index and value. During sorting, we copy out the value and feature index, which costs 8 bytes per element: 4 bytes for the value and 4 bytes for the feature index. An efficient parallel sort then requires a double buffer, leading to a total overhead of 16 bytes per element.
This PR introduces an argsort to replace the value sort. By limiting the size of each batch to the maximum of `std::uint32_t`, we can use `std::uint32_t` as the sorted index. During sorting, we write only to the sorted index and fetch the data in place without altering it. This way, with a double buffer, the overhead becomes 8 bytes per element (a `std::uint32_t` costs 4 bytes), halving the peak memory usage inside XGBoost (not counting the original input).

The optimization is only useful when the input is on GPU (cupy, cudf), QuantileDMatrix is used, and RMM is enabled. For a normal DMatrix, the data needs to be copied anyway unless it's constructed on GPU, so not much optimization can be done there. When RMM is not used, XGBoost can split the data into small batches based on the amount of available memory, although the splitting might negatively affect the sketching result.
An additional benefit is that we can sort with custom iterators like a transform iterator, whose value type may not be supported by the cub sort and may be arbitrarily large. I think this can be useful for other projects as well.
The initial benchmark shows some performance degradation (~30%) with this argsort, even with the merge sort replaced by a radix sort. This is likely due to inefficient global-memory access: we now fetch data from global memory in a completely random pattern, whereas without the argsort we can almost always fetch contiguous data. For XGBoost, the cost can be justified since we run the sketching only once during training and it is the bottleneck for memory usage.
I want to eventually upstream the changes to cub, depending on the developers there. At the moment, the customized radix sort is a drastically slimmed-down version of onesweep sort in cub.
Changes in the cub radix sort:

- `Entry` (in XGBoost)

Memory usage:

summary_ratio.csv