
support sequence interface like lightgbm.Sequence #10091

Open · xbanke opened this issue Mar 7, 2024 · 5 comments

xbanke commented Mar 7, 2024

Is it possible to support a sequence interface (an object with __getitem__ and __len__) in DMatrix without copying data?
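For reference, this is roughly how the LightGBM interface is used (a sketch based on the lightgbm.Sequence docs; the class and data below are only illustrative):

import numpy as np
import lightgbm as lgb

class LazyRows(lgb.Sequence):
    # Rows are produced on demand; LightGBM never needs a full in-memory copy.
    def __init__(self, data, batch_size=4096):
        self._data = data
        self.batch_size = batch_size  # rows fetched per call

    def __getitem__(self, idx):
        # idx may be an int or a slice; a numpy array handles both.
        return self._data[idx]

    def __len__(self):
        return len(self._data)

X = np.random.rand(10_000, 5)
y = np.random.rand(10_000)
ds = lgb.Dataset(LazyRows(X), label=y)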

trivialfis (Member) commented

Is there any use case where numpy/pandas and the like are not a better alternative?


xbanke commented Mar 7, 2024

For time-series data like stock exchange data, the task is to predict the next several days' return. Say there are 100 features and the data is rolled over a 20-day window. To fit a DMatrix, we have to shift the features 20 times, so memory usage becomes 20x, even though most of the data is duplicated. If I could define a custom __getitem__, it would greatly reduce memory usage.
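To make the memory issue concrete, here is a rough sketch of the kind of object I have in mind (hypothetical, since DMatrix does not accept such an object today): each rolled row is assembled on demand from the original features instead of storing 20 shifted copies.

import numpy as np

class RollingView:
    # Hypothetical lazy view: one rolled row is built per __getitem__ call,
    # so the feature matrix is stored once instead of 20 times.
    def __init__(self, features, window=20):
        self.features = features  # shape (n_days, n_features), stored once
        self.window = window

    def __len__(self):
        return len(self.features) - self.window + 1

    def __getitem__(self, i):
        # Concatenate the last `window` days into a single training row.
        return self.features[i:i + self.window].ravel()

features = np.random.randn(5000, 100).astype('float32')
view = RollingView(features, window=20)
row = view[0]  # shape (2000,), built on the fly, nothing duplicated up front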

BTW, please do me a favor, check #9625.

trivialfis (Member) commented

Currently, you can consume data in batches by using the callback function. I took a quick look at LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism.

See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py
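The gist of that demo, heavily condensed (a sketch; the demo itself is the authoritative version): a DataIter subclass hands one batch at a time to XGBoost through the callback, and the QuantileDMatrix is built directly from the iterator.

import numpy as np
import xgboost

class BatchIter(xgboost.DataIter):
    # Hand batches to XGBoost through the callback; nothing is concatenated
    # on the Python side.
    def __init__(self, batches):
        self._batches = batches  # list of (X, y) pairs
        self._it = 0
        super().__init__()

    def next(self, input_data):
        # Return 0 when exhausted, 1 otherwise; XGBoost calls this repeatedly.
        if self._it == len(self._batches):
            return 0
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self):
        self._it = 0

rng = np.random.default_rng(0)
batches = [(rng.standard_normal((1000, 10)), rng.standard_normal(1000)) for _ in range(4)]
Xy = xgboost.QuantileDMatrix(BatchIter(batches))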

trivialfis (Member) commented

> BTW, please do me a favor, check #9625.

Sure, will look into it.


xbanke commented Mar 8, 2024

> Currently, you can consume data in batches by using the callback function. I took a quick look at LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism.
>
> See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py

I have looked into this demo code. It looks like QuantileDMatrix consumes the data iterator more than once (4 times in my case). As a quantile structure, this saves a lot of memory. But for a ranking problem, how would one set group weights if necessary? My original demand is that the data be consumed at the training stage, not at the QuantileDMatrix.__init__ stage.
One more thing: there is a little trap one may fall into if not careful.

# run in version 1.7.6

import numpy as np
import pandas as pd
import xgboost as xgb

np.random.seed(42)

n_groups = 100
group_size = 2000
n_features = 10
n_levels = 20

rows = n_groups * group_size

# synthetic ranking data: 100 groups of 2000 rows each, 20 relevance levels
features = pd.DataFrame(np.random.randn(rows, n_features).astype('float32'), columns=[f'f{i:03d}' for i in range(n_features)])
qids = pd.Series(np.arange(rows, dtype='int') // group_size)
labels = pd.Series(np.random.randn(rows).astype('float32')).groupby(qids).rank(method='first').sub(1) // (group_size // n_levels)
weights = np.arange(1, 101)  # defined but not used below

# dmatrix = xgb.DMatrix(features, label=labels, qid=qids)
qmatrix = xgb.QuantileDMatrix(features, label=labels, qid=qids)

sub_rows = 10000
# predict on the last rows with a fresh QuantileDMatrix vs. a plain DMatrix
sub_qmatrix = xgb.QuantileDMatrix(features.tail(sub_rows))
sub_dmatrix = xgb.DMatrix(features.tail(sub_rows))

params = {
    'objective': 'rank:pairwise',
    # 'objective': 'multi:softprob',
    # 'num_class': n_levels,
    
    'base_score': 0.5,
    # 'lambdarank_pair_method': 'mean',
    # 'lambdarank_num_pair_per_sample': 1,
    'booster': 'gbtree',
    'tree_method': 'hist',
    'verbosity': 1,
    # 'seed': 42,
    'learning_rate': 0.1,
    'max_depth': 6,
    'gamma': 1,
    'min_child_weight': 4,
    'subsample': 0.9,
    'colsample_bytree': 0.7,
    'nthread': 20,
    'reg_lambda': 1,
    'reg_alpha': 1,
    'eval_metric': ['ndcg@100', 'ndcg@500', 'ndcg@1000'],
}

booster = xgb.train(params, qmatrix, 100, verbose_eval=10, evals=[(qmatrix, 'train')]) 


preds_d = booster.predict(sub_dmatrix)
preds_q = booster.predict(sub_qmatrix)
preds_o = booster.predict(qmatrix)[-sub_rows:]

assert np.allclose(preds_d, preds_q)  # False
assert np.allclose(preds_o, preds_q)  # False
assert np.allclose(preds_o, preds_d)  # True

The script above will raise an assertion error: the first two checks fail while the last one passes. So if one trains a booster with a QuantileDMatrix and then predicts with a QuantileDMatrix that does not originate from the training one, wrong predictions may occur, since the histogram split points change, I guess.
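If I read the docs right, the mismatch can be avoided by building the second QuantileDMatrix with the training one passed as ref, so both share the same quantile cuts (a sketch continuing the script above, assuming ref does what I think):

# Reuse the training matrix's histogram cuts for the prediction matrix.
sub_qmatrix_ref = xgb.QuantileDMatrix(features.tail(sub_rows), ref=qmatrix)
preds_q_ref = booster.predict(sub_qmatrix_ref)
assert np.allclose(preds_d, preds_q_ref)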
