Hi, thanks for providing this amazing package.

I've been moving from R to Python and keep getting lost in the different names for the same parameters. I also noticed that when initializing an XGBoost model, it doesn't validate the parameters; for example,
both work fine. And after reading the source code, I can't find how the "lala" parameter is used anywhere.
Not validating parameters leaves the door open for bugs when users move between different XGBoost wrappers. Previously I was using the sklearn wrapper, where the regularisation parameter is called "learning_rate", while in xgb.cv it is named "eta". I wasn't aware of this bug until I saw someone else's code, by which point my program had already been running for 30 hours...
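The check being requested here could look something like the following minimal sketch. This is not XGBoost's actual implementation: the set of known names and the alias table are illustrative and deliberately incomplete, and `validate_params` is a hypothetical helper.

```python
# Minimal sketch of the parameter validation this issue asks for.
# KNOWN_PARAMS and ALIASES are illustrative, NOT XGBoost's real lists.
KNOWN_PARAMS = {"eta", "max_depth", "objective", "subsample", "lambda"}

# Cross-wrapper aliases, e.g. sklearn's learning_rate vs the core's eta.
ALIASES = {"learning_rate": "eta", "reg_lambda": "lambda"}

def validate_params(params):
    """Raise on parameter names that are neither known nor an alias."""
    unknown = [k for k in params if k not in KNOWN_PARAMS and k not in ALIASES]
    if unknown:
        raise ValueError(f"Unknown parameters: {unknown}")

# A typo such as "lala" would then fail fast instead of being ignored:
validate_params({"eta": 0.1, "max_depth": 6})   # passes silently
try:
    validate_params({"lala": 1})
except ValueError as e:
    print(e)
```

With a check like this, a misspelled parameter fails immediately rather than after hours of training with a silently ignored setting.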
Yi
As a side note: it doesn't matter whether you call your parameter "eta" or "learning_rate" in the Python wrapper. It is not documented (at least as far as I'm aware), but the core part of XGBoost just scans the params for known names.
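That scanning behavior might be sketched roughly as follows. This is an assumption about the mechanism, not XGBoost's actual core code; `resolve_params` and the alias table are hypothetical.

```python
# Rough sketch of resolving aliases by scanning params for known names:
# "learning_rate" and "eta" both end up setting the same internal field.
ALIASES = {"learning_rate": "eta"}  # illustrative alias table

def resolve_params(params):
    """Map each supplied name onto its canonical core name."""
    resolved = {}
    for name, value in params.items():
        resolved[ALIASES.get(name, name)] = value
    return resolved

print(resolve_params({"learning_rate": 0.3}))  # {'eta': 0.3}
print(resolve_params({"eta": 0.3}))            # {'eta': 0.3}
```

Note that a scheme like this resolves known aliases but, on its own, still says nothing about names it does not recognize, which is exactly the gap the original report is about.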
Along the same lines, I found important differences in the results when naming the boosting parameters with the prefix "bst:". What's wrong with that? I also tried adding a parameter with a mistake in it, and no error was returned...
@ivallesp the "bst:" prefix is deprecated in the newest version. If you find it anywhere in the docs, please point it out and send a PR to fix it. Thanks so much.