I'm trying to train a gradient boosting model on 50k examples with 100 numeric features. XGBClassifier finishes 500 trees in 43 seconds on my machine, while GradientBoostingClassifier grows only 10 trees(!) in 1 minute and 2 seconds :( I didn't bother trying to grow 500 trees, as that would take hours. I'm using the same learning_rate and max_depth settings; see below.
What makes XGBoost so much faster? Does it use some novel implementation of gradient boosting that the sklearn developers don't know about? Or is it "cutting corners" and growing shallower trees?
P.S. I'm aware of this discussion: https://www.kaggle.com/c/higgs-boson/forums/t/10335/xgboost-post-competition-survey but couldn't find the answer there.
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1,
gamma=0, learning_rate=0.05, max_delta_step=0, max_depth=10,
min_child_weight=1, missing=None, n_estimators=500, nthread=-1,
objective='binary:logistic', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=0, silent=True, subsample=1)
GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance',
max_depth=10, max_features=None, max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10,
presort='auto', random_state=None, subsample=1.0, verbose=0,
warm_start=False)
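In case it helps, here is a minimal sketch of how I timed the two. The make_classification data is just an illustrative stand-in for my real 50k x 100 dataset, and the exact timings will of course differ on other machines; the hyperparameters are the same ones shown above.

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

# Synthetic stand-in for my real data: 50k rows, 100 numeric features.
X, y = make_classification(n_samples=50000, n_features=100, random_state=0)

xgb = XGBClassifier(n_estimators=500, learning_rate=0.05, max_depth=10,
                    nthread=-1, seed=0)
gbm = GradientBoostingClassifier(n_estimators=10, learning_rate=0.05,
                                 max_depth=10, random_state=0)

# Fit each model once and report wall-clock training time.
for name, model in [("XGBClassifier", xgb),
                    ("GradientBoostingClassifier", gbm)]:
    start = time.time()
    model.fit(X, y)
    print("{}: {:.1f} s".format(name, time.time() - start))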