zzd1992/GBDTMO

GBDT-MO or XGBoost?


I found that you also use a histogram bucketing strategy to reduce the number of candidate splits when growing trees, which XGBoost already does with its histogram and quantile methods (a sketch of this trick follows below).
Also, the formulations are the same, except that each leaf holds a multi-output vector.
So, in this case, what is the additional improvement?
Note also that TFBT uses histograms as well, since it is based on XGBoost.
Check also this issue
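For context, the histogram trick being discussed can be shown in a few lines. Below is a minimal, hypothetical Python sketch of histogram-based split finding with the standard second-order gain; the function name, bin count, and regularizer `lam` are illustrative and not taken from GBDT-MO or XGBoost.

```python
import numpy as np

def best_split_histogram(x, grad, hess, n_bins=32, lam=1.0):
    """Bucket feature values into bins, accumulate gradient/hessian
    sums per bin, and scan bin boundaries as split candidates."""
    # Equal-width bucketing of the raw feature values.
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bins = np.digitize(x, edges[1:-1])  # bin index in [0, n_bins - 1]

    # One pass over the data: per-bin gradient/hessian sums.
    G = np.bincount(bins, weights=grad, minlength=n_bins)
    H = np.bincount(bins, weights=hess, minlength=n_bins)

    G_tot, H_tot = G.sum(), H.sum()
    best_gain, best_edge = 0.0, None
    G_left = H_left = 0.0
    # Only n_bins - 1 candidates instead of one per sample.
    for b in range(n_bins - 1):
        G_left += G[b]
        H_left += H[b]
        G_right, H_right = G_tot - G_left, H_tot - H_left
        gain = (G_left**2 / (H_left + lam)
                + G_right**2 / (H_right + lam)
                - G_tot**2 / (H_tot + lam))
        if gain > best_gain:
            best_gain, best_edge = gain, edges[b + 1]
    return best_edge, best_gain

# Tiny demo: the gradient flips sign around x = 0.3,
# so the best split should land near that value.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
grad = np.where(x > 0.3, -1.0, 1.0)
hess = np.ones_like(x)
print(best_split_histogram(x, grad, hess))
```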

To my knowledge, XGBoost uses quantiles rather than histograms. They are similar but not the same.
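To make this distinction concrete, here is a small generic sketch (not code from either library) contrasting the two binning schemes on a skewed feature: equal-width histogram bins space their cut points evenly over the value range, while quantile bins place cut points so that each bin holds roughly the same number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # heavily skewed feature

n_bins = 8
# Histogram: cut points spaced evenly over the value range.
hist_edges = np.linspace(x.min(), x.max(), n_bins + 1)
# Quantile: cut points chosen from the empirical distribution.
quant_edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))

print("equal-width counts:", np.histogram(x, bins=hist_edges)[0])
print("quantile counts:   ", np.histogram(x, bins=quant_edges)[0])
# The first line is dominated by the leftmost bins; the second is
# roughly uniform, so quantile cuts adapt to skewed distributions.
```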

As I mentioned, when I built this project I wasn't aware of TFBT. The key idea of this project is the same as TFBT's.
The differences are:

  1. This project supports sparse predictions at the leaves (sketched below).
  2. Some implementation details differ.
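As a rough illustration of difference 1, the sketch below shows one plausible reading of a sparse multi-output leaf: compute the usual second-order leaf value for every output, then keep only the top-k outputs ranked by gain and zero the rest. All names and the selection rule here are hypothetical; GBDT-MO's actual mechanism may differ in its details.

```python
import numpy as np

def sparse_leaf_value(G, H, k=2, lam=1.0):
    """Given per-output gradient/hessian sums G and H at a leaf,
    keep the k outputs with the largest gain and zero the rest."""
    dense = -G / (H + lam)      # standard second-order leaf value
    gain = G**2 / (H + lam)     # per-output contribution to the loss
    keep = np.argsort(gain)[-k:]
    sparse = np.zeros_like(dense)
    sparse[keep] = dense[keep]
    return sparse

# Four outputs; only the two with the largest gradients survive.
G = np.array([4.0, -0.1, 0.05, -3.0])
H = np.full(4, 10.0)
print(sparse_leaf_value(G, H, k=2))  # e.g. [-0.36  0.    0.    0.27]
```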