
For neural networks we have the universal approximation theorem, which states that a feed-forward network with a single hidden layer can approximate any continuous function on a compact subset of $\mathbb{R}^n$ to arbitrary accuracy.
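For reference, one common single-hidden-layer statement of the theorem (in the spirit of Cybenko 1989 and Hornik 1991; the exact hypotheses on the activation $\sigma$ vary by source): for any continuous $f$ on a compact $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$, there exist $N$, weights $w_i \in \mathbb{R}^n$, and scalars $v_i, b_i$ such that

$$\sup_{x \in K}\,\Bigl|\,f(x) - \sum_{i=1}^{N} v_i\,\sigma(w_i^\top x + b_i)\Bigr| < \varepsilon.$$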

Is there a similar result for gradient boosted trees? It seems reasonable since you can keep adding more branches, but I cannot find any formal discussion of the subject.

EDIT: My question seems very similar to "Can regression trees predict continuously?", though perhaps not asking exactly the same thing; see that question for relevant discussion.

Imran

1 Answer


Yes - grow the trees deep enough to create a region for each data point (i.e., memorize the training data).

Thus it is possible for gradient boosted trees to fit any training data exactly, but such a model would generalize poorly to new data.
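A minimal sketch of this behavior using scikit-learn's GradientBoostingRegressor (the dataset and hyperparameters are my own choices, not from the answer): deep trees, many boosting stages, and learning_rate=1.0 strip away the usual regularization, so the ensemble essentially interpolates the training set while test error stays at or above the noise floor.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Noisy 1-D regression problem (assumed setup for illustration)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.3, size=200)

X_test = rng.uniform(-3, 3, size=(200, 1))
y_test = np.sin(X_test.ravel()) + rng.normal(scale=0.3, size=200)

# Deep trees + many stages + no shrinkage: the ensemble can carve out
# a region per training point and memorize the (noisy) targets.
gbt = GradientBoostingRegressor(
    n_estimators=500, max_depth=8, learning_rate=1.0
).fit(X, y)

print("train MSE:", np.mean((gbt.predict(X) - y) ** 2))            # near 0
print("test  MSE:", np.mean((gbt.predict(X_test) - y_test) ** 2))  # noticeably larger
```

Shrinking learning_rate and limiting max_depth is precisely what trades this memorization capacity for generalization in practice.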

Brian Spiering