
Commit d4c9e84

NicolasHug authored and jnothman committed
DOC minor clarifications in ensemble.rst (scikit-learn#11810)
1 parent bea1eb5 commit d4c9e84

File tree

1 file changed (+7, -8 lines)


doc/modules/ensemble.rst

Lines changed: 7 additions & 8 deletions
@@ -594,21 +594,20 @@ learners. Decision trees have a number of abilities that make them
 valuable for boosting, namely the ability to handle data of mixed type
 and the ability to model complex functions.
 
-Similar to other boosting algorithms GBRT builds the additive model in
-a forward stagewise fashion:
+Similar to other boosting algorithms, GBRT builds the additive model in
+a greedy fashion:
 
 .. math::
 
-  F_m(x) = F_{m-1}(x) + \gamma_m h_m(x)
+  F_m(x) = F_{m-1}(x) + \gamma_m h_m(x),
 
-At each stage the decision tree :math:`h_m(x)` is chosen to
-minimize the loss function :math:`L` given the current model
-:math:`F_{m-1}` and its fit :math:`F_{m-1}(x_i)`
+where the newly added tree :math:`h_m` tries to minimize the loss :math:`L`,
+given the previous ensemble :math:`F_{m-1}`:
 
 .. math::
 
-  F_m(x) = F_{m-1}(x) + \arg\min_{h} \sum_{i=1}^{n} L(y_i,
-  F_{m-1}(x_i) + h(x))
+  h_m = \arg\min_{h} \sum_{i=1}^{n} L(y_i,
+  F_{m-1}(x_i) + h(x_i)).
 
 The initial model :math:`F_{0}` is problem specific, for least-squares
 regression one usually chooses the mean of the target values.
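The stage-wise update described in the revised documentation text can be illustrated with a minimal sketch, assuming least-squares loss: each new tree :math:`h_m` is fit to the residuals of the current ensemble (the negative gradient of squared loss), which greedily reduces :math:`\sum_i L(y_i, F_{m-1}(x_i) + h(x_i))`. The constant learning rate standing in for :math:`\gamma_m` and all variable names are illustrative, not from the patched document:

```python
# Sketch of the additive model F_m(x) = F_{m-1}(x) + gamma_m * h_m(x)
# for least-squares loss, where each tree h_m is fit to the residuals
# of the previous ensemble F_{m-1}.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.randn(200)

# F_0: for least-squares regression, the mean of the target values.
F = np.full_like(y, y.mean())
learning_rate = 0.1  # plays the role of gamma_m (held constant here)

trees = []
for m in range(50):
    residuals = y - F  # negative gradient of the squared loss
    h = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    F = F + learning_rate * h.predict(X)  # F_m = F_{m-1} + gamma * h_m
    trees.append(h)

mse_init = np.mean((y - y.mean()) ** 2)   # loss of F_0
mse_final = np.mean((y - F) ** 2)         # loss of F_50
print(mse_final < mse_init)
```

Each iteration performs exactly the greedy step in the second equation of the diff: the new tree is chosen to reduce the loss of the ensemble built so far, then added with a shrinkage factor.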
