CHAPTER 1. INTRODUCTION 8
In chapter 5, we perform a theoretical analysis of NC learning. We find that, due to an assumption made in the original work, the λ parameter is in fact composed of two components, one of which can be determined analytically for any ensemble architecture. We give a clear account of the connections between NC, simple ensemble learning, single network learning, the Ambiguity decomposition, and the bias-variance and bias-variance-covariance decompositions. This gives NC a statistical interpretation, including a grounding in a large body of literature, which it did not previously have. We prove an upper bound on the strength parameter and show how the ensemble gradient can be understood as a sum of various components.
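To make the objects under discussion concrete, the following is a minimal sketch of the NC penalised error and its gradient for one ensemble member, under the standard formulation e_i = ½(f_i − d)² + λ p_i with p_i = (f_i − f̄)Σ_{j≠i}(f_j − f̄), and under the assumption (discussed above) that the ensemble mean f̄ is treated as a constant when differentiating. The function names and toy data are illustrative, not taken from the thesis.

```python
import numpy as np

def nc_error(f, d, i, lam):
    """Penalised error of member i on one example:
    e_i = 0.5*(f_i - d)^2 + lam * p_i, where
    p_i = (f_i - fbar) * sum_{j != i} (f_j - fbar)."""
    fbar = f.mean()
    p_i = (f[i] - fbar) * (np.sum(f - fbar) - (f[i] - fbar))
    return 0.5 * (f[i] - d) ** 2 + lam * p_i

def nc_gradient(f, d, i, lam):
    """d e_i / d f_i, treating fbar as a constant (the assumption made
    in the original NC work). Since the deviations from the mean sum to
    zero, the gradient splits into an accuracy term pulling f_i toward
    the target and a diversity term pushing it away from the mean."""
    fbar = f.mean()
    accuracy_term = f[i] - d          # single-network error signal
    diversity_term = -lam * (f[i] - fbar)  # negative-correlation penalty
    return accuracy_term + diversity_term

# Toy example: three member outputs, one target.
f = np.array([0.2, 0.5, 0.9])
print(nc_gradient(f, d=0.4, i=0, lam=0.5))
```

Note that because Σ_j(f_j − f̄) = 0, the penalty reduces to −λ(f_i − f̄)², which is one way the gradient's decomposition into components becomes visible.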