When the number n of neural elements in a neural network exceeds the sample size m, overfitting
arises because the model has more parameters than data (more variables than constraints). To
overcome this problem, we propose to reduce the number of neural elements by means of a
compressed projection A, which need not satisfy the Restricted Isometry Property (RIP).
Using probability inequalities and the approximation properties of feedforward neural networks
(FNNs), we prove that solving the FNN regression learning problem in the compressed domain
instead of the original domain reduces the sample error at the price of an increased, but
controlled, approximation error. Covering number theory is then used to estimate the excess
error, and an upper bound on the excess error is given.
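As a rough illustration of the idea (not the paper's algorithm or analysis), the sketch below fits a one-hidden-layer FNN regressor on inputs mapped through a random Gaussian projection A, so the learning problem is solved in the compressed domain. The dimensions m, d, k, the target function, and the use of scikit-learn's MLPRegressor are illustrative assumptions, and no RIP-type condition on A is checked.

```python
# Minimal sketch of compressed-domain FNN regression (illustrative assumptions only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
m, d, k = 200, 500, 20                              # sample size, original dim, compressed dim

X = rng.standard_normal((m, d))                     # original inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(m)  # noisy regression target (illustrative)

A = rng.standard_normal((k, d)) / np.sqrt(k)        # compressed projection A; RIP is not verified
X_comp = X @ A.T                                    # data in the compressed domain

# Fit the FNN on the k-dimensional projected data: fewer inputs mean fewer parameters,
# which reduces the sample (estimation) error at some cost in approximation error.
fnn = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
fnn.fit(X_comp, y)

X_test = rng.standard_normal((50, d))
y_pred = fnn.predict(X_test @ A.T)                  # project test data with the same A
```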