In this paper, the architecture of feedforward kernel neural networks (FKNN) is proposed, which
encompasses a considerably large family of existing feedforward neural networks and hence can meet most
practical requirements.
practical requirements. Different from the common understanding of learning, it is revealed that when
the number of the hidden nodes of every hidden layer and the type of the adopted kernel based activation
functions are pre-fixed, a special kernel principal component analysis (KPCA) is always implicitly
executed, which can result in the fact that all the hidden layers of such networks need not be tuned and
their parameters can be randomly assigned and even may be independent of the training data. Therefore,
the least learning machine (LLM) is extended into its generalized version in the sense of adopting
much more error functions rather than mean squared error (MSE) function only. As an additional merit,
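To illustrate this data-independent construction, the following minimal sketch (ours, not the paper's code; the names `random_kernel_layer` and `fit_output_weights`, the Gaussian kernel, and the ridge regularizer are illustrative assumptions) builds one randomly assigned kernel hidden layer and trains only the output weights in closed form under the MSE criterion, in the spirit of LLM:

```python
import numpy as np

def random_kernel_layer(X, n_hidden, gamma=1.0, seed=0):
    """Hidden layer with Gaussian-kernel activations whose centers are
    drawn at random, independent of the training data (no tuning)."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(size=(n_hidden, X.shape[1]))  # random, data-independent
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_output_weights(H, Y, reg=1e-6):
    """Closed-form regularized least-squares readout: the only trained
    parameters, as in an LLM-style network under the MSE criterion."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

# Usage: the hidden layer is fixed at random; only the output layer is learned.
X = np.random.randn(200, 10)                 # toy inputs
Y = np.random.randn(200, 3)                  # toy targets
H = random_kernel_layer(X, n_hidden=50, gamma=0.5)
W = fit_output_weights(H, Y)
Y_hat = H @ W                                # network predictions
```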
As an additional merit, it is also revealed that the rigorous Mercer kernel condition is not required in FKNN networks. When the
proposed FKNN architecture is constructed in a layer-by-layer way, i.e., when the number of
hidden nodes in every hidden layer is determined only by the principal components extracted
through an explicit execution of KPCA, FKNN's deep architecture can be developed such that its
deep learning framework (DLF) enjoys a strong theoretical guarantee; a sketch of this layer-wise
construction is given below. Our experimental results on image
classification demonstrate that the proposed FKNN deep architecture and its DLF-based learning indeed
enhance classification performance.
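To make the layer-by-layer construction concrete, the following sketch (again illustrative; `kpca_layer`, the Gaussian kernel, and the 95% variance-retention threshold are our assumptions, not the paper's specification) performs an explicit KPCA at each layer and lets the number of retained principal components determine that layer's width:

```python
import numpy as np

def kpca_layer(X, gamma=1.0, var_keep=0.95):
    """One explicit KPCA layer: the layer's width equals the number of
    principal components needed to retain `var_keep` of the variance."""
    # Gaussian kernel matrix on the layer's inputs
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition in descending order of eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = np.clip(vals[::-1], 0, None), vecs[:, ::-1]
    # Number of hidden nodes = components covering the desired variance
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_keep)) + 1
    # Project onto the leading k components (the layer's outputs)
    return Kc @ vecs[:, :k] / (np.sqrt(vals[:k]) + 1e-12)

# Deep, layer-by-layer construction: each layer's width is set by its
# own KPCA spectrum rather than chosen by hand.
X = np.random.randn(100, 20)
H1 = kpca_layer(X, gamma=0.5)
H2 = kpca_layer(H1, gamma=0.5)
```

In this reading, each stacked layer is fixed once its principal components are extracted, so depth is grown constructively rather than tuned end to end.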