In this paper we introduce a novel image-annotation approach based on maximum-margin classification and a new class of kernels. The method goes beyond the naive use of existing kernels and their restricted combinations in order to design "model-free" transductive kernels applicable to interconnected image databases. As a first contribution, we jointly learn a decision criterion and a kernel map that guarantee linear separability in a high-dimensional space as well as good generalization performance. As a second contribution, we extend this class of kernels to include label-dependency statistics that model contextual relationships between concepts in images. Experiments conducted on the MSRC and Corel5k databases show that our method achieves results at least comparable to the related state of the art.