Because the active learning-based models benefit from 50 manually labelled examples during their learning process, we added 50 extra labelled samples from the source language to the training sets of the SCL and SVM-MT models to create identical conditions for all compared models. Comparing all semi-supervised and active learning-based methods with the SVM-MT model in Table 2, we can conclude that incorporating unlabelled data from the target language into the learning process effectively improves the performance of cross-lingual sentiment classification. The table also shows that the DBAST and AST models outperform AL and ST after the full learning process, supporting the idea that combining active learning and self-training yields better classification than either approach alone. Moreover, the DBAST model outperforms the AST model on all datasets, which indicates that using the density measure of unlabelled examples helps select the most representative examples for manual labelling.
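To make the selection step concrete, the following is a minimal sketch of one density-based active self-training round. The scoring used here (classifier uncertainty weighted by average cosine similarity within the unlabelled pool) is an assumption for illustration; the paper's exact DBAST formulation may differ.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

def density_scores(X_unl):
    """Average cosine similarity of each unlabelled example to all
    others; examples in dense regions of the pool score higher."""
    sim = cosine_similarity(X_unl)
    np.fill_diagonal(sim, 0.0)  # ignore self-similarity
    return sim.mean(axis=1)

def dbast_round(clf, X_lab, y_lab, X_unl, n_query=5, n_self=10):
    """One assumed DBAST round: query dense, uncertain examples for
    manual labelling, then self-train on the most confident ones."""
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unl)
    confidence = proba.max(axis=1)
    uncertainty = 1.0 - confidence

    # Active learning step: weight uncertainty by density so the
    # queried points are also representative of the unlabelled pool.
    al_scores = uncertainty * density_scores(X_unl)
    query_idx = np.argsort(al_scores)[-n_query:]

    # Self-training step: pseudo-label the most confident predictions.
    st_idx = np.argsort(confidence)[-n_self:]
    pseudo_labels = proba[st_idx].argmax(axis=1)
    return query_idx, st_idx, pseudo_labels

# Assumed usage with a probabilistic SVM as the base classifier:
# clf = SVC(probability=True)
# query_idx, st_idx, pseudo = dbast_round(clf, X_lab, y_lab, X_unl)
```

Under this sketch, dropping the `density_scores` factor reduces the selection rule to plain uncertainty sampling, i.e. the AST-style behaviour that DBAST improves upon.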