effective and efficient deduction of the optimal architecture for every given benchmark program, which greatly benefits the DSE of microprocessor design. In practice, COAL has been utilized in the design of the next-generation Godson processor core.
Currently, the semisupervised learning and active learning components of our proposed method are given almost equal importance. A tradeoff parameter could be introduced to let them contribute to different extents, and this will be one direction of our future work. It is also interesting to study the minimum amount of labeled data required. Zhou et al. [2007] showed that when two sufficient views are available, semisupervised learning is possible with even a single labeled example. Whether a similar result holds for our method, and in particular for the DSE application, remains an interesting future issue.
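As a hypothetical sketch of such a tradeoff parameter (the scoring functions, the parameter lambda, and the data below are illustrative assumptions, not part of COAL), the two components' selection criteria could be blended linearly:

```python
import numpy as np

def combined_score(ssl_confidence, al_uncertainty, lam=0.5):
    """Blend the semisupervised and active-learning criteria.

    lam = 0.5 reproduces the current equal-importance behavior;
    lam -> 1 favors the active-learning (uncertainty) component,
    lam -> 0 favors the semisupervised (confidence) component.
    Both inputs are assumed normalized to [0, 1].
    """
    return (1.0 - lam) * ssl_confidence + lam * al_uncertainty

# Hypothetical scores for five unlabeled design points.
ssl_conf = np.array([0.9, 0.4, 0.7, 0.2, 0.6])
al_unc   = np.array([0.1, 0.8, 0.3, 0.9, 0.5])

# With lam = 0.5 both components contribute equally.
scores = combined_score(ssl_conf, al_unc, lam=0.5)
query = int(np.argmax(scores))  # design point selected for labeling
```

Tuning lambda on held-out benchmarks would then reveal whether one component should dominate for a given design space.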
It is worth noting that the current version of COAL is designed for the pre-silicon design phase of a microprocessor, that is, for deciding on an appropriate design configuration before the chip is manufactured. In the future, we will develop new versions of COAL for power-efficient post-silicon microprocessor reconfiguration, which can adapt to various programs or even to given program features. This task would be a crucial step toward the development of an elastic processor (by which we mean a processor whose architecture parameters can be dynamically reconfigured to suit different programs) and a computer tribe (by which we mean a series of downward-compatible elastic processors). Unlike a Field Programmable Gate Array (FPGA), whose reconfiguration may require modifying millions of controlling parameters, an elastic processor employs only a moderate number of reconfigurable parameters, which alleviates the problem of dimension explosion when building performance/power regression models to guide architecture reconfiguration.
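To illustrate why a moderate parameter count keeps regression-guided reconfiguration tractable, the following sketch fits a linear performance model over a small hypothetical configuration space and uses it to select a configuration. The parameters, sample data, and synthetic metric are illustrative assumptions, not a real processor model:

```python
import itertools
import numpy as np

# Hypothetical reconfigurable parameters of an elastic processor:
# (issue width, L2 size in MB, number of ALUs). A moderate space like
# this keeps regression modeling tractable, unlike FPGA-scale spaces.
issue_widths = [2, 4, 8]
l2_sizes = [1, 2, 4]
num_alus = [2, 4, 6]
configs = np.array(
    list(itertools.product(issue_widths, l2_sizes, num_alus)), dtype=float
)

# A handful of measured (configuration, performance) samples for the
# current program; here generated from a made-up linear ground truth.
idx = [0, 4, 8, 10, 14, 18, 22, 26]
X = configs[idx]
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2]  # synthetic "IPC-like" metric

# Fit a linear performance model via least squares (with intercept).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict performance for every configuration; reconfigure to the best one.
A_all = np.hstack([configs, np.ones((len(configs), 1))])
pred = A_all @ w
best = configs[np.argmax(pred)]
```

With only three reconfigurable parameters, a few labeled samples suffice to fit the model over the full 27-point space; at FPGA scale, with millions of controlling parameters, no practical number of samples would cover the space.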