In this paper we explored the feasibility of EAs for reverse engineering feature
models from feature sets. We devised two fitness functions that respectively
focused on: i) obtaining the desired feature sets while disregarding any surplus
(FFRelaxed), and ii) first obtaining the desired number of feature sets and then
the desired feature sets themselves (FFStrict).
With these two functions we were able to identify a trade-off between the accuracy
of the obtained feature model (the required feature sets versus the obtained
feature sets) and the number of generations. That is, proper supersets of the desired
feature sets can be obtained with a small number of generations; however,
these supersets contain a large surplus of feature sets. In contrast, reducing this
surplus requires more generations but can still yield good accuracy results.
Despite these encouraging results, devising a fitness function that can reduce, if
not eliminate, this trade-off remains an open question. We hope that this work
has highlighted some of the many potential areas where SBSE techniques
can help tackle open challenges in the realm of variability management.