At the end of the first challenge we have billions of RDF triples
over which we must be able to reason. One of the most relevant
works that tackles this problem is [21], which led
to a tool termed WebPIE (Web-scale Parallel Inference Engine). In [21], inference
rules are rewritten and map and reduce functions are
specified for each of them. This work has inspired the work
of [22] who propose a MapReduce-based algorithm for classifying
EL+ ontologies. Another relevant work in this challenge
focuses on the efficient partitioning of RDF repositories and the scalability
of SPARQL queries [85]. We can also cite [86], which proposes
a way to store and retrieve large RDF graphs efficiently.
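As a rough illustration of the rule-rewriting idea behind [21] and [22] (a minimal sketch, not the actual WebPIE code; the rule choice and all names are ours), the RDFS subclass rule (x rdf:type C) ∧ (C rdfs:subClassOf D) ⇒ (x rdf:type D) can be cast as a map function, which keys both antecedent triples on the class C that joins them, and a reduce function, which combines the grouped values to derive new triples:

```python
from collections import defaultdict

TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def map_phase(triples):
    # Emit each antecedent triple keyed on the joining class C.
    for s, p, o in triples:
        if p == TYPE:
            yield o, ("instance", s)   # (x rdf:type C) keyed on C
        elif p == SUBCLASS:
            yield s, ("super", o)      # (C rdfs:subClassOf D) keyed on C

def reduce_phase(grouped):
    # For each class, join its instances with its superclasses.
    for _cls, values in grouped.items():
        instances = [v for tag, v in values if tag == "instance"]
        supers = [v for tag, v in values if tag == "super"]
        for x in instances:
            for d in supers:
                yield (x, TYPE, d)     # derived triple (x rdf:type D)

def run(triples):
    # Local stand-in for the shuffle/group step of a MapReduce engine.
    grouped = defaultdict(list)
    for key, value in map_phase(triples):
        grouped[key].append(value)
    return set(reduce_phase(grouped))

triples = [
    ("alice", TYPE, "Student"),
    ("Student", SUBCLASS, "Person"),
]
print(run(triples))  # {('alice', 'rdf:type', 'Person')}
```

In a real deployment the grouping step is performed by the MapReduce framework's shuffle phase, and the reducer runs in parallel per key, which is what makes this formulation scale to billions of triples.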
Concerning the (complete) description of entities among
billions of RDF/RDFS triples, mentioned in the third challenge,
[38] designed a Semantic Web Search Engine (SWSE)
with many features, including entity description. Here,
the description is obtained by efficiently aggregating descriptions
from many sources.
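A minimal sketch of such aggregation (hypothetical naming, not SWSE's actual implementation) could merge, per entity, the property-value pairs asserted by each source while retaining provenance:

```python
from collections import defaultdict

def aggregate(sources):
    """sources: mapping source_name -> list of (s, p, o) triples.
    Returns entity -> set of (property, value, source) facts."""
    description = defaultdict(set)
    for source, triples in sources.items():
        for s, p, o in triples:
            description[s].add((p, o, source))  # keep provenance
    return description

# Toy inputs (assumed data, for illustration only).
sources = {
    "dbpedia": [("ex:Paris", "rdf:type", "ex:City")],
    "geonames": [("ex:Paris", "ex:population", "2148000")],
}
desc = aggregate(sources)
# desc["ex:Paris"] now combines the facts asserted by both sources
```

Keeping the source alongside each fact lets the search engine rank or filter an entity's description by provenance when sources disagree.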