closer to 1 because describers tag resources in a more verbose and descriptive
way.
o Overlap Factor: measures the overlap produced when, on average, more than one tag is
assigned per resource. Categorizers are interested in keeping this overlap relatively low.
Describers, on the other hand, produce a high overlap factor, since they use tags not for
navigation but to best support later retrieval.
o Tag/Title Intersection Ratio: measures how likely users are to choose tags from the words of
an educational resource's title. Categorizers tend to take tags from the title and score values
closer to 1, whereas describers rarely use tags from the title and score values closer to
0.
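The metrics above can be operationalized over a log of (user, resource, tag) assignments. The sketch below is one plausible reading, not the paper's own implementation: it assumes the overlap factor is approximated by the average number of tags per resource, and the tag/title intersection ratio by the fraction of a user's tags that appear among the words of the resource title.

```python
from collections import defaultdict

def tagging_metrics(assignments, titles):
    """Per-user tagging metrics from (user, resource, tag) triples.

    assignments: iterable of (user, resource, tag) triples
    titles: dict mapping resource id -> title string
    Returns {user: {"overlap_factor": ..., "title_intersection": ...}}.
    """
    # user -> resource -> set of tags assigned by that user
    tags_per = defaultdict(lambda: defaultdict(set))
    for user, resource, tag in assignments:
        tags_per[user][resource].add(tag.lower())

    metrics = {}
    for user, by_resource in tags_per.items():
        n_resources = len(by_resource)
        n_assignments = sum(len(tags) for tags in by_resource.values())
        # Overlap factor proxy: average tags per resource (high for describers).
        overlap = n_assignments / n_resources
        # Tags drawn from the resource title (high ratio for categorizers).
        from_title = 0
        for resource, tags in by_resource.items():
            title_words = set(titles.get(resource, "").lower().split())
            from_title += len(tags & title_words)
        metrics[user] = {
            "overlap_factor": overlap,
            "title_intersection": from_title / n_assignments,
        }
    return metrics
```

On such metrics, a user scoring a low overlap factor and a title-intersection ratio near 1 would be flagged as a categorizer, the opposite profile as a describer.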
Step 2 – Calculate similarity between social tags and educational metadata: During this step we
calculate the similarity between social tags (offered by end users, that is, teachers) and educational
metadata (offered by metadata experts or content providers). The similarity is calculated separately
for social tags added by describers and for social tags added by categorizers, based on the user
discrimination performed in step 1. At the end of this step, we expect to identify digital educational
resources enriched with social tags offered by describers and/or categorizers that differ from the
formal metadata descriptions offered by metadata experts or content providers.
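The step does not fix a particular similarity measure, so the sketch below assumes a simple set-based one: Jaccard similarity between a resource's social tags and its formal metadata keywords, plus a helper that isolates the tags absent from the formal description.

```python
def tag_metadata_similarity(tags, metadata_keywords):
    """Jaccard similarity between social tags and formal metadata keywords
    (an assumed measure; any term-overlap or semantic similarity would fit)."""
    t = {w.lower() for w in tags}
    m = {w.lower() for w in metadata_keywords}
    if not t and not m:
        return 1.0  # two empty descriptions are trivially identical
    return len(t & m) / len(t | m)

def novel_tags(tags, metadata_keywords):
    """Social tags that do not appear in the formal metadata description,
    i.e., the candidate enrichment this step is looking for."""
    return {w.lower() for w in tags} - {w.lower() for w in metadata_keywords}
```

Run once over the describers' tags and once over the categorizers' tags for each resource: a low similarity score marks resources whose social description diverges from the expert-provided metadata.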
Step 3 – Compare folksonomy with formal vocabularies of educational metadata: During this
step, we compare the resulting folksonomy produced by the social tags with formal structured
vocabularies of educational metadata. The comparison is performed both for the folksonomy
produced by describers and for the folksonomy produced by categorizers, following the user
discrimination performed in step 1. At the end of this step, we would be able to identify new tags
offered by describers and/or categorizers that can enlarge the formal structured vocabularies of
educational metadata.
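The comparison in this step can be sketched as a set difference between the folksonomy and the vocabulary. The frequency threshold below is an assumption (the text does not prescribe one); it simply filters out one-off tags before proposing candidates for enlarging the vocabulary.

```python
from collections import Counter

def vocabulary_candidates(folksonomy_tags, vocabulary, min_count=2):
    """Tags used at least `min_count` times that are absent from the formal
    vocabulary; returned sorted, as candidate new vocabulary terms.

    folksonomy_tags: iterable of tag strings (one entry per tag assignment)
    vocabulary: iterable of formal vocabulary terms
    """
    vocab = {term.lower() for term in vocabulary}
    counts = Counter(tag.lower() for tag in folksonomy_tags)
    return sorted(
        tag for tag, n in counts.items()
        if n >= min_count and tag not in vocab
    )
```

Applying this separately to the describers' and the categorizers' folksonomies yields the two candidate lists the step aims for.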