Server logs may also overestimate actual use of a site by including requests made by ‘spiders’ [34]. Spiders are automated robot programs that scan the web to keep search engine databases up to date. Sometimes this automatic traffic identifies itself through a descriptive field in its requests and can be filtered out [31]. A further source of overestimation is that page requests are counted even if the user has left the page before it finishes loading [31, 32].
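To illustrate how such filtering might work, the minimal Python sketch below assumes logs follow Apache's Combined Log Format, in which the descriptive field (the User-Agent string) is the final quoted field of each line; the list of bot markers is a small hypothetical example, whereas practical filters rely on maintained lists of known spiders.

```python
import re

# Hypothetical substrings that commonly appear in the self-identifying
# User-Agent field of spider requests; real filters use curated lists.
BOT_MARKERS = ("bot", "crawler", "spider", "slurp")

# In the Combined Log Format, the User-Agent is the last quoted field.
LOG_LINE = re.compile(r'"([^"]*)"\s*$')

def is_spider(user_agent: str) -> bool:
    """Return True if the User-Agent string identifies automatic traffic."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

def filter_log(lines):
    """Yield only log lines whose User-Agent does not identify a spider."""
    for line in lines:
        match = LOG_LINE.search(line)
        if match and is_spider(match.group(1)):
            continue  # drop self-identified automatic traffic
        yield line
```

A filter of this kind removes only spiders that identify themselves; requests from robots that mimic ordinary browsers would still be counted, which is why log-based usage figures remain an overestimate.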