Service providers may update their Web services at any time, including their descriptions and WSDL documents. Crawlers therefore need to revisit Web services periodically in order to detect changes that may have taken place and to keep the collection of downloaded information up to date. To achieve a target level of freshness, a crawler must determine which Web services should be revisited while skipping those that are updated less frequently. In other words, the crawler takes each service's update rate into account: Web services that are updated often are revisited more frequently than those that rarely change.
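One way to realize such an update-rate-aware revisit policy is an adaptive schedule that shortens a service's revisit interval when a change is observed and lengthens it otherwise. The following sketch illustrates this idea; the class and method names, the multiplicative increase/decrease factors, and the interval bounds are all illustrative assumptions, not part of any specific crawler described here.

```python
import heapq


class RevisitScheduler:
    """Sketch of an update-rate-aware revisit policy (hypothetical design).

    Services whose WSDL or description changed at the last visit get a
    shorter revisit interval; unchanged services are skipped for longer.
    """

    def __init__(self, base_interval=3600.0, min_interval=600.0,
                 max_interval=86400.0):
        self.base = base_interval          # initial revisit interval (seconds)
        self.min = min_interval            # never revisit more often than this
        self.max = max_interval            # never skip a service longer than this
        self.intervals = {}                # service URL -> current interval
        self.queue = []                    # min-heap of (next_visit_time, url)

    def record_visit(self, url, changed, now):
        """After fetching a service, adapt its interval and reschedule it.

        Multiplicative decrease on change, multiplicative increase otherwise
        (an assumed policy; real crawlers may estimate change rates instead).
        """
        interval = self.intervals.get(url, self.base)
        interval = interval * 0.5 if changed else interval * 2.0
        interval = max(self.min, min(self.max, interval))
        self.intervals[url] = interval
        heapq.heappush(self.queue, (now + interval, url))

    def due(self, now):
        """Return the services whose revisit time has arrived."""
        ready = []
        while self.queue and self.queue[0][0] <= now:
            ready.append(heapq.heappop(self.queue)[1])
        return ready
```

A frequently changing service thus converges toward the minimum interval and is revisited often, while a stable one drifts toward the maximum interval and is mostly skipped, which matches the refresh behavior described above.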