To give a brief introduction to web crawling: it is a technique that automatically collects web content and, after content analysis, stores the useful parts in a database. A web crawler starts with a list of seed URLs to visit. Once the crawler has fetched the content of these pages, it extracts the links they contain and places them in a URL queue, whose entries are then visited recursively in order.
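The seed-and-queue process described above can be sketched as a breadth-first loop. This is a minimal illustration, not a production crawler: the `fetch` callable and the toy `PAGES` dictionary are assumptions standing in for real HTTP requests, and the `stored` dictionary stands in for the database.

```python
from collections import deque
import re

def crawl(seed_urls, fetch, max_pages=100):
    """Breadth-first crawl: visit seeds, extract links, enqueue unseen URLs."""
    queue = deque(seed_urls)          # URL queue, initialized with the seeds
    visited = set()
    stored = {}                       # url -> content; stands in for the database
    while queue and len(stored) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        content = fetch(url)          # in practice, an HTTP GET
        if content is None:
            continue
        stored[url] = content         # content analysis would happen here
        # Extract href links from the page and enqueue any not yet visited.
        for link in re.findall(r'href="([^"]+)"', content):
            if link not in visited:
                queue.append(link)
    return stored

# Toy "web" used in place of real HTTP requests (hypothetical URLs).
PAGES = {
    "http://a.example": '<a href="http://b.example">b</a>',
    "http://b.example": '<a href="http://a.example">a</a>',
}
result = crawl(["http://a.example"], PAGES.get)
```

Starting from the single seed, the loop discovers and stores both toy pages; a real crawler would add politeness delays, robots.txt handling, and URL normalization on top of this skeleton.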