The main function of the web crawler is to collect
the websites to be monitored after a system user
registers his/her seller accounts. The crawler
fetches the content of every product website once
every N hours so that newly listed products are
detected. The web crawler also collects the details
of the sale records for each product, e.g., buyer ID
and sale time. In Figure 3, more than ten product
hyperlinks are shown on a single webpage. A user can
click each product hyperlink in turn to reach the
corresponding detail page. However, the number of
websites is large, and visiting every detail page
would expose the crawler to the risk of being
blocked. Therefore, in this study we collect only
rough (coarse-grained) data for products.
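To illustrate the idea, the following is a minimal sketch (not the authors' implementation) of collecting rough data from a listing page alone, so that each seller page costs a single request and no per-product detail pages are visited. The `/product/` URL pattern is a hypothetical assumption for the example.

```python
from html.parser import HTMLParser

class ProductLinkParser(HTMLParser):
    """Collects product hyperlinks from a listing page without
    following them, keeping the request volume low."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Hypothetical URL pattern: detail pages live under /product/.
            if "/product/" in href:
                self.links.append(href)

def collect_rough_data(listing_html):
    # Parse only the listing page; skipping the per-product detail
    # pages reduces the chance of the crawler being blocked.
    parser = ProductLinkParser()
    parser.feed(listing_html)
    return parser.links

sample = (
    '<ul>'
    '<li><a href="/product/101">Phone case</a></li>'
    '<li><a href="/product/102">Charger</a></li>'
    '<li><a href="/about">About us</a></li>'
    '</ul>'
)
print(collect_rough_data(sample))  # ['/product/101', '/product/102']
```

In a full crawler this routine would be invoked once every N hours per monitored seller page, with the sale details (buyer ID, sale time) extracted from whatever fields the listing page itself exposes.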