Mistake #3: You blocked search engines from crawling your site
When a website launch goes bad, the failure usually traces back to mistakes made in the early planning stages. Blocking search engines from crawling your new site is a common error, and it often happens when a site is moved from the staging environment to the live server.
Perhaps you used robots.txt to block search engine crawlers while the site was in development, but forgot to update the file when the site went live. Or maybe firewall rules you put in place are inadvertently blocking crawlers.
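If you're not sure which scenario applies, robots.txt is the quickest thing to check. A staging setup that blocks everything typically looks like the first example below; once the site is live, it should be replaced with something along the lines of the second (the domain and paths are placeholders):

```
# Staging configuration: blocks all crawlers. Harmless in development,
# disastrous if it ships to production.
User-agent: *
Disallow: /

# Typical live configuration: allow crawling, keep back-end paths out
# (example.com and the paths shown are illustrative).
User-agent: *
Disallow: /wp-admin/

Sitemap: https://www.example.com/sitemap.xml
```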
In this case, a webmaster used a WordPress plugin called Wordfence to prevent bots from crawling the site, hoping to reduce server load and cut down on fake referral traffic. The plugin lets you whitelist certain bots so they can still crawl the site. He whitelisted several known Googlebot IPs, but Google then began crawling from different IP addresses, which weren't on the list and were therefore blocked.
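The safer approach, and the one Google documents, is to verify Googlebot by reverse DNS rather than by a hardcoded IP list: resolve the requesting IP to a hostname, check that it falls under googlebot.com or google.com, then confirm the hostname resolves back to the same IP. Here's a minimal Python sketch of that check (the sample IP at the end is only illustrative):

```python
import socket

def is_verified_googlebot(ip):
    """Verify a crawler IP via reverse DNS plus a forward-confirming lookup."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse DNS lookup
    except socket.herror:
        return False
    # Genuine Googlebot hostnames end in googlebot.com or google.com
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        resolved = socket.gethostbyname(hostname)   # forward lookup
    except socket.gaierror:
        return False
    return resolved == ip                           # must round-trip to the same IP

# Illustrative address from Google's crawler range
print(is_verified_googlebot("66.249.66.1"))
```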
The new IPs were blocked for only three or four days, but that was enough to bring the site's search traffic to a halt. Even after the mistake was discovered and traffic began to recover, it remained sluggish for some time.