Problem 2: When MySpace reached 3 million accounts, the databases behind some site functions grew very large.
Effect: a single database server proved insufficient.
Resolution: MySpace needed a scale-out strategy (adding many cheaper servers to share the database workload) rather than further scale-up, and moved to a distributed architecture. Data was partitioned horizontally, with roughly 1 million accounts per separate SQL Server instance; a sketch of this routing appears below.
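As a rough illustration of this range-based sharding, the sketch below routes an account ID to the instance holding its block of about 1 million accounts. The shard names, connection strings, and helper function are assumptions for illustration, not MySpace's actual code.

# Illustrative range-based sharding: one SQL Server instance per
# block of ~1 million accounts, as described above.

ACCOUNTS_PER_SHARD = 1_000_000

# Hypothetical connection strings, one per database instance.
SHARDS = [
    "mssql://db-shard-0/myspace",
    "mssql://db-shard-1/myspace",
    "mssql://db-shard-2/myspace",
]

def shard_for_account(account_id: int) -> str:
    """Map an account ID to the instance that stores its data."""
    index = account_id // ACCOUNTS_PER_SHARD
    if index >= len(SHARDS):
        raise LookupError(f"No shard provisioned for account {account_id}")
    return SHARDS[index]

print(shard_for_account(2_500_000))  # -> mssql://db-shard-2/myspace

The appeal of this design is that capacity grows by provisioning another cheap server and appending it to the list, instead of buying an ever-bigger single machine.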
Problem 3: Growth in accounts led to performance issues, and waiting times increased drastically.
Resolution: In 2005, MySpace added a caching layer of servers between the web servers and the database servers, which reduced the load on the database servers; a sketch of the idea follows.
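The sketch below shows the cache-aside pattern that such a tier typically implements: serve repeated reads from memory and hit the database only on a miss. The in-process dictionary, key scheme, TTL, and query stub are assumptions standing in for MySpace's actual cache tier.

import time

# Toy in-memory cache standing in for the dedicated cache tier.
_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 60  # assumed expiry; tune to real traffic

def get_profile(account_id: int) -> dict:
    """Cache-aside read: serve from cache when possible, else hit the DB."""
    key = f"profile:{account_id}"
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                      # cache hit: no database work
    profile = load_profile_from_db(account_id)  # cache miss: one DB query
    _cache[key] = (time.time(), profile)
    return profile

def load_profile_from_db(account_id: int) -> dict:
    # Placeholder for the real SQL Server query.
    return {"id": account_id, "name": f"user{account_id}"}

Every hit served from the cache is one query the database servers never see, which is why inserting this layer directly relieved the database tier.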
Problem 4: When MySpace crossed 25 million accounts, the effect was felt in performance and I/O speeds.
Resolution: MySpace moved to 64-bit SQL Server to work around its memory bottleneck issues; the standard database server configuration used 64 GB of RAM.
Failure isolation: requests are segmented at the web server by database, and only 7 threads are allowed per database. If one database is slow, only those threads slow down, while traffic on the other threads keeps flowing; the sketch below illustrates the idea.
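A generic way to express that per-database thread cap is a bulkhead: one semaphore per database, so a slow database can exhaust only its own 7 slots. The database names, timeout, and query stub below are assumptions for illustration.

import threading

MAX_THREADS_PER_DB = 7  # the per-database cap described above

# One semaphore per database acts as a bulkhead: a saturated database
# blocks only its own slots, never the whole web server's thread pool.
_bulkheads = {
    db: threading.BoundedSemaphore(MAX_THREADS_PER_DB)
    for db in ("db-shard-0", "db-shard-1", "db-shard-2")
}

def query_with_bulkhead(db: str, sql: str):
    gate = _bulkheads[db]
    if not gate.acquire(timeout=1.0):   # don't queue up behind a slow DB
        raise TimeoutError(f"{db} is saturated; shedding request")
    try:
        return run_query(db, sql)       # placeholder for the real call
    finally:
        gate.release()

def run_query(db: str, sql: str):
    return f"result of {sql!r} on {db}"

Failing fast when a shard's slots are taken is the point: users on healthy shards are unaffected by one misbehaving database.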
Further Tasks for Developers
MySpace still faces overloads more frequently than other sites.
Login errors occur at rates of 20 to 40%.
Site activity continues to challenge the technology, and developers keep redesigning the database software and storage systems. The task is never-ending.
Conclusion:
Since the beginning, MySpace.com has operated in ad-hoc, fire-fighting mode, evolving its architecture to oil whatever new squeaks presented themselves. MySpace.com continues to experience significant performance and reliability problems, but they have never been showstoppers. The case reflects a lack of long-term planning by the MySpace team, which has cost the company its leadership advantage: Facebook, Twitter and YouTube, with similar business models, have marched ahead.