a. HDFS: The Hadoop Distributed File System is a file system designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware. Its default block size is 64 MB, much larger than in conventional file systems; large blocks keep the cost of disk seeks small relative to data transfer and reduce the amount of block metadata the namenode must hold in memory. An HDFS cluster has two types of nodes: a namenode (the master) and datanodes (the workers). The namenode manages the file system namespace, maintaining the file system tree and the metadata for all the files and directories in it. Datanodes store and retrieve blocks when instructed by clients or the namenode, and they periodically report back to the namenode with the lists of blocks they are storing. Without the namenode the file system cannot be used, since files cannot be reconstructed from the blocks alone, so it is essential to keep the namenode running reliably.
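As a minimal sketch of how a client interacts with these two node types, the Java snippet below uses the standard Hadoop FileSystem API: opening a file asks the namenode for the file's block locations, after which the bytes are streamed directly from the datanodes. The namenode URI hdfs://namenode:9000 and the path /user/demo/input.txt are illustrative placeholders, not values from the text.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder namenode address; substitute your cluster's fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/user/demo/input.txt");  // hypothetical file

        // Ask the namenode which datanodes hold each block of the file.
        FileStatus status = fs.getFileStatus(path);
        BlockLocation[] locations =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation loc : locations) {
            System.out.println("Block hosts: "
                    + java.util.Arrays.toString(loc.getHosts()));
        }

        // Open the file: metadata comes from the namenode, but the block
        // data itself is streamed directly from the datanodes.
        try (FSDataInputStream in = fs.open(path)) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
    }
}
```

Note how the client never reads file data through the namenode itself; the namenode only answers metadata queries, which is why its failure makes the whole file system inaccessible even though the blocks still sit on the datanodes.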