technique for removing coding redundancy. It has been used in many compression applications, including image compression; it is a simple yet elegant technique that can supplement other compression algorithms. It is also used in CCITT Group 3 compression, an algorithm developed by the International Telegraph and Telephone Consultative Committee in 1985 for encoding and compressing 1-bit (monochrome) image data; Group 3 and Group 4 compression are most commonly used in the TIFF file
format. Huffman coding exploits the statistics of the alphabet of the source stream: it produces a code for each symbol, of variable length but always an integral number of bits, with symbols of higher probability of occurrence receiving shorter codes than symbols of lower probability. In other words, it is driven by the frequency of occurrence of a data item (pixels, or small blocks of pixels, in images) and uses fewer bits to encode more frequent data. The resulting codes are stored in a code book, which is constructed for each image or set of images.
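The construction of such a code book can be sketched in a few lines of Python. The following is a minimal illustration under made-up symbol frequencies, not a production implementation: it repeatedly merges the two least frequent entries, prepending one distinguishing bit to every codeword in each merged half.

```python
import heapq

def huffman_codebook(freqs):
    """Build a prefix-free code book from symbol frequencies.

    Repeatedly merges the two least frequent subtrees; each merge
    prepends one distinguishing bit (0 or 1) to every codeword in
    the corresponding half, so rarer symbols end up with longer codes.
    """
    # Heap entries: (frequency, unique tiebreaker, {symbol: code so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if tiebreak == 1:                       # degenerate one-symbol alphabet
        return {sym: "0" for sym in freqs}
    while len(heap) > 1:
        f0, _, zeros = heapq.heappop(heap)  # least frequent half -> bit 0
        f1, _, ones = heapq.heappop(heap)   # next least frequent -> bit 1
        merged = {s: "0" + c for s, c in zeros.items()}
        merged.update({s: "1" + c for s, c in ones.items()})
        tiebreak += 1
        heapq.heappush(heap, (f0 + f1, tiebreak, merged))
    return heap[0][2]

# Hypothetical frequency counts for a four-symbol alphabet:
book = huffman_codebook({"A": 5, "B": 2, "C": 2, "D": 1})
print(book)  # e.g. {'A': '0', 'C': '10', 'D': '110', 'B': '111'}
```

Note how the most frequent symbol, A, receives the single-bit code; the tie between B and C is broken arbitrarily, so either may end up with the two-bit code.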
Huffman coding is optimal among lossless schemes that encode each symbol with an integral number of bits. It works by first calculating probabilities. Partition the bit stream into n-bit blocks and assign a symbol to each of the 2^n possible blocks in {0, 1}^n, say A, B, C, D for n = 2; the stream might then look like AADAC, for example. The symbols are now assigned new codes: the higher the probability, the smaller the number of bits in the code [3]. These codes, concatenated, are the output of the Huffman coder in the form of a bit stream, so we need to know where one code stops and a new one begins. This is solved by enforcing the unique prefix condition: no code is the prefix of any other code. The first few codes are 01, 11, 001, 101, 0000, 1000, 1001. In Huffman coding, shorter codes are assigned to symbols that occur more frequently and longer codes to those that occur less frequently [1].
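Decoding relies on exactly this property: the decoder scans the output bit stream from left to right and emits a symbol as soon as the accumulated bits match a codeword, with no delimiters needed. The sketch below is a small illustration using the hypothetical code book built above:

```python
def huffman_decode(bits, codebook):
    """Decode a bit string produced by a Huffman coder.

    The unique prefix condition guarantees that as soon as the
    accumulated bits equal some codeword, that match is the only
    possible one, so the symbol can be emitted immediately.
    """
    inverse = {code: sym for sym, code in codebook.items()}
    decoded, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:    # unambiguous: no code prefixes another
            decoded.append(inverse[current])
            current = ""
    if current:
        raise ValueError("bit stream ended in the middle of a codeword")
    return "".join(decoded)

book = {"A": "0", "C": "10", "D": "110", "B": "111"}  # from the sketch above
encoded = "".join(book[s] for s in "AADAC")
print(encoded)                        # 00110010
print(huffman_decode(encoded, book))  # AADAC
```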