Binary is ideal for computers because a binary number is just a series of ones and zeros, which can
be represented by whether particular circuits are on or off. A problem with representing
numbers in a computer is that each number can have only a finite number of circuits, or bits,
assigned to it, and so can hold only a finite number of digits.
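Because only finitely many binary digits are available, a value whose binary expansion never terminates can only be stored approximately. A minimal Python sketch (assuming the common 64-bit floating-point format, which is not specified here) makes this visible for the decimal fraction 0.1, whose binary expansion repeats forever:

    from decimal import Decimal

    x = 0.1
    # Decimal(x) prints the exact binary fraction the machine actually stored,
    # which is only an approximation of one tenth.
    print(Decimal(x))        # 0.1000000000000000055511151231257827021181583404541015625
    print(0.1 + 0.2 == 0.3)  # False: the rounding error surfaces in comparisons

Consider this example: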