Decimal computation was carried out in ancient times in many ways: typically with rod calculus and decimal multiplication tables in ancient China, with sand tables in India and the Middle East, or with a variety of abaci.
Modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally).[5] For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems.
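As an illustration, the same internally binary value can be shown in any of these bases; the following Python sketch (the value 2024 is chosen arbitrarily) uses the language's built-in base conversions:

```python
n = 2024             # stored internally as a pattern of binary digits (bits)

print(bin(n))        # 0b11111101000 - binary, the internal representation
print(oct(n))        # 0o3750        - octal: one digit per group of 3 bits
print(hex(n))        # 0x7e8         - hexadecimal: one digit per group of 4 bits
print(n)             # 2024          - decimal, the form presented to humans
```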
For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.)
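The imprecision can be made visible by converting the stored binary value back to an exact decimal; a minimal Python sketch, using the standard decimal module:

```python
from decimal import Decimal

x = 123.1                              # parsed into the nearest binary double
print(Decimal(x))                      # the exact value stored, slightly different from 123.1
print(Decimal(x) == Decimal("123.1"))  # False: 123.1 has no finite binary expansion
```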
Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal,[6] especially in database implementations, but there are other decimal representations in use (such as the decimal floating-point formats specified in the IEEE 754 Standard for Floating-Point Arithmetic).[7]
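As a sketch of the binary-coded decimal idea (not the exact encoding of any particular database or of the IEEE 754 decimal formats), packed BCD stores two decimal digits per byte, one digit per four-bit nibble:

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD: two decimal digits per byte."""
    digits = str(n)
    if len(digits) % 2:                       # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def from_packed_bcd(data: bytes) -> int:
    """Decode packed BCD bytes back into an integer."""
    return int("".join(f"{b >> 4}{b & 0x0F}" for b in data))

print(to_packed_bcd(1995).hex())               # '1995': each nibble holds one decimal digit
print(from_packed_bcd(bytes.fromhex("1995")))  # 1995
```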
Decimal arithmetic is used in computers so that decimal fractional results can be computed exactly, which is not generally possible with a binary fractional representation, because most decimal fractions (0.1, for example) have no finite binary expansion. This exactness is often important for financial and other calculations.[8]
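For example, a brief Python sketch using the standard decimal module (one common software decimal representation) contrasts the two behaviours:

```python
from decimal import Decimal

# Binary floating point cannot hold 0.1 or 0.2 exactly, so the sum carries a tiny error.
print(0.1 + 0.2)                                              # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                                       # False

# Decimal arithmetic keeps the same fractions exact, which matters for money.
print(Decimal("0.10") + Decimal("0.20"))                      # 0.30
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
```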
