Floating point number
In Python and many other programming languages, floating point numbers are the primitive datatype used to represent rational numbers and real numbers, usually only approximately. Most of the standard numerical operators and relational operators work on floating point numbers, but see below for a caveat.
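For example (a minimal sketch; the values here are arbitrary), the familiar arithmetic and comparison operators apply directly to floats:

x = 2.5
y = 4.0
print(x + y, x * y, y / x)   # 6.5 10.0 1.6
print(x < y, x ** 2)         # True 6.25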
Implementation Details
In the computer, floating point numbers are stored as a series of significant digits and an exponent (similar to scientific notation). However, unlike how humans usually write numbers, computers store numbers in binary. This is usually transparent to the programmer, but it leads to several subtleties.
Unlike integers in Python, floating point numbers are always stored in a fixed amount of space (typically 64 bits in Python), which limits their precision to about 16 decimal digits. Because the computer has only limited space to store a number, it must often round it off to a value very close to, but not exactly equal to, the original. Some rational numbers that have terminating decimal expansions in base 10 (like 0.8) are repeating in base 2 (0.8 in base 2 is 0.11001100...), and so can't be represented exactly. The range of floating point numbers is also limited, though this usually isn't a problem because the range is very large, up to around 10^300.
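A small demonstration (a sketch assuming a standard CPython build with 64-bit floats): printing more digits than the default display shows reveals that 0.8 is stored as a nearby binary fraction, and the rounding can surface in ordinary arithmetic.

print(format(0.8, ".20f"))   # 0.80000000000000004441 -- 0.8 is stored inexactly
print(0.1 + 0.7)             # 0.7999999999999999 rather than 0.8
print(1e300 * 1e10)          # inf -- results beyond the float range overflow to infinity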
Because of this limited precision, floating point calculations may give results slightly different from what you might expect. In particular, it's bad practice to test floating point numbers directly for equality, because they may have been rounded differently. The usual solution to this problem is to test whether the numbers in question are within a certain margin of error of each other (the margin depends on the application), as seen below. So this code is bad:
if fpNum1 == fpNum2:  # fails whenever the two values differ by even one bit
But this code is better:
if abs(fpNum1 - fpNum2) < epsilon:  # true when the values agree within the chosen tolerance
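Here is a minimal runnable sketch of the comparison above (the names fpNum1, fpNum2, and epsilon follow the snippet; the tolerance of 1e-9 is just an illustrative choice). Python's standard library also provides math.isclose (Python 3.5 and later) for the same purpose.

import math

fpNum1 = 0.1 + 0.2
fpNum2 = 0.3
epsilon = 1e-9  # margin of error; choose a value appropriate to the application

print(fpNum1 == fpNum2)                # False -- direct equality fails after rounding
print(abs(fpNum1 - fpNum2) < epsilon)  # True -- the values agree within the tolerance
print(math.isclose(fpNum1, fpNum2))    # True -- standard-library alternative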