Precision
StatsDirect performs all floating point arithmetic in IEEE-754 double precision (8 byte, 64 bit) and all integer arithmetic with 4 byte (32 bit) integers.
Results are displayed to the level of precision that you specify under Options in the analysis menu; the default is 6 decimal places. Double precision numbers are accurate to around 16 significant figures, but calculations can introduce rounding errors. In theory these should affect no more than the last significant digit; in practice it is safer to rely upon fewer decimal places.
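A rounding error that is invisible at the default 6 decimal places becomes visible when more digits are requested. A minimal C sketch (not StatsDirect code) illustrating this:

```c
#include <stdio.h>

int main(void)
{
    double y = 0.1 + 0.2;  /* mathematically exactly 0.3 */

    printf("%.6f\n", y);   /* prints 0.300000: looks exact at 6 decimal places */
    printf("%.17g\n", y);  /* prints 0.30000000000000004: the rounding error shows */
    return 0;
}
```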
Some statistical software applications, including the statistical routines in widely used spreadsheets, produce inaccurate results due to the use of algorithms that do not handle precision properly (McCullough and Wilson, 1999). StatsDirect uses robust and reliable algorithms in order to maximise the accuracy of its results. The American National Institute of Standards and Technology produces reference data sets for testing the accuracy of statistical software (see http://www.nist.gov/itl/div898/strd/).
Double precision specifications (these limits can be printed with the C sketch after this list):
- smallest positive normalised number = 2.22 * 10^-308
- largest number = 1.79 * 10^308
- closest to 0 without being 0 (denormalised) = +/- 4.94 * 10^-324
- precision (machine epsilon) = 2.22 * 10^-16
- minimum exponent = -1022
- maximum exponent = 1023
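A minimal C sketch (not StatsDirect code, assuming a C11 compiler for DBL_TRUE_MIN) that prints these limits from <float.h>. Note that the C library reports the exponent range as powers of 2 bounding the value, so DBL_MIN_EXP is -1021 and DBL_MAX_EXP is 1024, each one away from the IEEE-754 exponent field limits quoted above.

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("smallest normalised   = %g\n", DBL_MIN);       /* 2.22507e-308 */
    printf("largest               = %g\n", DBL_MAX);       /* 1.79769e+308 */
    printf("smallest denormalised = %g\n", DBL_TRUE_MIN);  /* 4.94066e-324 */
    printf("machine epsilon       = %g\n", DBL_EPSILON);   /* 2.22045e-16 */
    printf("min exponent (C)      = %d\n", DBL_MIN_EXP);   /* -1021 */
    printf("max exponent (C)      = %d\n", DBL_MAX_EXP);   /* 1024 */
    return 0;
}
```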
Floating Point
In order to understand why rounding errors occur and why precision is an issue in computer arithmetic, you need to understand how computers store numbers that are not integers (i.e. real numbers, or numbers with a fractional part).
Each number is stored in binary format (a series of 0 or 1 digits, or bits). Double precision numbers have 64 of these bits with which to represent a real number. The first bit represents the sign of the number (s), the next 11 bits represent the exponent (E), and the remaining 52 bits store the mantissa (M); an implicit leading 1 extends the mantissa to 53 bits of effective precision. A real number is therefore:

value = (-1)^s * 1.M * B^(E - 1023)

- where B (the radix) is 2 for all computers that StatsDirect runs on, and 1023 is the exponent bias.
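This layout can be inspected directly. A minimal C sketch (not StatsDirect code) that extracts the three fields from a double:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double x = 0.1;
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);          /* reinterpret the raw bit pattern */

    uint64_t s = bits >> 63;                 /* 1 sign bit */
    uint64_t E = (bits >> 52) & 0x7FF;       /* 11 exponent bits */
    uint64_t M = bits & 0xFFFFFFFFFFFFFULL;  /* 52 mantissa bits */

    /* value = (-1)^s * (1 + M/2^52) * 2^(E - 1023) */
    printf("s = %llu, E = %llu (unbiased %lld), M = 0x%013llX\n",
           (unsigned long long)s, (unsigned long long)E,
           (long long)E - 1023, (unsigned long long)M);
    return 0;
}
```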
Numbers that do not have an exact binary representation are stored to the level of precision specified by the floating point model (about 16 significant figures in double precision). For example, 0.1 is stored as 0.1000000000000000055511151231257827..., the nearest number that does have an exact 53-bit binary representation. If many such numbers are added together, the errors beyond the 16th significant figure can accumulate and creep into the "accurate" digits, so the results of some calculations have fewer than 16 significant figures of precision.
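The cumulative effect can be demonstrated directly. A minimal C sketch (not StatsDirect code) that repeatedly adds the inexactly stored 0.1:

```c
#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;
    printf("%.17g\n", sum);            /* 0.99999999999999989, not 1 */
    printf("%d\n", sum == 1.0);        /* 0: the equality test fails */

    sum = 0.0;
    for (int i = 0; i < 1000000; i++)
        sum += 0.1;
    printf("%.17g\n", sum - 100000.0); /* the accumulated error, no longer tiny */
    return 0;
}
```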
For more information see Press et al. (1992).