Single-precision floating-point format is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.
A floating-point variable can represent a wider range of numbers than a fixed-point variable of the same bit width at the cost of precision. A signed 32-bit integer variable has a maximum value of 2^{31} − 1 = 2,147,483,647, whereas an IEEE 754 32-bit base-2 floating-point variable has a maximum value of (2 − 2^{−23}) × 2^{127} ≈ 3.4028235 × 10^{38}. All integers with 7 or fewer significant decimal digits, and any number that can be written as 2^{n} such that n is an integer from −149 to 127, can be converted into an IEEE 754 floating-point value without loss of precision.
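The integer limit follows from the 24-bit significand: 2^{24} = 16,777,216 is the first integer whose successor cannot be stored exactly. A minimal C sketch of this boundary:

```c
#include <stdio.h>

int main(void) {
    float a = 16777216.0f;  /* 2^24: exactly representable */
    float b = 16777217.0f;  /* 2^24 + 1: needs 25 bits, rounds to 2^24 */
    printf("%.1f\n", a);    /* prints 16777216.0 */
    printf("%.1f\n", b);    /* also prints 16777216.0 */
    return 0;
}
```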
In the IEEE 754-2008 standard, the 32-bit base-2 format is officially referred to as binary32; it was called single in IEEE 754-1985. IEEE 754 specifies additional floating-point types, such as 64-bit base-2 double precision and, more recently, base-10 representations.
One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language designers. E.g., GW-BASIC's single-precision data type was the 32-bit MBF floating-point format.
Single precision is termed REAL in Fortran,^{[1]} SINGLE-FLOAT in Common Lisp,^{[2]} float in C, C++, C#, Java,^{[3]} Float in Haskell,^{[4]} and Single in Object Pascal (Delphi), Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml and single in versions of Octave before 3.2 refer to double-precision numbers. In most implementations of PostScript, and some embedded systems, the only supported precision is single.
The IEEE 754 standard specifies a binary32 as having:
Sign bit: 1 bit
Exponent width: 8 bits
Significand precision: 24 bits (23 explicitly stored)
This gives from 6 to 9 significant decimal digits of precision. If a decimal string with at most 6 significant digits is converted to IEEE 754 single-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.^{[5]}
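Both round-trip properties can be observed directly; a small C sketch (the specific literals are only illustrative):

```c
#include <stdio.h>

int main(void) {
    /* A 6-significant-digit decimal survives the round trip through float. */
    float f = 0.123456f;
    printf("%.6g\n", f);   /* prints 0.123456 */

    /* Printing 9 significant digits captures the float's exact value class:
       the nearest float to 0.1 is 0.100000001490116..., shown here as */
    float g = 0.1f;
    printf("%.9g\n", g);   /* prints 0.100000001 */
    return 0;
}
```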
The sign bit determines the sign of the number, which is also the sign of the significand. The exponent field can be interpreted either as an 8-bit signed integer from −128 to 127 (two's complement) or as an 8-bit unsigned integer from 0 to 255; the unsigned, biased form is the one adopted in the IEEE 754 binary32 definition. With the biased form, the exponent value used in the arithmetic is the stored exponent minus the bias: for IEEE 754 binary32, a stored exponent of 127 represents an actual exponent of zero (i.e. for 2^{e − 127} to be one, e must be 127). Actual exponents range from −126 to +127 because the stored exponents 0 (all 0s) and 255 (all 1s) are reserved for special numbers.
The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1, unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log_{10}(2^{24}) ≈ 7.225 decimal digits). The bits are laid out as follows: bit 31 holds the sign, bits 30 to 23 hold the biased exponent, and bits 22 to 0 hold the fraction.
The real value assumed by a given 32-bit binary32 data with a given sign, biased exponent e (the 8-bit unsigned integer), and a 23-bit fraction is
value = (−1)^{b_{31}} × 2^{(b_{30}b_{29}...b_{23})_{2} − 127} × (1.b_{22}b_{21}...b_{0})_{2},
which yields
value = (−1)^{sign} × 2^{e − 127} × (1 + Σ_{i=1}^{23} b_{23−i} 2^{−i}).
For example, consider the bit pattern 0 01111100 01000000000000000000000: the sign bit is 0, the biased exponent is (01111100)_{2} = 124, and the fraction is (0.01)_{2} = 0.25;
thus:
value = (−1)^{0} × 2^{124 − 127} × 1.25 = 0.15625.
Note: the true significand is 1 + 0.25 = 1.25, and the stored exponent 124 encodes an actual exponent of 124 − 127 = −3.
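For illustration, the normal-number case of this formula can be evaluated directly in C; a minimal sketch (the function name binary32_value is ours, not from the standard):

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Evaluate (-1)^sign * 2^(e-127) * 1.fraction for a normal number.
   'fraction' holds the 23 stored fraction bits. */
static double binary32_value(unsigned sign, unsigned e, uint32_t fraction) {
    double significand = 1.0 + fraction / 8388608.0;  /* 2^23 = 8388608 */
    return (sign ? -1.0 : 1.0) * ldexp(significand, (int)e - 127);
}

int main(void) {
    /* sign = 0, biased exponent = 124, fraction = 01000...0b = 2^21 */
    printf("%g\n", binary32_value(0, 124, 1u << 21));  /* prints 0.15625 */
    return 0;
}
```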
The single-precision binary floating-point exponent is encoded using an offset-binary representation with a zero offset of 127, known as the exponent bias in the IEEE 754 standard.
Thus, to recover the true exponent defined by the offset-binary representation, the bias of 127 must be subtracted from the stored exponent.
The stored exponents 00_{H} and FF_{H} are interpreted specially.
Exponent | Significand zero | Significand non-zero | Equation
---|---|---|---
00_{H} | zero, −0 | denormal numbers | (−1)^{signbit} × 2^{−126} × 0.significandbits
01_{H}, ..., FE_{H} | normal value | normal value | (−1)^{signbit} × 2^{exponentbits−127} × 1.significandbits
FF_{H} | ±infinity | NaN (quiet, signalling) |
The minimum positive normal value is 2^{−126} ≈ 1.18 × 10^{−38} and the minimum positive (denormal) value is 2^{−149} ≈ 1.4 × 10^{−45}.
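Both limits are exposed by C's <float.h> (FLT_TRUE_MIN requires C11); a quick check:

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("%g\n", FLT_MIN);       /* 2^-126 ~ 1.17549e-38, smallest normal */
    printf("%g\n", FLT_TRUE_MIN);  /* 2^-149 ~ 1.4013e-45, smallest subnormal */
    return 0;
}
```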
In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format.
Here we can show how to convert a base-10 real number into an IEEE 754 binary32 format using the following outline:
Consider a real number with an integer and a fraction part, such as 12.375.
Convert and normalize the integer part into binary.
Convert the fraction part using the technique shown below.
Add the two results and adjust them to produce a proper final conversion.
Conversion of the fractional part: consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, multiply the fraction by 2, take the integer part, and re-multiply the new fraction by 2 until a fraction of zero is found or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format.
0.375 × 2 = 0.750 = 0 + 0.750 => b_{−1} = 0; the integer part represents the binary fraction digit. Re-multiply 0.750 by 2 to proceed.
0.750 × 2 = 1.500 = 1 + 0.500 => b_{−2} = 1
0.500 × 2 = 1.000 = 1 + 0.000 => b_{−3} = 1, fraction = 0.000, terminate
We see that (0.375)_{10} can be exactly represented in binary as (0.011)_{2}. Not all decimal fractions can be represented as a finite-digit binary fraction; for example, decimal 0.1 cannot be represented exactly in binary and is only approximated.
Therefore, (12.375)_{10} = (12)_{10} + (0.375)_{10} = (1100)_{2} + (0.011)_{2} = (1100.011)_{2}
Since the IEEE 754 binary32 format requires real values to be represented in normalized form, 1.x_{1}x_{2}...x_{23} × 2^{n} (see Normalized number, Denormalized number), 1100.011 is shifted to the right by 3 digits to become (1.100011)_{2} × 2^{3}.
Finally we can see that: (12.375)_{10} = (1.100011)_{2} × 2^{3}
From which we deduce:
The exponent is 3 (and in the biased form it is therefore 3 + 127 = 130 = (10000010)_{2})
The fraction is 100011 (looking to the right of the binary point)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of 12.375 as: 0-10000010-10001100000000000000000 = 41460000_{H}
Note: consider converting 68.123 into IEEE 754 binary32 format: Using the above procedure you expect to get 42883EF9_{H} with the last 4 bits being 1001. However, due to the default rounding behaviour of IEEE 754 format, what you get is 42883EFA_{H}, whose last 4 bits are 1010.
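Such conversions are easy to check mechanically by reinterpreting a float's bits as a 32-bit integer; a small C sketch using the portable memcpy idiom:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Return the raw IEEE 754 bit pattern of a float. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* well-defined, unlike a pointer cast */
    return u;
}

int main(void) {
    printf("%08X\n", float_bits(12.375f));  /* prints 41460000 */
    printf("%08X\n", float_bits(68.123f));  /* prints 42883EFA: note the rounding */
    return 0;
}
```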
Ex 1: Consider decimal 1. We can see that: (1)_{10} = (1.0)_{2} × 2^{0}
From which we deduce:
The exponent is 0 (and in the biased form it is therefore 0 + 127 = 127 = (01111111)_{2})
The fraction is 0 (looking to the right of the binary point in 1.0, all digits are 0)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 1 as: 0-01111111-00000000000000000000000 = 3f800000_{H}
Ex 2: Consider a value 0.25. We can see that: (0.25)_{10} = (1.0)_{2} × 2^{−2}
From which we deduce:
The exponent is −2 (and in the biased form it is therefore −2 + 127 = 125 = (01111101)_{2})
The fraction is 0 (all digits to the right of the binary point are 0)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.25 as: 0-01111101-00000000000000000000000 = 3e800000_{H}
Ex 3: Consider a value of 0.375. We saw that (0.375)_{10} = (0.011)_{2} = (1.1)_{2} × 2^{−2}
Hence after determining the representation of 0.375 as (1.1)_{2} × 2^{−2} we can proceed as above:
The exponent is −2 (and in the biased form it is therefore 125 = (01111101)_{2})
The fraction is 1 (looking to the right of the binary point in 1.1, there is a single 1)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.375 as: 0-01111101-10000000000000000000000 = 3ec00000_{H}
These examples are given in bit representation, in hexadecimal and binary, of the floating-point value. This includes the sign, (biased) exponent, and significand.
0 00000000 00000000000000000000001_{2} = 0000 0001_{16} = 2^{−126} × 2^{−23} = 2^{−149} ≈ 1.4012984643 × 10^{−45} (smallest positive subnormal number)
0 00000000 11111111111111111111111_{2} = 007f ffff_{16} = 2^{−126} × (1 − 2^{−23}) ≈ 1.1754942107 × 10^{−38} (largest subnormal number)
0 00000001 00000000000000000000000_{2} = 0080 0000_{16} = 2^{−126} ≈ 1.1754943508 × 10^{−38} (smallest positive normal number)
0 11111110 11111111111111111111111_{2} = 7f7f ffff_{16} = 2^{127} × (2 − 2^{−23}) ≈ 3.4028234664 × 10^{38} (largest normal number)
0 01111110 11111111111111111111111_{2} = 3f7f ffff_{16} = 1 − 2^{−24} ≈ 0.9999999404 (largest number less than one)
0 01111111 00000000000000000000000_{2} = 3f80 0000_{16} = 1 (one)
0 01111111 00000000000000000000001_{2} = 3f80 0001_{16} = 1 + 2^{−23} ≈ 1.0000001192 (smallest number larger than one)
1 10000000 00000000000000000000000_{2} = c000 0000_{16} = −2
0 00000000 00000000000000000000000_{2} = 0000 0000_{16} = 0
1 00000000 00000000000000000000000_{2} = 8000 0000_{16} = −0
0 11111111 00000000000000000000000_{2} = 7f80 0000_{16} = infinity
1 11111111 00000000000000000000000_{2} = ff80 0000_{16} = −infinity
0 10000000 10010010000111111011011_{2} = 4049 0fdb_{16} ≈ 3.14159274101 ≈ π (pi)
0 01111101 01010101010101010101011_{2} = 3eaa aaab_{16} ≈ 0.333333343267 ≈ 1/3
x 11111111 10000000000000000000001_{2} = ffc0 0001_{16} = qNaN (on x86 and ARM processors)
x 11111111 00000000000000000000001_{2} = ff80 0001_{16} = sNaN (on x86 and ARM processors)
By default, 1/3 rounds up, instead of down like double precision, because of the even number of bits in the significand. The bits of 1/3 beyond the rounding point are 1010..., which is more than 1/2 of a unit in the last place.
Encodings of qNaN and sNaN are not completely specified in IEEE 754 and are implemented differently on different processors. The x86 family and the ARM family of processors use the most significant bit of the significand field to indicate a quiet NaN. The PA-RISC processors use the bit to indicate a signalling NaN.
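Assuming the x86/ARM convention described above, the quiet bit can be inspected directly; a minimal C sketch:

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float qnan = nanf("");  /* a quiet NaN (C99) */
    uint32_t u;
    memcpy(&u, &qnan, sizeof u);
    /* Bit 22 is the most significant fraction bit: set for qNaN on x86/ARM. */
    printf("bits = %08X, quiet bit = %u\n", u, (u >> 22) & 1u);
    return 0;
}
```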
We start with the hexadecimal representation of the value, 41c80000 in this example, and convert it to binary:
41c8 0000_{16} = 0100 0001 1100 1000 0000 0000 0000 0000_{2}
then we break it down into three parts: sign bit, exponent, and significand.
Sign bit: 0
Exponent: 1000 0011_{2} = 83_{16} = 131
Significand: 100 1000 0000 0000 0000 0000_{2} = 48 0000_{16}
We then add the implicit 24th bit to the significand:
Significand: 1100 1000 0000 0000 0000 0000_{2} = c8 0000_{16}
and decode the exponent value by subtracting 127:
Raw exponent: 131; decoded exponent: 131 − 127 = 4
Each of the 24 bits of the significand (including the implicit 24th bit), bit 23 to bit 0, represents a value, starting at 1 and halving for each bit, as follows:
bit 23 = 1
bit 22 = 0.5
bit 21 = 0.25
bit 20 = 0.125
bit 19 = 0.0625
bit 18 = 0.03125
.
.
bit 0 = 0.00000011920928955078125
The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits: 1 + 0.5 + 0.0625 = 1.5625.
Then we need to multiply with the base, 2, to the power of the exponent, to get the final result:
1.5625 × 2^{4} = 25
Thus 41c8 0000_{16} = 25.
This is equivalent to:
value = (−1)^{s} × (1 + m × 2^{−23}) × 2^{x − 127}
where s is the sign bit, x is the raw (biased) exponent, and m is the 23-bit stored significand.
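The same decode can be written compactly in C, extracting the three fields with shifts and masks; a sketch for the normal-number case only:

```c
#include <stdio.h>
#include <math.h>
#include <stdint.h>

int main(void) {
    uint32_t bits = 0x41C80000u;
    unsigned sign     = bits >> 31;            /* bit 31 */
    unsigned exponent = (bits >> 23) & 0xFFu;  /* bits 30..23 = 131 */
    uint32_t fraction = bits & 0x7FFFFFu;      /* bits 22..0  = 0x480000 */

    /* Normal-number case: prepend the implicit leading 1. */
    double value = (sign ? -1.0 : 1.0)
                 * ldexp(1.0 + fraction / 8388608.0, (int)exponent - 127);
    printf("%g\n", value);  /* prints 25 */
    return 0;
}
```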
The design of floating-point format allows various optimisations, resulting from the easy generation of a base-2 logarithm approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to reciprocal square root (fast inverse square root), commonly required in computer graphics.
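The widely circulated fast inverse square root routine illustrates this: the integer view of the bits acts as a scaled, biased log2 approximation, which a magic constant and one Newton-Raphson step refine. A minimal C sketch using the classic constant 5f3759df_{16}:

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) from the integer view of the float's bits. */
static float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);      /* bits as integer ~ scaled log2(x) */
    i = 0x5f3759df - (i >> 1);     /* magic constant; halving negates the log */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y); /* one Newton-Raphson refinement step */
    return y;
}
```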
The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use.
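Since bfloat16 is the top half of a binary32, a conversion that simply truncates is a one-liner (real converters typically round to nearest even instead); the helper name float_to_bfloat16 is ours:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Truncate a binary32 to bfloat16 by keeping the top 16 bits. */
static uint16_t float_to_bfloat16(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);
}

int main(void) {
    printf("%04X\n", float_to_bfloat16(1.0f));        /* 3F80 */
    printf("%04X\n", float_to_bfloat16(3.14159265f)); /* 4049 */
    return 0;
}
```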
The bfloat16 format is utilized in upcoming Intel AI processors, such as Nervana NNP-L1000, Xeon processors, and Intel FPGAs, Google Cloud TPUs, and TensorFlow.
Double-precision floating-point format
Double-precision floating-point format is a computer number format, usually occupying 64 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.
Floating point is used to represent fractional values, or when a wider range is needed than is provided by fixed point (of the same bit width), even if at the cost of precision. Double precision may be chosen when the range or precision of single precision would be insufficient.
In the IEEE 754-2008 standard, the 64-bit base-2 format is officially referred to as binary64; it was called double in IEEE 754-1985. IEEE 754 specifies additional floating-point formats, including 32-bit base-2 single precision and, more recently, base-10 representations.
One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language implementers. E.g., GW-BASIC's double-precision data type was the 64-bit MBF floating-point format.
IBM hexadecimal floating point
IBM System/360 computers, and subsequent machines based on that architecture (mainframes), support a hexadecimal floating-point format (HFP). In comparison to IEEE 754 floating point, the IBM floating-point format has a longer significand and a shorter exponent. All IBM floating-point formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16^{−65} to 16^{63} (approx. 5.39761 × 10^{−79} to 7.237005 × 10^{75}).
The number is represented as the following formula: (−1)^{sign} × 0.significand × 16^{exponent − 64}.
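As an illustration, the 32-bit (short) HFP format packs a sign bit, 7 exponent bits, and 24 fraction bits; a minimal C decoding sketch (the function name ibm_hfp_value is ours):

```c
#include <stdio.h>
#include <math.h>
#include <stdint.h>

/* Decode a 32-bit IBM HFP word: 1 sign bit, 7 exponent bits (bias 64),
   24 fraction bits interpreted as 0.fraction (no implicit leading 1). */
static double ibm_hfp_value(uint32_t bits) {
    unsigned sign     = bits >> 31;
    unsigned exponent = (bits >> 24) & 0x7Fu;
    uint32_t fraction = bits & 0xFFFFFFu;
    double f = fraction / 16777216.0;  /* divide by 2^24 */
    return (sign ? -1.0 : 1.0) * f * pow(16.0, (int)exponent - 64);
}

int main(void) {
    /* 1.0 in IBM HFP: exponent 65, fraction 0x100000 (= 1/16). */
    printf("%g\n", ibm_hfp_value(0x41100000u));  /* prints 1 */
    return 0;
}
```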
Methods of computing square roots
In numerical analysis, a branch of mathematics, there are several square root algorithms or methods of computing the principal square root of a non-negative real number. For the square roots of a negative or complex number, see below.
Finding √S is the same as solving the equation x^{2} = S for a positive x. Therefore, any general numerical root-finding algorithm can be used. Newton's method, for example, reduces in this case to the so-called Babylonian method:
x_{n+1} = (x_{n} + S/x_{n}) / 2
These methods generally yield approximate results, but can be made arbitrarily precise by increasing the number of calculation steps.
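A minimal C sketch of the Babylonian iteration (the initial guess and step count are arbitrary choices):

```c
#include <stdio.h>
#include <math.h>

/* Babylonian (Newton) iteration for sqrt(s): x <- (x + s/x) / 2. */
static double babylonian_sqrt(double s, int steps) {
    double x = s > 1.0 ? s : 1.0;  /* any positive initial guess works */
    for (int i = 0; i < steps; i++)
        x = 0.5 * (x + s / x);
    return x;
}

int main(void) {
    printf("%.15f\n", babylonian_sqrt(2.0, 6));  /* ~1.414213562373095 */
    printf("%.15f\n", sqrt(2.0));                /* library value for comparison */
    return 0;
}
```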
Precision (computer science)
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.
Some of the standardized precision formats are
Half-precision floating-point format
Single-precision floating-point format
Double-precision floating-point format
Quadruple-precision floating-point format
Octuple-precision floating-point format
Of these, the octuple-precision format is rarely used. The single- and double-precision formats are most widely used and supported on nearly all platforms. The use of the half-precision format has been increasing, especially in the field of machine learning, since many machine learning algorithms are inherently error-tolerant.
Radeon HD 5000 Series
The Evergreen series is a family of GPUs developed by Advanced Micro Devices for its Radeon line under the ATI brand name. It was employed in the Radeon HD 5000 graphics card series and competed directly with Nvidia's GeForce 400 Series.