JavaScript is frequently ridiculed when developers first encounter this seemingly baffling result:
0.1 + 0.2 == 0.30000000000000004
Memes about JavaScript's handling of numbers are widespread, often leading many to believe that this behaviour is unique to the language.
However, this quirk isn't limited to JavaScript; it is a consequence of how most programming languages handle floating-point arithmetic.
For instance, here are code snippets from Java and Go that produce similar results:
Computers can natively store only integers. They don't understand fractions. (How could they? The only way computers can do arithmetic is by turning some lights on or off. A light can either be on or off. It can't be "half" on!) So they need some way of representing floating point numbers, and since this representation is not perfectly accurate, more often than not 0.1 + 0.2 does not exactly equal 0.3.
A fraction can be expressed cleanly in a number system only if the prime factors of its denominator are also prime factors of the system's base; every other fraction has a repeating expansion. For example, in base 10, fractions like 1/2, 1/4, 1/5 and 1/10 terminate cleanly because their denominators are made up only of 2s and 5s - the prime factors of 10. However, fractions like 1/3, 1/6 and 1/7 all have recurring decimals.
Similarly, in the binary system, fractions like 1/2, 1/4 and 1/8 are expressed cleanly, while most other fractions - including 1/10 and 1/5, i.e. 0.1 and 0.2 - have recurring expansions. Since a computer can only store a finite number of bits, these recurring expansions get truncated, and the tiny rounding errors surface when the binary representation is converted back to a human-readable base-10 number. This is what leads to results that are only approximately correct.
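You can see the recurring expansion for yourself with a short Go sketch that extracts the binary digits of a fraction by repeated doubling, using exact integer arithmetic (the helper binaryFraction is just for illustration):

```go
package main

import "fmt"

// binaryFraction returns the first n bits of p/q's binary expansion
// (for 0 < p < q), computed with exact integer arithmetic: doubling
// the fraction shifts out one binary digit per step.
func binaryFraction(p, q, n int) string {
	bits := make([]byte, 0, n)
	for i := 0; i < n; i++ {
		p *= 2
		if p >= q {
			bits = append(bits, '1')
			p -= q
		} else {
			bits = append(bits, '0')
		}
	}
	return string(bits)
}

func main() {
	fmt.Println("1/10 = 0." + binaryFraction(1, 10, 24)) // the block 0011 repeats forever
	fmt.Println("1/4  = 0." + binaryFraction(1, 4, 24))  // terminates after two bits
}
```

The first line prints 0.000110011001100110011001..., showing that 0.1 never terminates in binary, while 1/4 terminates immediately.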
Now that we've established that this problem is not exclusive to JavaScript, let's explore how floating-point numbers are represented and processed under the hood to understand why this behaviour occurs.
To do that, we first have to understand the IEEE 754 floating-point standard.
IEEE 754 standard is a widely used specification for representing and performing arithmetic on floating-point numbers in computer systems. It was created to guarantee consistency when using floating-point arithmetic on various computing platforms. Most programming languages and hardware implementations (CPUs, GPUs, etc.) adhere to this standard.
This is how a number is denoted in IEEE 754 format:

(-1)^s × M × 2^E
Here s is the sign bit (0 for positive, 1 for negative), M is the mantissa (holds the digits of the number) and E is the exponent which determines the scale of the number.
No integer values of M and E can exactly represent numbers like 0.1, 0.2 or 0.3 in this format; we can only pick values that give the closest result.
Here is a tool you could use to determine the IEEE 754 notations of decimal numbers: https://www.h-schmidt.net/FloatConverter/IEEE754.html
IEEE 754 notation of 0.25:

0 01111101 00000000000000000000000

IEEE 754 notations of 0.1 and 0.2 respectively:

0 01111011 10011001100110011001101
0 01111100 10011001100110011001101
Note that the conversion error for 0.25 is zero, while 0.1 and 0.2 both have non-zero errors.
IEEE 754 defines the following formats for representing floating-point numbers:
Single-precision (32-bit): 1 bit for sign, 8 bits for exponent, 23 bits for mantissa
Double-precision (64-bit): 1 bit for sign, 11 bits for exponent, 52 bits for mantissa
For the sake of simplicity, let us consider the single-precision format that uses 32 bits.
The 32 bit representation of 0.1 is:
0 01111011 10011001100110011001101
Here the first bit represents the sign (0 which means positive in this case), the next 8 bits (01111011) represent the exponent and the final 23 bits (10011001100110011001101) represent the mantissa.
This is not an exact representation of 0.1; the value actually stored is exactly 0.100000001490116119384765625.
Similarly, the 32 bit representation of 0.2 is:
0 01111100 10011001100110011001101
This is not an exact representation either; the value actually stored is exactly 0.20000000298023223876953125.
When added, this results in:
0 01111101 00110011001100110011010
which is ≈ 0.30000001192092896 in decimal representation.
In conclusion, the seemingly perplexing result of 0.1 + 0.2 not yielding 0.3 is not an anomaly specific to JavaScript, but a consequence of the limitations of floating-point arithmetic across programming languages. The roots of this behaviour lie in the binary representation of numbers, which inherently leads to precision errors when handling certain fractions.