Choosing a Data Type for Monetary Calculations in Java — Float? Double? Or BigDecimal?

When I first studied basic arithmetic (rational numbers, irrational numbers, and so on), it kept me curious for days (e.g., is e + pi irrational or not?). Years later, when I encountered floating-point arithmetic in computer science (CS), I was again curious about the precision and accuracy of base 2. Below is a YouTube video tutorial to brush up on the conversion:

Convert Decimal to Binary and vice versa

As per Wikipedia's article on floating-point arithmetic:

Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333…). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers which appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a “1100” sequence continuing endlessly.
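This approximation is easy to see directly: the BigDecimal(double) constructor preserves the exact binary value a double stores, so printing it reveals what "0.1" really holds (class name is illustrative):

```java
import java.math.BigDecimal;

public class ExactValueDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the exact binary value of the double,
        // exposing the approximation hidden behind the printed "0.1"
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```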

Since any rational number whose denominator has a prime factor other than 2 has an infinite binary expansion, the floating-point arithmetic used by float and double in Java often produces imprecise results. This can be seen with the help of the following Java program:

public class MonetaryDemo {
    public static void main(String[] args) {
        double total = 0.2;
        for (int i = 0; i < 100; i++) {
            total += 0.2;
        }
        System.out.println("double total = " + total);

        float floatTotal = 0.2f;
        for (int i = 0; i < 100; i++) {
            floatTotal += 0.2f;
        }
        System.out.println("float Total = " + floatTotal);

        System.out.println("Sum of 10 (0.1f) = " + (0.1f + 0.1f + 0.1f + 0.1f + 0.1f + 0.1f + 0.1f + 0.1f + 0.1f + 0.1f));
        System.out.println("Precision of float 2.9876543218f : " + 2.9876543218f);
        System.out.println("Precision of double 2.9876543218d : " + 2.9876543218d);
        System.out.println("0.0175 * 100000 = " + 0.0175 * 100000);
        System.out.println("0.0175f * 100000 = " + 0.0175f * 100000);
    }
}

And the output is:

double total = 20.19999999999996
float Total = 20.200005
Sum of 10 (0.1f) = 1.0000001
Precision of float 2.9876543218f : 2.9876542
Precision of double 2.9876543218d : 2.9876543218
0.0175 * 100000 = 1750.0000000000002
0.0175f * 100000 = 1750.0

If we analyse the above, the total should have been 20.20, but the floating-point calculation in double made it 20.19999999999996 and the floating-point calculation in float made it 20.200005. This is evident from the other calculations in the example as well.

Thus, for monetary calculations where high precision is required, float and double do not seem to be the correct choice.

A dirty workaround popular among developers is to use the smallest unit of currency, e.g. paise rather than Rupees in Indian currency, or cents rather than Dollars in US currency. This just shifts the precision by two or three decimal places at most; it does not solve the underlying problem.
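A minimal sketch of that workaround, storing amounts as long cents (class and variable names are illustrative):

```java
public class CentsDemo {
    public static void main(String[] args) {
        // Store money as the smallest unit (cents) in a long:
        // integer arithmetic is exact, unlike double
        long priceCents = 115;  // $1.15
        long taxCents   = 110;  // $1.10
        long diffCents  = priceCents - taxCents;
        System.out.printf("Diff = $%d.%02d%n", diffCents / 100, diffCents % 100);
        // prints Diff = $0.05, whereas 1.15 - 1.10 in double gives 0.04999999999999982
    }
}
```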

What do Precision and Scale mean ?

Precision is the total number of significant digits, and scale is the number of digits after the decimal point. For example, 2.8765 has a precision of 5 and a scale of 4.
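This can be checked directly with BigDecimal's precision() and scale() methods (a minimal sketch; the class name is illustrative):

```java
import java.math.BigDecimal;

public class PrecisionScaleDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("2.8765");
        System.out.println("precision = " + value.precision()); // 5
        System.out.println("scale     = " + value.scale());     // 4
    }
}
```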

How does BigDecimal solve the problem ?

It is the most suitable choice because it works in base 10, so decimal fractions such as 0.1 are represented exactly.
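For example, the same sum of ten 0.1s that produced 1.0000001 with float is exact with BigDecimal (a minimal sketch; the class name is illustrative):

```java
import java.math.BigDecimal;

public class Base10Demo {
    public static void main(String[] args) {
        // 0.1 terminates in base 10, so each addition below is exact
        BigDecimal sum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            sum = sum.add(new BigDecimal("0.1"));
        }
        System.out.println("Sum of 10 (0.1) = " + sum); // prints 1.0, not 1.0000001
    }
}
```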

The precision of float is 6–7 significant digits and that of double is 15–16 digits, whereas BigDecimal offers arbitrary precision. As per the Java 8 docs:

Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore (unscaledValue × 10^-scale).
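A quick sketch of that representation (class name is illustrative): 1.15 is stored as the unscaled integer 115 with scale 2, i.e. 115 × 10^-2:

```java
import java.math.BigDecimal;

public class UnscaledDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("1.15");
        // The value is unscaledValue × 10^-scale = 115 × 10^-2
        System.out.println("unscaledValue = " + price.unscaledValue()); // 115
        System.out.println("scale         = " + price.scale());         // 2
    }
}
```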

What is the use case of Float ?

The facts above show that float suffers more precision loss than double, which raises the question: why and when should we use float at all? The use case for float is to save memory and gain arithmetic performance, especially on 32-bit architectures. It shows improved performance over double for applications that process large arrays of floating-point numbers, where memory bandwidth is the limiting factor. By switching from double[] to float[], thus halving the data size, we effectively double the throughput, because twice as many values can be fetched in a given time. So applications where performance is a higher priority than precision should prefer float.
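A rough illustration of the memory argument (the array length is arbitrary; sizes come from the standard Float.BYTES and Double.BYTES constants):

```java
public class FloatMemoryDemo {
    public static void main(String[] args) {
        int n = 10_000_000;
        // A float[] holds the same count of values in half the memory of a
        // double[], so twice as many values fit in each memory fetch
        System.out.println("float[]  data bytes: " + (long) n * Float.BYTES);  // 40,000,000
        System.out.println("double[] data bytes: " + (long) n * Double.BYTES); // 80,000,000
    }
}
```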

Size of various data types:

  • double: 8 bytes
  • Double: 16 bytes (8 bytes overhead for the class, 8 bytes for the contained double)
  • BigDecimal: 32 bytes
  • long: 8 bytes
  • Long: 16 bytes (8 bytes overhead for the class, 8 bytes for the contained long)
  • BigInteger: 56 bytes

The sizes above show that BigDecimal's storage requirement is four times that of double. Nowadays systems have cheap and plentiful RAM, so this is rarely a problem.

Example Usage of BigDecimal is as follows:

import java.math.BigDecimal;

public class MonetaryDemo {
    public static void main(String[] args) {
        double amount1 = 1.15;
        double amount2 = 1.10;
        System.out.println("Diff between 1.15 and 1.10 using double is: " + (amount1 - amount2));

        BigDecimal amount3 = new BigDecimal("1.15");
        BigDecimal amount4 = new BigDecimal("1.10");
        System.out.println("Diff between 1.15 and 1.10 using BigDecimal is: " + amount3.subtract(amount4));

        final long iterations = 10000000;

        long t = System.currentTimeMillis();
        double d = 789.0123456;
        for (int i = 0; i < iterations; i++) {
            final double b = d * ((double) System.currentTimeMillis() + (double) System.currentTimeMillis());
        }
        System.out.println("Execution time for 10M iterations double: " + (System.currentTimeMillis() - t));

        t = System.currentTimeMillis();
        BigDecimal bd = new BigDecimal("789.0123456");
        for (int i = 0; i < iterations; i++) {
            final BigDecimal b = bd.multiply(
                BigDecimal.valueOf(System.currentTimeMillis()).add(BigDecimal.valueOf(System.currentTimeMillis())));
        }
        System.out.println("Execution time for 10M iterations BigDecimal: " + (System.currentTimeMillis() - t));
    }
}

The output of the above is:

Diff between 1.15 and 1.10 using double is: 0.04999999999999982
Diff between 1.15 and 1.10 using BigDecimal is: 0.05
Execution time for 10M iterations double: 444
Execution time for 10M iterations BigDecimal: 951

Let's analyze the output: the 10 million BigDecimal iterations take more than twice as long as the double iterations, but they provide precise results.


If precision is of utmost importance, BigDecimal is the way to go, even though it has some performance drawbacks.

Points to be Noted:

  • Do not convert a double to BigDecimal; instead convert a String to BigDecimal when possible, because new BigDecimal(double) is unpredictable due to the inability of double to represent 0.1 as exactly 0.1. If you must start from a double, BigDecimal.valueOf(double) is safer, as it goes through the double's decimal string form.
  • Always provide a rounding mode while setting the scale; the single-argument setScale(int) throws ArithmeticException whenever rounding is required. HALF_EVEN (also known as bankers' rounding) is a common choice for monetary values.
  • Always use a MathContext (or an explicit scale and rounding mode) for BigDecimal division in order to avoid ArithmeticException for infinitely long decimal results. Don't use MathContext.UNLIMITED for that purpose; it is equivalent to no context at all.
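A short sketch of the last two points (the values are illustrative; the class name is hypothetical):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        // setScale without a rounding mode would throw ArithmeticException
        // here, because dropping the final 5 requires rounding
        BigDecimal amount = new BigDecimal("2.675");
        System.out.println(amount.setScale(2, RoundingMode.HALF_EVEN)); // 2.68

        // divide without a MathContext would throw ArithmeticException,
        // because 1/3 has a non-terminating decimal expansion
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal("3"), MathContext.DECIMAL64);
        System.out.println(third); // 0.3333333333333333 (16 digits)
    }
}
```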

Further Reading :

  1. Need for BigDecimal
  2. Decimal to IEEE 754 Floating Point Representation

What am I missing here? Let me know in the comments section and I'll add it in!
What’s next? Subscribe to Learn INQuiZitively to be the first to read my stories.