I have two BigDecimals, Actual and Budgeted. I am dividing Actual by Budgeted to come up with a percentage.
The problem I am facing is that when I build some unit tests, I try to confirm that the resulting BigDecimal is .1, but when I compare it against `new BigDecimal(.1)`, the test fails due to double precision problems.
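A quick demonstration of why the comparison fails, assuming a standard Java setup: the `BigDecimal(double)` constructor preserves the binary approximation of the double literal, while the `String` constructor gives exactly the decimal you wrote.

```java
import java.math.BigDecimal;

public class DoubleConstructorDemo {
    public static void main(String[] args) {
        // The double literal .1 has no exact binary representation, so the
        // BigDecimal(double) constructor captures the approximation:
        System.out.println(new BigDecimal(.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor yields exactly 0.1:
        System.out.println(new BigDecimal("0.1")); // prints 0.1
    }
}
```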
The way I thought about getting around this was to create two BigDecimals, ten and one hundred, divide the two, and use the result for testing. That way I only ever use fixed-point numbers, and my calculations should work exactly.
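A minimal sketch of that approach, with hypothetical actual/budgeted values chosen for illustration. Note that `BigDecimal.equals()` also compares scale (so `0.1000` and `0.1` are not equal), which is why the comparison below uses `compareTo()` instead:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PercentageTest {
    public static void main(String[] args) {
        // Hypothetical values: actual = 10, budgeted = 100
        BigDecimal actual = new BigDecimal("10");
        BigDecimal budgeted = new BigDecimal("100");

        // Divide with an explicit scale and rounding mode so a
        // non-terminating quotient cannot throw ArithmeticException
        BigDecimal ratio = actual.divide(budgeted, 4, RoundingMode.HALF_UP); // 0.1000

        // Build the expected value from a String, never from a double
        BigDecimal expected = new BigDecimal("0.1");

        // equals() is false here (scales 4 vs 1 differ);
        // compareTo() compares numeric value only
        System.out.println(ratio.equals(expected));         // false
        System.out.println(ratio.compareTo(expected) == 0); // true
    }
}
```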
So my question is: is there a better way to do this?