Your final result is simply a weighted average accuracy, so you probably don't need to follow the rounding rules used for calculating account balances and the like. If I'm right about that, you don't need BigDecimal; double will be enough.
The overflow problem can be solved by storing the "current average" and updating it with each new record. Namely, let
a_n = (sum_{i=1}^n x_i * w_i) / (sum_{i=1}^n w_i)
for n = 1, ..., N. You start with a_1 = x_1 and then add
d_n := a_{n+1} - a_n
to it. The formula for d_n is
d_n = w_{n+1} * (x_{n+1} - a_n) / W_{n+1}
where W_n := sum_{i=1}^n w_i. You also need to keep track of W_n, but that is fine stored as a double (since we only care about the average, not the exact sum). You can also normalize your weights: if you know they are all multiples of 1000, just divide them by 1000.
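For illustration, here is a minimal sketch of that running update in Java; the class and method names (WeightedAverage, add, get) are my own, not anything from the question:

```java
/** Running weighted average that never stores the full weighted sum of values. */
class WeightedAverage {
    private double average = 0.0;      // a_n: current weighted average
    private double totalWeight = 0.0;  // W_n: sum of weights seen so far

    /** Incorporate one record with value x and weight w > 0. */
    void add(double x, double w) {
        totalWeight += w;                            // W_{n+1} = W_n + w_{n+1}
        average += w * (x - average) / totalWeight;  // a_{n+1} = a_n + d_n
    }

    double get() {
        return average;
    }
}
```

The first call sets the average to x_1 exactly (since the increment is w_1 * (x_1 - 0) / w_1), and each later call only ever works with numbers on the order of the values themselves, so nothing can overflow.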
To get extra accuracy, you can use compensated summation (Kahan summation).
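As a sketch of what that could look like here (again, the names and structure are my own), Kahan compensation applied to accumulating the increments d_n:

```java
/** Running weighted average with Kahan-compensated accumulation of the increments d_n. */
class CompensatedWeightedAverage {
    private double average = 0.0;
    private double compensation = 0.0;  // running low-order error of the average
    private double totalWeight = 0.0;

    void add(double x, double w) {
        totalWeight += w;
        double d = w * (x - average) / totalWeight; // increment d_n
        double y = d - compensation;                // apply correction from the previous step
        double t = average + y;                     // low-order bits of y may be lost here
        compensation = (t - average) - y;           // recover the lost part
        average = t;
    }

    double get() {
        return average;
    }
}
```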
Pre-emptive explanation: floating-point arithmetic is fine here. double has a relative accuracy of about 2E-16. The OP is averaging positive numbers, so there is no cancellation error. What proponents of arbitrary-precision arithmetic don't tell you is that, rounding rules aside, in the cases where it really does give you a lot of extra precision over IEEE 754 floating point, it comes at a significant cost in performance. Floating-point arithmetic was designed by very smart people (Professor Kahan, among others), and if there were a cheap way to get more accuracy than floating point offers, they would have used it.
Disclaimer: if your weights are wildly different (one is 1, another is 10,000,000), I'm not 100% sure you will get satisfactory accuracy, but you can check it on an example where you know what the answer should be.