In your example,
33883.50 * -1 / 1.05
is evaluated as
33883.50 * (-1 / 1.05)
instead of
(33883.50 * -1) / 1.05
which results in a loss of accuracy.
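The difference is easy to reproduce directly; a minimal sketch (the exact digits depend on SQL Server's decimal precision and scale inference rules, so treat the commented results as approximate):
SELECT 33883.50 * -1 / 1.05;   -- evaluated as 33883.50 * (-(1 / 1.05));
                               -- 1 / 1.05 is rounded first, so the result
                               -- drifts slightly from -32270.000000
SELECT (33883.50 * -1) / 1.05; -- -33883.50 / 1.05 = -32270.000000 exactly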
I played with this a little, using SQL Sentry Plan Explorer to see how SQL Server evaluates expressions. For instance,
2 * 3 * -4 * 5 * 6
is evaluated as
((2)*(3)) * ( -((4)*(5))*(6))
I would explain it this way. In T-SQL, unary minus has the same precedence as subtraction, which is lower than that of multiplication. Yes,
When two operators in an expression have the same operator precedence level, they are evaluated from left to right based on their position in the expression.
but here we have an expression that mixes operators of different precedence, and the parser follows the precedence rules to the letter. Multiplication goes first, so 4 * 5 * 6 is evaluated first, and then the unary minus is applied to the result.
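This grouping is easiest to observe with integer division, where the placement of the unary minus changes the result; a hypothetical mini-example of mine, not from the original question, following the same parse pattern as above:
SELECT 2 * -4 / 5;   -- parsed as 2 * (-(4 / 5)); integer division gives
                     -- 4 / 5 = 0, so the whole expression is 0
SELECT (2 * -4) / 5; -- (-8) / 5 = -1 with integer division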
Usually (say, in C++) unary minus has a higher precedence (the same level as bitwise NOT, for example), and such expressions are parsed and evaluated as expected. Unary minus/plus should have been given the same precedence as bitwise NOT in T-SQL, but it was not, and this is the result. So it is not a bug, but a poor design decision. It is even documented, albeit rather obscurely.
As for why the same example works differently in Oracle than in SQL Server:
- Oracle may have different rules for operator precedence than SQL Server. All it takes is to give unary minus the highest precedence, as it should have.
- When evaluating expressions of type decimal, Oracle may have different rules for determining the precision and scale of the result.
- Oracle may have different rules for rounding intermediate results. SQL Server "uses rounding when converting a number to a decimal or numeric value with a lower precision and scale."
- Oracle may use entirely different types for these expressions, rather than decimal. In SQL Server, "a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3." (See the sketch after this list.)
- The exact definition of decimal may vary in Oracle. Even in SQL Server, "the default maximum precision of numeric and decimal data types is 38. In earlier versions of SQL Server, the default maximum is 28."
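On the SQL Server side, the inferred type, precision, and scale of a literal or intermediate expression can be checked directly; a small sketch using SQL_VARIANT_PROPERTY (PRECISION is a reserved word in T-SQL, hence the short alias):
SELECT
    SQL_VARIANT_PROPERTY(12.345, 'BaseType')  AS base_type, -- numeric
    SQL_VARIANT_PROPERTY(12.345, 'Precision') AS prec,      -- 5
    SQL_VARIANT_PROPERTY(12.345, 'Scale')     AS scale,     -- 3
    SQL_VARIANT_PROPERTY(1 / 1.05, 'Scale')   AS div_scale; -- scale of the
        -- intermediate quotient from the first example (6, on my reading
        -- of the documented division rules)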