C# / SQL: SqlDecimal flips the sign when multiplying small numbers

I seem to have run into yet another problem with my "favorite" data type, SqlDecimal, and I am wondering whether this should be considered a bug or not.

When I multiply two small numbers in T-SQL, I get the expected result. When I run the same numbers through a SQLCLR function, the result is surprising.

C# code:

    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    namespace TestMultiplySQLDecimal
    {
        public static class Multiplier
        {
            [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true, IsPrecise = true)]
            public static SqlDecimal Multiply(SqlDecimal a, SqlDecimal b)
            {
                if (a.IsNull || b.IsNull) return SqlDecimal.Null;
                return a * b;
            }
        }
    }

SQL code:

    USE tempdb
    GO
    IF DB_ID('test') IS NOT NULL DROP DATABASE test
    GO
    CREATE DATABASE test
    GO
    USE test
    GO
    CREATE ASSEMBLY TestMultiplySQLDecimal
    FROM 'C:\Users\tralalalaa\Documents\visual studio 2015\Projects\TestMultiplySQLDecimal\TestMultiplySQLDecimal\bin\Release\TestMultiplySQLDecimal.dll'
    WITH PERMISSION_SET = SAFE
    GO
    CREATE FUNCTION dbo.fn_multiply(@a decimal(38,8), @b decimal(18,8))
    RETURNS decimal(38,8)
    EXTERNAL NAME TestMultiplySQLDecimal.[TestMultiplySQLDecimal.Multiplier].Multiply
    GO
    DECLARE @a decimal(38, 8), @b decimal(18, 8), @c decimal(38, 8), @f decimal(38, 8)
    SELECT @a = -0.00000450, @b = 0.193, @c = NULL, @f = NULL
    SELECT @c = @a * @b, @f = dbo.fn_multiply(@a, @b)
    SELECT multiply = null, c = @c, f = @f

Result:

    c = -0.00000100
    f = +0.00000100

I know that the absolute difference is minimal, and I have shrugged off bigger errors before, chalking them up to rounding differences... but it will be hard to explain to clients why a negative result comes back positive. And after all, T-SQL handles it perfectly well...

I could try to work around this by using decimal(28,8) instead of decimal(38,8), but then I run into other (completely unrelated) problems = /


The following console application reproduces the same problem without involving SQL Server or SQLCLR at all:

    using System;
    using System.Data.SqlTypes;

    namespace PlayAreaCSCon
    {
        class Program
        {
            static void Main(string[] args)
            {
                var dec1 = new SqlDecimal(-0.00000450d);
                var dec2 = new SqlDecimal(0.193d);
                dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
                dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
                Console.WriteLine(dec1 * dec2);
                Console.ReadLine();
            }
        }
    }

It prints 0.000001.

Tags: c#, sql-server, sqlclr

2 Answers

I believe the bug is around line 1550 of SqlDecimal:

    ret = new SqlDecimal(rgulRes, (byte)culRes, (byte)ResPrec, (byte)ActualScale, fResPositive);
    if (ret.FZero())
        ret.SetPositive();
    ret.AssertValid();
    ret.AdjustScale(lScaleAdjust, true);
    return ret;

First, it constructs a new decimal using the actual scale of the intermediate result. Then it checks whether that result is "zero", based on the values passed to the constructor.

Then, having asserted that everything is valid, it performs the scale adjustment.

At the point where the FZero check runs, the result is -0.0000008685. And we know that the final scale is going to be 6, because we have blown past the limits of precision and scale. Well, the first 6 decimal digits are all zero, so the value is declared "zero" and SetPositive is called.

Only after that, when the scale is adjusted, does rounding come into play and move a 1 into the final decimal position. By then the sign has already been discarded.
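Here is a minimal standalone sketch of why that ordering matters (the names and the simple integer representation are my own invention for illustration; the real code operates on an array of 32-bit words). If the "is the result zero at the target scale?" test runs before rounding, the sign is discarded even though rounding later produces a non-zero digit:

    using System;

    class SignFlipSketch
    {
        static void Main()
        {
            // Intermediate product: -0.0000008685, held as an unsigned
            // magnitude (in units of 10^-10) plus a separate sign flag.
            long magnitude = 8685;      // |value| * 10^10
            bool isPositive = false;    // the true product is negative
            int currentScale = 10;
            int targetScale = 6;        // scale forced down by precision limits

            long divisor = PowerOf10(currentScale - targetScale);

            // Buggy ordering (mirrors the managed SqlDecimal): the zero test
            // truncates instead of rounding. Truncated to 6 decimal places,
            // the magnitude is 0, so the value is declared "zero" and the
            // sign flag is reset, the equivalent of ret.SetPositive().
            if (magnitude / divisor == 0)
                isPositive = true;

            // Only THEN does the scale adjustment round (half away from zero),
            // pushing a 1 into the last kept position.
            long rounded = (magnitude + divisor / 2) / divisor;

            // Prints +0.000001 instead of -0.000001.
            Console.WriteLine("{0}0.{1:D6}", isPositive ? "+" : "-", rounded);
        }

        static long PowerOf10(int n)
        {
            long result = 1;
            for (int i = 0; i < n; i++) result *= 10;
            return result;
        }
    }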

That is the bug. Unfortunately, the source code for SQL Server's own built-in decimal implementation is not publicly available, so we cannot compare it with the managed SqlDecimal implementation to see how similar they are, and how the native code avoids the same mistake.


While the difference in behavior between the T-SQL and .NET implementations is "concerning" and does point to a bug, and while @Damien_The_Unbeliever's excellent investigative work may very well have located the source of that behavior (hard to verify at the moment, since the SqlDecimal implementation has a lot of code, some of which uses imprecise calculations via double to work around .NET not supporting more than 28 digits), there is perhaps a bigger issue being overlooked here: both answers (i.e. c = -0.00000100 and f = +0.00000100) are incorrect! Perhaps we should not be so hasty in declaring a winner between "the sky is plaid" and "the sky is polka dot" ;-)
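To pin down what the correct result actually is: plain System.Decimal comfortably holds this particular product exactly, so a quick check (my own addition for reference, not part of the original post) shows the value both implementations should be approximating:

    using System;

    class ExactProduct
    {
        static void Main()
        {
            // System.Decimal has ample precision for these inputs,
            // so this is the exact mathematical product.
            decimal exact = -0.00000450m * 0.193m;
            Console.WriteLine(exact);                // -0.00000086850

            // Rounded to 8 decimal places (the declared scale of @c and @f):
            Console.WriteLine(Math.Round(exact, 8)); // -0.00000087
        }
    }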

This is a case where we should probably be a bit more pragmatic about our goals, understand the limitations of decimal operations a bit better, and widen our testing.

To begin with, while it might seem like a good idea to reserve the maximum space for an unknown set of inputs, using DECIMAL(38, y) everywhere is a bit like using NVARCHAR(MAX) for all strings. Yes, it can usually accommodate whatever you throw at it, but there are consequences. And decimal operations carry an extra consequence, given how the resulting precision and scale are calculated, especially when little room is left for the scale (i.e. 8 digits) while multiplying very small numbers. Meaning: if you are not going to use the full 30 digits to the left of the decimal point that DECIMAL(38, 8) implies, do not specify DECIMAL(38, 8). For the input parameters, specify no more than the maximum size actually needed for each value. Given that both values here are less than 1, something like DECIMAL(20, 18) (or even DECIMAL(18, 8)) will not only be flexible enough, it will also give the correct result. OR, if you really do need to allow for large values, then give more room to the digits to the right of the decimal point (i.e. the scale) by specifying something like DECIMAL(38, 28), which leaves 10 digits to the left of the decimal and 28 to the right.
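To make the "how the resulting precision and scale are calculated" part concrete, here is a small C# sketch of my reading of the documented T-SQL rules for the result type of a decimal multiplication (the function is an approximation written for this answer, not engine code):

    using System;

    class ResultTypeSketch
    {
        // Documented T-SQL rule for multiplication: precision = p1 + p2 + 1,
        // scale = s1 + s2. When precision exceeds 38, the integral part is
        // preserved first and the scale absorbs the overflow, but SQL Server
        // keeps at least 6 fractional digits.
        static (int Precision, int Scale) MultiplyResultType(int p1, int s1, int p2, int s2)
        {
            int precision = p1 + p2 + 1;
            int scale = s1 + s2;

            if (precision > 38)
            {
                int integralDigits = precision - scale;
                scale = Math.Max(Math.Min(scale, 38 - integralDigits), 6);
                precision = 38;
            }
            return (precision, scale);
        }

        static void Main()
        {
            // (38,8) * (18,8): only 6 fractional digits survive, so the exact
            // product -0.0000008685 has to be rounded at the 6th decimal place.
            Console.WriteLine(MultiplyResultType(38, 8, 18, 8));   // (38, 6)

            // (18,8) * (18,8): precision 37 fits, so the full scale 16 is kept.
            Console.WriteLine(MultiplyResultType(18, 8, 18, 8));   // (37, 16)
        }
    }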

Original: DECIMAL(38, 8) for everything

    DECLARE @a DECIMAL(38, 8),
            @b DECIMAL(38, 8),
            @c DECIMAL(38, 8);
    SELECT @a = -0.00000450, @b = 0.193;
    SELECT @c = @a * @b;
    SELECT @a * @b AS [RawCalculation], @c AS [c];

Returns:

    RawCalculation    c
    -0.000001         -0.00000100

Using DECIMAL(18, 8)

    DECLARE @a DECIMAL(18, 8),
            @b DECIMAL(18, 8),
            @c DECIMAL(38, 18),
            @d DECIMAL(20, 18),
            @e DECIMAL(38, 8);
    SELECT @a = -0.00000450, @b = 0.193;
    SELECT @c = @a * @b, @d = @a * @b, @e = @a * @b;
    SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];

Returns:

    RawCalculation        c                       d                       e
    -0.0000008685000000   -0.000000868500000000   -0.000000868500000000   -0.00000087

Using DECIMAL(38, 28)

    DECLARE @a DECIMAL(38, 28),
            @b DECIMAL(38, 28),
            @c DECIMAL(38, 18),
            @d DECIMAL(20, 18),
            @e DECIMAL(38, 8);
    SELECT @a = -0.00000450, @b = 0.193;
    SELECT @c = @a * @b, @d = @a * @b, @e = @a * @b;
    SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];

Returns:

    RawCalculation         c                       d                       e
    -0.00000086850000000   -0.000000868500000000   -0.000000868500000000   -0.00000087

.NET Code Example

The following code is based on the code example that @Damien added to the question. I have expanded it to run additional tests showing how changes in precision and scale affect the calculation, and to display various properties at each step. Note that a decimal literal in .NET uses the suffix M or m, not d (although that did not affect this test): decimal (C# Reference)

    using System;
    using System.Data.SqlTypes;

    namespace SqlDecimalMultiplication
    {
        class Program
        {
            private static void DisplayStuffs(SqlDecimal Dec1, SqlDecimal Dec2)
            {
                Console.WriteLine("1 ~ {0}", Dec1.Value);
                Console.WriteLine("1 ~ Precision: {0}; Scale: {1}; IsPositive: {2}",
                    Dec1.Precision, Dec1.Scale, Dec1.IsPositive);
                Console.WriteLine("2 ~ {0}", Dec2.Value);
                Console.WriteLine("2 ~ Precision: {0}; Scale: {1}; IsPositive: {2}",
                    Dec2.Precision, Dec2.Scale, Dec2.IsPositive);

                Console.Write("\nRESULT: ");
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine(Dec1 * Dec2);
                Console.ResetColor();

                return;
            }

            static void Main(string[] args)
            {
                var dec1 = new SqlDecimal(-0.00000450m);
                var dec2 = new SqlDecimal(0.193m);

                Console.WriteLine("=======================\n\nINITIAL:");
                DisplayStuffs(dec1, dec2);

                dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
                dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
                Console.WriteLine("=======================\n\nAFTER (38, 8) & (18, 8):");
                DisplayStuffs(dec1, dec2);

                dec1 = SqlDecimal.ConvertToPrecScale(dec1, 18, 8);
                Console.WriteLine("=======================\n\nAFTER (18, 8) & (18, 8):");
                DisplayStuffs(dec1, dec2);

                dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 28);
                dec2 = SqlDecimal.ConvertToPrecScale(dec2, 38, 28);
                Console.WriteLine("=======================\n\nAFTER (38, 28) & (38, 28):");
                DisplayStuffs(dec1, dec2);

                Console.WriteLine("=======================");
                //Console.ReadLine();
            }
        }
    }

Returns:

    =======================

    INITIAL:
    1 ~ -0.00000450
    1 ~ Precision: 8; Scale: 8; IsPositive: False
    2 ~ 0.193
    2 ~ Precision: 3; Scale: 3; IsPositive: True

    RESULT: -0.00000086850
    =======================

    AFTER (38, 8) & (18, 8):
    1 ~ -0.00000450
    1 ~ Precision: 38; Scale: 8; IsPositive: False
    2 ~ 0.19300000
    2 ~ Precision: 18; Scale: 8; IsPositive: True

    RESULT: 0.000001
    =======================

    AFTER (18, 8) & (18, 8):
    1 ~ -0.00000450
    1 ~ Precision: 18; Scale: 8; IsPositive: False
    2 ~ 0.19300000
    2 ~ Precision: 18; Scale: 8; IsPositive: True

    RESULT: -0.0000008685000000
    =======================

    AFTER (38, 28) & (38, 28):
    1 ~ -0.0000045000000000000000000000
    1 ~ Precision: 38; Scale: 28; IsPositive: False
    2 ~ 0.1930000000000000000000000000
    2 ~ Precision: 38; Scale: 28; IsPositive: True

    RESULT: -0.00000086850000000
    =======================

Conclusion

While there probably is a bug in SqlDecimal, you most likely will not run into it if you set the precision and scale of the input parameters "correctly". Unless, of course, you really do need 38 digits for the input values, but most use cases never will.

ALSO, the reason for emphasizing the "input parameters" in the paragraph above is that the return value naturally needs a higher precision (and scale) in order to accommodate the increased precision and/or scale produced by certain operations. So there is nothing wrong with keeping DECIMAL(38, 28) or DECIMAL(38, 18) as the return type.


Note:

For SQLCLR UDFs (i.e. scalar functions), do not use the following pattern if it covers all of the input parameters:

 if (a.IsNull || b.IsNull) return SqlDecimal.Null; 

IF the idea is to return NULL when any of the input parameters is NULL, then you should instead use the following option in the CREATE FUNCTION statement:

 WITH RETURNS NULL ON NULL INPUT 

as it will skip calling the .NET code entirely!
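With that option in place, the guard can be dropped from the C# body entirely. A sketch of the simplified function, assuming WITH RETURNS NULL ON NULL INPUT has been added to the CREATE FUNCTION statement shown in the question:

    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public static class Multiplier
    {
        [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true, IsPrecise = true)]
        public static SqlDecimal Multiply(SqlDecimal a, SqlDecimal b)
        {
            // SQL Server never invokes this method when either argument is
            // NULL, so no IsNull guard is needed here.
            return a * b;
        }
    }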

