While the difference in behavior between the T-SQL and .NET implementations is worrying and does indicate a bug, and while @Damien_The_Unbeliever's excellent investigative work may well have identified the reason for it (it is difficult to verify at the moment, because the SqlDecimal implementation contains a lot of code, some of which uses imprecise math, falling back to double to get around .NET not supporting more than 28 digits), there is perhaps a larger issue being overlooked here: both answers (i.e. c = -0.00000100 and f = +0.00000100 ) are incorrect! Perhaps we should not be so hasty in declaring a winner between "Team T-SQL" and "Team .NET" ;-)
This is a case where we should probably be a bit more pragmatic about our goals, understand the limitations of decimal operations a little better, and expand our test area.
To begin with: although it might seem like a good idea to reserve the maximum amount of space for an unknown set of inputs, using DECIMAL(38, y) is similar to using NVARCHAR(MAX) for all strings. Yes, it can usually handle whatever you throw at it, but there are consequences. And there is an additional consequence with decimal operations, given how the resulting precision and scale are calculated, especially when little room is left for the "scale" (i.e. only 8 digits) while multiplying very small numbers. Meaning: if you are not going to use the full range of 30 digits to the left of the decimal point that DECIMAL(38, 8) provides, then do not specify DECIMAL(38, 8). For the input parameters, simply specify the maximum size actually allowed for each of those values. Given that both values here are less than 1, using something like DECIMAL(20, 18) (or even DECIMAL(18, 8) ) will not only be flexible enough, it will produce the correct result. OR, if you really do need to allow for large values, then give more room to the digits to the right of the decimal point (i.e. the "scale") by specifying something like DECIMAL(38, 28) , which allows 10 digits to the left of the decimal point and 28 to the right.
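The way SQL Server derives the result type of a decimal multiplication is documented in "Precision, scale, and length (Transact-SQL)": the raw result has precision p1 + p2 + 1 and scale s1 + s2, and when the precision exceeds 38 it is capped, with the scale reduced by a few rules. Here is a rough sketch of those documented rules, written in Python only for compactness; it is an illustrative model, not SQL Server's actual code:

```python
# Sketch of SQL Server's result-type rules for DECIMAL multiplication,
# modeled on the "Precision, scale, and length (Transact-SQL)" docs.
# Illustrative only -- not SQL Server's actual implementation.

def multiply_result_type(p1, s1, p2, s2):
    precision = p1 + p2 + 1       # raw result precision
    scale = s1 + s2               # raw result scale
    if precision > 38:
        integral = precision - scale   # digits needed left of the point
        if integral < 32:
            # Shrink the scale just enough to fit the integral part.
            scale = min(scale, 38 - integral)
        elif scale > 6:
            # Integral part is large: scale is cut all the way down to 6.
            scale = 6
        precision = 38
    return precision, scale

# DECIMAL(38, 8) * DECIMAL(38, 8) -> DECIMAL(38, 6): only 6 fractional digits!
print(multiply_result_type(38, 8, 38, 8))    # (38, 6)
# DECIMAL(18, 8) * DECIMAL(18, 8) -> DECIMAL(37, 16): nothing is lost.
print(multiply_result_type(18, 8, 18, 8))    # (37, 16)
# DECIMAL(38, 28) * DECIMAL(38, 28) -> DECIMAL(38, 17).
print(multiply_result_type(38, 28, 38, 28))  # (38, 17)
```

This is why (38, 8) inputs leave only 6 digits of scale for the result, while the three alternative typings discussed below all keep enough fractional digits to hold the true product.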
Original: DECIMAL(38, 8) for everything
DECLARE @a DECIMAL(38, 8),
        @b DECIMAL(38, 8),
        @c DECIMAL(38, 8);

SELECT @a = -0.00000450, @b = 0.193;
SELECT @c = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c];
Returns:
RawCalculation    c
-0.000001         -0.00000100
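This matches the documented type-derivation rules: (38, 8) × (38, 8) yields a raw precision of 77 and scale of 16; the precision is capped at 38 and, because 61 digits are reserved for the integral part, the scale is cut down to 6. The true product, -0.0000008685, then rounds away at 6 decimal places. A small sketch of that final rounding step, in Python purely for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Exact product of the two inputs:
product = Decimal('-0.00000450') * Decimal('0.193')
print(format(product, 'f'))    # -0.00000086850

# But DECIMAL(38, 8) * DECIMAL(38, 8) derives DECIMAL(38, 6),
# so only 6 fractional digits survive:
rounded = product.quantize(Decimal('1E-6'), rounding=ROUND_HALF_UP)
print(format(rounded, 'f'))    # -0.000001
```

So the magnitude of the T-SQL result above is not a mystery; it is the inevitable consequence of the derived DECIMAL(38, 6) result type. (The sign flip in .NET is the separate, genuine bug.)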
Using DECIMAL(18, 8)
DECLARE @a DECIMAL(18, 8),
        @b DECIMAL(18, 8),
        @c DECIMAL(38, 18),
        @d DECIMAL(20, 18),
        @e DECIMAL(38, 8);

SELECT @a = -0.00000450, @b = 0.193;
SELECT @c = @a * @b, @d = @a * @b, @e = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];
Returns:
RawCalculation        c                       d                       e
-0.0000008685000000   -0.000000868500000000   -0.000000868500000000   -0.00000087
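Here (18, 8) × (18, 8) derives a result type of DECIMAL(37, 16): the precision (18 + 18 + 1 = 37) stays within 38, so the scale (8 + 8 = 16) is kept intact and the raw calculation is exact. Modeled in Python, again for illustration only:

```python
from decimal import Decimal

# DECIMAL(18, 8) * DECIMAL(18, 8): precision 18 + 18 + 1 = 37, scale 8 + 8 = 16.
# 37 <= 38, so nothing is cut: the derived result type is DECIMAL(37, 16).
precision, scale = 18 + 18 + 1, 8 + 8

# Widening the exact product to scale 16 reproduces the RawCalculation value:
product = Decimal('-0.00000450') * Decimal('0.193')
result = product.quantize(Decimal(1).scaleb(-scale))
print(format(result, 'f'))    # -0.0000008685000000
```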
Using DECIMAL(38, 28)
DECLARE @a DECIMAL(38, 28),
        @b DECIMAL(38, 28),
        @c DECIMAL(38, 18),
        @d DECIMAL(20, 18),
        @e DECIMAL(38, 8);

SELECT @a = -0.00000450, @b = 0.193;
SELECT @c = @a * @b, @d = @a * @b, @e = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];
Returns:
RawCalculation         c                       d                       e
-0.00000086850000000   -0.000000868500000000   -0.000000868500000000   -0.00000087
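With (38, 28) inputs the raw result type would have precision 77 and scale 56; the precision is capped at 38, and since only 77 - 56 = 21 digits are reserved for the integral part (less than 32), the scale is trimmed only to 38 - 21 = 17 rather than all the way down to 6, so the product survives. In Python terms (illustrative only):

```python
from decimal import Decimal, ROUND_HALF_UP

# DECIMAL(38, 28) * DECIMAL(38, 28): raw precision 77, raw scale 56.
# Integral digits needed: 77 - 56 = 21 (< 32), so the scale is trimmed
# only to 38 - 21 = 17 instead of being cut down to 6.
raw_precision, raw_scale = 77, 56
scale = min(raw_scale, 38 - (raw_precision - raw_scale))

product = Decimal('-0.00000450') * Decimal('0.193')
result = product.quantize(Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)
print(format(result, 'f'))    # -0.00000086850000000
```

This is why giving more room to the scale of the inputs, counter-intuitively, makes the result *more* accurate here: the reduction rules have enough fractional digits left to keep the significant part of the product.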
.NET Code Example
The following code is based on the code example that @Damien added to the question. I expanded it to run additional tests showing how changes in precision and scale affect the calculations, and to display the various properties at each step. Note that the literal suffix for a decimal value in .NET is m or M , not d (which denotes a double), although that did not affect this particular test: decimal (C# reference)
using System;
using System.Data.SqlTypes;

namespace SqlDecimalMultiplication
{
    class Program
    {
        private static void DisplayStuffs(SqlDecimal Dec1, SqlDecimal Dec2)
        {
            Console.WriteLine("1 ~ {0}", Dec1.Value);
            Console.WriteLine("1 ~ Precision: {0}; Scale: {1}; IsPositive: {2}",
                              Dec1.Precision, Dec1.Scale, Dec1.IsPositive);
            Console.WriteLine("2 ~ {0}", Dec2.Value);
            Console.WriteLine("2 ~ Precision: {0}; Scale: {1}; IsPositive: {2}",
                              Dec2.Precision, Dec2.Scale, Dec2.IsPositive);

            Console.Write("\nRESULT: ");
            Console.ForegroundColor = ConsoleColor.White;
            Console.WriteLine(Dec1 * Dec2);
            Console.ResetColor();

            return;
        }

        static void Main(string[] args)
        {
            var dec1 = new SqlDecimal(-0.00000450m);
            var dec2 = new SqlDecimal(0.193m);

            Console.WriteLine("=======================\n\nINITIAL:");
            DisplayStuffs(dec1, dec2);

            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
            Console.WriteLine("=======================\n\nAFTER (38, 8) & (18, 8):");
            DisplayStuffs(dec1, dec2);

            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 18, 8);
            Console.WriteLine("=======================\n\nAFTER (18, 8) & (18, 8):");
            DisplayStuffs(dec1, dec2);

            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 28);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 38, 28);
            Console.WriteLine("=======================\n\nAFTER (38, 28) & (38, 28):");
            DisplayStuffs(dec1, dec2);

            Console.WriteLine("=======================");
        }
    }
}
Returns:
=======================

INITIAL:
1 ~ -0.00000450
1 ~ Precision: 8; Scale: 8; IsPositive: False
2 ~ 0.193
2 ~ Precision: 3; Scale: 3; IsPositive: True

RESULT: -0.00000086850
=======================

AFTER (38, 8) & (18, 8):
1 ~ -0.00000450
1 ~ Precision: 38; Scale: 8; IsPositive: False
2 ~ 0.19300000
2 ~ Precision: 18; Scale: 8; IsPositive: True

RESULT: 0.000001
=======================

AFTER (18, 8) & (18, 8):
1 ~ -0.00000450
1 ~ Precision: 18; Scale: 8; IsPositive: False
2 ~ 0.19300000
2 ~ Precision: 18; Scale: 8; IsPositive: True

RESULT: -0.0000008685000000
=======================

AFTER (38, 28) & (38, 28):
1 ~ -0.0000045000000000000000000000
1 ~ Precision: 38; Scale: 28; IsPositive: False
2 ~ 0.1930000000000000000000000000
2 ~ Precision: 38; Scale: 28; IsPositive: True

RESULT: -0.00000086850000000
=======================
Conclusion
While there probably is a bug in SqlDecimal , you will likely never run into it if you size the precision and scale of the input parameters "correctly." That is, unless you truly need 38 digits for the input values, which most use cases never will.
ALSO, the reason for emphasizing "input parameters" in the paragraph above is that the return value can naturally have a larger precision and/or scale, in order to accommodate the increased precision and/or scale that certain operations produce. So there is nothing wrong with keeping DECIMAL(38, 28) or DECIMAL(38, 18) as the return type.
Note:
For SQLCLR UDFs (i.e. scalar functions), do not use the following pattern if it covers all of the input parameters:
if (a.IsNull || b.IsNull) return SqlDecimal.Null;
IF the intent is to return NULL when any of the input parameters is NULL , then you should instead use the following option in the CREATE FUNCTION statement:
WITH RETURNS NULL ON NULL INPUT
as this will bypass executing the .NET code entirely (the function is simply never called for NULL inputs)!