Let's say I want to convert Double x to Decimal y. There are many ways to do this:
1. var y = Convert.ToDecimal(x);  // Dim y = Convert.ToDecimal(x)
2. var y = new Decimal(x);        // Dim y = New Decimal(x)
3. var y = (decimal)x;            // Dim y = CType(x, Decimal)
4. Dim y = CDec(x)                ' no C# equivalent
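A minimal sketch of the three C# forms side by side (variable and class names are illustrative, not from the original post):

```csharp
using System;

class DecimalConversionDemo
{
    static void Main()
    {
        double x = 123.456;

        decimal y1 = Convert.ToDecimal(x); // 1. Convert helper
        decimal y2 = new Decimal(x);       // 2. constructor
        decimal y3 = (decimal)x;           // 3. explicit cast

        // All three routes yield the same value for the same input.
        Console.WriteLine(y1 == y2 && y2 == y3); // True
    }
}
```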
Functionally, all of the above do the same thing (as far as I can tell). Besides personal taste and style, is there a definite reason to choose one option over another?
EDIT: This is the IL generated by compiling the three C# variants in a Release configuration:
1. call valuetype [mscorlib]System.Decimal [mscorlib]System.Convert::ToDecimal(float64)
   --> which calls System.Decimal::op_Explicit(float64)
   --> which calls System.Decimal::.ctor(float64)
2. newobj instance void [mscorlib]System.Decimal::.ctor(float64)
3. call valuetype [mscorlib]System.Decimal [mscorlib]System.Decimal::op_Explicit(float64)
   --> which calls System.Decimal::.ctor(float64)
This is the IL generated by compiling the four VB variants in a Release configuration:
1. call valuetype [mscorlib]System.Decimal [mscorlib]System.Convert::ToDecimal(float64)
   --> which calls System.Decimal::op_Explicit(float64)
   --> which calls System.Decimal::.ctor(float64)
2. call instance void [mscorlib]System.Decimal::.ctor(float64)
3. newobj instance void [mscorlib]System.Decimal::.ctor(float64)
4. newobj instance void [mscorlib]System.Decimal::.ctor(float64)
So every path ends up in System.Decimal::.ctor(float64).
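Since all routes funnel into the same constructor, their failure behavior should match as well. A small sketch of that (class name and test values are illustrative): out-of-range and non-finite doubles throw OverflowException regardless of which conversion form is used.

```csharp
using System;

class DecimalOverflowDemo
{
    static void Main()
    {
        // double.MaxValue exceeds decimal's range; NaN is not representable.
        foreach (double bad in new[] { double.MaxValue, double.NaN })
        {
            try { decimal y = (decimal)bad; }
            catch (OverflowException) { Console.WriteLine("cast: OverflowException"); }

            try { decimal y = Convert.ToDecimal(bad); }
            catch (OverflowException) { Console.WriteLine("Convert: OverflowException"); }
        }
    }
}
```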
Heinzi