How do value types really work in .NET?

A somewhat academic question, but: how do value types such as `int` actually work?

I used Reflector on mscorlib to see how System.Int32 is implemented, and it's just a struct that inherits from System.ValueType. I was expecting some kind of bit array holding the value, but all I found was a single field declared as int itself — isn't that a circular reference?

I mean, I can write `int i = 14;`, and the number 14 has to be stored somewhere, but I couldn't find any "32-bit array" or pointer or anything else inside Int32.

Is this some kind of compiler magic, and are these magic types part of the specification? (Just as System.Attribute or System.Exception are "special" types.)

Edit: if I declare my own struct, I add fields to it, and those fields have built-in types such as int, so the CLR knows I'm holding an int. But how does it know that int is 32 bits, signed? Does the specification simply define certain basic types and thereby make them "magical", or is there a technical mechanism? A hypothetical example: if I wanted to declare an Int36, i.e. an integer with 36 bits, could I create a type that works just like Int32 (apart from the 4 extra bits, of course) by saying "okay, set aside 36 bits"? Or are the built-in primitives set in stone, so I'd have to work around it somehow (for example, by using an Int64 and code that only ever touches the low 36 bits)?

As I said, this is all very academic and hypothetical, but it's something I've always wondered about.


Yes, it is essentially magic, built into the CLI. At the IL level there are dedicated instructions such as `ldc.i4`, which loads a 32-bit integer constant, and `add`, which operates on such values directly. (Your `int i = 14;` compiles to a `ldc.i4` instruction, so the value 14 is embedded directly in the MSIL.)
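As a sketch of what that looks like in practice (the exact IL depends on the compiler and optimization settings; the instructions shown in the comments are approximate):

```csharp
class Program
{
    // Compiling this and opening it in ildasm or ILSpy shows the
    // constant 14 embedded directly in the method body's IL.
    public static int Fourteen()
    {
        int i = 14;   // IL (roughly): ldc.i4.s 14
        return i;     //               stloc.0 / ldloc.0 / ret
    }

    static void Main() => System.Console.WriteLine(Fourteen());
}
```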

For the normative details, see Partition IIa of the CLI specification, section 7.2, "Built-in Types". The built-in types are: bool, char, object, string, float32, float64, int [8 | 16 | 32 | 64], unsigned int [8 | 16 | 32 | 64], native int (IntPtr), native unsigned int, and typedref. These types are special in that the VES has direct support for them; a type like Int32 is "just" the class-library surface of the primitive int32 that the VES understands natively.

Other value types, such as System.Decimal or System.Drawing.Point, are not magic at all: they are ordinary structs whose fields ultimately bottom out in these built-in types.
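To illustrate, here is a hypothetical Point-like struct (my own example, not the actual System.Drawing.Point source): there is no bit-array machinery underneath, its storage is exactly its built-in primitive fields.

```csharp
using System;
using System.Runtime.InteropServices;

// A "non-magic" value type: just a composition of built-in primitives.
struct MyPoint
{
    public int X;   // built-in int32
    public int Y;   // built-in int32
}

class Demo
{
    static void Main()
    {
        // Two 32-bit fields -> 8 bytes of storage; as a value type it
        // carries no object header.
        Console.WriteLine(Marshal.SizeOf<MyPoint>());
    }
}
```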


Int32 is a primitive type that the runtime understands natively, so it does not need to be defined in terms of anything simpler. The CLR handles Int32 directly.

The types Int16, Int32, UInt16, UInt32, Single, Double, etc. correspond to the primitive types that the underlying hardware, e.g. x86, supports directly (in its registers and instructions). A 36-bit Int36 has no such hardware counterpart, so you cannot declare one as a primitive.
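The workaround the question hints at can be sketched like this. Everything here is hypothetical — the name Int36 and the wrap-around overflow behavior are my own choices, not anything the runtime provides; the CLR still stores and computes on 64 bits, and the code just masks the result down to 36.

```csharp
using System;

// Hypothetical: emulate a 36-bit unsigned integer on top of Int64.
struct Int36
{
    const long Mask = (1L << 36) - 1;   // keep only the low 36 bits

    readonly long value;

    public Int36(long v) { value = v & Mask; }

    // Addition that wraps at 2^36, the way a real 36-bit register would.
    public static Int36 operator +(Int36 a, Int36 b)
        => new Int36((a.value + b.value) & Mask);

    public long Value => value;
}

class Demo
{
    static void Main()
    {
        var max = new Int36((1L << 36) - 1);  // largest 36-bit value
        var one = new Int36(1);
        Console.WriteLine((max + one).Value); // wraps around to 0
    }
}
```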

