In our application, we live with a legacy decision to store all engineering data in our database in SI units.
I am concerned that we risk not having enough range and precision, either in the database columns or in the numeric .NET types. I am also concerned that we will see floating-point arithmetic artifacts (although that is probably an issue for everyone).
For example, raw data may have been captured (and read in via some third-party service) in psi (pounds per square inch). Engineers chose that unit of measure because, for the magnitudes involved, it tends to give easily digestible, human-readable numbers without requiring scientific notation.
When we "standardize" a value, i.e. convert it for our own persistence, we convert it to Pa (pascals), which requires multiplying or dividing the number by some potentially large conversion factor.
We often end up storing very large or very small numbers and, worse, we then perform further calculations on those numbers.
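As a concrete illustration of the standardization step, here is a minimal F# sketch of the psi-to-Pa round trip in double precision (the variable names are mine; the factor 6894.757293168361 comes from the exact definitions 1 lbf = 4.4482216152605 N and 1 in² = 0.00064516 m², already rounded to fit a double):

```fsharp
// Sketch: round-tripping a psi reading through Pa in double precision.
let psiToPa = 6894.757293168361      // 1 psi expressed in Pa

let reading = 14.7                   // hypothetical raw reading in psi
let stored  = reading * psiToPa      // the value we would persist, in Pa
let back    = stored / psiToPa       // the value we recover for display

// The round-trip error is usually a few ulps: tiny, but nonzero,
// and it can accumulate once further calculations are layered on top.
printfn "stored = %.17g Pa" stored
printfn "error  = %.17g psi" (back - reading)
```

The multiplication itself costs at most half an ulp of relative error, so a single conversion is benign; the worry is chains of conversions and arithmetic on the converted values.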
We are currently using Oracle's FLOAT and .NET's System.Double.
What do people think about this?
UPDATE
Further research turned up support for units of measure in the upcoming F# language (in CTP at the time of writing).
It seems F# will understand input such as:

9.81<m/s^2>
We will also be able to create our own derived units and unit systems.
Creating a derived unit for newtons in F#: http://blogs.msdn.com/blogfiles/andrewkennedy/WindowsLiveWriter/UnitsofMeasureinFPartOneIntroducingUnits_A131/image_thumb_11.png
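For readers who cannot load the image, a sketch of what such a derived-unit definition looks like in F# (based on the CTP syntax described in the linked blog series; the `Pa` example at the end is my own addition):

```fsharp
// Base SI units
[<Measure>] type kg
[<Measure>] type m
[<Measure>] type s

// A derived unit: 1 N = 1 kg·m/s²
[<Measure>] type N = kg m / s^2

let gravity = 9.81<m/s^2>
let mass    = 5.0<kg>
let force : float<N> = mass * gravity   // unit algebra is checked at compile time

// A pressure unit could be modelled the same way,
// which is exactly the psi/Pa situation described above:
[<Measure>] type Pa = N / m^2
```

The attraction for the scenario in the question is that mixing up psi and Pa would become a compile-time error rather than a silent data-corruption bug, although the underlying representation is still a plain `float`, so the precision concerns remain unchanged.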