(This was supposed to be a general, hypothetical question. I complained that .NET was a pig and was pressed for reasons. It was not actually a question about my specific application.)
I am currently rewriting old C++ code in C#; we are migrating all of our legacy applications. I have C++ applications that use at most 3% CPU, and most of the time they use none at all. Then I take the code, copy-paste it, reformat it into C# syntax and the .NET libraries, and BAM! 50% CPU. What is the reason for this? At first I thought it was the JIT, but even after every code path has been exercised and everything has been JITted, the problem remains.
I have also noticed huge memory growth. Applications that used to run in a full 9 MB now start at 10 MB and climb to 50 MB while running. I understand that hardware is cheap, but I want to understand what causes this. Is it cause for alarm, or is .NET really just that much of a pig?
Update 1: Reply to Skeet
I am familiar with C#. I am converting things to LINQ and so on, and I usually end up reducing the line count as I go. Could you give some more examples of what C++ people tend to do wrong in .NET?
Update 2
This was meant to be a general question, but the specific application that has the problem is as follows.
It has a thread that uses the ODBC driver to pull data from a Paradox database, then converts the data with LINQ and posts it to a SQL database. I ran it through the ANTS profiler, and filling the DataSet appears to take the most time, followed by the LINQ insert. I know that parts of it use reflection, but I don't see how to avoid that for what I need. I plan to change my strings to StringBuilders. Also, is there a difference between these two ways of reading a value?
(int)datarow["Index"]
and
Convert.ToInt32(datarow["Index"])
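Here is a small self-contained snippet I put together to compare the two forms; the DataTable is just a made-up stand-in for the real Paradox data so the code runs on its own:

```csharp
using System;
using System.Data;

class CastVsConvert
{
    static void Main()
    {
        // Stand-in table; the real data comes from Paradox via ODBC.
        var table = new DataTable();
        table.Columns.Add("Index", typeof(int));
        table.Rows.Add(42);
        DataRow datarow = table.Rows[0];

        // Unboxing cast: cheap, but throws InvalidCastException
        // unless the boxed value is exactly an Int32.
        int a = (int)datarow["Index"];

        // Convert.ToInt32 goes through IConvertible, so it does a
        // little more work but also accepts short, long, string, etc.
        int b = Convert.ToInt32(datarow["Index"]);

        Console.WriteLine("{0} {1}", a, b);
    }
}
```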
I changed all the string concatenation to format strings; it didn't make a dent. Does anyone know the difference between a DataReader and a DataAdapter with DataSets?
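In case it helps anyone answer: my understanding is that a DataAdapter fills a DataSet, buffering the entire result set in memory before you touch a single row, while a DataReader streams rows forward-only, one at a time. Here is a sketch of both against the ODBC source; the connection string and query are made up:

```csharp
using System;
using System.Data;
using System.Data.Odbc;

class ReaderVsAdapter
{
    // Made-up connection string and query; the real ones point at
    // the Paradox files.
    const string ConnStr = "Driver={Microsoft Paradox Driver (*.db )};DefaultDir=C:\\data;";
    const string Query = "SELECT Name FROM Customers";

    static void Main()
    {
        // DataAdapter + DataSet: Fill buffers every row (plus per-row
        // bookkeeping) in memory before the loop even starts.
        using (var conn = new OdbcConnection(ConnStr))
        using (var adapter = new OdbcDataAdapter(Query, conn))
        {
            var ds = new DataSet();
            adapter.Fill(ds);
            foreach (DataRow row in ds.Tables[0].Rows)
                Console.WriteLine(row["Name"]);
        }

        // DataReader: forward-only stream holding one row at a time,
        // so memory stays flat no matter how large the table is.
        using (var conn = new OdbcConnection(ConnStr))
        using (var cmd = new OdbcCommand(Query, conn))
        {
            conn.Open();
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["Name"]);
            }
        }
    }
}
```

Since the profiler says the DataSet fill is the hot spot, I am guessing the streaming version would cut both the CPU spike and the memory growth for a one-pass copy job like mine, but I would like confirmation.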