.Net ORM / Business Object Performance

Currently I am working with a custom business object layer (roughly following a facade pattern), in which the object's properties are loaded from stored procedures and which also provides a place for business logic. This has worked well as a step toward a more layered and standardized application model, but I feel this approach is more of an evolutionary stage than a permanent one.

I am now moving toward a more formal architecture, so some of the design decisions should not be mine alone. I have worked with CSLA and LINQ to SQL before, and although I like many of the design decisions in CSLA, I find it a bit bloated for my taste, and LINQ to SQL may not have the performance I need. I am intrigued by the popularity of NHibernate, and LINQ to Entities is also an option; however, performance is a key concern, because there are cases where a large number of records (>15k) must be fetched (please don't question why), and I wonder which ORM looks like the best-performing choice for building formal .NET systems.

Note: this will be used mainly in WinForms and WPF applications.

Duplicate: https://stackoverflow.com/questions/146087/best-performing-orm-for-net

+4
3 answers

http://ormbattle.net - its performance tests seem to be almost exactly what you want to see.

Look at the materialization test in particular (it measures exactly the performance of fetching a large number of items); you can also compare each ORM's numbers against nearly ideal SQL on plain ADO.NET doing the same work.
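For context, the plain-ADO.NET baseline that such a materialization test is compared against is a raw `SqlDataReader` loop, sketched below (the table, columns, and `Person` class are hypothetical):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical DTO for the rows being materialized.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class AdoBaseline
{
    // Reads all rows with a raw SqlDataReader; this is the "near-ideal"
    // number an ORM's materialization throughput is measured against.
    public static List<Person> LoadPeople(string connectionString)
    {
        var result = new List<Person>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Person", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new Person
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return result;
    }
}
```

An ORM that materializes 15k+ entities at some reasonable fraction of this loop's speed is probably fast enough for the scenario in the question.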

+8

With any ORM you get a boost out of the box from the level-1 (in-process) cache. This helps especially with reads: if the entity is already in the cache, the ORM never makes the round trip to the database. Most ORMs also have the option of a level-2 out-of-process cache, and the best part is that these simply plug into the ORM. Check out NCache for NHibernate.
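For illustration, enabling NHibernate's second-level cache is just a configuration change. The property names below are NHibernate's standard ones, but the NCache provider class and assembly names are an assumption; verify them against the NCache documentation:

```xml
<!-- hibernate.cfg.xml fragment (sketch) -->
<property name="cache.use_second_level_cache">true</property>
<!-- Provider class/assembly names are illustrative; check the NCache docs. -->
<property name="cache.provider_class">
  Alachisoft.NCache.Integrations.NHibernate.Cache.NCacheProvider, Alachisoft.NCache.Integrations.NHibernate.Cache
</property>
```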

+1

O/R mapper performance will largely depend on how your application is designed and how you map your business objects. For example, you can easily kill performance by lazily loading children in a loop, so that 1 select for 1000 objects turns into 1001 selects (google "n+1 selects").
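Sketched with NHibernate's LINQ provider (the `Order`/`OrderLine` entities are hypothetical), the difference between the n+1 pattern and an eager fetch looks like this:

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Hypothetical entities; members are virtual so NHibernate can proxy them.
public class Order
{
    public virtual int Id { get; set; }
    public virtual IList<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public virtual int Id { get; set; }
}

public static class OrderQueries
{
    // N+1 pattern: 1 select for the orders, then 1 lazy-load select per
    // order the first time its Lines collection is touched in the loop.
    public static int CountLinesSlow(ISession session)
    {
        var orders = session.Query<Order>().ToList();   // 1 query
        return orders.Sum(o => o.Lines.Count);          // + N lazy-load queries
    }

    // Eager fetch: the join brings the children back in one round trip.
    public static int CountLinesFast(ISession session)
    {
        var orders = session.Query<Order>()
                            .FetchMany(o => o.Lines)    // single joined query
                            .ToList();
        return orders.Sum(o => o.Lines.Count);
    }
}
```

The same trap and the same fix (eager loading / explicit includes) exist in LINQ to SQL and Entity Framework as well.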

One of the biggest gains with O/R mappers is developer productivity, which is often more important than application performance. Application performance is generally acceptable to end users for most applications running on modern hardware; developer productivity remains the bottleneck no matter how much Mountain Dew you apply to the problem. :-)

+1
