Is the ADO.NET Entity Framework (with ASP.NET MVC v2) a viable option when writing custom and constantly updated websites?

I just finished the MvcMusicStore tutorial, which can be found here. It is a great tutorial with working source code, and one of my favorite MVC v2 tutorials so far.

This tutorial is my first introduction to the ADO.NET Entity Framework, and I have to admit that most of it felt very quick and straightforward. However, I am concerned about maintainability. How does this setup hold up when a client requests additional functionality for his site that requires new fields, tables, and relationships?

I am very worried that I won't be able to execute customer change orders efficiently, because Entity models are largely drag-and-drop generated code. My experience with code generators is not very good. What if something goes awry in the guts of the model and I can't put the pieces back together again?

Ultimately, I wonder whether hand-written models that a person can read and edit are more maintainable than the Entity Framework.

Has anyone worked with the Entity Framework enough to say they are comfortable using it in a very fluid development environment?

+6
asp.net-mvc ado.net-entity-data-model
4 answers

I have been using Entity Framework (v1.0) for about a year on my current project. We have 100 tables, all of them added to the edmx. The problems we faced (though we are not sure whether the newer Entity Framework releases solve them):

  • If you are used to the VS.NET IDE, you will be used to doing everything via drag and drop from your designer. The problem is, once your edmx holds 100 tables, the IDE really grinds and you have to wait 3-4 minutes before it becomes responsive.

  • With so many tables, any changes you make to edmx take a long time.

  • When you use version control, comparing 10,000 lines of XML is pretty painful. Consider merging 2 branches, each with a 10,000-line edmx, new tables, new associations between tables, deleted associations, comparing the XML back and forth. You will need a good XML comparison tool if you are serious about merging two large edmx files.

  • For performance reasons, we had to embed the csdl, msl, and ssdl files as embedded resources.

  • Your edmx should be kept in sync with your database at all times; at the very least, when you try to update the edmx it will attempt to synchronize and may throw some obscure errors if they are out of sync.

  • Remember that your objects (tables/views) should always have a primary key, otherwise you will get obscure errors. See my other question here.

Things we did / I might consider in the future when using EF

  • Use multiple edmx files, with one edmx per group of tables that are logically related. Remember that if you do this, each edmx must live in its own namespace. If you try to add 2 related tables (say, Person and Address) to 2 edmx files in the same namespace, you will get a compiler error saying that the foreign key relationship is already defined. (Tip: create a folder and create the edmx inside that folder. If you try to change the namespace in an edmx without a folder, it does not save the namespace properly the next time you open/edit it.)

    fewer tables per edmx => lighter container => good

    fewer tables per edmx => easier merging of two branches

  • Remember that the context object is not thread safe.

  • Your repository (or whatever DAO you use) should be responsible for creating and disposing the container it creates. Using DI frameworks, especially in a web application, made this hard for us: web requests are served from the thread pool, and the container was not disposed correctly after the web request finished because the thread itself was not destroyed. The container was reused (when the thread was reused) and created many concurrency problems.
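    The per-request lifetime above can be sketched by letting the repository own and dispose its container. This is a minimal sketch only, assuming a hypothetical EDMX-generated `MyEntities` context with an `Orders` set (the names are illustrative, not from the original project):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical repository: one ObjectContext per repository instance,
    // created and disposed per web request -- never cached on the pooled thread.
    public class OrderRepository : IDisposable
    {
        private readonly MyEntities _context = new MyEntities(); // assumed generated context

        public List<Order> GetOrdersForCustomer(int customerId)
        {
            return _context.Orders
                           .Where(o => o.CustomerID == customerId)
                           .ToList();
        }

        public void Dispose()
        {
            _context.Dispose(); // runs at the end of the request, not of the thread
        }
    }
    ```

    A controller would then wrap the repository in a `using` block (or have the DI framework scope it per request), so the container can never leak across pooled threads.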

  • Do not trust your VS IDE. Get a good XML editor and learn how to edit the edmx by hand (even if you rarely need to bypass the designer). Get your hands dirty.

  • ALWAYS ALWAYS ALWAYS (I just can't stress it enough) run the SQL profiler (and I mean on every step of your code) when you execute your queries. However innocent a query may look, you will be surprised how many times you hit the DB. Example:

        // the query below is not executed yet (deferred execution)
        var myOrders = from t in context.Table
                       where t.CustomerID == 123
                       select t;

        if (myOrders.Count() > 0)              // DB query to get the count
        {
            var firstOrder = myOrders.First(); // another DB query to get the first result
        }

    Best approach:

        // the query is materialized: only 1 DB hit, because we call ToList()
        var myOrders = (from t in context.Table
                        where t.CustomerID == 123
                        select t).ToList();

        if (myOrders.Count > 0)        // no DB hit
        {
            var myOrder = myOrders[0]; // no DB hit
        }
  • Know when to use tracking and when to use no-tracking. No-tracking is for read-only scenarios, and web applications do a lot more reading than writing. Set these options correctly when initializing your container.
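    With the v1 ObjectContext API, no-tracking is set per query via MergeOption. A sketch, again with assumed `MyEntities`/`Orders` names:

    ```csharp
    using System.Data.Objects; // MergeOption lives here in EF v1
    using System.Linq;

    using (var context = new MyEntities())
    {
        // NoTracking bypasses the state manager: cheaper reads, but the
        // returned entities cannot be updated through this context.
        context.Orders.MergeOption = MergeOption.NoTracking;

        var orders = (from o in context.Orders
                      where o.CustomerID == 123
                      select o).ToList();
    }
    ```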

  • Did I forget the compiled queries? Look here for more goodies.
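    For reference, a compiled query looks roughly like the sketch below (System.Data.Objects.CompiledQuery; `MyEntities`/`Order` are assumed model names). The LINQ-to-Entities translation happens once and is cached for subsequent calls:

    ```csharp
    using System;
    using System.Data.Objects; // CompiledQuery
    using System.Linq;

    static class OrderQueries
    {
        // Compiled once; every invocation reuses the translated store command.
        public static readonly Func<MyEntities, int, IQueryable<Order>> ByCustomer =
            CompiledQuery.Compile((MyEntities ctx, int customerId) =>
                ctx.Orders.Where(o => o.CustomerID == customerId));
    }

    // Usage:
    // using (var context = new MyEntities())
    // {
    //     var orders = OrderQueries.ByCustomer(context, 123).ToList();
    // }
    ```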

  • When you retrieve 1000s of rows from your database, make sure you use IQueryable and detach the objects from the ObjectContext so that you do not run out of memory.
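    One way to do that with the ObjectContext API is to stream the rows and detach each entity as soon as it has been processed, so the state manager never accumulates the whole result set. A sketch under the same assumed names:

    ```csharp
    using System.Linq;

    using (var context = new MyEntities())
    {
        // Enumerating the IQueryable streams rows from the data reader.
        foreach (var order in context.Orders.Where(o => o.CustomerID == 123))
        {
            ProcessOrder(order);    // hypothetical per-row work
            context.Detach(order);  // remove it from the state manager right away
        }
    }
    ```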

Update:

Julie Lerman addresses the same issue with a similar solution. Her post also points to Ward's approach to working with a huge number of tables.

+7

I am not very familiar with the Entity Framework, but I believe it simply generates an EDM file that can be edited manually. I know I did this quite often with the DBML files that the Linq-to-SQL designer generates (it is often faster to hand-edit them than to use the designer for small tweaks).

+1

You know, I would be interested to hear some insight from developers on this. The Entity Framework examples out there all seem to involve only ten to twenty tables, which is really small-scale.

How about using EF in a database with hundreds or even thousands of tables?

Personally, I know several developers and organizations that got burned by LINQ-to-SQL and are holding out for a year or so to see which direction EF is heading.

+1

Starting with Entity Framework 4 (as of Visual Studio 2010), the generated code comes from T4 (Text Template Transformation Toolkit) files that you can edit to take full control over what is generated. See Oleg Sych's blog, which is my reference on T4. Code generation is no longer a problem, and T4 opens up so many possibilities that I can no longer live without it.

I am currently working on a project in which we use Entity Framework 4 for the data access layer and Scrum as the agile project-management method. From one sprint to the next, several tables are added, existing ones are modified, and new requirements come in. Once you have run into each potential EF gotcha (for example, knowing that default values from the database are not stored in the .edmx file by default, or that changing a nullable column to non-nullable and refreshing the designer does not update the nullability of the mapped property), you're good to go.

Edit: to answer your question, it is EF 4 whose code generation is based on T4, not T4 that adds support for EF. In EF 3.5 (or EF 1.0, if you prefer) it is theoretically possible to use T4 by writing the templates from scratch, parsing the EDMX file from the T4 code and generating your entities yourself. That would be quite a bit of work, considering all of this is already done by EF 4. Besides, Entity Framework 3.5 supports only one kind of entity, while EF 4 has built-in or downloadable templates for POCO entities (which know nothing about persistence), Self-Tracking Entities, ...

As for the Entity Framework itself, I think its first release lacked many features, and while it was usable, it was quite painful to work with. EF4 is greatly improved. It still lacks some basic things (for example, enum support), but it has now become a viable data access layer for me.

+1
