Distributed .NET architecture questions for a CRUD-style application

Context: building a smart client application on the .NET platform, against a complex database model with a large number of columns involved. The natural style of the application is typical data-driven CRUD. In some cases there is also a fair amount of server-side business logic and somewhat complex validation. You have full control over both the client and the server, so the need for interoperability is minimal.


This question contains a lot of detail, my apologies for that, but it is because I want to establish the right context for the answers.


A few more assumptions
- As is not uncommon in the Microsoft world, most of the previous applications were written using DataSets, so that is the technology the developers know best. But let's say the developers are also well versed in OO thinking.
- Validation needs to run on both the client and the server.
- You do not show most of the data in tabular form.
- This is not an intranet application, so bandwidth cannot be taken for granted


The big question: DataSets or objects?


If you go with DataSets, there are pros and cons:
- Pros: you get a lot of support from Microsoft for retrieving data from the database, moving it over the network, and sending changed data back over the network in smaller pieces, because you can choose to send only the changes. Sending less data is good, because there is potentially quite a lot of it.
- Cons: for validation, business logic and so on you get a procedural style of code, and you lose the benefits of object-oriented code: behavior and data together, a more natural way of working and reasoning about what you do, and, possibly, validation logic living closer to the data. You can also forget about the benefit of dropping a DataSet into a grid, since tabular display is not a common use case here.
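To make the "send only the changes" point concrete, here is a minimal, self-contained sketch of DataSet delta shipping; the in-memory table setup simply stands in for data fetched from the server:

```csharp
using System;
using System.Data;

public static class DataSetDeltaDemo
{
    public static void Main()
    {
        // Stand-in for data fetched from the server.
        var orders = new DataTable("Orders");
        orders.Columns.Add("Id", typeof(int));
        orders.Columns.Add("Status", typeof(string));
        orders.Rows.Add(1, "New");

        var ds = new DataSet();
        ds.Tables.Add(orders);
        ds.AcceptChanges();                      // baseline: no pending changes

        ds.Tables["Orders"].Rows[0]["Status"] = "Shipped";

        // GetChanges() returns only the modified rows (or null if none),
        // which is what you would send back over the wire instead of ds.
        DataSet delta = ds.GetChanges();
        Console.WriteLine(delta.Tables["Orders"].Rows.Count);  // prints 1
    }
}
```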

If you go with objects, it is the same exercise, but with more choices to make:
- Pros: behavior and data together, validation logic closer to the data, relationships between objects that are easier to see and understand, more readable code, and easier unit testing. But there are quite a few options, and work you have to do yourself:


O/R mapping
- Getting data from the relational model into objects. O/R mappers aren't that complex and will handle it well, but this adds to the development time.


Contract mapping
- It is generally considered good practice to map data from server-side objects onto contract objects, probably DTOs. Since this application lends itself well to a CRUD-style architecture, the DTOs do not add much to the picture; they mostly add mapping work.
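For illustration, a minimal hand-rolled mapping between an invented domain object and its DTO; this is the purely mechanical "mapping work" referred to above:

```csharp
using System;

// Invented example types: a server-side domain object and its contract DTO.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public class CustomerDto
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class CustomerMapper
{
    // Mechanical property copying, in both directions.
    public static CustomerDto ToDto(Customer c)
    {
        return new CustomerDto { Id = c.Id, Name = c.Name, Email = c.Email };
    }

    public static void ApplyToDomain(CustomerDto dto, Customer c)
    {
        c.Name = dto.Name;
        c.Email = dto.Email;
    }
}
```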


Shared code
- You can go with a shared-code scenario, where the assembly containing the data and domain logic is available on both the client side and the server side. That is tight coupling, but not necessarily bad when you have a naturally tightly coupled client-server application.


Regardless of whether you choose to add a contract layer or not, you have large objects that must be sent over the wire. Since you control both the client and the server, the transport and encoding should be binary over TCP, which helps. With DataSets you had the option of sending only the changes back; with objects there is a potential problem in shipping the whole object structure back and forth. An alternative to sending the entire structure is to somehow identify the changes (Create, Update, Delete) and send only information about those. In theory it is not that hard to send the aggregate root's identifier to the server along with the changes, have the server lazy load the root, apply the changes, and save it again. The big difficulty lies in identifying the changes that were made. Have you ever gone for this approach? Why? How exactly did you do it?
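One possible shape for such a change message, as a sketch with invented names; this is not a prescribed pattern, just one way to express "root id plus detected changes":

```csharp
using System;
using System.Collections.Generic;

public enum ChangeKind { Create, Update, Delete }

// One detected change to a single entity within the aggregate.
public class EntityChange
{
    public ChangeKind Kind { get; set; }
    public string EntityType { get; set; }                     // e.g. "Orderline"
    public Guid EntityId { get; set; }
    public Dictionary<string, object> NewValues { get; set; }  // empty for Delete
}

// What actually crosses the wire: the aggregate root id and the deltas,
// instead of the full object graph.
public class SaveChangesRequest
{
    public Guid AggregateRootId { get; set; }
    public List<EntityChange> Changes { get; set; }
}
```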

Presentation
The exact user interface technology is not that important to the question; it could be WinForms, Silverlight, or WPF. Suppose we use WPF for our new smart client. That means we have two-way binding and can make proper use of MVVM.

Objects bound in the user interface will need to implement INotifyPropertyChanged and raise an event every time a property is updated. How do you solve this? If you go with the shared-code scenario, you could add it to the domain objects, but that means adding code and logic to the server side that should never be used there. The separation comes more naturally if you go with contract objects, but that is not a lot of added value just to gain a mapping layer. (A minimal implementation is sketched below.)
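For reference, a conventional, era-appropriate INotifyPropertyChanged implementation; this is the boilerplate in question:

```csharp
using System.ComponentModel;

public class CustomerViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;   // avoid redundant notifications
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;    // copy for thread safety
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```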

Technology
There are several technologies that can help solve some of these problems, but they often complicate others. Do you use them, or do you build things yourself?
- CSLA is a possibility, but it complicates unit testing and seems to add tighter coupling to data access. It helps solve a number of the problems, but I personally have no competence with this technology, so it is hard to say how good it is.
- WCF RIA Services can be used for Silverlight, but it has certain limitations; data size is one of them.
- WCF Data Services is another way to get something up quickly, but REST does not help much here, and you also lack the validation support found in RIA Services.

Summary
If you got this far, I hope you have an idea of where I am going with this. I have tried to split it up so as not to ask about everything at once, but distributed development is complex, so there are many parts to consider.


Update

Thanks for the responses, guys! I tried to ask the question openly enough to allow different answers, but specifically enough to deal with a few non-standard requirements.

There are different considerations, with different pros and cons, that vary from system to system, and each of them tends to complicate the search for a solution. One of the points of this question was to get answers that take a few additional requirements into account, requirements that do not necessarily lead straight to the answer that is often the right one today: a task-based UI. I am not a "CRUD guy", if you like, but several systems are, for various reasons (most often legacy), well suited to CRUD.

Many business applications have similar requirements that pull in different directions:

Business related
- Views: show data to the user and update that same data (Read and CUD: Create, Update, Delete)
- Validation: business rules

User interface related
- Validation: user interface rules
- UI updates: code specific to the user interface that runs when an object changes (INotifyPropertyChanged)

Network related
- Data size: the amount of data sent over the wire.

Database related
- Lazy loading

SRP / reuse
- Mapping: caused by multiple object layers / separation of concerns

Maintenance / change related
- Changes: adding new information (columns/fields)
- Amount of code
- Reuse and "reasons for change"

Technical limitations
- Change tracking

But these are just some of the very specific ones. You always need to know which qualities matter most to you, and therefore what degree of scalability, availability, extensibility, interoperability, usability, maintainability, and testability you will need.

If I were to generalize something that applies to most situations, I would say something like:

Client
- Use MVVM for separation and for validation
- Build the ViewModel on top of DTOs
- Implement INotifyPropertyChanged in the ViewModel
- Using XamlPowerToys, PostSharp, or some other aid for this can be useful
- Separate reads from CUD in the user interface
- Make the CUD side task-based and use commands or similar to send those operations to the server side (a sketch follows below)
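As a sketch of what "task-based CUD" could look like, with names invented for illustration: instead of sending a whole edited DTO back, the client sends small operations that carry intent.

```csharp
using System;

// Marker interface for operations sent to the server.
public interface IServerOperation { }

// Each user task becomes its own small message.
public class AddConsigneeToOrderOperation : IServerOperation
{
    public Guid OrderId { get; set; }
    public Guid ConsigneeId { get; set; }
}

public class DeleteOrderlineOperation : IServerOperation
{
    public Guid OrderId { get; set; }
    public Guid OrderlineId { get; set; }
}

// The ViewModel's commands create and queue these operations,
// rather than mutating and re-sending a full object graph.
```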


Server
- Tailor-make DTOs per screen
- OR use the multi-query approach described by Ayende at http://msdn.microsoft.com/en-us/magazine/ff796225.aspx
- Use automapping to avoid the tedious, manual mapping step that is completely unrelated to the problem you are trying to solve
- Let the domain model concern itself primarily with business operations, including the CUD operations, and not with reads
- Avoid reuse that adds "reasons for change"
- Avoid breaking encapsulation
- (And thereby enable a CQRS-style architecture, and possibly separate read and CUD scaling over time)
- Try to find a validation approach that works well for what needs to be done (a good read: http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/02/15/validation-in-a-ddd-world.aspx )

Is this a reasonable approach for this particular situation?

Well, this is the discussion I wanted to start :) but it turned out to be harder than I had hoped (the two of you excepted).

2 answers

I can only answer from my own experience. We tried different frameworks (WCF RIA, IdeaBlade) and came to the conclusion that a framework would only make things worse. I will explain further.

First of all, you should forget about CRUD. Only demo applications are pure CRUD; real-world applications have behavior.

I do not recommend mapping the entire object graph onto the client side. The client and the server are two separate concerns.

You should create an individual DTO for each context. For instance, say you have an OrderSearchView: you then create an OrderSearchDto containing only the fields you need there. In an EditOrderView you would use an EditOrderDto instead, again containing only the fields you need (sketched below).
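A sketch of the idea, reusing the view names from this answer; the field choices are invented:

```csharp
using System;
using System.Collections.Generic;

// One narrow DTO per context, not one fat DTO per entity.
public class OrderSearchDto            // for OrderSearchView: a result row
{
    public Guid OrderId { get; set; }
    public string OrderNumber { get; set; }
    public string CustomerName { get; set; }
}

public class EditOrderDto              // for EditOrderView: editable fields only
{
    public Guid OrderId { get; set; }
    public Guid ConsigneeId { get; set; }
    public DateTime EarliestEtd { get; set; }
    public List<EditOrderlineDto> Orderlines { get; set; }
}

public class EditOrderlineDto
{
    public Guid OrderlineId { get; set; }
    public int NumberOfUnits { get; set; }
}
```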

I would not recommend an auto-mapping tool between entities and DTOs, because there is often no one-to-one relationship between them; a DTO is frequently built from several backend entities. In any case the mapping is so simple that I do not see the point of a mapping framework. And the real work is not the mapping itself, it is writing the unit tests, which you still have to do (with or without an auto-mapper).

DTOs should be agnostic of client-side technology, and implementing INotifyPropertyChanged on a DTO violates the single responsibility principle. There is a reason they are called Data Transfer Objects. Instead, you create client-side presenters. You create an EditOrderPresenter, which is a wrapper around the EditOrderDto; the DTO becomes just a private field inside the EditOrderPresenter. The presenter exists for client-side editing, so it typically implements INotifyPropertyChanged, and it usually has the same property names as the DTO (a sketch follows).
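A sketch of such a presenter, wrapping the EditOrderDto from the earlier sketch; one property is shown, and the rest would follow the same pattern:

```csharp
using System;
using System.ComponentModel;

public class EditOrderPresenter : INotifyPropertyChanged
{
    private readonly EditOrderDto _dto;   // the DTO stays a plain private field

    public EditOrderPresenter(EditOrderDto dto)
    {
        _dto = dto;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Same property name as the DTO; the presenter adds the WPF plumbing.
    public DateTime EarliestEtd
    {
        get { return _dto.EarliestEtd; }
        set
        {
            if (_dto.EarliestEtd == value) return;
            _dto.EarliestEtd = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("EarliestEtd"));
        }
    }
}
```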

You should physically separate client-side validation from server-side entity validation. Beware of sharing! I consider client-side validation to be pure GUI assistance, there to improve the user experience. It does not make much sense to share validation code between the DTO and the entity; it can cause more headaches than it is worth. Just make sure you always validate on the server side, no matter what validation is done on the client. There are two kinds of validation: simple property validation and validation of the consistency of the whole entity (the same applies to DTOs). Whole-entity validation should only be performed on state transitions. See Jimmy Nilsson's Applying Domain-Driven Design and Patterns for the basics. I would not recommend a validation framework; just use the state pattern.

Then what about updates, inserts, and deletes? In our implementations we use WCF, and the WCF API has only one method: IResponse[] Process(params IRequest[] requests). What does this really mean? It means the client sends a batch of requests to the server. On the server you implement a RequestHandler for each request type defined in the system, and then you return a list of responses. Make sure the Process() method is one unit of work (~ one transaction). That means that if one of the requests in the batch fails, they all fail, the transaction rolls back, and no harm is done to the db. (Do not use error codes in the request handlers; throw exceptions instead.)
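A sketch of that contract; the type names are invented, and in real WCF the concrete request and response types would also need to be made known to the serializer (e.g. via known types):

```csharp
using System.ServiceModel;

public interface IRequest { }
public interface IResponse { }

// One handler per request type defined in the system.
public interface IRequestHandler<TRequest> where TRequest : IRequest
{
    IResponse Handle(TRequest request);
}

[ServiceContract]
public interface IMessageService
{
    // The single entry point: the whole batch is one unit of work,
    // so one failing request rolls back the entire transaction.
    [OperationContract]
    IResponse[] Process(params IRequest[] requests);
}
```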

I would recommend you look into the Agatha request/response layer. Davy Brion has great blog posts about this kind of messaging layer. At our company we decided to implement our own, because we did not need everything Agatha offers and we wanted some syntax improvements. In any case, implementing a messaging layer is not very difficult, and it is a good learning experience. Link: http://davybrion.com/blog/

Then what do you do with the DTOs? Well, you never update them directly; you only change them on the client side to give correct feedback in the GUI. So the presenters track everything that happens to the DTO, as requests, in the correct order. This becomes your request batch. You then send the request batch to the Process method of the WCF service, where the requests are "replayed" on the server and processed by the request handlers. This actually means you never send updated DTOs back: the presenters edit the DTO on the client side purely for GUI feedback, and their job is also to track all changes made and send them back to the server as a request batch (with the requests in the same order they were made). Consider the following scenario: you fetch an existing order, edit it, and then commit the changes back to the db. That results in two batches: one for fetching the order and one for committing the changes.
RequestBatch 1:
GetOrderByIdRequest

(... the user then edits the data ...)

RequestBatch 2:
StartEditOrderRequest (state change into editing mode, relaxed validation)
AddConsigneeToOrderRequest
ChangeEarliestETDOnOrderRequest (no need to validate against the latest ETD yet!)
DeleteOrderlineRequest
ChangeNumberOfUnitsOnOrderlineRequest
EndEditOrderRequest (state change back to the initial state; validate the whole entity here!)
GetOrderByIdRequest (to refresh the GUI with the latest changes)

On the server side we use NHibernate. NHibernate uses its first-level cache to avoid heavy db load, so all requests within one batch share the cache.

Each request should contain only a minimal amount of data: the OrderId plus a few other properties, instead of the whole DTO. For optimistic updates you can send some of the old values along with the request; this is called a concurrency set. Note that a concurrency set usually does not contain many fields, because updating an order that was changed at the same time does not necessarily mean you have a conflict. For instance, adding an orderline while the consignee was edited by another user does not mean you have a conflict. (A sketch follows.)
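A sketch of a request carrying such a concurrency set, reusing the IRequest marker and the request naming from above; the field choices are invented:

```csharp
using System;

public class ChangeEarliestETDOnOrderRequest : IRequest
{
    public Guid OrderId { get; set; }
    public DateTime NewEarliestETD { get; set; }

    // Concurrency set: only the old values this change can actually
    // conflict with, not the whole row. The handler rejects the request
    // if the stored value no longer matches.
    public DateTime OldEarliestETD { get; set; }
}
```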

Now, doesn't this lead to an awful lot of work? You will of course get many more classes, but each class will be small and have a single responsibility.

By the way, we tried WCF RIA Services in a medium-sized project, and it is not that good. We had to find ways (hacks) around the framework to do what we wanted. It is also based on code generation, which is pretty bad for the build server. In addition, you should never have visibility straight through the layers: you should be able to change the persisted entities without affecting the client tier. With RIA that is very difficult. I think OData falls in the same category as WCF RIA.

If you need to compose queries on the client side, use the specification pattern; do not use IQueryable. That way you stay independent of third-party components (a sketch follows).
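A sketch of the specification idea with invented names, again reusing the IRequest marker from above: the client describes what it wants as plain data, and only the server translates that into an actual query.

```csharp
using System;

// Plain-data description of a query; no IQueryable, no expression trees.
public class OrderSearchSpecification
{
    public string CustomerName { get; set; }    // null means "do not filter"
    public DateTime? OrderedAfter { get; set; }
    public int MaxResults { get; set; }
}

public class SearchOrdersRequest : IRequest
{
    public OrderSearchSpecification Specification { get; set; }
}

// A server-side handler translates the specification into an NHibernate
// query; the client never depends on the query technology.
```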

Good luck.
twitter: @lroal


Interesting problem :)

If you start with a few principles:

  • Try to reduce the amount of data sent over the wire.
  • Try to minimize the amount of time spent writing plumbing code.
  • Try to improve testability.

Based on this, I would:

  • Use POCO objects to transfer data; DataSets carry a lot of information you may not need.
  • Use Entity Framework POCOs for database access; that saves you the mapping from contract objects to data objects.
  • Place validation in helper classes that are easy to test and can be part of a shared code model.

In our projects we have saved time using Entity Framework compared to the Enterprise Library and DataSets.

For the server-side and client-side objects, you could try the following:

  • The client-side object inherits the server-side object and implements INotifyPropertyChanged.
  • Put the client-side and server-side objects in separate dlls so there is no unused code on the server.
  • Use AutoMapper to map between the two types (there may be a better way using interfaces). A sketch of this combination follows.
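A sketch of those three bullets together; the types are invented, and the mapping calls use the classic static AutoMapper API of that era:

```csharp
using System.ComponentModel;
using AutoMapper;

public class ServerCustomer                    // server-side object, no UI code
{
    public virtual string Name { get; set; }
}

public class ClientCustomer : ServerCustomer, INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public override string Name
    {
        get { return base.Name; }
        set
        {
            if (base.Name == value) return;
            base.Name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

public static class CustomerMapping
{
    public static void Configure()
    {
        // Classic static AutoMapper configuration (v1/v2 era).
        Mapper.CreateMap<ServerCustomer, ClientCustomer>();
        Mapper.CreateMap<ClientCustomer, ServerCustomer>();
    }
}
```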
