Is Open / Closed a good idea?

This question is not about what OCP is, and I'm not looking for simplified answers.

Here is why I ask. OCP was first described in the late 1980s, and it reflects the thinking and context of that time: modifying source code to add or change functionality after the code had already been tested and put into production was considered too risky and expensive. The idea, therefore, was to avoid modifying existing source files as much as possible and to grow the code base only by addition, in the form of subclasses (extensions).

Maybe I'm wrong, but my impression is that networked version control systems (VCS) were not widespread at that time, and a VCS is essential for managing source code changes safely.

The idea of refactoring came much later. Sophisticated IDEs that automate refactoring operations clearly did not exist back then, and even today many developers do not use the best refactoring tools. The point is that such modern tools allow a developer to change literally thousands of lines of code safely in a few seconds.

Finally, the idea of automated developer testing (unit / integration tests) is widespread today, with many free and sophisticated tools to support it. But what good is creating and maintaining a large automated test suite if we never, or rarely, modify existing code? New code, which is all OCP allows, only requires new tests.

So, does OCP really make sense today? I do not think so. I would rather change existing code when adding new functionality, provided the new functionality does not require new classes. This keeps the code base simpler, smaller, and much easier to read and understand. The risk of breaking previous functionality can be managed with VCS tools, refactoring, and automated test suites.

+11
ocp
Sep 12 '09 at 23:44
6 answers

OCP makes a lot of sense when you are not the only consumer of your code. If I write a class, and I or my team write all the classes that consume it, I agree with you: refactoring as things change is not a huge deal.

If, on the other hand, I am writing an API for external clients, or I have several consumers with different interests across a large organization, OCP is crucial, because I cannot refactor as easily.

Besides, if you simply keep modifying your class to meet every new need, you end up with a bloated class. If you design the class so that consumers can extend it rather than modify it, you avoid that problem entirely.

+7
Sep 12 '09 at 23:59

I have never heard of this interpretation. You may be referring to something else, but the OCP I know says: "A module/class should be open for extension, but closed for modification." That is, you should not change the module's source code to enhance it; instead, the module should be easy to extend.

Think of Eclipse (or any other plugin-based software). You do not have its source code, but anyone can write a plugin to extend its behavior or add a feature. You did not change Eclipse; you extended it.

So yes, the Open/Closed principle really is effective and a good idea.

UPDATE:

I see that the main conflict here is between code that is still under development and code that has already shipped and is being used by someone. So I went and checked Bertrand Meyer, the author of this principle. He says:

A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (its interface in the sense of information hiding). At the implementation level, closure for a module also implies that you may compile it, perhaps store it in a library, and make it available for others (its clients) to use.

So the Open/Closed principle applies only to stable modules that are ready to be compiled and used.

+5
Sep 13 '09 at 1:38

Ok, so here is my answer.

I cannot attest to the historical origin of the principle, but it is still commonly applied today. I do not think it is only about the danger of changing existing code (although it certainly can be dangerous); let me illustrate with an example.

Suppose we have a component:

public class KnownFriendsFilter
{
    // Requires System.Linq for Where/ToList.
    private readonly IList&lt;Person&gt; _friends;

    public KnownFriendsFilter(IList&lt;Person&gt; friends)
    {
        _friends = friends;
    }

    public IList&lt;Person&gt; GetFriends(IList&lt;Person&gt; people)
    {
        return people.Where(p => _friends.Contains(p)).ToList();
    }
}

Now suppose this particular component needs a small modification - for example, you want to ensure that the list passed in contains only distinct people. That falls squarely within KnownFriendsFilter's problem domain, so feel free to change the class.

However, there is a difference between this class and the feature it supports.

  • This class is designed to filter a list of people down to known friends.
  • The feature it supports is finding all the friends in a collection of people.

The difference is that the feature is tied to what users need, while the class is tied to one implementation. Most of the requests we receive to change this feature will fall outside the class's specific responsibility.

For example, say we want to add a blacklist of any names that start with the letter "X" (because such people are obviously aliens, not our friends). That supports the feature, but it is not really part of what this class is about, and sticking it in the class would be awkward. And what about when the next request arrives - the application is now used exclusively by misogynists, and any female names must also be excluded? Now the class needs logic to decide whether a name is male or female, or at least has to know about some other class that can; its responsibilities grow and it becomes very bloated. What about cross-cutting concerns? If we want to log whenever we filter a list of people, does that go in here as well?

It would be better to extract an IFriendsFilter interface and wrap this class in a decorator, or rework it as a chain of responsibility over an IList of filters. That way, each of these responsibilities lives in a class dedicated to that particular concern. And if you inject the dependencies, the code that uses this class (which is used throughout our application) does not have to change at all!
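A minimal sketch of the decorator approach described above (the IFriendsFilter interface and the XNameBlacklistFilter name are hypothetical, chosen here for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public string Name { get; set; }
}

// Hypothetical abstraction extracted from KnownFriendsFilter.
public interface IFriendsFilter
{
    IList<Person> GetFriends(IList<Person> people);
}

public class KnownFriendsFilter : IFriendsFilter
{
    private readonly IList<Person> _friends;
    public KnownFriendsFilter(IList<Person> friends) { _friends = friends; }

    public IList<Person> GetFriends(IList<Person> people)
    {
        // Contains uses reference equality here unless Person overrides Equals.
        return people.Where(p => _friends.Contains(p)).ToList();
    }
}

// Decorator: adds the "no names starting with X" rule without
// touching KnownFriendsFilter itself.
public class XNameBlacklistFilter : IFriendsFilter
{
    private readonly IFriendsFilter _inner;
    public XNameBlacklistFilter(IFriendsFilter inner) { _inner = inner; }

    public IList<Person> GetFriends(IList<Person> people)
    {
        return _inner.GetFriends(people)
                     .Where(p => !p.Name.StartsWith("X"))
                     .ToList();
    }
}
```

Consumers depend only on IFriendsFilter, so swapping in new XNameBlacklistFilter(new KnownFriendsFilter(friends)) is a one-line change at the composition root rather than an edit to every call site.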

So the principle is not "never change existing code"; it is "do not end up in a situation where you must choose between bloating the responsibilities of a commonly used class and editing every place that uses it."

+2
Sep 13 '09 at 0:14

So, does OCP really make sense today? I do not think so.

Sometimes it does:

  • When you have shipped the base class to clients and cannot easily update it on all of their machines (see, for example, "DLL hell")

  • When you are the client, and you neither wrote the base class nor maintain it

  • More generally, any situation where a base class is used by more than one team and/or more than one project

See also Conway's Law.

+1
Sep 13 '09 at 0:03

An interesting question. Given a strict reading of the Open/Closed principle, I can see where you are coming from.

I have come to define the Open/Closed principle in a slightly different way, one that I think should be applied, and applied much more widely.

I like to say that, in general, all the classes involved in an application should be closed for modification and open for extension. The principle, then, is that if I need to change the behavior of the application, I do not actually modify an existing class; I add a new one and then change the wiring to point at it (the scope depending on the size of the change). If I follow single responsibility and use inversion of control, this falls out naturally. Every change becomes an extension: the application can behave either the old way or the new way, and switching between them is just a change of wiring.
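A minimal sketch of this style, where a behavior change is a new class plus a wiring change rather than an edit to an existing class (all names here are hypothetical, for illustration only):

```csharp
using System;

// Hypothetical abstraction; consumers depend only on this interface.
public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

// Original behavior: closed for modification.
public class NoDiscount : IDiscountPolicy
{
    public decimal Apply(decimal price) => price;
}

// New behavior arrives as a new class, not as an edit to NoDiscount.
public class SeasonalDiscount : IDiscountPolicy
{
    public decimal Apply(decimal price) => price * 0.9m;
}

public class Checkout
{
    private readonly IDiscountPolicy _policy;

    // Inversion of control: the policy is injected, not constructed here.
    public Checkout(IDiscountPolicy policy) { _policy = policy; }

    public decimal Total(decimal price) => _policy.Apply(price);
}
```

Switching the application's behavior is then a one-line wiring change at the composition root: new Checkout(new SeasonalDiscount()) instead of new Checkout(new NoDiscount()). Checkout itself never changes.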

+1
Sep 13 '09 at 0:04

The fact is that such modern tools allow the developer to change literally thousands of lines of code, safely, in a few seconds.

This is fine if you have "a developer", singular. If you work in a team, with version control and probably branching and merging, then ensuring that changes from different people tend to land in different files is very important for keeping merges under control.

One could imagine language-aware merge and branch tools that could reconcile parallel refactorings as easily as edits to separate files. But such tools do not exist, and if they did, I would not want to rely on them.

0
Sep 13 '09 at 0:09
