Alternatives to polymorphism in programming outside of OOP?

Suppose we have a drawing program with various elements such as circles, rectangles, triangles, etc.: different types of objects that all need a similar function, for example draw(), to display themselves.

I am curious how a programmer would approach a problem that is nowadays usually solved by polymorphism, i.e. looping through a collection of non-identical elements and calling a common function on each of the different objects.

One way that comes to mind is a structure holding a pointer to a function (or an index into an array of function pointers), together with a void pointer to the actual instance, which then gets passed to the function of the correct type. But that is just how I, someone fairly clueless on the matter, would do it.
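
Roughly, here is a minimal sketch of what I am imagining; all the names (shape_t, draw_fn, circle_t) are just placeholders I made up. It is plain C, and compiles as C++ too:

    #include <stdio.h>

    /* Each element carries a pointer to its own draw function plus a
       void pointer to its concrete data: "virtual dispatch" by hand. */
    typedef void (*draw_fn)(void *self);

    typedef struct {
        draw_fn draw;  /* how to display this element */
        void   *data;  /* the concrete circle/rectangle/... instance */
    } shape_t;

    typedef struct { double r; } circle_t;

    static void draw_circle(void *self) {
        circle_t *c = (circle_t *)self;
        printf("circle, radius %f\n", c->r);
    }

    int main(void) {
        circle_t c = { 2.0 };
        shape_t shapes[] = { { draw_circle, &c } };
        for (int i = 0; i < 1; ++i)
            shapes[i].draw(shapes[i].data);  /* call through the pointer */
        return 0;
    }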

I understand that this may be a noob question, but since I was not around in the "old" days, I really wonder how this problem was solved. Which approach was used in procedural programming, and did it have a performance advantage, since we all know that polymorphism has overhead even in fast languages like C++, due to the virtual method lookup?

2 answers

In procedural languages such as C, this would be solved by defining a separate implementation of the draw() function for each user-defined data type (probably represented as a struct). Any common functionality would be factored out into a separate function that operates on the elements shared by each struct (such as the x and y coordinates of the object's center, which would be present in each of them). In terms of code and functionality, this is not much different from an OOP design that uses polymorphism, where you still have to declare the common draw() method in a base class and override it in each concrete subclass. In the procedural case we simply would not bundle those function definitions into separate "objects".
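
A minimal sketch of that layout (the names are purely illustrative, not from any particular codebase); the shared point_t member plays the role of the common base-class data, and the dispatch happens by hand at the call site. Again this is C that also compiles as C++:

    #include <stdio.h>

    /* Common elements shared by every drawable type. */
    typedef struct { double x, y; } point_t;

    typedef struct { point_t center; double radius; } circle_t;
    typedef struct { point_t center; double w, h;   } rect_t;

    /* Shared functionality, factored out into its own function. */
    static void print_center(point_t p) {
        printf("at (%f, %f) ", p.x, p.y);
    }

    /* One draw implementation per type; the caller picks the right one. */
    static void draw_circle(const circle_t *c) {
        print_center(c->center);
        printf("circle r=%f\n", c->radius);
    }

    static void draw_rect(const rect_t *r) {
        print_center(r->center);
        printf("rect %fx%f\n", r->w, r->h);
    }

    int main(void) {
        circle_t c = { { 0, 0 }, 1.5 };
        rect_t   r = { { 2, 3 }, 4, 5 };
        draw_circle(&c);  /* dispatch is decided at the call site */
        draw_rect(&r);
        return 0;
    }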

There are a few clunkier ways to get object-like behavior out of a procedural language, such as a union type, or a single monolithic type with extra flag fields to indicate which particular members are in use. This lets you write a single draw() function that switches its logic based on which members are set. In practice, the only place I have seen this used heavily is in CORBA-based systems, where a program written in C had to mimic the behavior of an OOP language communicating through IDL (i.e., mapping Java objects onto constructs that could be decoded into C-style structs).
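
A rough sketch of that tagged-union style (the type and function names are again invented for illustration), valid as both C and C++:

    #include <stdio.h>

    /* One monolithic type: a tag says which union member is in use. */
    typedef enum { SHAPE_CIRCLE, SHAPE_RECT } shape_kind;

    typedef struct {
        shape_kind kind;
        union {
            struct { double r; }    circle;
            struct { double w, h; } rect;
        } u;
    } shape_t;

    /* A single draw() that switches on the tag. */
    static void draw(const shape_t *s) {
        switch (s->kind) {
        case SHAPE_CIRCLE:
            printf("circle r=%f\n", s->u.circle.r);
            break;
        case SHAPE_RECT:
            printf("rect %fx%f\n", s->u.rect.w, s->u.rect.h);
            break;
        }
    }

    int main(void) {
        shape_t shapes[2];
        shapes[0].kind = SHAPE_CIRCLE; shapes[0].u.circle.r = 1.0;
        shapes[1].kind = SHAPE_RECT;   shapes[1].u.rect.w = 2;
                                       shapes[1].u.rect.h = 3;
        for (int i = 0; i < 2; ++i)
            draw(&shapes[i]);  /* one loop, one function, a switch inside */
        return 0;
    }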

As for the overhead of virtual method lookup in languages such as C++ and Java, it is something that cannot be avoided entirely in an object-oriented language. It can be mitigated fairly well by proper use of the final keyword (which allows the compiler / JVM to optimize away the method table lookup).
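
As a small illustrative C++ sketch (class names invented here): marking a class final promises the compiler that no subclass can override draw(), so calls made through that concrete type can often be devirtualized into direct calls:

    #include <cstdio>

    struct Shape {
        virtual void draw() const = 0;
        virtual ~Shape() = default;
    };

    // 'final' guarantees there are no subclasses of Circle, so when the
    // static type is known to be Circle, the compiler may skip the
    // vtable lookup entirely.
    struct Circle final : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        void draw() const override { std::printf("circle r=%f\n", r); }
    };

    void render(const Circle &c) {
        c.draw();  // candidate for devirtualization: Circle is final
    }

    int main() {
        Circle c(1.0);
        render(c);
        return 0;
    }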

This is not a direct answer to your example, but it addresses your comment, which IMHO shows the wrong perspective:

I just wondered about this particular problem, mostly interested in whether there is a more efficient way that avoids the performance overhead of virtual methods

There is something to understand here: everything is a trade-off. Design patterns and OO have all the well-known advantages that we love, but they also have disadvantages, for example too many classes, memory overhead, performance overhead due to many method calls, etc.

On the other hand, the old "procedural" way had some advantages too: it was "easy" to code (no need to think about how to design a system, just put everything in main) and it had less overhead in many respects (less memory overhead, since fewer classes and more compact objects are needed and there are no virtual tables, and fewer method calls, hence possibly higher performance, without the performance cost of dynamic binding, whatever that cost amounts to nowadays anyway...).

But the point is not the trade-offs in a particular instance of the problem; it is what experience has shown to be the right way to build software. Code that is reusable, modular (which helps with isolated testing, i.e. quality), readable, maintainable, and flexible to extend: these are well-understood attributes that should be the main drivers of software development.

So yes, there are cases where a really good C/C++ programmer could do things the "old way", as you say, but is the performance advantage it brings to that particular program worth nobody being able to maintain or extend it afterwards?

To give another similar example, you could ask in the same way: why have layered architectures in web development? Just put everything on one server and it will be a LOT FASTER, since there is no latency when the UI layer requests data from the back end, no network delay querying a remote database, etc.
Of course you have a point. But then ask yourself: can this scale as the load increases? The answer is no. Is scalability important to you, or do you want to stick with the idea of "put everything on one server"? If your income comes from e-commerce sites, the fact that you cannot serve more customers will not make your client happy just because you served the first 100 really fast... Anyway, this is my opinion.
