My question is mostly outlined in the title, but let me clarify.
Question: maybe it is worth rephrasing it this way: how complicated/simple does a virtual method have to be for the dispatch mechanism to become significant overhead? Are there any rules of thumb? E.g.: if it takes 10 minutes, does I/O, complex branching, memory operations, etc., it is not a problem. But if you write virtual get_r() { return sqrt(x*x + y*y); }; and call it in a loop, you will have problems.
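To make that second case concrete, here is the sort of pattern I mean (a hypothetical sketch; Point, accumulate_r and the loop are placeholders, not my actual code):

```cpp
#include <cmath>

struct Point {
    double x = 0.0, y = 0.0;
    virtual ~Point() = default;
    // Tiny virtual method: two multiplications, one addition, one sqrt.
    virtual double get_r() const { return std::sqrt(x * x + y * y); }
};

// The worrying pattern: one virtual dispatch per iteration,
// where the callee itself does almost no work.
double accumulate_r(const Point* const* pts, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += pts[i]->get_r();
    return sum;
}
```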
I hope this question is not too general, as I am looking for general but concrete technical answers: either "it's hard/impossible to say", or "a virtual call costs roughly this much time / this many cycles, the math costs this much, and the I/O this much".
Perhaps some technically-minded people know the relevant figures to compare, or have done such an analysis and can share general conclusions. Embarrassingly, I do not know how to do that fancy asm analysis myself =/.
I would also like to give some rationale for the question, along with my use case.
I think I have seen more than a few questions where people avoid virtual functions like an open fire in a forest during a drought for the sake of performance, and plenty of people ask them: "Are you absolutely sure that virtual-call overhead really is a problem in your case?"
In my recent work I ran into a problem which, in my opinion, could fall on either side of that line.
Also keep in mind that I am not asking how to improve the interface implementation; I believe I know how to do that. I am asking whether it is possible to tell when one needs to do so, i.e. how to pick the right trade-off.
Use case:
I am running some simulations. I have a class that basically provides the runtime environment: there is a base class and several derived classes that define different workflows. The base class implements the general logic and assigns the input/output sources and sinks; the derived classes define the specific workflows, more or less by implementing RunEnv::run(). I think this is the right design. Now suppose the objects that a workflow operates on can live in either 2D or 3D space. The workflows are common/interchangeable between the two cases, so the objects can share a common interface, albeit one of very simple methods like Object::get_r(). In addition, a statistics logger can be defined for the environment.
Initially I wanted to provide some code snippets, but in the end that meant 5 classes with 2-4 methods each, i.e. a wall of code. I can post it on request, but it would blow the question up to two or three times its current size.
Key points: RunEnv::run() is the main loop, usually very long-running (5 minutes to 5 hours). It handles the basic timing and calls RunEnv::process_iteration() and RunEnv::log_stats(). All of these are virtual, and there is a justification:

- I can derive from RunEnv and redefine run(), e.g. for different stop conditions.
- I can redefine process_iteration(), e.g. to use multithreading when a pool of objects needs processing, or to process the objects in different ways.
- Different workflows will want to record different statistics; RunEnv::log_stats() is just a call that prints the already-computed statistics of interest to a std::ostream.

I assume that using virtuals here has no real effect on performance; a trimmed sketch follows below.
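For illustration, a minimal sketch of that structure (should_stop() is a hypothetical stand-in for my stop conditions; the real code is much larger):

```cpp
#include <iostream>
#include <ostream>

class RunEnv {
public:
    virtual ~RunEnv() = default;

    // Main loop: typically runs 5 minutes to 5 hours in total.
    virtual void run() {
        while (!should_stop()) {     // hypothetical stop-condition hook
            process_iteration();     // one virtual call per iteration
            log_stats(std::cout);    // one virtual call per iteration
        }
    }

protected:
    virtual bool should_stop() = 0;                // redefined per workflow
    virtual void process_iteration() = 0;          // e.g. multithreaded pool processing in one derivative
    virtual void log_stats(std::ostream& os) = 0;  // prints already-computed statistics
};
```

Since each of these virtual calls wraps an entire iteration's worth of work, the dispatch cost should be noise here.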
Now let's say that an iteration works by computing each object's distance to the origin, so we have the interface double Obj::get_r();, with Obj implementations for the 2D and 3D cases. In both, the getter is simple math with 2-3 multiplications and additions.
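For example, something along these lines (a simplified sketch of the interface just described):

```cpp
#include <cmath>

struct Obj {
    virtual ~Obj() = default;
    virtual double get_r() const = 0;  // distance from the origin
};

struct Obj2D : Obj {
    double x = 0.0, y = 0.0;
    double get_r() const override { return std::sqrt(x * x + y * y); }
};

struct Obj3D : Obj {
    double x = 0.0, y = 0.0, z = 0.0;
    double get_r() const override { return std::sqrt(x * x + y * y + z * z); }
};
```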
I also experimented with different memory layouts. E.g. sometimes the coordinates were stored in private member variables and sometimes in a shared pool, so even get_x() could be made virtual, implemented either as get_x() { return x; } or as get_x() { return pool[my_num*dim + x_offset]; }. Now imagine computing something like get_r() { return sqrt(get_x()*get_x() + get_y()*get_y()); }. I suspect that virtuality here will kill performance.
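In sketch form it looks roughly like this (hypothetical names; x_offset/y_offset are simplified to fixed 0/1 offsets here):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct ObjBase {
    virtual ~ObjBase() = default;
    virtual double get_x() const = 0;
    virtual double get_y() const = 0;
    // Composed from virtual getters: every get_r() now costs two extra dispatches.
    double get_r() const { return std::sqrt(get_x() * get_x() + get_y() * get_y()); }
};

// Variant A: coordinates in private members.
struct LocalObj : ObjBase {
    double x = 0.0, y = 0.0;
    double get_x() const override { return x; }
    double get_y() const override { return y; }
};

// Variant B: coordinates in a shared pool, addressed by index.
struct PooledObj : ObjBase {
    const std::vector<double>* pool = nullptr;  // shared coordinate storage
    std::size_t my_num = 0;                     // this object's index in the pool
    std::size_t dim = 2;                        // 2 for 2D, 3 for 3D
    double get_x() const override { return (*pool)[my_num * dim + 0]; }
    double get_y() const override { return (*pool)[my_num * dim + 1]; }
};
```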