When implementing most algorithms (sorting, searching, graph traversal, etc.), a trade-off often arises: memory accesses can be reduced at the cost of additional ordinary operations.
Knuth has a useful method for comparing the complexity of different implementations of an algorithm, abstracting away from specific processors and counting only ordinary operations (oops) and memory operations (mems).
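For concreteness, here is a minimal Forth sketch of what I mean by counting mems and oops; the word names (counter, bump-mem, bump-stack) are mine, purely for illustration:

    \ Version 1: the counter lives in memory.
    \ Per call: one fetch (@) plus one store (!) = 2 mems, plus 1+ as an oop.
    variable counter
    : bump-mem  ( -- )  counter @ 1+ counter ! ;

    \ Version 2: the counter lives on the data stack, passed in and out.
    \ Per call: 0 mems and 1 oop (the 1+); the caller pays instead with
    \ stack juggling (dup, swap, etc.) to keep the value within reach.
    : bump-stack  ( n -- n+1 )  1+ ;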
In compiled programs, the compiler usually arranges the low-level operations, and one hopes that the operating system decides sensibly whether data sits in cache (faster) or in virtual memory (slower). In addition, the exact number and cost of the instructions is hidden by the compiler.
With Forth there is no such encapsulation, and one is much closer to the machine, although perhaps to a stack machine running on top of a register machine.
Ignoring the influence of the operating system (so there are no memory stalls, paging, etc.) and assuming for the moment a simple processor:
(1) Can anyone advise how the cost of ordinary stack operations in Forth (e.g. dup, rot, over, swap, etc.) compares to the cost of a memory fetch (@) or store (!)?
(2) Is there a rule of thumb I can use to decide how many ordinary operations to trade for one memory access?
What I'm looking for is something like "a memory access is equivalent to 50 ordinary operations, or 500 ordinary operations, or 5 ordinary operations"; a ballpark figure is absolutely fine.
I am trying to understand the relative cost of fetch and store vs. rot, swap, dup, drop, over, correct to within an order of magnitude. The sketch below shows the kind of trade-off I have in mind.
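A rough sketch (again, the word names are mine) of the same computation done two ways, once with stack juggling only and once by stashing an intermediate value in a variable:

    \ Stack-only version: the intermediate result stays on the stack,
    \ at the cost of extra ordinary operations (dup, swap) to reach it.
    \ Roughly 6 oops, 0 mems.
    : sumsq-stack  ( a b -- a*a+b*b )
      dup *            \ b*b
      swap dup *       \ b*b a*a
      + ;

    \ Memory version: the intermediate result is stashed in a variable,
    \ trading some stack juggling for a fetch (@) and a store (!).
    \ Roughly 5 oops, 2 mems.
    variable acc
    : sumsq-mem  ( a b -- a*a+b*b )
      dup * acc !      \ store b*b
      dup * acc @ + ;  \ a*a plus the fetched b*b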