The terms are apparently not widely used, perhaps because a process or system often uses both without distinction. The concepts themselves are very general, covering far more than the scope of MPI or OpenMP.
Vertical parallelism is the ability of a system to use several different devices at the same time. For example, a program may have one thread performing heavy computations while another handles database queries and a third handles I/O. Most operating systems naturally demonstrate this ability.
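A minimal sketch of this idea in Python, assuming two hypothetical tasks (`compute` and `handle_io` are illustrative names, not from any real API): two threads do *different kinds* of work concurrently and report back through a shared queue.

```python
import threading
import queue

results = queue.Queue()

def compute():
    # One kind of work: a heavy (here, toy) computation.
    results.put(("compute", sum(i * i for i in range(100_000))))

def handle_io():
    # A different kind of work, standing in for database or file I/O.
    results.put(("io", "data loaded"))

threads = [threading.Thread(target=compute), threading.Thread(target=handle_io)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Collect both results; the two threads did unrelated jobs in parallel.
collected = dict(results.get() for _ in range(2))
```

The point is that the threads run dissimilar tasks side by side, which is exactly the "several different devices/activities at once" notion described above.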
Horizontal parallelism occurs when the same operation is performed on several similar data elements at once. This is the kind of parallelism that occurs when, for example, multiple threads run the same code fragment, but on different data.
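A sketch of the "same code, different data" pattern, using Python's thread pool (the function name `square_all` and the chunking are illustrative assumptions): every worker executes the identical function, and only its input slice differs.

```python
from concurrent.futures import ThreadPoolExecutor

def square_all(chunk):
    # The same code runs in every worker; only the data differs.
    return [x * x for x in chunk]

chunks = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor(max_workers=3) as pool:
    partial = list(pool.map(square_all, chunks))
# partial == [[1, 4], [9, 16], [25, 36]]
```

`pool.map` preserves input order, so the partial results line up with the original chunks.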
In the software world, an interesting example is the MapReduce algorithm, which uses both:
horizontal parallelism occurs at the map stage, when the data is split and scattered across several processors for processing,
vertical parallelism occurs between the map and reduce stages, where the data is first divided into pieces, then processed by the map threads and accumulated by the reduce thread.
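The two stages above can be sketched together in a toy map/reduce over a list of numbers (the scatter size and the summing mapper are arbitrary choices for illustration): the map workers exhibit horizontal parallelism, while the reduce step is a different stage consuming their output.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_stage(chunk):
    # Horizontal: every worker runs the same mapper on its own slice.
    return sum(chunk)

data = list(range(10))
chunks = [data[i:i + 3] for i in range(0, len(data), 3)]  # scatter

with ThreadPoolExecutor() as pool:
    partial_sums = list(pool.map(map_stage, chunks))

# Vertical: the reduce step is a different kind of work that consumes
# the mappers' output.
total = reduce(lambda a, b: a + b, partial_sums)
# total == 45
```

In a real MapReduce framework the reduce stage would run concurrently with the mappers, accumulating partial results as they arrive; here it is sequential only to keep the sketch short.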
Similarly, in the hardware world, superscalar pipelined processors use both: issuing multiple instructions at once is horizontal, while pipelining is a concrete example of vertical parallelism (just like the map/reduce process, but with several stages).
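The pipelining idea can be imitated in software with two stages connected by a queue (the stage functions and the `None` sentinel are conventions chosen for this sketch): the second stage runs concurrently with the first, consuming its output as it appears, which is vertical parallelism between unlike stages.

```python
import threading
import queue

stage1_out = queue.Queue()
results = []

def stage1(items):
    # First pipeline stage: transform each item and hand it downstream.
    for x in items:
        stage1_out.put(x * 2)
    stage1_out.put(None)  # sentinel: no more work

def stage2():
    # Second stage runs concurrently, consuming stage1's output in order.
    while (x := stage1_out.get()) is not None:
        results.append(x + 1)

t1 = threading.Thread(target=stage1, args=([1, 2, 3],))
t2 = threading.Thread(target=stage2)
t1.start(); t2.start()
t1.join(); t2.join()
# results == [3, 5, 7]
```

As in a hardware pipeline, throughput comes from the stages overlapping in time, not from any single stage being faster.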
This terminology probably arises for the same reason it does with supply chains: value is produced by combining different stages or levels of processing. The final product can be seen as the root of an abstract tree of construction (bottom-up) or of dependency (top-down), where each node is the result of an intermediate stage or step. The analogy between supply chains and computations is easy to see here.