The question poses shared memory and distributed computing as if they were opposites, which is a bit like asking whether RAM and LAN are opposites. It would be clearer to distinguish between shared-memory concurrency within a CPU/memory node and concurrency between CPU/memory nodes.
This is part of a broader picture of parallel processing research. There have been many research projects, including:
the development of non-Von-Neumann computers in which several processors share a single memory, attached via some form of switching fabric (often a Clos network). OpenMP is a good fit for these.
the development of parallel computers consisting of a collection of processors, each with its own separate memory, plus some communication fabric between the nodes. This is typically the home of MPI.
The first case is specialised into the high-performance computing fraternity; the latter case is the one familiar to most of us. These days the communication is usually just Ethernet, but faster, lower-latency alternatives have been (successfully) developed for certain niches (for example IEEE 1355 SpaceWire, which emerged from the Transputer serial links).
For many years, the dominant view was that effective parallelism was only possible if memory was shared, because the cost of passing messages was (naively) assumed to be prohibitive. With shared-memory concurrency, the difficulty lies in the software: because everything is interdependent, designing the concurrency gets harder and harder as systems grow larger, and serious expertise is needed.
For the rest of us, Go follows Erlang, Limbo and of course Occam in promoting message passing as the means of choreographing the work to be done. This arises from the algebra of Communicating Sequential Processes (CSP), which provides the basis for building parallel systems of any size. CSP designs are composable: each subsystem can itself be a component of a larger system, without any theoretical limit.
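As a minimal sketch of what that composability looks like in Go (the stage names `generate`, `double` and `sum` are illustrative, not from the original answer): each stage is a self-contained process that communicates only over channels, so a pipeline of stages can itself be wired in as a single component of a larger system.

```go
package main

import "fmt"

// generate emits the integers 1..n on its output channel, then closes it.
func generate(n int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 1; i <= n; i++ {
			out <- i
		}
	}()
	return out
}

// double is one "process": it reads values from in and writes doubled values to out.
func double(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- 2 * v
		}
	}()
	return out
}

// sum drains a channel and returns the total; it is the terminal process.
func sum(in <-chan int) int {
	total := 0
	for v := range in {
		total += v
	}
	return total
}

func main() {
	// The composed system: generate -> double -> sum.
	// Because each stage only sees channels, double(generate(10)) could
	// itself be handed to a larger subsystem as a single component.
	fmt.Println(sum(double(generate(10)))) // prints 110
}
```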
Your question mentions OpenMP (shared memory) and MPI (distributed-memory message passing), which can be used together. Go could be considered roughly equivalent to MPI in that it promotes message passing; however, it also allows locks and shared memory. Go is different from both MPI and OpenMP because it is not explicitly concerned with multi-processor systems. To progress into the world of parallel processing using Go, you would need a network message-passing framework such as OpenCL, for which someone is working on a Go API.
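To illustrate the point that Go supports both styles within one process, here is a small sketch (the function names `lockedCounter` and `channelCounter` are illustrative only): the same counter is updated first via shared memory guarded by a mutex, then via messages sent to a single owning goroutine.

```go
package main

import (
	"fmt"
	"sync"
)

// lockedCounter: shared-memory style, the counter is protected by a mutex.
func lockedCounter(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	count := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return count
}

// channelCounter: message-passing style, only one goroutine owns the
// counter and the others just send it increments over a channel.
func channelCounter(n int) int {
	inc := make(chan struct{})
	done := make(chan int)
	go func() {
		count := 0
		for range inc {
			count++
		}
		done <- count
	}()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			inc <- struct{}{}
		}()
	}
	wg.Wait()
	close(inc)
	return <-done
}

func main() {
	fmt.Println(lockedCounter(1000), channelCounter(1000)) // 1000 1000
}
```

Note that both goroutine schemes run within a single OS process; spreading the work across machines is where an external network message-passing layer would have to come in.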