Although I think it depends on the context, I prefer the first alternative. Here are my random thoughts:
I like my tests to map closely to real use cases (BDD style), with the caveat that I may be using the term "use case" loosely. These use cases may span multiple applications and subsystems.
Example: a back office administrator can view a transaction that a user made through an open interface.
Here, the back office admin interface and the open interface are different applications, but they participate in the same use case.
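To make that concrete, here is a minimal sketch of what such a cross-application, BDD-style test could look like. All the names (`OpenInterface`, `BackOffice`, `Transaction`, the dict-backed store) are hypothetical stand-ins I made up, not parts of your system:

```python
import itertools
from dataclasses import dataclass

@dataclass
class Transaction:
    user: str
    amount: int

class OpenInterface:
    """Stand-in for the public-facing application users transact through."""
    def __init__(self, store):
        self._store = store
        self._ids = itertools.count(1)

    def submit_transaction(self, user, amount):
        tx_id = next(self._ids)
        self._store[tx_id] = Transaction(user, amount)
        return tx_id

class BackOffice:
    """Stand-in for the administrator-facing application."""
    def __init__(self, store):
        self._store = store

    def find_transaction(self, tx_id):
        return self._store[tx_id]

def test_admin_sees_transaction_made_through_open_interface():
    store = {}  # shared persistence, faked as a dict for this sketch
    # Given a user submits a transaction through the open interface...
    tx_id = OpenInterface(store).submit_transaction(user="alice", amount=100)
    # ...when a back office administrator looks it up...
    tx = BackOffice(store).find_transaction(tx_id)
    # ...then the administrator sees what the user submitted.
    assert tx == Transaction(user="alice", amount=100)
```

The point is that the test exercises one use case end to end, crossing both applications, instead of testing either application in isolation.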
Comparing these thoughts with your problem: when you have subsystems deployed on different hosts, I would say it depends on how they are used from the point of view of the user/actor. Do the use cases span multiple subsystems?
Also, the fact that the system is deployed on multiple hosts may not even matter for the tests. In your tests you can replace inter-process communication with plain method calls and run the entire system in a single process, reducing complexity. Complement this with a few tests that verify only the inter-process communication.
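One way to make that substitution possible, sketched under assumed names (`TransactionClient`, `HttpTransactionClient`, and `InProcessTransactionClient` are mine, not from your system), is to put the communication behind a small interface with two implementations:

```python
from abc import ABC, abstractmethod
import json
import urllib.request

class TransactionClient(ABC):
    """Port through which one subsystem queries another."""
    @abstractmethod
    def find_transaction(self, tx_id): ...

class HttpTransactionClient(TransactionClient):
    """Production adapter: crosses the process/host boundary over HTTP."""
    def __init__(self, base_url):
        self._base_url = base_url

    def find_transaction(self, tx_id):
        url = f"{self._base_url}/transactions/{tx_id}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

class InProcessTransactionClient(TransactionClient):
    """Test adapter: a plain method call into the other subsystem."""
    def __init__(self, service):
        self._service = service

    def find_transaction(self, tx_id):
        return self._service.find_transaction(tx_id)
```

The use-case tests then wire the whole system together with the in-process adapter, while a small separate suite runs the HTTP adapter against a real server to verify the communication itself.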
Edit:
I realize I forgot to explain why I prefer testing the entire system.
Your asset is features, i.e., behavior; code is a liability. So you want to test the behavior, not the code (BDD style).
If you test each subsystem separately, you are testing code, not features. Why is that a problem? When you split your system into subsystems, you did so for some technical reason. As you learn more, you will find that the chosen seam is suboptimal and want to move some responsibility from one subsystem to another. Then you will have to modify test code and production code at the same time, leaving you without a safety net. That is a typical symptom of test granularity that is too fine.
However, these whole-system tests are too blunt to cover everything. So you still need additional, more fine-grained tests for the details where necessary.