I will provide a hypothetical guide, since you decided to ignore my questions.
When it comes to a logging use case (time-based indices), you need some data about your future plans at hand: how long you want to keep the log data around (retention period), how the collected data will be used (query frequency, indexing rate), and how much data will come in every day (this drives both the data size on disk and the shard size). Before thinking about the "per-app index" vs. "single index" question, consider the points below. Once you have done the math on shard sizes and how many shards the chosen retention period will produce, you can decide between an index per application and a single index.
Depending on the shard sizes and, above all, the retention period, you also need to decide whether the time-based indices should be daily, weekly or monthly. A good rule of thumb is a maximum of 30-50 GB per shard; beyond that, recovery, shard relocation and searches will potentially be slower and can affect cluster stability.
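As a rough illustration of that math, here is a minimal sketch; the daily volume, retention and target shard size below are made-up example numbers, not recommendations:

```python
# Back-of-the-envelope shard math for time-based indices.
# All input numbers are hypothetical examples; plug in your own.

daily_volume_gb = 20          # data indexed per day, on disk
retention_days = 90           # how long the logs are kept
target_shard_size_gb = 40     # aim for the 30-50 GB per shard rule of thumb

total_data_gb = daily_volume_gb * retention_days
total_shards = -(-total_data_gb // target_shard_size_gb)          # ceiling division

# If you use daily indices, each index only holds one day's data:
shards_per_daily_index = max(1, -(-daily_volume_gb // target_shard_size_gb))

print(f"Total data over retention: {total_data_gb} GB")
print(f"Primary shards needed across the cluster: {total_shards}")
print(f"Primary shards per daily index: {shards_per_daily_index}")
```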
If your applications generate enough data per day to exceed the number mentioned above, do not pick an index per application. If the size is smaller, then again it depends. A huge number of shards on one node consumes resources and makes searching slow. Each shard has a fixed memory overhead that is consumed simply because the shard exists. In addition, each shard executes a search on a single thread, and one thread basically means one CPU core. The wider the time range your search queries cover (and therefore the more indices and shards they touch), and the more concurrent searches there are, the more context switching happens at the OS level between the threads competing for CPU cores. In general, do not try to squeeze hundreds of shards onto one node unless only a subset of them will actually be queried at any given time. If you plan to query all the data in your cluster most of the time, the number of shards you can afford on each node drops drastically; otherwise, your cluster will not be able to handle the load.
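One practical way to keep the shard count under control is to set the number of primary shards for every new time-based index through an index template. A minimal sketch with the elasticsearch-py client follows; the template name, index pattern and shard count are assumptions for the example, and the exact field names depend on your Elasticsearch version:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Hypothetical template: every index matching "logs-*" is created with a
# small, fixed number of primary shards instead of the default.
es.indices.put_template(
    name="logs",
    body={
        "index_patterns": ["logs-*"],   # on Elasticsearch 5.x this field is called "template"
        "settings": {
            "number_of_shards": 2,      # example value; derive it from your shard math
            "number_of_replicas": 1
        }
    }
)
```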
If your logging use case is one where most of the activity hits the most recent data (the last few days up to a week), consider a hot-warm architecture: https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x
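For reference, that setup works by tagging nodes with an attribute (e.g. node.attr.box_type: hot or warm in elasticsearch.yml) and routing indices to them. A sketch of moving an aged index to the warm tier, assuming the box_type attribute from the blog post and a hypothetical daily index name:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# New indices are written on nodes tagged box_type=hot; once an index is a
# few days old and mostly read-only, relocate its shards to the warm nodes.
es.indices.put_settings(
    index="logs-2017.01.01",  # hypothetical daily index name
    body={"index.routing.allocation.require.box_type": "warm"}
)
```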
The exercise of designing and sizing a cluster always includes testing. So please try to measure the performance of your queries on a data set that is as close to the real data as possible, and do this on a single node with the same hardware specification as the nodes in the production cluster.
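A very rough way to sanity-check query latency from a script, assuming an index already loaded with representative test data (the index name and query below are placeholders):

```python
import time
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed test node

query = {"query": {"match": {"message": "error"}}}  # placeholder query

# Run the same query a few times and look at the spread, not just one sample.
for run in range(5):
    start = time.perf_counter()
    resp = es.search(index="logs-test", body=query)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"run {run}: {elapsed_ms:.1f} ms round-trip, "
          f"{resp['took']} ms reported by Elasticsearch")
```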