No, I do not think there is any generally applicable way to determine minimum requirements that does not involve testing on specific reference hardware.
You may be able to probe some of the limits with virtual machines, since it is easier to change a VM's parameters than to modify physical hardware. Be aware, though, that artifacts from the interaction between the host and the guest can skew your results.
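As a rough illustration of that approach (VirtualBox and the VM name "TestVM" are my own assumptions here, not something from your setup), you can shrink a test VM's resources between runs from the command line:

```
VBoxManage modifyvm "TestVM" --memory 1024 --cpus 2
```

The VM has to be powered off before `modifyvm` takes effect, and whatever you measure is still only an approximation of real hardware with that configuration.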
It is also difficult to define the criteria for “acceptable performance” in general, without knowing the use cases.
Many programs will use more resources if they are available, but can also make do with less.
For example, consider a program that uses a thread pool sized according to the number of CPU cores. On a processor with many cores, more work can be done in parallel, but at the same time the overhead of thread creation, synchronization, and aggregation of results increases. The effects are non-linear in the number of processors and depend heavily on the actual program and data. Likewise, the effects of reducing the available memory range from OutOfMemoryErrors for some inputs (but perhaps not for others) to the GC simply running a little more often (and the consequences of that depend on the GC strategy, anywhere from noticeable freezes to just a bit more CPU load).
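To make the thread-pool part of that concrete, here is a minimal Java sketch (the class name and the summing workload are invented purely for illustration): it sizes its pool from whatever core count the JVM reports at runtime, and the aggregation step at the end is serial, which is one source of the non-linear scaling mentioned above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSizingSketch {
    public static void main(String[] args) throws Exception {
        // Pool size follows whatever hardware the JVM happens to see at runtime.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One chunk of work per core; each submission adds scheduling and
        // synchronization overhead on top of the useful work.
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final long start = i;
            results.add(pool.submit(() -> {
                long sum = 0;
                for (long n = start; n < 10_000_000L; n += cores) {
                    sum += n;
                }
                return sum;
            }));
        }

        // Aggregation is serial and grows with the number of chunks.
        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();
        }
        pool.shutdown();

        System.out.println("cores=" + cores + ", total=" + total);
        // Available heap is just as machine-dependent as the core count.
        System.out.println("maxHeap=" + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MiB");
    }
}
```

Run the same class on machines (or VMs) with different core counts and heap limits and the timing and memory behaviour will differ in ways that are hard to predict without actually measuring.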
And all of that does not even consider that programs usually do not live in isolation: they run within an operating system alongside other tasks that also consume resources.