For such a general F, the challenge is to devise an optimized algorithm, one far better than the brute-force approach.
For example, since we want to find the CS for which F(CS) is maximal, should we assume that we really want max(Σ F(cs_i)) over all subsets, or the highest single value of F among all possible subsets, max(F(cs_i))? We do not know for sure.
Moreover, F is arbitrary; we cannot estimate the probability that an implication such as F(cs+p1) > F(cs) => F(cs+p1+p2) > F(cs) holds, i.e. that adding a point which improves F keeps the larger subset ahead of the original one.
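To make that concrete, here is a minimal sketch of how one could sample how often the implication holds in practice. It assumes F is some black-box function taking a frozenset of points, and that `points` is a list with at least three elements; nothing here is given by the problem itself:

```python
import random

def implication_rate(F, points, trials=1000, seed=None):
    """Estimate how often F(cs + p1) > F(cs) also gives
    F(cs + p1 + p2) > F(cs), by sampling random subsets.
    F is assumed to be a black box over frozensets of points."""
    rng = random.Random(seed)
    holds = premises = 0
    for _ in range(trials):
        k = rng.randint(1, len(points) - 2)   # leave room for p1 and p2
        cs = frozenset(rng.sample(points, k))
        p1, p2 = rng.sample([p for p in points if p not in cs], 2)
        base = F(cs)
        if F(cs | {p1}) > base:               # premise holds
            premises += 1
            if F(cs | {p1, p2}) > base:       # conclusion holds
                holds += 1
    return holds / premises if premises else float('nan')
```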
However, a few observations can still be made:
It seems we can deduce from the problem that each CS can be considered independently; that is, if n = F(cs1), adding any cs2 disjoint from cs1 does not affect the value of n.
It also seems plausible, and this is where we should gain something, that the computation of F can start from any point of CS; generally speaking, if CS = cs1+cs2, then F(CS) = F(cs1+cs2) = F(cs2+cs1).
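If F can be computed incrementally, this property can at least be spot-checked. A minimal sketch, where F_step (an incremental update rule folding one point into a running value) is an assumption for illustration, not something given by the problem:

```python
import random

def eval_from(F_step, points, start):
    """Fold the points of CS one by one, beginning at `start`.
    If the order-independence property holds, every starting
    point and order yields the same value F(CS)."""
    rest = [p for p in points if p != start]
    random.shuffle(rest)            # any order should be equivalent
    acc = F_step(None, start)       # None marks the empty CS
    for p in rest:
        acc = F_step(acc, p)
    return acc

def looks_order_independent(F_step, points, trials=50):
    """Spot-check the property by comparing several random orders."""
    ref = eval_from(F_step, points, points[0])
    return all(eval_from(F_step, points, random.choice(points)) == ref
               for _ in range(trials))
```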
We would then want to introduce memoization into the algorithm to speed things up as CS gradually grows in the search for max(F(cs)). (With F this general, a dynamic-programming approach in the opposite direction, for example starting with the CS made of all points and shrinking it little by little, seems to be of little interest.)
Ideally, we could start with a CS consisting of a single point, grow it by one point at a time, checking and saving the value of F for each subset. Each test first checks whether the value of F already exists, so it is never recomputed; then the process is repeated from another starting point, and so on, until the subsets that maximize F are found. For a large number of points, this takes a very long time.
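A minimal sketch of that scheme, with F again as a placeholder black box. The memo lookup is the "check if the value of F exists" step: it skips any subset already reached through a different growth order. The recursion over all growth paths makes it clear why this is only feasible for small point sets:

```python
def best_subset_exhaustive(F, points):
    """Start from each single-point CS and grow it one point at a
    time, memoizing F per frozenset of points."""
    memo = {}
    best_cs, best_val = None, float('-inf')

    def grow(cs):
        nonlocal best_cs, best_val
        if cs in memo:          # value of F already exists: skip
            return
        memo[cs] = val = F(cs)
        if val > best_val:
            best_cs, best_val = cs, val
        for p in points:
            if p not in cs:
                grow(cs | {p})

    for p in points:
        grow(frozenset([p]))
    return best_cs, best_val
```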
A more reasonable approach would be to pick random starting points, grow the CS up to a given size, and then try an area different from the largest CS obtained in the previous step. One can also try to estimate the probability described above and steer the algorithm one way or another depending on the result.
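A sketch of such a heuristic; max_size, the number of restarts, and the "add the most improving point" growth rule are all illustrative assumptions rather than part of the original problem:

```python
import random

def random_restart_greedy(F, points, max_size, restarts=20, seed=None):
    """Grow a CS greedily from a random starting point up to
    max_size, restarting each time from a point outside the best
    CS found so far (a different 'area')."""
    rng = random.Random(seed)
    best_cs, best_val = None, float('-inf')
    for _ in range(restarts):
        outside = [p for p in points if best_cs is None or p not in best_cs]
        cs = frozenset([rng.choice(outside or points)])
        val = F(cs)
        while len(cs) < max_size:
            candidates = [p for p in points if p not in cs]
            if not candidates:
                break
            p = max(candidates, key=lambda q: F(cs | {q}))
            new_val = F(cs | {p})
            if new_val <= val:
                break           # no point improves F: stop growing
            cs, val = cs | {p}, new_val
        if val > best_val:
            best_cs, best_val = cs, val
    return best_cs, best_val
```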
But, again, due to the lack of known properties of F, we can expect memoization to require exponential space (for example, saving F(p1, ..., pn) for all subsets; with n points there are 2^n - 1 non-empty subsets), and exponential time complexity as well.