tl;dr: It's not that dynamic memory allocation is inherently non-deterministic (in your sense of identical execution paths); it's that it generally makes your program unpredictable. Specifically, you cannot predict whether the allocator might fail given an arbitrary sequence of inputs.
You could have a non-deterministic allocator. This is actually common outside of the real-time world, where operating systems use things like address space layout randomization. Of course, that would make your program non-deterministic.
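As a minimal illustration (assuming a typical desktop OS with ASLR enabled), the same trivial program can print a different heap address on every run, even though its logic never changes:

```c
#include <stdio.h>
#include <stdlib.h>

/* On systems with address space layout randomization (ASLR) enabled,
 * running this program twice will typically print different addresses,
 * even though the program does exactly the same thing each time. */
int main(void) {
    void *p = malloc(64);
    printf("heap block at %p\n", p);
    free(p);
    return 0;
}
```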
But that's not an interesting case, so let's assume a perfectly deterministic allocator: the same sequence of allocations and deallocations always results in the same blocks in the same locations, and those allocations and deallocations always have bounded running time.
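As a sketch of what such a deterministic, bounded-time allocator could look like, here is a hypothetical bump allocator. It trades away the ability to free individual blocks, which is one way allocators buy determinism; all names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bump allocator: given the same sequence of calls it
 * always hands out the same offsets into the same static buffer, and
 * every call runs in constant time. It cannot free individual blocks,
 * only reset the whole pool. */
#define POOL_SIZE 4096
static uint8_t pool[POOL_SIZE];
static size_t next_offset = 0;

void *bump_alloc(size_t size) {
    size = (size + 7u) & ~(size_t)7u;  /* round up to 8-byte alignment */
    if (size > POOL_SIZE - next_offset)
        return NULL;                   /* deterministic failure point */
    void *p = &pool[next_offset];
    next_offset += size;
    return p;
}

void bump_reset(void) {
    next_offset = 0;                   /* frees everything at once */
}
```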
Now your program can be deterministic: the same set of inputs will lead to exactly the same execution path.
The problem is that if you allocate and free memory in response to input, you cannot predict whether an allocation will ever fail (and failure is not an option).
First, your program could leak memory. So if it needs to run indefinitely, eventually an allocation will fail.
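To make this concrete, here is a hypothetical event loop (`next_event_size` is a stand-in for a real input source) that leaks one buffer per event. On a system with strict memory accounting the allocation must eventually return `NULL`, and exactly when that happens depends on the sizes the inputs demand; on Linux with overcommit, the OOM killer may intervene before `malloc` ever fails:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a real input source; in a real system the size would
 * depend on the incoming event. */
static size_t next_event_size(void) {
    return 1024 * 1024;
}

int main(void) {
    for (unsigned long n = 0; ; n++) {
        void *buf = malloc(next_event_size());
        if (buf == NULL) {
            /* In a system where failure is not an option, there is
             * no safe way to recover once we reach this branch. */
            fprintf(stderr, "allocation failed after %lu events\n", n);
            return 1;
        }
        /* ... process the event, but forget to free(buf) ... */
    }
}
```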
But even if you can prove there are no leaks, you would need to know that there is no input sequence that could demand more memory than is available.
But even if you can prove that the program will never need more memory than is available, the allocator may, depending on the sequence of allocations and frees, fragment memory, and thus eventually be unable to find a contiguous block to satisfy an allocation, even though there is enough free memory overall.
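A sketch of how that can happen (whether this actually fails depends on the allocator; a desktop `malloc` may well satisfy the large request by other means, such as `mmap`): interleave small blocks with larger ones, free all the larger ones, then ask for their combined size as one contiguous block. Error checks are omitted for brevity:

```c
#include <stdio.h>
#include <stdlib.h>

#define N    1024
#define HOLE 4096

int main(void) {
    void *keep[N], *hole[N];
    for (int i = 0; i < N; i++) {
        keep[i] = malloc(16);     /* small blocks pin the gaps apart */
        hole[i] = malloc(HOLE);
    }
    for (int i = 0; i < N; i++)
        free(hole[i]);            /* N * HOLE bytes are now free...   */
    /* ...but scattered across HOLE-sized gaps, so a simple allocator
     * that cannot compact memory may be unable to satisfy a single
     * contiguous request for the same total amount. */
    void *big = malloc((size_t)N * HOLE);
    printf("big allocation %s\n", big ? "succeeded" : "failed");
    free(big);
    for (int i = 0; i < N; i++)
        free(keep[i]);
    return 0;
}
```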
It is very difficult to prove that there is no sequence of inputs that leads to pathological fragmentation.
You can design allocators to guarantee there won't be fragmentation (for example, by allocating blocks of only one size), but that places a substantial constraint on the caller and may increase the amount of memory required due to the waste. And the caller must still prove that there are no leaks and that there is a satisfiable upper bound on total memory required, regardless of the sequence of inputs. This burden is so high that it's actually simpler to design the system so that it does not use dynamic memory allocation at all.
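For illustration, a minimal sketch of that single-block-size idea: a fixed pool threaded onto a free list, with O(1) allocate and free and no possibility of fragmentation, at the cost of waste (every request consumes a full block) and a hard cap on live blocks. All names here are hypothetical:

```c
#include <stddef.h>

#define BLOCK_SIZE  64
#define BLOCK_COUNT 128

/* Every block is the same size, so freeing and reallocating can never
 * fragment the pool. While a block is free, its storage holds the
 * free-list link. */
typedef union block {
    union block *next;            /* valid only while the block is free */
    char payload[BLOCK_SIZE];
} block_t;

static block_t pool[BLOCK_COUNT];
static block_t *free_list = NULL;

void pool_init(void) {
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        pool[i].next = free_list; /* thread every block onto the list */
        free_list = &pool[i];
    }
}

void *pool_alloc(void) {
    if (free_list == NULL)
        return NULL;              /* pool exhausted */
    block_t *b = free_list;
    free_list = b->next;
    return b->payload;
}

void pool_free(void *p) {
    block_t *b = (block_t *)p;    /* p was returned by pool_alloc, and
                                     payload sits at offset 0 */
    b->next = free_list;
    free_list = b;
}
```

Note that `pool_alloc` can still fail when the pool is exhausted; proving that this branch is unreachable for every possible input sequence is exactly the upper-bound obligation described above.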