I just ran both @Oliver's multinomial approach and @mgilson's code a million times each, for length-3 vectors summing to 10, and looked at how often each possible outcome came up. Both are extremely non-uniform:

Does that matter? It depends on whether you want "an arbitrary vector with this property, which will usually be different each time," or whether you want each valid vector to be equally likely. (The indexing approach I'll show below aims for the latter.)
In the multinomial approach, of course, 3 3 4 is going to be far more likely than 0 0 10 (4200 times more likely, as it turns out: that's exactly the ratio of the corresponding multinomial coefficients, 10!/(3!3!4!) = 4200 vs. 1). mgilson's biases are less obvious to me, but 0 0 10 and its permutations were by far the least likely (only ~750 times each out of a million); the most common were 1 4 5 and its permutations; I don't know why, but they were certainly the most common, followed by 1 3 6. It will typically start with a sum that's too high in this configuration (expected sum 15), though I'm not sure why the reduction works out the way it does.
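For reference, the multinomial half of that tally can be reproduced with something along these lines (a sketch, not the exact script; the @mgilson half just swaps in his function):

```python
from collections import Counter
import numpy as np

# Draw a million length-3 vectors summing to 10 with equal per-slot
# probabilities, then count how often each distinct vector shows up.
draws = np.random.multinomial(10, [1/3] * 3, size=1_000_000)
counts = Counter(map(tuple, draws))

print(counts[(3, 3, 4)], counts[(0, 0, 10)])  # wildly different frequencies
```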
One way to get output that is uniform over the possible vectors would be a rejection scheme. To get a vector of length K with sum N, you would:

- sample a vector of length K with integer elements drawn uniformly and independently between 0 and N;
- repeat until the sum of the vector is N.

Obviously this will be very slow for non-small K and N.
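For concreteness, a minimal sketch of that rejection loop might look like this (the function name is just for illustration):

```python
import random

def rejection_sample(total, length):
    # Draw `length` independent uniform integers in [0, total] and keep
    # trying until they happen to sum to `total`; conditioning on the sum
    # makes every valid vector equally likely.
    while True:
        vec = [random.randint(0, total) for _ in range(length)]
        if sum(vec) == total:
            return vec
```

Even for total 10 and length 3, only 66 of the 11**3 = 1331 equally likely draws are accepted, so roughly 95% of the work is thrown away, and it gets far worse as K and N grow.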
Another approach would be to assign a numbering to all the possible vectors; there are (N + K - 1) choose (K - 1) such vectors (66 for N = 10, K = 3), so just choose a random integer in that range to decide which one you want. One reasonable way to number them is lexicographic ordering: (0, 0, 10), (0, 1, 9), (0, 2, 8), (0, 3, 7), ...
Note that the last (Kth) element of the vector is uniquely determined by the sum of the first K-1.
I'm sure there's a nice way to jump directly to any given index in this list, but I can't think of it right now.... Enumerating the possible outcomes and walking over them will work, but will probably be slower than necessary. Here's code for that (though we actually use reverse lexicographic ordering here...):
```python
from itertools import islice, combinations_with_replacement
from functools import reduce
from math import factorial
from operator import mul
import random

def _enum_cands(total, length):
```
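The snippet above ends at the function header; a minimal, self-contained sketch of the full idea might look like the following. The body of _enum_cands (built on the standard cut-point correspondence that combinations_with_replacement makes easy) and the helper names ncr and random_fixed_sum are illustrative assumptions rather than the original code, and this variant walks in plain lexicographic order:

```python
from itertools import islice, combinations_with_replacement
from functools import reduce
from math import factorial
from operator import mul
import random

def _enum_cands(total, length):
    # Every nondecreasing choice of length-1 "cut points" in [0, total]
    # corresponds to exactly one length-`length` vector of nonnegative
    # integers summing to `total`: take consecutive differences.
    for cuts in combinations_with_replacement(range(total + 1), length - 1):
        cuts = (0,) + cuts + (total,)
        yield tuple(b - a for a, b in zip(cuts, cuts[1:]))

def ncr(n, r):
    # (n choose r), used for the (N + K - 1) choose (K - 1) count.
    return reduce(mul, range(n - r + 1, n + 1), 1) // factorial(r)

def random_fixed_sum(total, length):
    # Pick a uniform index among all candidates and walk the enumeration
    # to that position; every valid vector is equally likely.
    num_cands = ncr(total + length - 1, length - 1)
    index = random.randrange(num_cands)
    return next(islice(_enum_cands(total, length), index, None))
```

random_fixed_sum(10, 3) then returns one of the 66 possible vectors, each with probability 1/66.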
As the histogram above shows, this is essentially uniform over the possible outcomes. It's also easy to adapt to upper/lower bounds on any individual element: just add the condition to _enum_cands.
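For example, a hypothetical per-element upper bound max_elem could be layered on like this; once candidates are filtered, the closed-form count no longer applies, so this sketch (reusing _enum_cands from above) just materialises the filtered list and samples from it:

```python
import random  # _enum_cands is the enumerator from the sketch above

def _enum_cands_bounded(total, length, max_elem):
    # Same enumeration, with the extra per-element condition added.
    for cand in _enum_cands(total, length):
        if all(x <= max_elem for x in cand):
            yield cand

def random_fixed_sum_bounded(total, length, max_elem):
    # The (N + K - 1) choose (K - 1) count no longer holds after filtering,
    # so choose uniformly from the surviving candidates directly.
    cands = list(_enum_cands_bounded(total, length, max_elem))
    return random.choice(cands)
```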
This is slower than the other answers, though: for sum 10, length 3 I get

- 14.7 us using np.random.multinomial,
- 33.9 us using mgilson's,
- 88.1 us with this approach
I expect the difference to worsen as the number of possible outcomes increases.
If someone comes up with a nice formula for indexing into these vectors, this would be much better...
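For what it's worth, one candidate for such a formula is standard combinatorial unranking of the lexicographic list: fix each element in turn by counting how many vectors start with each candidate value, subtracting, and moving on. A rough, unbenchmarked sketch (uses math.comb from Python 3.8+; the function name is mine):

```python
from math import comb

def unrank_fixed_sum(index, total, length):
    # Map an index in [0, comb(total + length - 1, length - 1)) to the
    # index-th vector in lexicographic order, without enumerating them all.
    vec = []
    remaining = total
    for pos in range(length - 1):
        slots = length - pos - 1      # elements still to fill after this one
        value = 0
        while True:
            # how many vectors put `value` here, given `remaining` to split
            block = comb(remaining - value + slots - 1, slots - 1)
            if index < block:
                break
            index -= block
            value += 1
        vec.append(value)
        remaining -= value
    vec.append(remaining)             # the last element is forced
    return tuple(vec)
```

For example, unrank_fixed_sum(0, 10, 3) gives (0, 0, 10) and unrank_fixed_sum(65, 10, 3) gives (10, 0, 0); drawing the index uniformly from range(comb(12, 2)) then gives each of the 66 vectors equal probability without walking the list.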