I want to use celery to parallelize the stochastic gradient descent algorithm. This might not be the best fit for celery, but it is still my question =)
The algorithm looks like this, where datas is a matrix of samples:
import numpy as np

x_current = np.random.random(n_dim)
for i in range(max_iter):
    np.random.shuffle(datas)
    for batch in range(datas.shape[0] // n):
        delta = gradient(x_current, datas[(batch * n):((batch + 1) * n), :])
        x_current += delta
gradient is the function that should be distributed as a task. Say I have 10 workers: first I create 10 gradient tasks with the first 10 mini-batches.
When one of them finishes, I want a new task to be created with the next mini-batch (it doesn't matter if the iteration has wrapped around back to the first mini-batch) and the current version of x_current (it doesn't matter if it is not the latest version).
Each finished task returns a delta, which is added to x_current as soon as it arrives (again, it doesn't matter if x_current has changed in the meantime).
My question: how can I do this with celery?
:)