Your sample implementations of beta, gamma and kappa treat them as if they were white (i.e. their values at each (x, y, t) are independent). The descriptions of the terms suggest that this is not the case: delta is the term that should capture the white noise, while the other terms capture correlations over their respective domains. For example, there should be non-zero correlation between gamma(t_1) and gamma(t_1 + 1).
If you want to model gamma as a stationary Gaussian Markov process with variance var_g and covariance cor_g between gamma(t) and gamma(t+1), you could use something like
gamma_t = nan( nT, 1 );
gamma_t(1) = sqrt(var_g)*randn();   % draw from the stationary distribution
K_g = cor_g/var_g;                  % AR(1) coefficient
K_w = sqrt( (1-K_g^2)*var_g );      % innovation standard deviation
for t = 2:nT
    gamma_t(t) = K_g*gamma_t(t-1) + K_w*randn();
end
gamma_t = reshape( gamma_t, [ 1 1 nT ] );   % put time along the 3rd dimension
The formulas I used for the gains K_g and K_w in the above code (and for the initialization of gamma_t(1)) produce the desired stationary variance \sigma^2_0 and one-step covariance \sigma^2_1:

\sigma^2_0 = K_w^2 / (1 - K_g^2) = var_g
\sigma^2_1 = K_g \, \sigma^2_0 = cor_g
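As a quick sanity check (a NumPy translation of the recurrence above, not the original MATLAB; the target values var_g = 2.0 and cor_g = 1.2 are arbitrary choices of mine), simulating a long run should recover var_g as the sample variance and cor_g as the lag-1 covariance:

```python
import numpy as np

# Hypothetical target statistics (any pair with |cor_g| < var_g works)
var_g, cor_g = 2.0, 1.2

K_g = cor_g / var_g                    # AR(1) coefficient
K_w = np.sqrt((1 - K_g**2) * var_g)    # innovation standard deviation

rng = np.random.default_rng(0)
nT = 500_000
gamma_t = np.empty(nT)
gamma_t[0] = np.sqrt(var_g) * rng.standard_normal()  # stationary start
for t in range(1, nT):
    gamma_t[t] = K_g * gamma_t[t - 1] + K_w * rng.standard_normal()

sample_var = gamma_t.var()
sample_cov1 = np.mean(gamma_t[:-1] * gamma_t[1:])  # lag-1 covariance (zero mean)
print(sample_var, sample_cov1)
```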
Note that the implementation above assumes you will later sum the terms with bsxfun, which does the “repmat” for you:
Y = bsxfun( @plus, deterministic_mu + kappa_st + delta_st, beta_s );
Y = bsxfun( @plus, Y, gamma_t );
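For readers more familiar with NumPy, here is a hedged illustration of the same singleton expansion using placeholder arrays (not the actual model terms): broadcasting plays the role of bsxfun/repmat when the shapes are (nRows, nCols, nT), (nRows, nCols, 1) and (1, 1, nT):

```python
import numpy as np

nRows, nCols, nT = 3, 4, 5
# placeholder arrays with the shapes the answer assumes
deterministic_mu = np.zeros((nRows, nCols, nT))
kappa_st = np.zeros((nRows, nCols, nT))
delta_st = np.zeros((nRows, nCols, nT))
beta_s = np.ones((nRows, nCols, 1))   # spatial term, constant over time
gamma_t = np.ones((1, 1, nT))         # temporal term, constant over space

# broadcasting expands the singleton axes, like bsxfun(@plus, ...)
Y = deterministic_mu + kappa_st + delta_st + beta_s + gamma_t
print(Y.shape)   # (3, 4, 5)
```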
Please note that I have not tested the code above, so you should confirm from a sample run that it really produces zero-mean noise with the specified variance and covariance between adjacent samples. For beta, the same procedure needs to be extended to two dimensions; the bookkeeping gets messier, but the principles are the same. I suspect kappa should similarly be modeled as a Gaussian Markov process, but in all three dimensions, and with a smaller variance to represent higher-order effects not captured in mu, beta and gamma.
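One concrete way to do the two-dimensional extension for beta (a NumPy sketch under my own assumptions: a separable covariance model, the helper name ar1_along_axis, and the parameters var_b and a_b are all choices of mine, not from the original question) is to filter unit-variance white noise with the same AR(1) recursion along each spatial axis in turn. This yields a stationary field with covariance var_b * a_b^|Δrow| * a_b^|Δcol|:

```python
import numpy as np

def ar1_along_axis(white, a):
    """Turn unit-variance white noise into a unit-variance AR(1)
    sequence along axis 0 (lag-1 correlation a)."""
    x = white.copy()
    k_w = np.sqrt(1.0 - a**2)        # innovation std; keeps variance at 1
    for t in range(1, x.shape[0]):   # x[0] already has unit variance
        x[t] = a * x[t - 1] + k_w * x[t]
    return x

# Hypothetical parameters: marginal variance var_b, per-axis correlation a_b
var_b, a_b = 1.5, 0.7
rng = np.random.default_rng(1)
nRows, nCols = 400, 400

beta_s = ar1_along_axis(rng.standard_normal((nRows, nCols)), a_b)  # down rows
beta_s = ar1_along_axis(beta_s.T, a_b).T                           # across cols
beta_s *= np.sqrt(var_b)
beta_s = beta_s[:, :, np.newaxis]   # singleton time axis for broadcasting
```

The second pass preserves the unit variance because each column produced by the first pass is independent of the columns before it, so the same K_g/K_w bookkeeping applies along both axes.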
The delta term should be zero-mean stationary white noise. Assuming it is Gaussian with variance noisePower, you could generate it with
delta_st = sqrt(noisePower)*randn( [ nRows nCols nT ] );
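The same kind of sanity check applies here (again a NumPy sketch with a hypothetical noisePower value): the sample variance should match noisePower, and the lag-1 covariance along time should be near zero, confirming the noise is white:

```python
import numpy as np

noisePower = 0.5                      # hypothetical noise variance
nRows, nCols, nT = 50, 50, 200
rng = np.random.default_rng(2)
delta_st = np.sqrt(noisePower) * rng.standard_normal((nRows, nCols, nT))

sample_var = delta_st.var()
lag1_cov = np.mean(delta_st[:, :, :-1] * delta_st[:, :, 1:])  # along time
print(sample_var, lag1_cov)
```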