Your computations seem to give plausible answers, but the approach doesn't make sense (to me; I stand ready to be corrected/enlightened ...). Suppose cov = corr*sd1*sd2, and let ci(.) denote the (lower or upper) confidence limit of a quantity. It is by no means true that ci(cov) = ci(corr)*ci(sd1)*ci(sd2) (it is interesting that you get reasonable answers; I suspect this is most likely to happen when the quantities are approximately uncorrelated ...). If you had the standard errors of each component and the covariances among them (I don't mean the random-effect variances and covariances themselves, but the sampling variances/covariances of those estimates), you could propagate them approximately via the delta method, but those are fairly hard to get hold of (see here).
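To see why multiplying confidence limits fails, here is a minimal simulation sketch (not from the original analysis; the sample size, true parameters, and bootstrap settings are all made up for illustration). The directly bootstrapped interval for the covariance is not the product of the component intervals:

    ## illustrative only: bootstrap CIs for cov, corr, sd1, sd2 of bivariate data
    set.seed(101)
    n <- 50
    x <- MASS::mvrnorm(n, mu = c(0, 0),
                       Sigma = matrix(c(1, -0.9, -0.9, 1), 2))
    boot_stats <- t(replicate(2000, {
        xb <- x[sample(n, replace = TRUE), ]
        c(cov = cov(xb)[1, 2], corr = cor(xb)[1, 2],
          sd1 = sd(xb[, 1]), sd2 = sd(xb[, 2]))
    }))
    ci <- apply(boot_stats, 2, quantile, c(0.025, 0.975))
    ci[, "cov"]  ## direct interval for the covariance
    ## the product of the component limits is, in general, a different number:
    ci["2.5%", "corr"] * ci["2.5%", "sd1"] * ci["2.5%", "sd2"]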
The "right" way to do this, as far as I can tell, is to compute the likelihood profile on the variance-covariance scale rather than on the standard-deviation-correlation scale. That wasn't possible until recently, but it is now (with the development version of lme4 on Github).
Install the development version:
    library(ghit)  ## for install_github() (or use library(devtools))
    install_github("lme4/lme4")
Preliminaries:
    chap12 <- foreign::read.dta(file = "ch12.dta")
    library(lme4)
    snijders <- lmer(prox_pup ~ 1 + prox_sel + (1 + occ|teacher), data = chap12)
    as.data.frame(VarCorr(snijders))
    ##        grp        var1 var2        vcov      sdcor
    ## 1  teacher (Intercept) <NA>  0.15617962  0.3951957
    ## 2  teacher         occ <NA>  0.01205317  0.1097869
    ## 3  teacher (Intercept)  occ -0.03883458 -0.8950676
    ## 4 Residual        <NA> <NA>  0.04979762  0.2231538
We have to be a little careful when comparing results, because profile.merMod, which we are about to use, automatically (and silently!) converts the fit from the default REML to maximum likelihood (since profiling a REML fit may be statistically dicey); however, it doesn't look like this makes a huge difference here.
    s2 <- refitML(snijders)
    as.data.frame(VarCorr(s2))
    ##        grp        var1 var2        vcov      sdcor
    ## 1  teacher (Intercept) <NA>  0.15426049  0.3927601
    ## 2  teacher         occ <NA>  0.01202631  0.1096645
    ## 3  teacher (Intercept)  occ -0.03884427 -0.9018483
    ## 4 Residual        <NA> <NA>  0.04955549  0.2226106
    p.sd <- profile(s2, which = "theta_", signames = FALSE)
    p.vcov <- profile(s2, which = "theta_", prof.scale = "varcov",
                      signames = FALSE)
We get some warnings about non-monotonic profiles ...
    confint(p.vcov)
What if we check these against the squares of the corresponding standard-deviation-scale elements?
    confint(p.sd)[c(1,3,4),]^2
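For a quick side-by-side view, something like the following sketch works (assuming, as above, that rows 1, 3, and 4 of each profile are the two variances and the residual):

    ## illustrative comparison; row indices assumed as above
    vc  <- confint(p.vcov)[c(1, 3, 4), ]
    sd2 <- confint(p.sd)[c(1, 3, 4), ]^2
    round(cbind(vc, sd2), 4)  ## vcov-scale limits, then squared sd-scale limits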
These match up well, except at the lower bound of the occ variance; they also match your results above. However, the covariance result (which is the hard part, I think) comes out as (-0.0755, -0.0159) for me, vs. (-0.0588, -0.0148) for you, about a 20% difference. That may or may not matter, depending on what you are trying to do.
Try brute force as well:
    sumfun <- function(x) {
        ## extract just the variance-covariance components, in lower-triangle order
        as.data.frame(VarCorr(x), order = "lower.tri")[, "vcov"]
    }
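The snippet above was cut off; to actually get the brute-force intervals, the summary function can be passed to confint() with method = "boot" (which hands FUN on to bootMer()). The nsim and seed values below are illustrative choices, not necessarily the ones behind the numbers quoted next:

    ## parametric-bootstrap CIs for the variance-covariance components
    bb <- confint(s2, method = "boot", FUN = sumfun, nsim = 1000, seed = 101)
    bb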
The bootstrap "gold standard" lands partway between my profile result and yours: the lower limit of the covariance comes out around -0.067 here, vs. -0.0755 (profile) or -0.0588 (your result).