From your first model's predict() call you get the following:
```r
#        fit      lwr      upr
# 1 18436.18 2339.335 34533.03
```
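For completeness, here is a minimal sketch of a setup that would produce this output. model1 is not shown in this excerpt, so the lm() call below (re74 ~ age + nodeg + marr on DAAG's cps1) is an assumption based on the model3 formula used in the Edit further down:

```r
## Assumed setup (not shown in the original): model1 as an OLS fit on
## DAAG::cps1 with the covariates used throughout this answer.
library(DAAG)
model1 <- lm(re74 ~ age + nodeg + marr, data = cps1)
predict(model1, newdata = data.frame(age = 40, nodeg = 0, marr = 1),
        interval = "prediction")
```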
We can also reproduce these results manually.
```r
beta.hat.1 <- coef(model1)                           # save coefficients
X.1 <- cbind(1, matrix(c(40, 0, 1), ncol = 3))       # model matrix: age=40, nodeg=0, marr=1
pred.1 <- as.numeric(X.1 %*% beta.hat.1)             # prediction
V.1 <- vcov(model1)                                  # save var-cov matrix
se2.1 <- unname(rowSums((X.1 %*% V.1) * X.1))        # prediction var
alpha.1 <- qt((1 - 0.95)/2, df = model1$df.residual) # 5% level
pred.1 + c(alpha.1, -alpha.1) * sqrt(se2.1)          # 95% CI
# [1] 18258.18 18614.18
sigma2.1 <- sum(model1$residuals^2) / model1$df.residual       # sigma^2
PI.1 <- pred.1 + c(alpha.1, -alpha.1) * sqrt(se2.1 + sigma2.1) # prediction interval
matrix(c(pred.1, PI.1), nrow = 1,
       dimnames = list(1, c("fit", "lwr", "upr")))
#        fit      lwr      upr
# 1 18436.18 2339.335 34533.03
```
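The idea is that the variance of the fitted mean is x'Vx (computed here via rowSums((X %*% V) * X)), while the prediction interval additionally accounts for the residual variance sigma^2, hence sqrt(se2 + sigma2). Assuming model1 as sketched above, the confidence part can be cross-checked against predict.lm():

```r
## Cross-check of the manual 95% CI (assumes model1 from the sketch above):
predict(model1, newdata = data.frame(age = 40, nodeg = 0, marr = 1),
        interval = "confidence")
# should reproduce the CI [18258.18, 18614.18] computed manually
```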
Now, applying this to your related example with multiple fixed effects, we get the following results:
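model2 is not defined in this excerpt; I assume it is the felm() fit from the question, with age as the regressor and nodeg and marr projected out as fixed effects, roughly:

```r
## Assumed definition of model2 (taken to be the felm() fit from the question):
library(lfe)
model2 <- felm(re74 ~ age | factor(nodeg) + factor(marr), data = cps1)
```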
```r
lm.model <- lm(re74 ~ age,
               data = demeanlist(cps1[, c(8, 2)],
                                 list(as.factor(cps1$nodeg), as.factor(cps1$marr))))
fe <- getfe(model2)
predict(lm.model, newdata = data.frame(age = 40)) + fe$effect[fe$idx == "1"]
```
The first value is with, and the second without, the FE (try fe$effect[fe$idx == "1"] on its own).
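To see what that index refers to, it can help to inspect the getfe() output, which contains one row per fixed-effect level (the effect and idx columns are the ones used above):

```r
## One row per fixed-effect level; `effect` holds the estimated level effect
## and `idx` the level label used in the subsetting above.
head(fe)
```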
Now we follow the manual approach above.
```r
beta.hat <- coef(model2)                           # coefficient
x <- 40                                            # age = 40
pred <- as.numeric(x %*% beta.hat)                 # prediction
V <- model2$vcv                                    # var/cov matrix
se2 <- unname(rowSums((x %*% V) * x))              # prediction var
alpha <- qt((1 - 0.95)/2, df = model2$df.residual) # 5% level
pred + c(alpha, -alpha) * sqrt(se2)                # CI
# [1]  9599.733 10630.697
sigma2 <- sum(model2$residuals^2) / model2$df.residual  # sigma^2
PI <- pred + c(alpha, -alpha) * sqrt(se2 + sigma2)      # PI
matrix(c(pred, PI), nrow = 1,
       dimnames = list(1, c("fit", "lwr", "upr")))      # output
#        fit       lwr      upr
# 1 10115.21 -5988.898 26219.33
```
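As an aside, the reason the demeaned lm() and felm() agree is the within (Frisch-Waugh-Lovell) transformation. A quick sketch to compare the two slope estimates, assuming lm.model and model2 from above:

```r
## The age slope from felm() and from the lm() fit on the demeaned data
## should coincide up to the demeaning tolerance.
c(felm = unname(coef(model2)["age"]),
  demeaned_lm = unname(coef(lm.model)["age"]))
```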
As we can see, the fit is the same as in the example approach above, but now comes with a prediction interval. (Disclaimer: while the logic of the approach should be sound, the PI values should still be validated, for example against Stata's reghdfe.)
Edit: If you want to get exactly the same result from felm() that predict.lm() gives for the linear model1, you just need to keep the "fixed effects" as ordinary regressors in the model formula (see model3 below) and follow the same approach as above. For convenience, you can wrap it in a function.
```r
library(DAAG)
library(lfe)

model3 <- felm(re74 ~ age + nodeg + marr, data = cps1)

pv <- c(40, 0, 1)  # prediction x-values

predict0.felm <- function(mod, pv. = pv) {
  beta.hat <- coef(mod)                                # coefficients
  x <- cbind(1, matrix(pv., ncol = 3))                 # prediction vector
  pred <- as.numeric(x %*% beta.hat)                   # prediction
  V <- mod[["vcv"]]                                    # var/cov matrix
  se2 <- unname(rowSums((x %*% V) * x))                # prediction var
  alpha <- qt((1 - 0.95)/2, df = mod[["df.residual"]]) # 5% level
  CI <- structure(pred + c(alpha, -alpha) * sqrt(se2),
                  names = c("CI lwr", "CI upr"))       # CI
  sigma2 <- sum(mod[["residuals"]]^2) / mod[["df.residual"]]  # sigma^2
  PI <- pred + c(alpha, -alpha) * sqrt(se2 + sigma2)          # PI
  mx <- matrix(c(pred, PI), nrow = 1,
               dimnames = list(1, c("PI fit", "PI lwr", "PI upr")))  # output
  list(CI, mx)
}

predict0.felm(model3)[[2]]
#     PI fit   PI lwr   PI upr
# 1 18436.18 2339.335 34533.03
```
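The first list element holds the confidence interval; for model3 it should match the 95% CI computed manually for model1 above:

```r
predict0.felm(model3)[[1]]
# should match the manual 95% CI above: 18258.18 18614.18
```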
With felm() you can achieve the same prediction interval as with predict.lm().