Get estimates from a maximum likelihood fit into a stargazer table

Stargazer produces very nice LaTeX tables for lm objects (and others). Suppose I've fit a model by maximum likelihood. I'd like stargazer to produce an lm-like table for my estimates. How can I do this?

Although it's a bit hacky, one way would be to create a "fake" lm object containing my estimates; I think that would work as long as summary(my.fake.lm.object) works. Is that easy to do?

Example:

library(stargazer)

N <- 200
df <- data.frame(x=runif(N, 0, 50))
df$y <- 10 + 2 * df$x + 4 * rt(N, 4)  # True params
plot(df$x, df$y)

model1 <- lm(y ~ x, data=df)
stargazer(model1, title="A Model")  # I'd like to produce a similar table for the model below

ll <- function(params) {
    ## Log likelihood for y ~ x + student t errors
    params <- as.list(params)
    return(sum(dt((df$y - params$const - params$beta*df$x) / params$scale,
                  df=params$degrees.freedom, log=TRUE) - log(params$scale)))
}

model2 <- optim(par=c(const=5, beta=1, scale=3, degrees.freedom=5),
                lower=c(-Inf, -Inf, 0.1, 0.1), fn=ll, method="L-BFGS-B",
                control=list(fnscale=-1), hessian=TRUE)
model2.coefs <- data.frame(coefficient=names(model2$par),
                           value=as.numeric(model2$par),
                           se=as.numeric(sqrt(diag(solve(-model2$hessian)))))

stargazer(model2.coefs, title="Another Model", summary=FALSE)
# Works, but how can I mimic what stargazer does with lm objects?

To be more precise: with lm objects, stargazer nicely prints the dependent variable at the top of the table, puts the SEs in parentheses below the corresponding estimates, and reports R^2 and the number of observations at the bottom of the table. Is there a (reasonably simple) way to get the same behavior with a "custom" model estimated by maximum likelihood, as described above?

Here is my weak attempt at dressing up my optim output as an lm object:

model2.lm <- list()  # Mimic an lm object
class(model2.lm) <- c(class(model2.lm), "lm")
model2.lm$rank <- model1$rank  # Problematic?
model2.lm$coefficients <- model2$par
names(model2.lm$coefficients)[1:2] <- names(model1$coefficients)
model2.lm$fitted.values <- model2$par["const"] + model2$par["beta"]*df$x
model2.lm$residuals <- df$y - model2.lm$fitted.values
model2.lm$model <- df
model2.lm$terms <- model1$terms  # Problematic?
summary(model2.lm)  # Not working
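A guess at why summary() fails here: summary.lm() relies on components such as qr and df.residual that the hand-built list never sets. Borrowing them, as in the rough sketch below, at least lets summary() run (with a warning), but the two-column qr taken from model1 does not match the four-parameter fit, so the resulting statistics should not be trusted.

# Rough patch (an assumption, not a real fix): supply the pieces summary.lm() looks for.
model2.lm$df.residual <- N - length(model2$par)
model2.lm$qr <- model1$qr   # borrowed from the two-coefficient lm fit, only roughly appropriate
summary(model2.lm)          # runs, but only shows const/beta and uses lm-style SEs, not the ML ones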
optimization r lm stargazer
Jan 24 '14 at 17:17
3 answers

I don't know how committed you are to using stargazer, but you could try the broom and xtable packages. The problem is that this won't give you standard errors for the model fit with optim:

library(broom)
library(xtable)

xtable(tidy(model1))
xtable(tidy(model2))
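If the missing standard errors matter, one workaround (a sketch, not something this answer covers) is to build the table by hand, essentially reusing the model2.coefs idea from the question but with broom-style column names, and pass that data frame to xtable:

# Assemble estimates and Hessian-based standard errors manually,
# mimicking the columns that broom's tidy() would produce.
ml.tidy <- data.frame(
  term      = names(model2$par),
  estimate  = unname(model2$par),
  std.error = sqrt(diag(solve(-model2$hessian)))
)
xtable(ml.tidy)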
Jun 29 '15 at 18:54

I had this problem and got around it with the coef, se, and omit arguments within stargazer... for example:

stargazer(regressions, ...,
          coef = list(... list of coefs ...),
          se = list(... list of standard errors ...),
          omit = c(sequence),
          covariate.labels = c("new names"),
          dep.var.labels.include = FALSE,
          notes.append = FALSE)
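Applied to the question's example, a sketch of this override approach (my reading of it, with model1 used purely as a display shell) might look like the code below. Note that the observations and R^2 reported at the bottom would still come from model1, and, if I recall the defaults correctly, stargazer recomputes the significance stars from the supplied coefficients and standard errors.

# Replace model1's displayed coefficients/SEs with the ML estimates of
# const and beta; naming the vectors after model1's terms lets stargazer
# match them to the right rows. (scale and degrees.freedom are left out here.)
repl.coef <- setNames(unname(model2$par[c("const", "beta")]), names(coef(model1)))
repl.se   <- setNames(model2.coefs$se[1:2],                   names(coef(model1)))

stargazer(model1,
          coef = list(repl.coef),
          se   = list(repl.se),
          title = "ML estimates via coef/se overrides")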
Dec 09 '15 at 0:37

First you need to create an instance of a dummy lm object, and then dress it up:

# ...
model2.lm = lm(y ~ ., data.frame(y=runif(5), beta=runif(5),
                                 scale=runif(5), degrees.freedom=runif(5)))
model2.lm$coefficients <- model2$par
model2.lm$fitted.values <- model2$par["const"] + model2$par["beta"]*df$x
model2.lm$residuals <- df$y - model2.lm$fitted.values

stargazer(model2.lm, se = list(model2.coefs$se), summary=FALSE, type='text')

# ===============================================
#                        Dependent variable:
#                     ---------------------------
#                                  y
# -----------------------------------------------
# const                        10.127***
#                               (0.680)
#
# beta                          1.995***
#                               (0.024)
#
# scale                         3.836***
#                               (0.393)
#
# degrees.freedom               3.682***
#                               (1.187)
#
# -----------------------------------------------
# Observations                    200
# R2                             0.965
# Adjusted R2                    0.858
# Residual Std. Error      75.581 (df = 1)
# F Statistic              9.076 (df = 3; 1)
# ===============================================
# Note:              *p<0.1; **p<0.05; ***p<0.01

(and, of course, make sure that the remaining summary statistics are correct)
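One small addition (an assumption on my part, since the answer does not show it): if fixing up the remaining statistics is more trouble than it is worth, stargazer's omit.stat argument can simply drop the misleading lm-based ones from the table:

# Suppress the fit statistics inherited from the 5-row dummy lm fit.
stargazer(model2.lm, se = list(model2.coefs$se),
          omit.stat = c("rsq", "adj.rsq", "ser", "f"),
          type = "text")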

Dec 19 '16 at 12:32


