Multinomial logit in R: mlogit vs nnet

I want to run multinomial logits in R using two libraries, nnet and mlogit, which give different results and report different types of statistics. My questions:

  • What is the source of the discrepancy between the coefficients and standard errors reported by nnet and those reported by mlogit?

  • I would like to export my results to a LaTeX file using stargazer. This creates a problematic trade-off:

    • If I use the results from mlogit, then I get the statistics I need, for example the pseudo R-squared, but the output is in a long format (see the example below).

    • If I use the results from nnet, then the format is as expected, but it reports statistics I am not interested in, such as AIC, and does not include, for example, the pseudo R-squared.

    I would like the statistics displayed by mlogit in nnet's formatting when I use stargazer.

Here is a reproducible example with three alternatives:

```r
library(mlogit)
df = data.frame(c(0,1,1,2,0,1,0), c(1,6,7,4,2,2,1), c(683,276,756,487,776,100,982))
colnames(df) <- c('y', 'col1', 'col2')
mydata = df
mldata <- mlogit.data(mydata, choice="y", shape="wide")
mlogit.model1 <- mlogit(y ~ 1 | col1 + col2, data=mldata)
```

The compiled TeX output has what I call the "long format", which I find undesirable:

[screenshot: stargazer TeX output in the long format]

Now using nnet:

```r
library(nnet)
library(stargazer)
mlogit.model2 = multinom(y ~ 1 + col1 + col2, data=mydata)
stargazer(mlogit.model2)
```

This gives the TeX output:

[screenshot: stargazer TeX output in the wide format]

which has the "wide" format that I desire. Note the coefficients and standard errors.

r non-linear-regression stargazer multinomial mlogit

2 answers

As far as I know, there are three R packages that allow you to estimate a multinomial logistic regression model: mlogit, nnet and globaltest (from Bioconductor). I do not consider the mnlogit package, a faster and more efficient implementation of mlogit.
All of the above packages use different estimation algorithms that give different results for small samples. These differences disappear at moderate sample sizes (try n <- 100 ).
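A minimal sketch of that claim (my own illustration, not from the original answer: a simple three-category data-generating process, fitted with both nnet and mlogit; the sorted-coefficient comparison is a crude way to align the two packages' different layouts):

```r
library(nnet)
library(mlogit)

set.seed(1)
n <- 100
d <- data.frame(x = runif(n))
# Linear predictors with category 1 as the base; draw y from the implied probabilities
p   <- exp(cbind(0, 1 - 2 * d$x, -1 + d$x))
d$y <- apply(p, 1, function(r) sample(3, 1, prob = r))

cf.nnet   <- coef(multinom(y ~ x, d, trace = FALSE))
cf.mlogit <- coef(mlogit(y ~ 1 | x, data = mlogit.data(d, choice = "y", shape = "wide")))

# nnet returns a 2 x 2 matrix, mlogit a named vector; compare the sorted values.
# The differences should be near zero at this sample size.
round(sort(c(cf.nnet)) - sort(cf.mlogit), 3)
```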
Consider the following data-generating process from James Cairsted's blog:

```r
n <- 40
set.seed(4321)
df1 <- data.frame(x1=runif(n,0,100), x2=runif(n,0,100))
df1 <- transform(df1, y=1+ifelse(100 - x1 - x2 + rnorm(n,sd=10) < 0, 0,
                         ifelse(100 - 2*x2 + rnorm(n,sd=10) < 0, 1, 2)))
str(df1)
'data.frame':   40 obs. of  3 variables:
 $ x1: num  33.48 90.91 41.15 4.38 76.35 ...
 $ x2: num  68.6 42.6 49.9 36.1 49.6 ...
 $ y : num  1 1 3 3 1 1 1 1 3 3 ...
table(df1$y)
 1  2  3 
19  8 13 
```

The model parameters estimated by the three packages, respectively:

```r
library(mlogit)
df2 <- mlogit.data(df1, choice="y", shape="wide")
mlogit.mod <- mlogit(y ~ 1 | x1+x2, data=df2)
(mlogit.cf <- coef(mlogit.mod))
2:(intercept) 3:(intercept)          2:x1          3:x1          2:x2          3:x2 
   42.7874653    80.9453734    -0.5158189    -0.6412020    -0.3972774    -1.0666809 

library(nnet)
nnet.mod <- multinom(y ~ x1 + x2, df1)
(nnet.cf <- coef(nnet.mod))
  (Intercept)         x1         x2
2    41.51697 -0.5005992 -0.3854199
3    77.57715 -0.6144179 -1.0213375

library(globaltest)
glbtest.mod <- globaltest::mlogit(y ~ x1+x2, data=df1)
(cf <- glbtest.mod@coefficients)
                      1          2          3
(Intercept) -41.2442934  1.5431814 39.7011119
x1            0.3856738 -0.1301452 -0.2555285
x2            0.4879862  0.0907088 -0.5786950
```

globaltest fits the model without singling out a reference outcome category, so the usual parameters can be recovered as differences from category 1:

```r
(glbtest.cf <- rbind(cf[,2]-cf[,1], cf[,3]-cf[,1]))
     (Intercept)         x1         x2
[1,]    42.78747 -0.5158190 -0.3972774
[2,]    80.94541 -0.6412023 -1.0666813
```
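In notation (standard multinomial-logit identifiability, not specific to any package), the differencing above just re-expresses each category's coefficient vector relative to category 1:

$$\tilde\beta_j = \beta_j^{\text{glb}} - \beta_1^{\text{glb}}, \qquad j = 2, 3,$$

which is why these differences reproduce the mlogit estimates up to rounding.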

Regarding parameter estimation in the three packages, the method used in mlogit::mlogit is explained in detail here . In nnet::multinom , the model is a neural network with no hidden layer, skip-layer connections, and a softmax output layer; in our case there are 3 input units and 3 output units:

```r
nnet:::summary.nnet(nnet.mod)
a 3-0-3 network with 12 weights
options were - skip-layer connections  softmax modelling 
 b->o1 i1->o1 i2->o1 i3->o1 
  0.00   0.00   0.00   0.00 
 b->o2 i1->o2 i2->o2 i3->o2 
  0.00  41.52  -0.50  -0.39 
 b->o3 i1->o3 i2->o3 i3->o3 
  0.00  77.58  -0.61  -1.02 
```

Maximum conditional likelihood is the method used by multinom to fit the model.
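Concretely (standard multinomial-logit notation, my addition rather than the package documentation's): with category 1 as the reference, multinom maximizes

$$\ell(\beta) = \sum_{i=1}^{n} \sum_{j=1}^{3} y_{ij} \, \log \frac{\exp(x_i^\top \beta_j)}{\sum_{k=1}^{3} \exp(x_i^\top \beta_k)}, \qquad \beta_1 \equiv 0,$$

where $y_{ij}$ indicates that observation $i$ falls in category $j$. The softmax output layer in the network above is exactly this ratio of exponentials.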
Parameters of multinomial logit models are estimated in globaltest::mlogit by maximum likelihood, working with an equivalent log-linear model and a Poisson likelihood. The method is described here .

For models fitted by multinom , the pseudo-R-squared can easily be calculated as follows:

```r
nnet.mod.loglik <- nnet:::logLik.multinom(nnet.mod)
nnet.mod0 <- multinom(y ~ 1, df1)
nnet.mod0.loglik <- nnet:::logLik.multinom(nnet.mod0)
(nnet.mod.mfr2 <- as.numeric(1 - nnet.mod.loglik/nnet.mod0.loglik))
[1] 0.8483931
```
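The quantity computed here is McFadden's pseudo-R-squared,

$$R^2_{\text{McF}} = 1 - \frac{\ell(\hat\beta)}{\ell(\hat\beta_0)},$$

where $\hat\beta_0$ is the intercept-only (null) fit: one minus the ratio of the full model's log-likelihood to the null model's.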

At this point, using stargazer , I create a report for the model estimated by mlogit::mlogit that is as similar as possible to the multinom report.
The main idea is to substitute the estimated coefficients and fitted probabilities in the object created by multinom with the corresponding mlogit estimates.

```r
# Substitution of coefficients: wts is a 4 x 3 weight matrix
# (rows: bias, intercept, x1, x2; columns: output units o1..o3)
nnet.mod2 <- nnet.mod
cf <- matrix(nnet.mod2$wts, nrow=4)
cf[2:nrow(cf), 2:ncol(cf)] <- t(matrix(mlogit.cf, nrow=2))
nnet.mod2$wts <- c(cf)
# Substitution of fitted probabilities
nnet.mod2$fitted.values <- mlogit.mod$probabilities
```

Here is the result:

```r
library(stargazer)
stargazer(nnet.mod2, type="text")

==============================================
                      Dependent variable:     
                  ----------------------------
                        2              3      
                       (1)            (2)     
----------------------------------------------
x1                   -0.516**      -0.641**   
                     (0.212)        (0.305)   
x2                   -0.397**      -1.067**   
                     (0.176)        (0.519)   
Constant             42.787**      80.945**   
                     (18.282)      (38.161)   
----------------------------------------------
Akaike Inf. Crit.     24.623        24.623    
==============================================
Note:              *p<0.1; **p<0.05; ***p<0.01
```

Now I am working on the last question: how to show the log-likelihood, pseudo-R-squared and other statistics in the stargazer output.
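One possible route (my sketch, not part of the original answer): stargazer's omit.stat argument can drop the unwanted AIC row, and add.lines can append arbitrary rows of text, so the mlogit log-likelihood and McFadden R-squared can be injected manually. This assumes the objects nnet.mod2 , mlogit.mod and df1 built above are still in the workspace:

```r
library(stargazer)

# Log-likelihoods of the full mlogit fit and the intercept-only nnet fit
ll  <- as.numeric(logLik(mlogit.mod))
ll0 <- as.numeric(nnet:::logLik.multinom(multinom(y ~ 1, df1, trace = FALSE)))
mcfadden <- 1 - ll / ll0

stargazer(nnet.mod2, type = "text",
          omit.stat = "aic",                      # drop the AIC row
          add.lines = list(                       # append custom rows
            c("Log-likelihood", rep(round(ll, 3), 2)),
            c("McFadden R2",    rep(round(mcfadden, 3), 2))))
```

The vectors passed to add.lines need one label plus one entry per model column, hence the rep(..., 2) for the two outcome categories.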


If you use stargazer, you can use omit to remove unwanted coefficient rows. Here is a quick example; I hope it points you in the right direction.

My guess is that you are using RStudio and rmarkdown with knitr.

    ```{r, echo=FALSE}
    library(mlogit)
    df = data.frame(c(0,1,1,2,0,1,0), c(1,6,7,4,2,2,1), c(683,276,756,487,776,100,982))
    colnames(df) <- c('y', 'col1', 'col2')
    mydata = df
    mldata <- mlogit.data(mydata, choice = "y", shape = "wide")
    mlogit.model1 <- mlogit(y ~ 1 | col1 + col2, data = mldata)
    mlogit.col1 <- mlogit(y ~ 1 | col1, data = mldata)
    mlogit.col2 <- mlogit(y ~ 1 | col2, data = mldata)
    ```

    # MLOGIT

    ```{r echo = FALSE, message = TRUE, error = TRUE, warning = FALSE, results = 'asis'}
    library(stargazer)
    stargazer(mlogit.model1, type = "html")
    stargazer(mlogit.col1, mlogit.col2, type = "html",
              omit = c("1:col1", "2:col1", "1:col2", "2:col2"))
    ```

Result:

[screenshot: first stargazer HTML table, all coefficients shown]

Note that in the second table, 1:col1, 2:col1, 1:col2 and 2:col2 are omitted.

[screenshot: second stargazer HTML table with the col1/col2 coefficient rows omitted]

