The two proposed approaches cannot cope with large data sets because, among other memory problems, they create is.na(df), which will be as large as df itself.
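To see why, compare the footprint of a data frame with that of its is.na() mask; a minimal sketch with illustrative integer data (the sizes below are not from the original benchmark):

df <- data.frame(replicate(30, sample(c(1:8, NA), 1e6, TRUE)))
print(object.size(df), units = "MB")        # ~114 MB of integer data
print(object.size(is.na(df)), units = "MB") # a logical matrix of the same ~114 MB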
Here are two approaches that are more memory- and time-efficient.
Filter Approach
Filter(function(x)!all(is.na(x)), df)
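For instance, on a small data frame where one column is entirely NA (a toy example, not from the original answer), Filter keeps only the columns for which the predicate is TRUE and returns a data frame:

# toy data: column b is all NA and should be dropped
df <- data.frame(a = 1:3, b = NA, c = c("x", NA, "z"))
Filter(function(x) !all(is.na(x)), df)
##   a    c
## 1 1    x
## 2 2 <NA>
## 3 3    z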
and a data.table approach (for overall time and memory efficiency)
library(data.table)
DT <- as.data.table(df)
# keep only the columns that are not entirely NA
DT[, which(unlist(lapply(DT, function(x) !all(is.na(x))))), with = FALSE]
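Roughly the same selection can also be written with .SDcols; a sketch of an equivalent spelling (assuming a reasonably recent data.table and the DT built above), not part of the original answer:

# compute the names of columns that are not entirely NA, then select via .SDcols
keep_cols <- names(DT)[vapply(DT, function(x) !all(is.na(x)), logical(1))]
DT[, .SD, .SDcols = keep_cols]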
Examples using big data (30 columns, 1e6 rows):
big_data <- replicate(10,
                      data.frame(rep(NA, 1e6),
                                 sample(c(1:8, NA), 1e6, TRUE),
                                 sample(250, 1e6, TRUE)),
                      simplify = FALSE)
bd <- do.call(data.frame, big_data)
names(bd) <- paste0('X', seq_len(30))
DT <- as.data.table(bd)

system.time({df1 <- bd[, colSums(is.na(bd)) < nrow(bd)]})
# error -- cannot allocate vector of size ...

system.time({df2 <- bd[, !apply(is.na(bd), 2, all)]})
# error -- cannot allocate vector of size ...

system.time({df3 <- Filter(function(x) !all(is.na(x)), bd)})
##    user  system elapsed
##    0.26    0.03    0.29

system.time({DT1 <- DT[, which(unlist(lapply(DT, function(x) !all(is.na(x))))), with = FALSE]})
##    user  system elapsed
##    0.14    0.03    0.18
— mnel