I have a question about removing leading and trailing spaces from columns of a data.frame or data.table.
I have working solutions, but I'm trying to speed up my code.
Here are some sample data:
number_strings <- paste(" ", seq(from = 1, to = 100000, by = 1), " ", sep = "")
data <- as.data.frame(matrix(number_strings, nrow = length(number_strings), ncol = 10), stringsAsFactors = FALSE)
colnames(data) <- paste("Col", seq(from = 1, to = ncol(data), by = 1), sep = "")
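For reference, the values are just the row numbers padded with one leading and one trailing space:
head(number_strings, 3)
# [1] " 1 " " 2 " " 3 "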
Here are the columns that I would like to trim:
odd_columns <- paste("Col",seq(from=1, to=ncol(data), by=2),sep="")
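That is every other column, so odd_columns contains:
odd_columns
# [1] "Col1" "Col3" "Col5" "Col7" "Col9"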
Here are the three working options I have so far:
f_trim_for <- function(x, cols){
  # trim() is assumed here to be gdata::trim()
  for(i in seq_along(cols))
  {
    x[, cols[i]] <- trim(x[, cols[i]])
  }
  return(x)
}
system.time(data1 <- f_trim_for(data,odd_columns))
f_gsub_for <- function(x, cols){
  for(i in seq_along(cols))
  {
    x[, cols[i]] <- gsub("^\\s+|\\s+$", "", x[, cols[i]], perl = TRUE)
  }
  return(x)
}
system.time(data2 <- f_gsub_for(data,odd_columns))
f_trim_dt <- function(x, cols){
  # trim() appears to handle .SD (a list of columns) column-wise
  data.table(x)[, (cols) := trim(.SD), .SDcols = cols]
}
system.time(data3 <- f_trim_dt(data,odd_columns))
Here are the timings:
             user  system  elapsed
f_trim_for   1.50    0.08     1.92
f_gsub_for   0.75    0.00     0.74
f_trim_dt    0.81    0.00     1.17
My question is: are there other approaches I haven't thought of that could be faster?
My actual data has 1.5 million rows and 110 columns, so speed is a serious concern.
I also tried the following options, but they do not work:
f_gsub_dt <- function(x, cols){
  # does not work: gsub() coerces .SD (a list of columns) to a single character
  # vector, so the columns are not trimmed element-wise
  data.table(x)[, (cols) := gsub("^\\s+|\\s+$", "", .SD, perl = TRUE), .SDcols = cols]
}
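Here is a sketch of a column-wise variant that should avoid that coercion issue (applying gsub() to each column via lapply(); the name f_gsub_dt2 is just a placeholder, and data.table is assumed to be loaded as in the functions above):
f_gsub_dt2 <- function(x, cols){
  # apply gsub() per column over .SD instead of over the whole list
  data.table(x)[, (cols) := lapply(.SD, function(v) gsub("^\\s+|\\s+$", "", v, perl = TRUE)),
                .SDcols = cols]
}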
f_set_dt <- function(x, cols){
  for (j in cols)
  {
    # does not work: the second argument of set() should be row indices (or NULL),
    # not the column values, and gsub() is applied to the column name j
    # instead of the column contents x[[j]]
    set(x, x[[j]], j, gsub("^\\s+|\\s+$", "", j, perl = TRUE))
  }
  return(x)
}
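Here is a sketch of what I believe the set() version should look like (x converted to a data.table so set() can update it by reference; the name f_set_dt2 is just a placeholder):
f_set_dt2 <- function(x, cols){
  x <- as.data.table(x)
  for (j in cols)
  {
    # replace each column with its trimmed values, by reference
    set(x, i = NULL, j = j, value = gsub("^\\s+|\\s+$", "", x[[j]], perl = TRUE))
  }
  return(x)
}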