I have a fix, possibly based on mnel's comments.
    dat <- readLines(paste("sfa", "0910", ".csv", sep = ""))
    # count the commas (delimiters) on each line
    ncommas <- sapply(seq_along(dat), function(x) {
      sum(attributes(gregexpr(",", dat[x])[[1]])$match.length)
    })
    > head(ncommas)
    [1] 450 451 451 451 451 451
Every line after the first (the header) has one extra delimiter, which Excel silently ignores.
    # strip the trailing comma from every data line
    # (.* is greedy, so it matches up to the last comma)
    for (i in seq_along(dat)[-1]) {
      dat[i] <- gsub("(.*),", "\\1", dat[i])
    }
    write(dat, "temp.csv")
    tmp <- read.table("temp.csv", header = TRUE, stringsAsFactors = FALSE, sep = ",")
    > tmp[1:5, 1:7]
      UNITID XSCUGRAD SCUGRAD XSCUGFFN SCUGFFN XSCUGFFP SCUGFFP
    1 100654        R    4496        R    1044        R      23
    2 100663        R   10646        R    1496        R      14
    3 100690        R     380        R       5        R       1
    4 100706        R    6119        R     774        R      13
    5 100724        R    4638        R    1209        R      26
Moral of the story: listen to Joshua Ulrich ;)
Quick fix: open the file in Excel and re-save it. That will also remove the extra delimiters.
As an alternative:
    # read only the header line to recover the column names
    dat <- readLines(paste("sfa", "0910", ".csv", sep = ""), n = 1)
    dum.names <- unlist(strsplit(dat, ","))
    # read the data with an extra dummy column ('XXXX') to absorb the trailing delimiter
    tmp <- read.table(paste("sfa", "0910", ".csv", sep = ""), header = FALSE,
                      stringsAsFactors = FALSE, col.names = c(dum.names, "XXXX"),
                      sep = ",", skip = 1)
    # drop the dummy last column
    tmp1 <- tmp[, -dim(tmp)[2]]