My first approach would be to simply scrape the links to the relevant data files and use that information to build the full download path, including user logins and so on. As others have suggested, lapply would be convenient for batch downloading.
Here is an easy way to extract the URLs. Obviously, modify the example to suit your actual scenario.
Here we are going to use the XML package to identify all the links available in the CRAN archives for the Amelia package ( http://cran.r-project.org/src/contrib/Archive/Amelia/ ).
> library(XML)
> url <- "http://cran.r-project.org/src/contrib/Archive/Amelia/"
> doc <- htmlParse(url)
> links <- xpathSApply(doc, "//a/@href")
> free(doc)
> links
                   href                    href                    href 
             "?C=N;O=D"              "?C=M;O=A"              "?C=S;O=A" 
                   href                    href                    href 
             "?C=D;O=A" "/src/contrib/Archive/"  "Amelia_1.1-23.tar.gz" 
                   href                    href                    href 
 "Amelia_1.1-29.tar.gz"  "Amelia_1.1-30.tar.gz"  "Amelia_1.1-32.tar.gz" 
                   href                    href                    href 
 "Amelia_1.1-33.tar.gz"   "Amelia_1.2-0.tar.gz"   "Amelia_1.2-1.tar.gz" 
                   href                    href                    href 
  "Amelia_1.2-2.tar.gz"   "Amelia_1.2-9.tar.gz"  "Amelia_1.2-12.tar.gz" 
                   href                    href                    href 
 "Amelia_1.2-13.tar.gz"  "Amelia_1.2-14.tar.gz"  "Amelia_1.2-15.tar.gz" 
                   href                    href                    href 
 "Amelia_1.2-16.tar.gz"  "Amelia_1.2-17.tar.gz"  "Amelia_1.2-18.tar.gz" 
                   href                    href                    href 
  "Amelia_1.5-4.tar.gz"   "Amelia_1.5-5.tar.gz"   "Amelia_1.6.1.tar.gz" 
                   href                    href                    href 
  "Amelia_1.6.3.tar.gz"   "Amelia_1.6.4.tar.gz"     "Amelia_1.7.tar.gz" 
To demonstrate, imagine that in the end we only need links for version 1.2 of the package.
> wanted <- links[grepl("Amelia_1\\.2.*", links)]
> wanted
                   href                    href                    href 
  "Amelia_1.2-0.tar.gz"   "Amelia_1.2-1.tar.gz"   "Amelia_1.2-2.tar.gz" 
                   href                    href                    href 
  "Amelia_1.2-9.tar.gz"  "Amelia_1.2-12.tar.gz"  "Amelia_1.2-13.tar.gz" 
                   href                    href                    href 
 "Amelia_1.2-14.tar.gz"  "Amelia_1.2-15.tar.gz"  "Amelia_1.2-16.tar.gz" 
                   href                    href 
 "Amelia_1.2-17.tar.gz"  "Amelia_1.2-18.tar.gz" 
Now you can use this vector as follows:
wanted <- links[grepl("Amelia_1\\.2.*", links)]
GetMe <- paste(url, wanted, sep = "")
lapply(seq_along(GetMe),
       function(x) download.file(GetMe[x], wanted[x], mode = "wb"))
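If you would rather keep the downloaded files together in a dedicated folder, one small variation (the folder name "Amelia-archive" below is just an example, not part of the original answer) would be:

## download into a dedicated folder instead of the working directory
dest.dir <- "Amelia-archive"                 # example name, adjust to taste
if (!file.exists(dest.dir)) dir.create(dest.dir)
lapply(seq_along(GetMe),
       function(x) download.file(GetMe[x],
                                 file.path(dest.dir, wanted[x]),
                                 mode = "wb"))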
Update (to address your question in the comments)
The last step in the example above downloads the specified files into your current working directory (use getwd() to check where that is). If you know for sure that read.csv will work with your data, you can also try changing the anonymous function to read the files directly:
lapply(seq_along(GetMe), function(x) read.csv(GetMe[x], header = TRUE, sep = "|", as.is = TRUE))
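If the files all share the same columns, you could then stack the pieces into a single data frame. This is only a sketch and assumes the read.csv settings above work for every file:

## read each URL and stack the results; assumes identical column layouts
all.data <- lapply(seq_along(GetMe),
                   function(x) read.csv(GetMe[x], header = TRUE,
                                        sep = "|", as.is = TRUE))
combined <- do.call(rbind, all.data)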
However, I think a safer approach might be to first download all the files into one directory and then use read.delim or read.csv or whatever works for reading in your data, along the lines of what @Andreas suggested. I say "safer" because it gives you more flexibility in case a file is not fully downloaded and so on. In that situation, instead of having to start everything over, you only need to download the files that did not come through.
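A rough sketch of that idea, using only base R and the objects defined above: re-fetch only the files that are missing, then read the local copies. Note that a plain file.exists check only catches missing files, not partially downloaded ones, so in practice you may want a stricter check (for example on file size).

## re-run safely: fetch only the files that are not already on disk
for (i in seq_along(GetMe)) {
  if (!file.exists(wanted[i])) {
    download.file(GetMe[i], wanted[i], mode = "wb")
  }
}
## then read the local copies instead of the URLs
data.list <- lapply(wanted,
                    function(f) read.csv(f, header = TRUE,
                                         sep = "|", as.is = TRUE))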