Scraping a JavaScript-rendered website

I can scrape data from basic HTML pages, but I am having trouble scraping the site below. It seems that the data is rendered through JavaScript, and I'm not sure how to approach this. I would prefer to do the scraping in R if possible, but I can also use Python.

Any ideas / suggestions?

Edit: I need to capture the year/manufacturer/model, S/N, price, location, and a brief description (starting with "Auction:") for each listing.

http://www.machinerytrader.com/list/list.aspx?bcatid=4&DidSearch=1&EID=1&LP=MAT&ETID=5&catid=1015&mdlx=Contains&Cond=All&SO=26&btnSearch=Search&units=imperial

2 answers
library(XML)
library(relenium)

## download the website: relenium drives a real Firefox session, so the
## JavaScript-generated tables are present in the page source
website <- firefoxClass$new()
website$get("http://www.machinerytrader.com/list/list.aspx?pg=1&bcatid=4&DidSearch=1&EID=1&LP=MAT&ETID=5&catid=1015&mdlx=Contains&Cond=All&SO=26&btnSearch=Search&units=imperial")
doc <- htmlParse(website$getPageSource())

## read the tables and bind the information: the listings alternate between a
## header table (year/manufacturer/model, S/N, price, location) and a content
## table (the "Auction:" description), hence the stepped indexing below
tables <- readHTMLTable(doc, stringsAsFactors = FALSE)
data <- do.call("rbind", tables[seq(from = 8, to = 56, by = 2)])
data <- cbind(data, sapply(lapply(tables[seq(from = 9, to = 57, by = 2)], '[[', i = 2), '[', 1))
rownames(data) <- NULL
names(data) <- c("year.man.model", "sn", "price", "location", "auction")

This will give you what you want for the first page (only the first two rows are shown here):

head(data, 2)
      year.man.model        sn      price location                                               auction
1 1972 AMERICAN 5530  GS14745W US $50,100       MI Auction: 1/9/2013; 4,796 Hours; ..
2 AUSTIN-WESTERN 307       307  US $3,400       MT Auction: 12/18/2013; AUSTIN-WESTERN track excavator.

To get all the pages, simply loop over them by inserting pg=i into the URL, as sketched below.
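A minimal sketch of that loop, reusing the relenium session and the column names from above. It assumes the alternating table indices (8–56 and 9–57) are the same on every result page, and the page range 1:2 is just a placeholder:

## hedged sketch: loop over result pages by substituting pg=i into the URL;
## the page range (1:2) is a placeholder, not taken from the site, and the
## table indices are assumed to match those of the first page
all_pages <- list()
for (i in 1:2) {
  url <- sprintf("http://www.machinerytrader.com/list/list.aspx?pg=%d&bcatid=4&DidSearch=1&EID=1&LP=MAT&ETID=5&catid=1015&mdlx=Contains&Cond=All&SO=26&btnSearch=Search&units=imperial", i)
  website$get(url)
  doc <- htmlParse(website$getPageSource())
  tables <- readHTMLTable(doc, stringsAsFactors = FALSE)
  page_data <- do.call("rbind", tables[seq(from = 8, to = 56, by = 2)])
  page_data <- cbind(page_data, sapply(lapply(tables[seq(from = 9, to = 57, by = 2)], '[[', i = 2), '[', 1))
  all_pages[[i]] <- page_data
}
data <- do.call("rbind", all_pages)
rownames(data) <- NULL
names(data) <- c("year.man.model", "sn", "price", "location", "auction")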


Using relenium:

require(relenium) # More info: https://github.com/LluisRamon/relenium
require(XML)

firefox <- firefoxClass$new() # init browser

res <- NULL
pages <- 1:2
for (page in pages) {
  url <- sprintf("http://www.machinerytrader.com/list/list.aspx?pg=%d&bcatid=4&DidSearch=1&EID=1&LP=MAT&ETID=5&catid=1015&mdlx=Contains&Cond=All&SO=26&btnSearch=Search&units=imperial", page)
  firefox$get(url)
  doc <- htmlParse(firefox$getPageSource())
  res <- rbind(res, cbind(
    year_manu_model = xpathSApply(doc, '//table[substring(@id, string-length(@id)-15) = "tblListingHeader"]/tbody/tr/td[1]', xmlValue),
    sn    = xpathSApply(doc, '//table[substring(@id, string-length(@id)-15) = "tblListingHeader"]/tbody/tr/td[2]', xmlValue),
    price = xpathSApply(doc, '//table[substring(@id, string-length(@id)-15) = "tblListingHeader"]/tbody/tr/td[3]', xmlValue),
    loc   = xpathSApply(doc, '//table[substring(@id, string-length(@id)-15) = "tblListingHeader"]/tbody/tr/td[4]', xmlValue),
    auc   = xpathSApply(doc, '//table[substring(@id, string-length(@id)-9) = "tblContent"]/tbody/tr/td[2]', xmlValue)
  ))
}

sapply(as.data.frame(res), substr, 0, 30)
#      year_manu_model       sn         price        loc   auc
# [1,] " 1972 AMERICAN 5530" "GS14745W" "US $50,100" "MI " "\n\t\t\t\t\tAuction: 1/9/2013; 4,796"
# [2,] " AUSTIN-WESTERN 307" "307"      "US $3,400"  "MT " "\n\t\t\t\t\tDetails & Photo(s)Video("
# ...
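The auc column comes back with leading newline/tab padding, and rows without an auction contain other listing text. A small post-processing sketch in base R (my addition, not part of the answer above) that trims the padding and keeps only the text from "Auction:" onward where it is present:

## hedged post-processing sketch: tidy the auction column
res_df <- as.data.frame(res, stringsAsFactors = FALSE)
res_df$auc <- gsub("^[ \t\r\n]+|[ \t\r\n]+$", "", res_df$auc)   # trim whitespace
idx <- regexpr("Auction:", res_df$auc, fixed = TRUE)            # position of "Auction:" (-1 if absent)
res_df$auc <- ifelse(idx > 0, substring(res_df$auc, idx), res_df$auc)
head(res_df, 2)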
