I have been studying Scrapy for the past few days and am having trouble getting all the list items on the page.
The page has a structure roughly like this:
<ol class="list-results">
<li class="SomeClass i">
<ul>
<li class="name">Name1</li>
</ul>
</li>
<li class="SomeClass 0">
<ul>
<li class="name">Name2</li>
</ul>
</li>
<li class="SomeClass i">
<ul>
<li class="name">Name3/li>
</ul>
</li>
</ol>
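To show that the two class variants ("SomeClass i" and "SomeClass 0") should both match, here is a minimal standalone sketch using Scrapy's Selector with the text argument against the sample markup above; it is not my spider, just a check that the CSS class selector picks up both variants:

from scrapy.selector import Selector

sample_html = """
<ol class="list-results">
  <li class="SomeClass i"><ul><li class="name">Name1</li></ul></li>
  <li class="SomeClass 0"><ul><li class="name">Name2</li></ul></li>
  <li class="SomeClass i"><ul><li class="name">Name3</li></ul></li>
</ol>
"""

sel = Selector(text=sample_html)
# The CSS class selector matches any element whose class attribute
# contains the token "SomeClass", so both variants are picked up.
print(len(sel.css('li.SomeClass')))                    # expected: 3
print(sel.css('li.SomeClass li.name::text').extract()) # expected: ['Name1', 'Name2', 'Name3']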
In my spider's parse function, I get all the list items like this:
from scrapy.selector import Selector

def parse(self, response):
    sel = Selector(response)
    all_elements = sel.css('.SomeClass')
    print(len(all_elements))
I know that the test page I request has about 300 list items with this class, but printing len(all_elements) gives me only 61.
I also tried an XPath such as:
sel.xpath("//*[contains(concat(' ', @class, ' '), 'SomeClass')]")
Yet I still get 61 elements instead of the roughly 300 I expect.
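For reference, this is a minimal sketch of how I compare the two counts in one callback; the spider name is made up, and the start URL is the test page linked below:

import scrapy

class PeopleSpider(scrapy.Spider):
    # Illustrative spider name; only the selector comparison matters.
    name = "people_test"
    start_urls = [
        "https://search.msu.edu/people/index.php?fst=ab&lst=&nid=&filter=",
    ]

    def parse(self, response):
        css_matches = response.css('.SomeClass')
        xpath_matches = response.xpath(
            "//*[contains(concat(' ', @class, ' '), 'SomeClass')]")
        # Both queries target elements whose class attribute contains
        # "SomeClass", so the two counts can be compared directly.
        self.logger.info("css=%d xpath=%d", len(css_matches), len(xpath_matches))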
I also wrap the handling of each element in a try/except clause in case a single element raises an exception.
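Roughly this pattern, sketched out (only the name field is shown here, and the field name is illustrative):

def parse(self, response):
    for element in response.css('.SomeClass'):
        try:
            # Illustrative extraction: read the name from the nested list item.
            name = element.css('li.name::text').extract_first()
            yield {'name': name}
        except Exception:
            # Skip an element that fails to parse instead of stopping the crawl.
            self.logger.exception("Failed to parse one list item")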
Here is the page I am testing against:
https://search.msu.edu/people/index.php?fst=ab&lst=&nid=&filter=
Any ideas what I am doing wrong? Thanks in advance!