To summarize the comments:
- select finds all matching instances and returns a list; find returns only the first match, so they do not do the same thing. select_one is the equivalent of find (a quick sketch follows after this list).
- I almost always use css selectors when chaining tags or matching by tag.classname; if I am looking for a single element without a class, I use find. Essentially it comes down to the use case and personal preference.
- As for flexibility, I think you know the answer:
soup.select("div[id=foo] > div > div > div[class=fee] > span > span > a") would look pretty ugly written as multiple chained find/find_all calls.
- The only problem with css selectors in bs4 is the very limited support: nth-of-type is the only pseudo-class implemented, and chaining attributes like a[href][src] is also not supported, as are many other parts of the css selector spec. But things like a[href=..], a[href^=], a[href$=], etc. are, I think, much nicer than
find("a", href=re.compile(....)) (see the second sketch after this list), but again, this is a personal preference.
For performance, we can run some tests. I modified the code from the answer here, running over 800+ html files taken from here. It is not exhaustive, but it should give a clue to the readability of some of the options and to the performance:
Modified Functions:
from bs4 import BeautifulSoup
from glob import iglob


def parse_find(soup):
    # same queries expressed with find/find_all
    author = soup.find("h4", class_="h12 talk-link__speaker").text
    title = soup.find("h4", class_="h9 m5").text
    date = soup.find("span", class_="meta__val").text.strip()
    soup.find("footer", class_="footer").find_previous(
        "data", {"class": "talk-transcript__para__time"}).text.split(":")
    soup.find_all("span", class_="talk-transcript__fragment")


def parse_select(soup):
    # same queries expressed with css selectors
    author = soup.select_one("h4.h12.talk-link__speaker").text
    title = soup.select_one("h4.h9.m5").text
    date = soup.select_one("span.meta__val").text.strip()
    soup.select_one("footer.footer").find_previous(
        "data", {"class": "talk-transcript__para__time"}).text
    soup.select("span.talk-transcript__fragment")


def test(patt, func):
    for html in iglob(patt):
        with open(html) as f:
            func(BeautifulSoup(f, "lxml"))
Now for the timings:
In [7]: from testing import test, parse_find, parse_select

In [8]: timeit test("./talks/*.html", parse_find)
1 loops, best of 3: 51.9 s per loop

In [9]: timeit test("./talks/*.html", parse_select)
1 loops, best of 3: 32.7 s per loop
As I said, this is not exhaustive, but I think we can confidently say that the css selectors are definitely more efficient.
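For anyone not working in IPython, roughly the same comparison could be run with the standard timeit module. This is only a sketch: the testing module name, the ./talks/*.html path and the repeat counts are simply carried over from the example above:

import timeit

setup = "from testing import test, parse_find, parse_select"

# one run per measurement, repeated three times, mirroring %timeit's output above
print(timeit.repeat('test("./talks/*.html", parse_find)',
                    setup=setup, number=1, repeat=3))
print(timeit.repeat('test("./talks/*.html", parse_select)',
                    setup=setup, number=1, repeat=3))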
Padraic Cunningham