Parse XML with lxml - extract element value

Suppose we have an XML file with the following structure.

    <?xml version="1.0" ?>
    <searchRetrieveResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                            xsi:schemaLocation="http://www.loc.gov/zing/srw/ http://www.loc.gov/standards/sru/sru1-1archive/xml-files/srw-types.xsd"
                            xmlns="http://www.loc.gov/zing/srw/">
      <records xmlns:ns1="http://www.loc.gov/zing/srw/">
        <record>
          <recordData>
            <record xmlns="">
              <datafield tag="000">
                <subfield code="a">123</subfield>
                <subfield code="b">456</subfield>
              </datafield>
              <datafield tag="001">
                <subfield code="a">789</subfield>
                <subfield code="b">987</subfield>
              </datafield>
            </record>
          </recordData>
        </record>
        <record>
          <recordData>
            <record xmlns="">
              <datafield tag="000">
                <subfield code="a">123</subfield>
                <subfield code="b">456</subfield>
              </datafield>
              <datafield tag="001">
                <subfield code="a">789</subfield>
                <subfield code="b">987</subfield>
              </datafield>
            </record>
          </recordData>
        </record>
      </records>
    </searchRetrieveResponse>

I need to parse:

  • the contents of each "subfield" element (e.g. 123 in the example above), and
  • the values of the tag attribute (e.g. 000 or 001).

I wonder how to do this using lxml and XPath. Below is my original code; could someone explain how to parse out these values?

    import urllib, urllib2
    from lxml import etree

    url = "https://dl.dropbox.com/u/540963/short_test.xml"
    fp = urllib2.urlopen(url)
    doc = etree.parse(fp)
    fp.close()

    ns = {'xsi': 'http://www.loc.gov/zing/srw/'}
    for record in doc.xpath('//xsi:record', namespaces=ns):
        print record.xpath("xsi:recordData/record/datafield[@tag='000']", namespaces=ns)
python xml xpath lxml

3 answers

I would be more direct in your XPath: go straight to the elements you need, in this case datafield.

    >>> for df in doc.xpath('//datafield'):
    ...     # Iterate over attributes of datafield
    ...     for attrib_name in df.attrib:
    ...         print '@' + attrib_name + '=' + df.attrib[attrib_name]
    ...     # subfield is a child of datafield, so iterate over the subfields
    ...     subfields = df.getchildren()
    ...     for subfield in subfields:
    ...         print 'subfield=' + subfield.text
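
For the sample document this prints something like the following (the second record in the file repeats the same values):

    @tag=000
    subfield=123
    subfield=456
    @tag=001
    subfield=789
    subfield=987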

Also, lxml lets you ignore the namespaces entirely here: the inner <record xmlns=""> resets the default namespace, so datafield and subfield end up in no namespace at all, which is why a plain //datafield matches them.
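
If you ever do need to reach the elements that are in the SRW default namespace (for example recordData), you have to map a prefix to that namespace in the namespaces argument. A minimal sketch, assuming doc is the tree parsed from the sample above and using srw as an arbitrarily chosen prefix:

    # 'srw' is an arbitrary prefix for the document's default namespace
    ns = {'srw': 'http://www.loc.gov/zing/srw/'}

    # recordData is in the SRW namespace, but datafield is not (xmlns="" resets it),
    # so only the SRW elements need a prefix in the XPath expression
    for df in doc.xpath('//srw:recordData//datafield', namespaces=ns):
        print(df.get('tag'))            # attribute value, e.g. 000 or 001
        for sf in df.findall('subfield'):
            print(sf.text)              # element text, e.g. 123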


Try the following working code:

    import urllib2
    from lxml import etree

    url = "https://dl.dropbox.com/u/540963/short_test.xml"
    fp = urllib2.urlopen(url)
    doc = etree.parse(fp)
    fp.close()

    for record in doc.xpath('//datafield'):
        print record.xpath("./@tag")[0]
        for x in record.xpath("./subfield/text()"):
            print "\t", x
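
If you want to collect the values instead of printing them, a small variation on the same XPath expressions builds a list of (tag, subfield values) pairs. This is just a sketch that assumes the same doc as above:

    # Collect (tag, [subfield texts]) pairs instead of printing them
    rows = []
    for df in doc.xpath('//datafield'):
        tag = df.get('tag')                     # e.g. '000' or '001'
        values = df.xpath('./subfield/text()')  # e.g. ['123', '456']
        rows.append((tag, values))

    print(rows)
    # For the sample XML above this gives:
    # [('000', ['123', '456']), ('001', ['789', '987']),
    #  ('000', ['123', '456']), ('001', ['789', '987'])]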

I would just go with

    for df in doc.xpath('//datafield'):
        print df.attrib
        for sf in df.getchildren():
            print sf.text
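
df.attrib prints the whole attribute mapping (e.g. {'tag': '000'}); if you only want the value of the tag attribute, Element.get is a little more direct, and iterating the element itself is the usual replacement for getchildren(). A short sketch, again assuming doc from the question:

    for df in doc.xpath('//datafield'):
        print(df.get('tag'))   # just the attribute value, e.g. 000
        for sf in df:          # iterating an element yields its child elements
            print(sf.text)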

Also, you don't need urllib; lxml can parse the XML directly from an HTTP URL:

    url = "http://dl.dropbox.com/u/540963/short_test.xml"  # doesn't work with https though
    doc = etree.parse(url)
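
If the file is only reachable over https (as in the question), one workaround is to fetch the bytes yourself and hand the file-like response to lxml, much as the question's original code does. A minimal sketch using the urllib2 module from the question (on Python 3 the equivalent lives in urllib.request):

    import urllib2
    from lxml import etree

    url = "https://dl.dropbox.com/u/540963/short_test.xml"
    fp = urllib2.urlopen(url)
    try:
        doc = etree.parse(fp)              # parse() accepts any file-like object
    finally:
        fp.close()

    print(doc.xpath('//datafield/@tag'))   # e.g. ['000', '001', '000', '001']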