Process XML chunks in Python

I have a series of large XML files (~3 GB each) that I am trying to process. The rough XML format:

<FILE>
<DOC>
    <FIELD1>
        Some text.
    </FIELD1>
    <FIELD2>
        Some text. Probably some more fields nested within this one.
    </FIELD2>
    <FIELD3>
        Some text.
    </FIELD3>
    <FIELD4>
        Some text. Etc.
    </FIELD4>
</DOC>
<DOC>
    <FIELD1>
        Some text.
    </FIELD1>
    <FIELD2>
        Some text. Probably some more fields nested within this one.
    </FIELD2>
    <FIELD3>
        Some text.
    </FIELD3>
    <FIELD4>
        Some text. Etc.
    </FIELD4>
</DOC>
</FILE>

My current approach (mimicking the code at http://effbot.org/zone/element-iterparse.htm#incremental-parsing):

# Added this in the edit.
import xml.etree.ElementTree as ET

# Request start events too, as in the effbot example; without them the first
# item from the iterator is the first *finished* element (a leaf), not the
# <FILE> root, and root.clear() never frees anything useful.
tree = ET.iterparse(xml_file, events=("start", "end"))
tree = iter(tree)
event, root = next(tree)  # next(tree) rather than tree.next() on Python 3

for event, elem in tree:
    # Need to find the <DOC> elements
    if event == "end" and elem.tag == "DOC":
        # Code to process the fields within the <DOC> element.
        # The code here mainly just iterates through the inner
        # elements and extracts what I need.
        root.clear()

It blows up, in that it uses all of my system memory (16 GB). At first I thought the problem was the placement of root.clear(), so I tried moving it outside the if statement, but that had no effect. Given this, I am not quite sure how to proceed other than "get more memory."
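For the elided per-DOC processing step, something like the following is what I mean — a minimal sketch (the handle_doc name and the dict layout are my own, and it assumes the flat FIELD1..FIELD4 layout from the sample above):

def handle_doc(doc):
    # Sketch of the per-<DOC> processing: collect the text of each field,
    # including anything nested inside it (e.g. under FIELD2).
    record = {}
    for field in doc:  # direct children: FIELD1, FIELD2, ...
        # itertext() walks the field's subtree and yields all text nodes
        record[field.tag] = "".join(field.itertext()).strip()
    return record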

EDIT

Removed previous edit because it was wrong.

1 answer

Try switching to lxml and its iterparse, and free each element, together with its already-processed siblings, as soon as you are done with it:

from lxml import etree

context = etree.iterparse(xmlfile)  # can also limit to certain events and tags
for event, elem in context:
    # do some stuff here with elem
    elem.clear()
    # Drop siblings that were already processed; otherwise they stay
    # attached to their parent and keep accumulating in memory.
    while elem.getprevious() is not None:
        del elem.getparent()[0]

That way the already-processed part of the tree is freed as you go, so memory stays bounded no matter how large the file is.
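As the comment in the snippet hints, iterparse can also filter the stream down to just the tags you care about. A sketch under the question's format (the tag="DOC" filter is lxml's, but applying it here and reusing the xmlfile name are my assumptions):

from lxml import etree

# Only report "end" events for <DOC> elements; everything else is parsed silently.
for event, doc in etree.iterparse(xmlfile, events=("end",), tag="DOC"):
    fields = {child.tag: "".join(child.itertext()).strip() for child in doc}
    # ... use fields ...
    doc.clear()
    while doc.getprevious() is not None:
        del doc.getparent()[0]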

