Recursion is "bad" in Python because it is usually slower than an iterative solution, and because Python's stack depth is limited (and there is no tail call optimization). For a sum function, yes, you probably do want unlimited depth, since it is perfectly reasonable to want to sum a list of a million numbers, and both the stack limit and the performance overhead become a problem with that many elements. In that case, you should not use recursion.
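A minimal sketch of the contrast (the function names `recursive_sum` and `iterative_sum` are illustrative, not from any library): the recursive version uses one stack frame per element, so a million-element list blows the stack long before performance even matters.

```python
def recursive_sum(items):
    """Sum a list recursively: one stack frame per remaining element."""
    if not items:
        return 0
    return items[0] + recursive_sum(items[1:])

def iterative_sum(items):
    """Sum a sequence with a loop: constant stack usage."""
    total = 0
    for item in items:
        total += item
    return total

print(iterative_sum(range(1_000_000)))     # fine
# recursive_sum(list(range(1_000_000)))    # RecursionError: maximum recursion depth exceeded
```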
If you are walking a DOM tree read from an XML file, on the other hand, you are unlikely to exceed Python's recursion limit (the default is 1000). It could happen, of course, but in practice it probably won't. When you know ahead of time what data you will be working with, you can be confident that you won't overflow the stack.
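For example, a recursive walk over a tree parsed with the standard-library `xml.etree.ElementTree` (this is just a sketch, not code from the question): the recursion depth equals the nesting depth of the document, which for real-world XML is rarely anywhere near the limit reported by `sys.getrecursionlimit()`.

```python
import sys
import xml.etree.ElementTree as ET

def walk(element, depth=0):
    """Visit every element depth-first; one stack frame per nesting level."""
    print("  " * depth + element.tag)
    for child in element:
        walk(child, depth + 1)

root = ET.fromstring("<a><b><c/></b><d/></a>")
walk(root)
print(sys.getrecursionlimit())  # 1000 by default in CPython
```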
Recursive tree traversal is, in my opinion, much more natural to write and read than an iterative version, and the recursion overhead is usually a small fraction of the execution time. If it really matters to you that something takes 16 seconds instead of 14, throwing PyPy at it might be a better use of your time.
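For contrast, here is a sketch of the same depth-first walk written iteratively with an explicit stack. It sidesteps the recursion limit entirely, but (to my eye) it reads less naturally than the recursive version above.

```python
import xml.etree.ElementTree as ET

def walk_iterative(root):
    """Depth-first walk using an explicit stack instead of recursion."""
    stack = [(root, 0)]
    while stack:
        element, depth = stack.pop()
        print("  " * depth + element.tag)
        # Push children in reverse so they pop off in document order.
        for child in reversed(list(element)):
            stack.append((child, depth + 1))

walk_iterative(ET.fromstring("<a><b><c/></b><d/></a>"))
```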
Recursion seems like a natural fit for the problem you posted, so if you think the code is easier to read and maintain that way and the performance is adequate, go for it.
I grew up writing code on computers that, for practical reasons, limited the recursion depth to about 16 levels, if they supported recursion at all, so 1000 seems luxurious to me. :-)
kindall