It's not so easy.
First of all, keep in mind that SSIS "flattens" the XML, so the XML Source has one output for each distinct element path through the XML. Trivial example:
<Parent><Child><Grandchild/></Child></Parent>
will produce three outputs and three error outputs. It gets worse:
<Parent><Child><Grandchild><Notes/></Grandchild><Notes/></Child><Notes/></Parent>
This will produce outputs for Parent, Child, Grandchild, Parent-Child-Grandchild-Notes, Parent-Child-Notes, and Parent-Notes, each with both a normal and an error output.
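To make the "one output per path" rule concrete, here is a small Python sketch (an illustration, not SSIS itself) that enumerates the distinct element paths in a document. Each path corresponds to one output, plus one matching error output:

```python
import xml.etree.ElementTree as ET

def element_paths(xml_text):
    """Return the distinct element paths in a document, in document order.
    Each path corresponds to one output (plus one error output) on the
    SSIS XML Source."""
    root = ET.fromstring(xml_text)
    paths = []

    def walk(elem, prefix):
        path = prefix + "/" + elem.tag
        if path not in paths:
            paths.append(path)
        for child in elem:
            walk(child, path)

    walk(root, "")
    return paths

# The trivial example: three paths, hence three outputs.
print(element_paths("<Parent><Child><Grandchild/></Child></Parent>"))
# ['/Parent', '/Parent/Child', '/Parent/Child/Grandchild']

# The second example: six paths, hence six outputs (and six error outputs).
print(len(element_paths(
    "<Parent><Child><Grandchild><Notes/></Grandchild>"
    "<Notes/></Child><Notes/></Parent>")))
# 6
```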
The project I was working on had about 203 outputs. By flattening the XML schema I was able to get that down to 19 or so. That is still a lot, considering that each output needs its own processing.
On top of that, the XML Source cannot handle documents of 1 GB or more, because it really does load the entire document into memory. Try calling XmlDocument.Load on such a file and watch what happens; that is exactly what happens inside SSIS.
I ended up writing my own "XML element source", which processes the children of the root element one at a time. That let me flatten the XML and also handle large documents (a 10 GB test document worked).
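The idea behind that custom source can be sketched in a few lines of streaming-parser code. This is a Python analogy using `xml.etree.ElementTree.iterparse`, not the actual SSIS component (which would be a .NET script or custom component); the point is that each child of the root is handed off as soon as its closing tag arrives and then discarded, so memory use stays flat no matter how big the file is:

```python
import io
import xml.etree.ElementTree as ET

def stream_children(source, handle):
    """Call handle(elem) for each direct child of the root element,
    one at a time, releasing each subtree after it is processed."""
    depth = 0
    for event, elem in ET.iterparse(source, events=("start", "end")):
        if event == "start":
            depth += 1
        else:
            depth -= 1
            if depth == 1:          # a direct child of the root just closed
                handle(elem)
                elem.clear()        # free the subtree we just processed

# Usage: collect an attribute from each child as it streams past.
seen = []
stream_children(io.BytesIO(b"<Root><Item n='1'/><Item n='2'/></Root>"),
                lambda e: seen.append(e.get("n")))
print(seen)
# ['1', '2']
```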
There is more fun depending on what you want to do with the data. In my case, we had to send each of the outputs to a staging table. That is not too bad, but you have to understand that the data on the outputs is asynchronous: one child (with its descendants) trickles out of the various outputs bit by bit, and you never know when all of an element's descendants have finished processing. That rules out processing on a transactional, one-element-at-a-time basis.
Instead, SSIS adds a surrogate key (I think that's what it's called) to each element: a ParentID is added to the Parent output, a ChildID to the Child output, and the Child output also gets a ChildParentID referring back to its parent. These keys can be used to "join the element back together", but only after all the data has been written to the staging tables. That is the only point at which you can be sure any given element has been completely processed: when everything is there!
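A toy sketch of that rejoin step, with the staging tables simulated as Python lists of dicts. The column names (ParentID, ChildID, ChildParentID) follow the convention described above; the data and table shapes are made up for illustration. The key point is that the reassembly can only run once both tables are fully loaded:

```python
# Simulated staging tables, populated asynchronously by the data flow.
# Surrogate keys are what SSIS generates; names and data are illustrative.
parents = [
    {"ParentID": 1, "Name": "P1"},
    {"ParentID": 2, "Name": "P2"},
]
children = [
    {"ChildID": 10, "ChildParentID": 1, "Name": "C1"},
    {"ChildID": 11, "ChildParentID": 1, "Name": "C2"},
    {"ChildID": 12, "ChildParentID": 2, "Name": "C3"},
]

# Only once BOTH tables are fully loaded can the elements be reassembled:
by_parent = {p["ParentID"]: dict(p, Children=[]) for p in parents}
for c in children:
    by_parent[c["ChildParentID"]]["Children"].append(c["Name"])

print([(p["Name"], p["Children"]) for p in by_parent.values()])
# [('P1', ['C1', 'C2']), ('P2', ['C3'])]
```

In the real project this join would of course be a SQL statement over the staging tables, run after the data flow completes.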