Online XML validating parser
With SAX you only have a tiny part of the document in memory at any time, and you "sniff" the XML stream by implementing callback code for events such as startElement, endElement, and characters.
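As a minimal sketch of that callback style, here is a SAX handler that collects `title` text as the parser streams past it (the `catalog`/`book`/`title` element names are made up for illustration):

```python
import xml.sax

class BookHandler(xml.sax.ContentHandler):
    """Collects book titles as the parser streams past them."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []
        self._buf = []

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True
            self._buf = []

    def characters(self, content):
        # May be called several times per element, so buffer the pieces.
        if self.in_title:
            self._buf.append(content)

    def endElement(self, name):
        if name == "title":
            self.in_title = False
            self.titles.append("".join(self._buf))

xml_data = b"<catalog><book><title>SAX in Practice</title></book></catalog>"
handler = BookHandler()
xml.sax.parseString(xml_data, handler)
print(handler.titles)  # -> ['SAX in Practice']
```

Note that the handler never sees the whole document: it only reacts to events as they arrive, which is why memory stays small.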
It uses almost no memory, but you can't do DOM-style things, like evaluate XPath expressions or traverse the tree.
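To illustrate the low-memory streaming style with a specific "query" in mind, here is a sketch using Python's `ElementTree.iterparse`, which processes elements as they complete and lets you discard them immediately (the `orders`/`order`/`qty` names are invented for the example):

```python
import io
import xml.etree.ElementTree as ET

# A small stand-in for what could be a multi-gigabyte feed.
xml_data = io.StringIO(
    "<orders>"
    "<order id='1'><qty>10</qty></order>"
    "<order id='2'><qty>25</qty></order>"
    "</orders>"
)

total_qty = 0
for event, elem in ET.iterparse(xml_data, events=("end",)):
    if elem.tag == "order":
        total_qty += int(elem.findtext("qty"))
        elem.clear()  # discard the finished subtree so memory stays flat

print(total_qty)  # -> 35
```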
I read some articles about XML parsers and came across SAX and DOM.
SAX is event-based and DOM is a tree model, and I don't understand the differences between these concepts.
From what I have understood, event-based means some kind of event happens to a node: for example, when one clicks a particular node it gives the sub-nodes, rather than loading all the nodes at the same time.

You are correct in your understanding of the DOM-based model. One way to parse would be to read in the entire document and recursively build up a tree structure in memory, finally exposing the entire result to the user. (I suppose you could call these parsers "DOM parsers".) The benefit of this approach is that you can easily query any part of the document and freely manipulate all the nodes in the tree. That is very handy for the user (I think that's what PHP's XML parser does), but it suffers from scalability problems and becomes very expensive for large documents: you can blow memory with even medium-sized ones.

So in general, SAX is suitable for combing through potentially large amounts of data with a specific "query" in mind, when you need not modify the document, while DOM is more aimed at giving you full flexibility in changing structure and contents, at the expense of higher resource demand.
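A small sketch of that flexibility, using Python's `xml.dom.minidom` (the `library`/`book`/`title` names are made up for the example): the whole tree is in memory, so you can both query any part of it and graft new nodes in.

```python
import xml.dom.minidom as minidom

doc = minidom.parseString(
    "<library><book><title>DOM Basics</title></book></library>"
)

# Query any part of the tree...
titles = [t.firstChild.data for t in doc.getElementsByTagName("title")]
print(titles)  # -> ['DOM Basics']

# ...and freely modify it: add a second book.
new_book = doc.createElement("book")
new_title = doc.createElement("title")
new_title.appendChild(doc.createTextNode("Tree Surgery"))
new_book.appendChild(new_title)
doc.documentElement.appendChild(new_book)

print(len(doc.getElementsByTagName("book")))  # -> 2
```

The price of this convenience is that the entire document must fit in memory before you can do anything with it.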