This article is reposted from http://java.dzone.com/articles/xml-unmarshalling-benchmark. It mainly compares the memory and time performance of JAXB, STax 1.0 and Woodstox when unmarshalling XML files with a large number of elements; unfortunately, it lacks a comparison chart for CPU usage.
XML unmarshalling benchmark in Java: JAXB vs STax vs Woodstox
Towards the end of last week I started thinking about how to deal with large amounts of XML data in a resource-friendly way. The main problem that I wanted to solve was how to process large XML files in chunks while at the same time providing upstream/downstream systems with some data to process.
Of course I've been using JAXB technology for a few years now; the main advantage of using JAXB is the quick time-to-market: if one possesses an XML schema, there are tools out there to auto-generate the corresponding Java domain model classes (Eclipse Indigo, Maven jaxb plugins in various sauces, ant tasks, to name a few). The JAXB API then offers a Marshaller and an Unmarshaller to write/read XML data, mapping it to and from the Java domain model.
When thinking of JAXB as a solution for my problem I suddenly realised that JAXB keeps the whole objectification of the XML schema in memory, so the obvious question was: "How would our infrastructure cope with large XML files (e.g. in my case with a number of elements > 100,000) if we were to use JAXB?". I could have simply produced a large XML file, written a client for it and found out about memory consumption.
As one probably knows, there are mainly two approaches to processing XML data in Java: DOM and SAX. With DOM, the XML document is represented in memory as a tree; DOM is useful if one needs cherry-picked access to the tree nodes or if one needs to write brief XML documents. On the other side of the spectrum there is SAX, an event-driven technology, where the whole document is parsed one XML element at a time, and for each significant XML event (such as START_DOCUMENT, START_ELEMENT, END_ELEMENT, etc.) callbacks are "pushed" to a Java client which then deals with them. Since SAX does not bring the whole document into memory but applies a cursor-like approach to XML processing, it does not consume huge amounts of memory. The drawback with SAX is that it processes the whole document from start to finish; this might not necessarily be what one wants for large XML documents. In my scenario, for instance, I'd like to be able to pass XML elements to downstream systems as they become available, but at the same time maybe I'd like to pass only 100 elements at a time, implementing some sort of pagination solution. DOM seems too demanding from a memory-consumption point of view, whereas SAX seems too coarse-grained for my needs.
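To make the push model concrete, here is a minimal SAX sketch (the class name and the element check are only illustrative, not part of the benchmark code): the parser drives the whole parse and invokes the handler's callbacks, and the client has no way to pause and ask for the next element.

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Illustrative only: a push-style SAX handler. The parser invokes the
// callbacks; the client reacts to events rather than pulling them.
public class PersonSaxHandler extends DefaultHandler {

    @Override
    public void startElement(String uri, String localName, String qName,
            Attributes attributes) {
        if ("person".equals(localName) || "person".equals(qName)) {
            // react to the event pushed by the parser
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new java.io.File(args[0]), new PersonSaxHandler());
    }
}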
I remembered reading something about STax, a Java technology which offers a middle ground: the ability to pull XML elements (as opposed to having them pushed at you, as with SAX) while remaining RAM-friendly. I then looked into the technology and decided that STax was probably the compromise I was looking for; however, I wanted to keep the easy programming model offered by JAXB, so I really needed a combination of the two. While investigating STax, I came across Woodstox; this open source project promises to be a faster XML parser than many others, so I decided to include it in my benchmark as well. I now had all the elements to create a benchmark giving me memory consumption and processing speed metrics when processing large XML documents.
The benchmark plan
In order to create a benchmark I needed to do the following:
- Create an XML schema which defined my domain model. This would be the input for JAXB to create the Java domain model
- Create three large XML files representing the model, with 10,000 / 100,000 / 1,000,000 elements respectively
- Have a pure JAXB client which would unmarshall the large XML files completely in memory
- Have a STax/JAXB client which would combine the low-memory consumption of SAX technologies with the ease of programming model offered by JAXB
- Have a Woodstox/JAXB client with the same characteristics as the STax/JAXB client (in a few words, I just wanted to change the underlying parser and see whether I could obtain any performance boost)
- Record both memory consumption and speed of processing (e.g. how quickly would each solution make XML chunks available in memory as JAXB domain model classes)
- Make the results available graphically, since, as we know, a picture tells a thousand words.
The Domain Model XML Schema
<?xml version="1.0" encoding="UTF-8"?> <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://uk.co.jemos.integration.xml/large-file" xmlns:tns="http://uk.co.jemos.integration.xml/large-file" elementFormDefault="qualified"> <complexType name="PersonType"> <sequence> <element name="firstName" type="string"></element> <element name="lastName" type="string"></element> <element name="address1" type="string"></element> <element name="address2" type="string"></element> <element name="postCode" type="string"></element> <element name="city" type="string"></element> <element name="country" type="string"></element> </sequence> <attribute name="active" type="boolean" use="required" /> </complexType> <complexType name="PersonsType"> <sequence> <element name="person" type="tns:PersonType" maxOccurs="unbounded" minOccurs="1"></element> </sequence> </complexType> <element name="persons" type="tns:PersonsType"> </element> </schema>
I decided on a relatively simple domain model, with XML elements representing people, with their names and addresses. I also wanted to record whether a person was active.
Using JAXB to create the Java model
I am a fan of Maven and use it as my default tool to build systems. This is the POM I defined for this little benchmark:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>uk.co.jemos.tests.xml</groupId>
  <artifactId>large-xml-parser</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>large-xml-parser</name>
  <url>http://www.jemos.co.uk</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.jvnet.jaxb2.maven2</groupId>
        <artifactId>maven-jaxb2-plugin</artifactId>
        <version>0.7.5</version>
        <executions>
          <execution>
            <goals>
              <goal>generate</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <schemaDirectory>${basedir}/src/main/resources</schemaDirectory>
          <includeSchemas>
            <includeSchema>**/*.xsd</includeSchema>
          </includeSchemas>
          <extension>true</extension>
          <args>
            <arg>-enableIntrospection</arg>
            <arg>-XtoString</arg>
            <arg>-Xequals</arg>
            <arg>-XhashCode</arg>
          </args>
          <removeOldOutput>true</removeOldOutput>
          <verbose>true</verbose>
          <plugins>
            <plugin>
              <groupId>org.jvnet.jaxb2_commons</groupId>
              <artifactId>jaxb2-basics</artifactId>
              <version>0.6.1</version>
            </plugin>
          </plugins>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.3.1</version>
        <configuration>
          <archive>
            <manifest>
              <addClasspath>true</addClasspath>
              <mainClass>uk.co.jemos.tests.xml.XmlPullBenchmarker</mainClass>
            </manifest>
          </archive>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <outputDirectory>${project.build.directory}/site/downloads</outputDirectory>
          <descriptors>
            <descriptor>src/main/assembly/project.xml</descriptor>
            <descriptor>src/main/assembly/bin.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.5</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>uk.co.jemos.podam</groupId>
      <artifactId>podam</artifactId>
      <version>2.3.11.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.0.1</version>
    </dependency>
    <!-- XML binding stuff -->
    <dependency>
      <groupId>com.sun.xml.bind</groupId>
      <artifactId>jaxb-impl</artifactId>
      <version>2.1.3</version>
    </dependency>
    <dependency>
      <groupId>org.jvnet.jaxb2_commons</groupId>
      <artifactId>jaxb2-basics-runtime</artifactId>
      <version>0.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.woodstox</groupId>
      <artifactId>stax2-api</artifactId>
      <version>3.0.3</version>
    </dependency>
  </dependencies>
</project>
Just a few things to notice about this pom.xml:
- I use Java 6, since, starting from version 6, Java ships with the XML libraries for JAXB, DOM, SAX and STax.
- To auto-generate the domain model classes from the XSD schema, I used the excellent maven-jaxb2-plugin, which, amongst other things, generates POJOs with toString, equals and hashCode support (a rough sketch of a generated class follows this list).
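The generated code is not shown in this post; the sketch below only approximates what the maven-jaxb2-plugin typically produces for the PersonType complex type. The property names come from the schema, everything else follows standard XJC conventions and may differ in detail (the JAXB annotations and the generated toString/equals/hashCode bodies are omitted here).

// Approximate shape of the class generated from the PersonType complex type.
public class PersonType {

    protected String firstName;
    protected String lastName;
    protected String address1;
    protected String address2;
    protected String postCode;
    protected String city;
    protected String country;
    protected boolean active;

    public String getFirstName() { return firstName; }
    public void setFirstName(String value) { this.firstName = value; }
    // ... analogous getters/setters for the remaining String properties ...
    public boolean isActive() { return active; }
    public void setActive(boolean value) { this.active = value; }
}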
I have also declared the jar plugin, to create an executable jar for the benchmark and the assembly plugin to distribute an executable version of the benchmark. The code for the benchmark is attached to this post, so if you want to build it and run it yourself, just unzip the project file, open a command line and run:
$ mvn clean install assembly:assembly
This command will place *-bin.* files into the folder target/site/downloads. Unzip the one you prefer and run the benchmark with the command below (-Dcreate.xml=true generates the XML files; don't pass it if you already have these files, e.g. after the first run):
$ java -jar -Dcreate.xml=true large-xml-parser-1.0.0-SNAPSHOT.jar
Creating the test data
To create the test data, I used PODAM, a Java tool to auto-fill POJOs and JavaBeans with data. The code is as simple as:
JAXBContext context = JAXBContext
        .newInstance("xml.integration.jemos.co.uk.large_file");
Marshaller marshaller = context.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");

PersonsType personsType = new ObjectFactory().createPersonsType();
List<PersonType> persons = personsType.getPerson();

PodamFactory factory = new PodamFactoryImpl();
for (int i = 0; i < nbrElements; i++) {
    persons.add(factory.manufacturePojo(PersonType.class));
}

JAXBElement<PersonsType> toWrite = new ObjectFactory()
        .createPersons(personsType);

File file = new File(fileName);
BufferedOutputStream bos = new BufferedOutputStream(
        new FileOutputStream(file), 4096);

try {
    marshaller.marshal(toWrite, bos);
    bos.flush();
} finally {
    IOUtils.closeQuietly(bos);
}
The XmlPullBenchmarker generates three large XML files under ~/xml-benchmark:
- large-person-10000.xml (Approx 3M)
- large-person-100000.xml (Approx 30M)
- large-person-1000000.xml (Approx 300M)
Each file looks like the following:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <persons xmlns="http://uk.co.jemos.integration.xml/large-file"> <person active="false"> <firstName>Ult6yn0D7L</firstName> <lastName>U8DJoUTlK2</lastName> <address1>DxwlpOw6X3</address1> <address2>O4GGvxIMo7</address2> <postCode>Io7Kuz0xmz</postCode> <city>lMIY1uqKXs</city> <country>ZhTukbtwti</country> </person> <person active="false"> <firstName>gBc7KeX9Tn</firstName> <lastName>kxmWNLPREp</lastName> <address1>9BIBS1m5GR</address1> <address2>hmtqpXjcpW</address2> <postCode>bHpF1rRldM</postCode> <city>YDJJillYrw</city> <country>xgsTDJcfjc</country> </person> [..etc] </persons>
Each file contains 10,000 / 100,000 / 1,000,000 <person> elements.
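Incidentally, if you want to sanity-check how many <person> elements a generated file really contains without loading it into memory, a streaming count is enough. This is just a sketch (the file path is passed as a hypothetical command-line argument, e.g. ~/xml-benchmark/large-person-10000.xml):

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Sketch: count <person> start elements in a streaming fashion.
public class PersonCounter {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(
                new FileInputStream(args[0]));
        long count = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "person".equals(reader.getLocalName())) {
                count++;
            }
        }
        reader.close();
        System.out.println(count + " <person> elements");
    }
}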
The running environments
I tried the benchmarker in three different environments:
- Ubuntu 10, 64-bit, running as a virtual machine on Windows 7 Ultimate, Intel Core i5 750 @ 2.67 GHz, 8 GB RAM of which 4 GB dedicated to the VM. JVM: 1.6.0_25, HotSpot
- Windows 7 Ultimate, hosting the above VM, therefore with the same processor. JVM: 1.6.0_24, HotSpot
- Ubuntu 10, 32-bit, 3 GB RAM, dual core. JVM: 1.6.0_24, OpenJDK
The XML unmarshalling
To unmarshall the XML files I used three different strategies:
- Pure JAXB
- STax + JAXB
- Woodstox + JAXB
Pure JAXB unmarshalling
The code which I used to unmarshall the large XML files using JAXB follows:
private void readLargeFileWithJaxb(File file, int nbrRecords)
        throws Exception {

    JAXBContext ucontext = JAXBContext
            .newInstance("xml.integration.jemos.co.uk.large_file");
    Unmarshaller unmarshaller = ucontext.createUnmarshaller();

    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(
            file));

    long start = System.currentTimeMillis();
    long memstart = Runtime.getRuntime().freeMemory();
    long memend = 0L;

    try {
        JAXBElement<PersonsType> root = (JAXBElement<PersonsType>) unmarshaller
                .unmarshal(bis);

        root.getValue().getPerson().size();

        memend = Runtime.getRuntime().freeMemory();
        long end = System.currentTimeMillis();

        LOG.info("JAXB (" + nbrRecords + "): - Total Memory used: "
                + (memstart - memend));
        LOG.info("JAXB (" + nbrRecords + "): Time taken in ms: "
                + (end - start));
    } finally {
        IOUtils.closeQuietly(bis);
    }
}
The code uses a one-liner to unmarshall each XML file:
JAXBElement<PersonsType> root = (JAXBElement<PersonsType>) unmarshaller
        .unmarshal(bis);
I also accessed the size of the underlying PersonType collection to "touch" the in-memory data. Incidentally, debugging the application showed that all 10,000 elements were indeed available in memory after this line of code.
JAXB + STax
With STax, I just had to use an XMLStreamReader, iterate through all <person> elements, and pass each in turn to JAXB to unmarshall it into a PersonType domain model object. The code follows:
private void readLargeXmlWithStax(File file, int nbrRecords)
        throws FactoryConfigurationError, XMLStreamException,
        FileNotFoundException, JAXBException {

    // set up a StAX reader
    XMLInputFactory xmlif = XMLInputFactory.newInstance();
    XMLStreamReader xmlr = xmlif
            .createXMLStreamReader(new FileReader(file));

    JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);
    Unmarshaller unmarshaller = ucontext.createUnmarshaller();

    long start = System.currentTimeMillis();
    long memstart = Runtime.getRuntime().freeMemory();
    long memend = 0L;

    try {
        xmlr.nextTag();
        xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");
        xmlr.nextTag();
        while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {
            JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
                    PersonType.class);
            if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
                xmlr.next();
            }
        }

        memend = Runtime.getRuntime().freeMemory();
        long end = System.currentTimeMillis();

        LOG.info("STax - (" + nbrRecords + "): - Total memory used: "
                + (memstart - memend));
        LOG.info("STax - (" + nbrRecords + "): Time taken in ms: "
                + (end - start));
    } finally {
        xmlr.close();
    }
}
Note that this time, when creating the context, I had to specify that it was for the PersonType object, and when invoking the JAXB unmarshalling I also had to pass the desired return type:
JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
PersonType.class);
Note that I don't do anything with the object; I just create it, to keep the benchmark as truthful as possible by not introducing any unnecessary steps.
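In a real application the unmarshalled element would of course be consumed; purely as an illustration (not part of the benchmark code), a downstream consumer could extract the domain object from the JAXBElement wrapper like this:

// unmarshal(XMLStreamReader, Class) returns a JAXBElement wrapper;
// getValue() yields the domain object, which could then be handed to a
// downstream system, buffered into pages of 100 elements, and so on.
PersonType person = pt.getValue();
String fullName = person.getFirstName() + " " + person.getLastName();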
JAXB + Woodstox
With Woodstox, the approach is very similar to the one used with STax. In fact Woodstox provides a STax2-compatible API, so all I had to do was provide the correct factory and... bang! I had Woodstox working under the covers.
private void readLargeXmlWithFasterStax(File file, int nbrRecords)
        throws FactoryConfigurationError, XMLStreamException,
        FileNotFoundException, JAXBException {

    // set up a Woodstox reader
    XMLInputFactory xmlif = XMLInputFactory2.newInstance();
    XMLStreamReader xmlr = xmlif
            .createXMLStreamReader(new FileReader(file));

    JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);
    Unmarshaller unmarshaller = ucontext.createUnmarshaller();

    long start = System.currentTimeMillis();
    long memstart = Runtime.getRuntime().freeMemory();
    long memend = 0L;

    try {
        xmlr.nextTag();
        xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");
        xmlr.nextTag();
        while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {
            JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
                    PersonType.class);
            if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
                xmlr.next();
            }
        }

        memend = Runtime.getRuntime().freeMemory();
        long end = System.currentTimeMillis();

        LOG.info("Woodstox - (" + nbrRecords + "): Total memory used: "
                + (memstart - memend));
        LOG.info("Woodstox - (" + nbrRecords + "): Time taken in ms: "
                + (end - start));
    } finally {
        xmlr.close();
    }
}
Note the following line:
XMLInputFactory xmlif = XMLInputFactory2.newInstance();
Here I obtain a STax2 XMLInputFactory; with Woodstox on the classpath, this resolves to the Woodstox implementation.
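The newInstance() call goes through the standard StAX factory discovery mechanism and returns the Woodstox implementation when the Woodstox core jar is on the classpath (the stax2-api dependency only provides the extended STax2 interfaces). If you preferred to be explicit rather than rely on discovery, you could, as a sketch assuming Woodstox is available, instantiate its factory directly or pin it via the standard system property:

// Option 1: instantiate the Woodstox factory class directly.
XMLInputFactory xmlif = new com.ctc.wstx.stax.WstxInputFactory();

// Option 2: pin the standard StAX discovery mechanism to Woodstox for the whole JVM.
System.setProperty("javax.xml.stream.XMLInputFactory",
        "com.ctc.wstx.stax.WstxInputFactory");
XMLInputFactory discovered = XMLInputFactory.newInstance();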
The main loop
Once the files are in place (you obtain this by passing -Dcreate.xml=true), the main method performs the following:
System.gc();
System.gc();

for (int i = 0; i < 10; i++) {

    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-10000.xml"), 10000);
    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-100000.xml"), 100000);
    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-1000000.xml"), 1000000);

    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-10000.xml"), 10000);
    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-100000.xml"), 100000);
    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-1000000.xml"), 1000000);

    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-10000.xml"), 10000);
    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-100000.xml"), 100000);
    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar
            + "large-person-1000000.xml"), 1000000);
}
It invites the GC to run (although, as we know, this is at the discretion of the GC thread), then executes each strategy ten times to normalise RAM and CPU consumption. The final figures are obtained by averaging the ten runs.
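The averaging itself is trivial; purely as a sketch (the benchmark's actual post-processing code is not shown in this post), the per-strategy figures could be computed like this:

// Sketch: average the ten measurements recorded for one strategy/file-size
// combination (durations in ms, or memory deltas in bytes).
static double average(long[] samples) {
    long sum = 0;
    for (long s : samples) {
        sum += s;
    }
    return (double) sum / samples.length;
}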
The benchmark results for memory consumption
Here follow some diagrams which show memory consumption across the different running environments when unmarshalling the 10,000 / 100,000 / 1,000,000-element files.
You will probably notice that memory consumption for the STax-related strategies often shows a negative value. This means that there was more free memory after unmarshalling all elements than there was at the beginning of the unmarshalling loop; this, in turn, suggests that the GC ran a lot more with STax than with JAXB. This is logical if one thinks about it: since with STax we don't keep all objects in memory, there are more objects available for garbage collection. In this particular case I believe the PersonType object created in the while loop becomes eligible for GC in the young generation area and is then reclaimed. This, however, should have a minimal impact on performance, since we know that reclaiming objects from the young generation space is done very efficiently.
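The negative readings are largely an artefact of sampling Runtime.freeMemory() alone, which moves with both allocation and collection. A slightly more stable (though still only indicative) way to take a reading, sketched below under the assumption that a GC hint before sampling is acceptable, is to look at total minus free heap:

// Sketch: take a rough used-heap reading. System.gc() is only a hint, so the
// figure remains indicative rather than exact.
static long usedHeapBytes() {
    Runtime rt = Runtime.getRuntime();
    System.gc(); // request (not guarantee) a collection before sampling
    return rt.totalMemory() - rt.freeMemory();
}

// Usage around one unmarshalling run:
// long before = usedHeapBytes();
// ... unmarshal ...
// long delta = usedHeapBytes() - before;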
Summary for 10,000 XML elements [chart]
Summary for 100,000 XML elements [chart]
Summary for 1,000,000 XML elements [chart]
The benchmark results for processing speed
Results for 10,000 elements [chart]
Results for 100,000 elements [chart]
Results for 1,000,000 elements [chart]
Conclusions
The results in all three environments, although with some differences, tell the same story:
- If you are looking for performance (e.g. XML unmarshalling speed), choose JAXB
- If you are looking for low memory usage (and are ready to sacrifice some speed), then use STax.
My personal opinion is also that I wouldn't go for Woodstox, but I'd choose either JAXB (if I needed processing power and could afford the RAM) or STax (if I didn't need top speed and was low on infrastructure resources). Both these technologies are Java standards and part of the JDK starting from Java 6.
Resources
Benchmarker source code
- Zip version: Download Large-xml-parser-1.0.0-SNAPSHOT-project
- tar.gz version: Download Large-xml-parser-1.0.0-SNAPSHOT-project.tar
- tar.bz2 version: Download Large-xml-parser-1.0.0-SNAPSHOT-project.tar
Benchmarker executables:
- Zip version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin
- tar.gz version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin.tar
- tar.bz2 version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin.tar
Data files:
- Ubuntu 64-bit VM running environment: Download Stax-vs-jaxb-ubuntu-64-vm
- Ubuntu 32-bit running environment: Download Stax-vs-jaxb-ubuntu-32-bit
- Windows 7 Ultimate running environment: Download Stax-vs-jaxb-windows7
From http://tedone.typepad.com/blog/2011/06/unmarshalling-benchmark-in-java-jaxb-vs-stax-vs-woodstox.html