Parquet-mr is the Java implementation of the Parquet format for use in Hadoop. It uses the record shredding and assembly algorithm described in the Dremel paper. Integrations with Pig and Map/Reduce are provided.
A Loader and a Storer are provided to read and write Parquet files with Apache Pig.
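For illustration, a minimal Pig script using the loader and storer might look like the following. The jar name and the `parquet.pig.ParquetLoader` / `parquet.pig.ParquetStorer` class names are assumptions based on the package layout; check the actual artifact you build.

```pig
-- Assumed jar name and class names; adjust to your build.
REGISTER parquet-pig.jar;

-- Read a Parquet file into a Pig relation.
records = LOAD 'input.parquet' USING parquet.pig.ParquetLoader();

-- Write a relation back out as Parquet.
STORE records INTO 'output.parquet' USING parquet.pig.ParquetStorer();
```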
A Thrift mapping to the Parquet schema is provided via classes extending TBase. You can read and write Parquet files using Thrift-generated classes.
- The ParquetOutputFormat can be provided a WriteSupport to write your own objects to an event-based RecordConsumer.
- The ParquetInputFormat can be provided a ReadSupport to materialize your own POJOs by implementing a RecordMaterializer.
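To give a feel for the event-based write path, here is a simplified, self-contained sketch of the pattern a WriteSupport follows when pushing a record into a RecordConsumer. The interface and class names deliberately echo Parquet's API, but this is illustrative code only, not the real library: the actual RecordConsumer has a richer, typed event vocabulary.

```java
import java.util.ArrayList;
import java.util.List;

// Receives a stream of events describing one record at a time
// (start of record, field values, end of record).
interface RecordConsumer {
    void startMessage();
    void addField(String name, Object value);
    void endMessage();
}

// A consumer that records the events it receives, so the
// event stream can be inspected.
class TracingConsumer implements RecordConsumer {
    final List<String> events = new ArrayList<>();
    public void startMessage() { events.add("start"); }
    public void addField(String name, Object value) { events.add(name + "=" + value); }
    public void endMessage() { events.add("end"); }
}

// Plays the role of a WriteSupport: it knows how to decompose one
// application object (here, a plain user record) into consumer events.
class UserWriteSupport {
    void write(String name, int age, RecordConsumer consumer) {
        consumer.startMessage();
        consumer.addField("name", name);
        consumer.addField("age", age);
        consumer.endMessage();
    }
}

public class Demo {
    public static void main(String[] args) {
        TracingConsumer consumer = new TracingConsumer();
        new UserWriteSupport().write("ada", 36, consumer);
        // Prints the event stream produced for one record:
        // start,name=ada,age=36,end
        System.out.println(String.join(",", consumer.events));
    }
}
```

The read path mirrors this: a RecordMaterializer receives the same kind of event stream and assembles your POJOs from it.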
See the APIs:
To run the unit tests: mvn test
The build runs in Travis CI:
- Julien Le Dem http://twitter.com/J_
- Jonathan Coveney http://twitter.com/jco
- Google group: https://groups.google.com/d/forum/parquet-dev
- The group email address: parquet-dev@googlegroups.com
Copyright 2012 Twitter, Inc.
Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0