parquet-tools | Command line tools for the parquet project | Serialization library

 by wesleypeck | Java | Version: Current | License: Apache-2.0

kandi X-RAY | parquet-tools Summary

parquet-tools is a Java library typically used in Utilities and Serialization applications. It has no reported vulnerabilities, a build file, and a permissive license, but it has low support and 6 known bugs. You can download it from GitHub.

Command line tools for the parquet project

            kandi-support Support

              parquet-tools has a low-activity ecosystem.
              It has 29 stars, 13 forks, and 3 watchers.
              It had no major release in the last 6 months.
              There is 1 open issue and no closed issues. There are 2 open pull requests and no closed ones.
              It has a neutral sentiment in the developer community.
              The latest version of parquet-tools is current.

            kandi-Quality Quality

              parquet-tools has 6 bugs (2 blocker, 0 critical, 4 major, 0 minor) and 84 code smells.

            kandi-Security Security

              parquet-tools has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              parquet-tools code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              parquet-tools is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              parquet-tools releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              parquet-tools saves you 962 person hours of effort in developing the same functionality from scratch.
              It has 2192 lines of code, 207 functions and 18 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed parquet-tools and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality parquet-tools implements, and to help you decide whether it suits your requirements.
            • Starts the SLF4J bridge handler
            • Prints usage information for a given command
            • Merges options
            • Runs diagnostics
            • Reads Parquet metadata
            • Flushes columns
            • Prints the details of a column
            • Prints a string
            • Displays the Parquet file
            • Converts a binary value to a string
            • Dumps the Parquet schema information
            • The main method
            • Pretty-prints values to the given print writer
            • Reads a Parquet file
            • Reads and prints the values from the command line
            • Creates a converter for a field

            parquet-tools Key Features

            No Key Features are available at this moment for parquet-tools.

            parquet-tools Examples and Code Snippets

            No Code Snippets are available at this moment for parquet-tools.

            Community Discussions


            Parquet write to gcs is not queryable by bigquery in nodejs
            Asked 2021-Nov-29 at 15:07

            I'm using parquetjs to create Parquet files and push them to Google Cloud Storage.

            The problem is that BigQuery cannot read the data from the file, but when I use parquet-tools everything looks healthy.



            Answered 2021-Nov-29 at 15:07

            Just pass useDataPageV2: false as an option to parquet.ParquetWriter.openFile(...).

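The answer's original snippet was not captured here. A minimal sketch of what it likely looked like, with a hypothetical schema and output path, is:

```javascript
// Assumes the parquetjs npm package; schema fields and file name are
// invented for illustration.
const parquet = require('parquetjs');

const schema = new parquet.ParquetSchema({
  name: { type: 'UTF8' },
  value: { type: 'DOUBLE' },
});

async function writeFile() {
  // useDataPageV2: false writes data page v1, which BigQuery can read.
  const writer = await parquet.ParquetWriter.openFile(schema, 'out.parquet', {
    useDataPageV2: false,
  });
  await writer.appendRow({ name: 'a', value: 1.0 });
  await writer.close();
}

writeFile();
```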

            Issue with loading Parquet data into Snowflake Cloud Database when written with v1.11.0
            Asked 2020-Jun-22 at 09:19

            I am new to Snowflake, but my company has been using it successfully.

            Parquet files are currently being written with an existing Avro Schema, using Java parquet-avro v1.10.1.

            I have been updating the dependencies in order to use latest Avro, and part of that bumped Parquet to 1.11.0.

            The Avro schema is unchanged. However, when using the Snowflake COPY INTO command, I receive a LOAD FAILED with the error "Error parsing the parquet file: Logical type Null can not be applied to group node", but no other error details. :(

            The problem is that there are no null columns in the files.

            I've cut the Avro schema down, and found that the presence of a MAP type in the Avro schema is causing the issue.

            The field is



            Answered 2020-Jun-22 at 09:19

            Logical type Null can not be applied to group node

            Looking up the error above, it appears that a version of Apache Arrow's parquet libraries is being used to read the file.

            However, looking closer, the real problem lies in the use of legacy types within the Avro based Parquet Writer implementation (the following assumes Java was used to write the files).

            The new logicalTypes schema metadata introduced in Parquet defines many types, including a singular MAP type. Historically, the former convertedTypes schema field supported the use of MAP and MAP_KEY_VALUE for legacy readers. The new writers that use logicalTypes (1.11.0+) should not be using the legacy map type anymore, but work hasn't yet been done to update the Avro-to-Parquet schema conversion to drop the MAP_KEY_VALUE types entirely.

            As a result, the schema field for MAP_KEY_VALUE gets written out with an UNKNOWN value of logicalType, which trips up Arrow's implementation that only understands logicalType values of MAP and LIST (understandably).
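For illustration (field and column names invented here), a parquet-tools schema dump of such a file shows the inner repeated group still carrying the legacy annotation:

```
message spark_schema {
  optional group my_map (MAP) {
    repeated group key_value (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value (UTF8);
    }
  }
}
```

It is this MAP_KEY_VALUE annotation on the inner group that newer logicalTypes-aware readers such as Arrow do not recognize.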

            Consider logging this as a bug against the Apache Parquet project to update their Avro writers to stop nesting the legacy MAP_KEY_VALUE type when transforming an Avro schema to a Parquet one. It should've ideally been done as part of PARQUET-1410.

            Unfortunately this is hard-coded behaviour, and there are no configuration options influencing map types that could help produce a correct file for Apache Arrow (and, by extension, Snowflake). You'll need to use an older version of the writer until a proper fix is released by the Apache Parquet developers.



            How to load key-value pairs (MAP) into Athena from Parquet file?
            Asked 2020-Jun-21 at 20:49

            I have an S3 bucket full of .gz.parquet files. I want to make them accessible in Athena. In order to do this, I am creating a table in Athena that points at the S3 bucket.



            Answered 2020-Jun-17 at 20:16

            You can use an AWS Glue Crawler to automatically derive the schema from your Parquet files.




            INT32 type error when scanning parquet federated table. Bug or Expected behavior?
            Asked 2020-Apr-13 at 15:53

            I am using BigQuery to query an external data source (also known as a federated table), where the source data is a hive-partitioned parquet table stored in google cloud storage. I used this guide to define the table.

            My first query to test this table looks like the following.



            Answered 2020-Apr-13 at 15:53

            Note that the schema of the external table is inferred from the last file, sorted lexicographically by file name, among all files that match the table's source URI. It is therefore possible that this particular Parquet file has a different schema than the one you described, for example an INT32 column with a DATE logical type for the "visitor_partition" field, which BigQuery would infer as DATE.



            spark parquet enable dictionary
            Asked 2020-Mar-27 at 23:18

            I am running a Spark job that writes to Parquet, and I want to enable dictionary encoding for the files written. When I check the files, I see the columns are 'plain dictionary'. However, I do not see any stats for these columns.

            Let me know if I am missing anything.



            Answered 2020-Mar-27 at 23:18

            Got the answer. The parquet-tools version I was using was 1.6; upgrading to 1.10 solved the issue.
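As a quick check, the footer metadata, including per-column encodings and statistics, can be printed with the meta subcommand (file name hypothetical):

```shell
# Print the footer: row groups, per-column encodings (e.g. PLAIN_DICTIONARY)
# and min/max statistics. Older parquet-tools such as 1.6 did not display
# the statistics, which is why they appeared to be missing.
parquet-tools meta part-00000.snappy.parquet
```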



            parquet int96 timestamp conversion to datetime/date via python
            Asked 2020-Mar-16 at 19:01

            I'd like to convert an int96 value such as ACIE4NxJAAAKhSUA into a readable timestamp format like 2020-03-02 14:34:22, or however it would normally be interpreted. I mostly use Python, so I'm looking to build a function that does this conversion. If there's another function that can do the reverse, even better.


            I'm using parquet-tools to convert a raw parquet file (with snappy compression) to raw JSON via the command line.


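The exact command was not captured here; a typical invocation for this (file name hypothetical) uses the cat subcommand with its JSON flag:

```shell
# Snappy decompression is handled transparently; --json emits one
# JSON record per row.
parquet-tools cat --json input.snappy.parquet
```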

            Answered 2020-Mar-16 at 19:01

            parquet-tools will not be able to change the format type from INT96 to INT64. What you are observing in the JSON output is a string representation of the timestamp stored as an INT96 TimestampType. You will need Spark to rewrite this parquet file with the timestamp as an INT64 TimestampType; then the JSON output will produce a timestamp in the format you desire.

            You will need to set a specific config in Spark.
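The specific config was not captured in this snippet. An assumption on my part: the setting being referred to is most likely spark.sql.parquet.outputTimestampType (available since Spark 2.3), which controls whether timestamps are written as legacy INT96 or as INT64:

```python
# Hedged sketch (not from the original answer): build a Spark session that
# writes timestamps as INT64 TIMESTAMP_MICROS instead of legacy INT96.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
    .getOrCreate()
)
```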


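To address the asker's Python request directly, a self-contained decoder is possible, since an INT96 timestamp is simply 8 little-endian bytes of nanoseconds-since-midnight followed by 4 little-endian bytes of Julian day number. This sketch is mine, not from the original answer:

```python
import base64
import datetime

JULIAN_UNIX_EPOCH = 2440588  # Julian day number of 1970-01-01


def int96_to_datetime(b64: str) -> datetime.datetime:
    """Decode a base64-encoded Parquet INT96 timestamp.

    Layout: bytes 0-7 hold nanoseconds since midnight (little-endian),
    bytes 8-11 hold the Julian day number (little-endian).
    """
    raw = base64.b64decode(b64)
    nanos = int.from_bytes(raw[:8], "little")
    julian_day = int.from_bytes(raw[8:12], "little")
    return datetime.datetime(1970, 1, 1) + datetime.timedelta(
        days=julian_day - JULIAN_UNIX_EPOCH, microseconds=nanos // 1000
    )


print(int96_to_datetime("ACIE4NxJAAAKhSUA"))  # 2020-02-10 22:33:33
```

Note that this interprets the value as UTC and truncates sub-microsecond precision.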
            Community Discussions and Code Snippets contain sources from the Stack Exchange Network.



            Install parquet-tools

            You can download it from GitHub.
            You can use parquet-tools like any standard Java library: include the jar files in your classpath. You can also use any IDE to run and debug the parquet-tools component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle.
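Since no releases are published, a typical workflow (jar name and paths are assumptions based on standard Maven conventions) is to clone, build, and run the executable jar:

```shell
git clone https://github.com/wesleypeck/parquet-tools.git
cd parquet-tools
mvn clean package
# Run a subcommand against a Parquet file, e.g. print its schema:
java -jar target/parquet-tools-*.jar schema /path/to/file.parquet
```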


            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, ask on the Stack Overflow community page.

          • CLI

            gh repo clone wesleypeck/parquet-tools
