Hyparquet Writer

Hyparquet Writer is a JavaScript library for writing Apache Parquet files. It is designed to be lightweight and fast, and to store data efficiently. It is a companion to the hyparquet library, a JavaScript library for reading parquet files.

Quick Start

To write a parquet file to an ArrayBuffer, call parquetWriteBuffer with a columnData argument. Each column in columnData should contain:

  • name: the column name
  • data: an array of same-type values
  • type: the parquet schema type (optional)

import { parquetWriteBuffer } from 'hyparquet-writer'

const arrayBuffer = parquetWriteBuffer({
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'BYTE_ARRAY' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
})
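
The resulting ArrayBuffer can be read back with the companion hyparquet library. A minimal round-trip sketch, assuming hyparquet's parquetReadObjects function accepts an in-memory ArrayBuffer as its file argument:

import { parquetReadObjects } from 'hyparquet'

// read the rows back as an array of objects, e.g. [{ name: 'Alice', age: 25 }, ...]
const rows = await parquetReadObjects({ file: arrayBuffer })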

Note: if type is not provided, the type will be guessed from the data. The supported parquet types are:

  • BOOLEAN
  • INT32
  • INT64
  • FLOAT
  • DOUBLE
  • BYTE_ARRAY
  • FIXED_LEN_BYTE_ARRAY

Strings are represented in parquet as type BYTE_ARRAY.
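
If you omit type, the writer guesses it from the values as noted above. A minimal sketch relying on that auto-detection (the exact inferred types depend on the data):

import { parquetWriteBuffer } from 'hyparquet-writer'

// no explicit types: strings are written as BYTE_ARRAY, booleans as BOOLEAN,
// and numeric columns get a numeric type inferred from their values
const buffer = parquetWriteBuffer({
  columnData: [
    { name: 'city', data: ['Tokyo', 'Paris', 'Lima'] },
    { name: 'visited', data: [true, false, true] },
  ],
})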

Node.js Write to Local Parquet File

To write a local parquet file in Node.js, call parquetWriteFile with filename and columnData arguments:

const { parquetWriteFile } = await import('hyparquet-writer')

parquetWriteFile({
  filename: 'example.parquet',
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'BYTE_ARRAY' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
})

Note: hyparquet-writer is published as an ES module, so dynamic import() may be required from CommonJS code or the node command line.
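
If your script is itself an ES module (for example a .mjs file, or a package with "type": "module" in package.json), a plain static import works as well; a minimal sketch:

import { parquetWriteFile } from 'hyparquet-writer'

// write a small single-column file to disk
parquetWriteFile({
  filename: 'ids.parquet',
  columnData: [{ name: 'id', data: [1, 2, 3], type: 'INT32' }],
})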

Advanced Usage

Options can be passed to parquetWrite to adjust parquet file writing behavior:

  • writer: a generic writer object
  • compressed: use snappy compression (default true)
  • statistics: write column statistics (default true)
  • rowGroupSize: number of rows in each row group (default 100000)
  • kvMetadata: extra key-value metadata to be stored in the parquet footer

import { ByteWriter, parquetWrite } from 'hyparquet-writer'

const writer = new ByteWriter()
parquetWrite({
  writer,
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'BYTE_ARRAY' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
  compressed: false,
  statistics: false,
  rowGroupSize: 1000,
  kvMetadata: [
    { key: 'key1', value: 'value1' },
    { key: 'key2', value: 'value2' },
  ],
})
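// the ByteWriter collects the output in memory; getBuffer() is assumed here
// to return the written bytes as an ArrayBuffer
const arrayBuffer = writer.getBuffer()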

Converted Types

You can provide additional type hints by adding a converted_type to the columnData elements:

import { parquetWriteBuffer } from 'hyparquet-writer'

parquetWriteBuffer({
  columnData: [
    {
      name: 'dates',
      data: [new Date(1000000), new Date(2000000), new Date(3000000)],
      type: 'INT64',
      converted_type: 'TIMESTAMP_MILLIS',
    },
    {
      name: 'json',
      data: [{ foo: 'bar' }, { baz: 3 }, 'imastring'],
      type: 'BYTE_ARRAY',
      converted_type: 'JSON',
    },
  ],
})

Most converted types will be auto-detected if you provide data without explicit types. However, it is still recommended that you provide type information when possible: auto-detection cannot infer a type from zero rows (which throws an exception), and floating point values that happen to be whole numbers may be typed as integers.
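
For example, the whole-number values below would likely be inferred as an integer column, so declaring the type keeps them as floating point; a minimal sketch of that pitfall:

import { parquetWriteBuffer } from 'hyparquet-writer'

// [1, 2, 3] alone could be inferred as an integer column;
// an explicit DOUBLE type keeps the column floating point
const buffer = parquetWriteBuffer({
  columnData: [
    { name: 'price', data: [1, 2, 3], type: 'DOUBLE' },
  ],
})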

References