# Hyparquet Writer
Hyparquet Writer is a JavaScript library for writing Apache Parquet files. It is designed to be lightweight, fast, and space-efficient. It is a companion to the hyparquet library, a JavaScript library for reading parquet files.
## Quick Start
To write a parquet file to an ArrayBuffer, use `parquetWriteBuffer` with a `columnData` argument. Each column in `columnData` should contain:

 - `name`: the column name
 - `data`: an array of same-type values
 - `type`: the parquet schema type (optional)
```js
import { parquetWriteBuffer } from 'hyparquet-writer'

const arrayBuffer = parquetWriteBuffer({
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'STRING' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
})
```
Note: if `type` is not provided, the type will be guessed from the data. The supported parquet types are:

 - `BOOLEAN`
 - `INT32`
 - `INT64`
 - `FLOAT`
 - `DOUBLE`
 - `BYTE_ARRAY`
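If your data starts out as rows of objects rather than columns, it can be pivoted into the `columnData` shape with a small helper. This is a sketch in plain JavaScript; `rowsToColumns` is a hypothetical helper name, not part of hyparquet-writer, and it leaves `type` undefined so the writer guesses types from the data:

```js
// Pivot an array of row objects into hyparquet-writer's columnData shape.
// rowsToColumns is a hypothetical helper, not part of the library.
function rowsToColumns(rows) {
  const names = Object.keys(rows[0] ?? {})
  return names.map(name => ({
    name,
    data: rows.map(row => row[name]),
  }))
}

const columnData = rowsToColumns([
  { name: 'Alice', age: 25 },
  { name: 'Bob', age: 30 },
])
// → [{ name: 'name', data: ['Alice', 'Bob'] },
//    { name: 'age', data: [25, 30] }]
```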
## Node.js Write to Local Parquet File
To write a local parquet file in Node.js, use `parquetWriteFile` with arguments `filename` and `columnData`:
```js
const { parquetWriteFile } = await import('hyparquet-writer')

parquetWriteFile({
  filename: 'example.parquet',
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'STRING' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
})
```
Note: hyparquet-writer is published as an ES module, so dynamic `import()` may be required on the command line.
## Advanced Usage
Options can be passed to `parquetWrite` to adjust parquet file writing behavior:

 - `writer`: a generic writer object
 - `compression`: use snappy compression (default true)
 - `statistics`: write column statistics (default true)
 - `rowGroupSize`: number of rows in each row group (default 100000)
 - `kvMetadata`: extra key-value metadata to be stored in the parquet footer
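To illustrate what `rowGroupSize` controls, the sketch below shows how a row count would be partitioned into row groups. This is illustrative plain JavaScript, not library code:

```js
// Illustrative only: compute the [start, end) row ranges that a given
// rowGroupSize would split the data into, one range per row group.
function rowGroupRanges(numRows, rowGroupSize) {
  const ranges = []
  for (let start = 0; start < numRows; start += rowGroupSize) {
    ranges.push([start, Math.min(start + rowGroupSize, numRows)])
  }
  return ranges
}

console.log(rowGroupRanges(2500, 1000))
// → [[0, 1000], [1000, 2000], [2000, 2500]]
```

Smaller row groups allow readers to skip more data using column statistics, at the cost of more per-group overhead in the file.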
```js
import { ByteWriter, parquetWrite } from 'hyparquet-writer'

const writer = new ByteWriter()
parquetWrite({
  writer,
  columnData: [
    { name: 'name', data: ['Alice', 'Bob', 'Charlie'], type: 'STRING' },
    { name: 'age', data: [25, 30, 35], type: 'INT32' },
  ],
  compression: false,
  statistics: false,
  rowGroupSize: 1000,
  kvMetadata: {
    'key1': 'value1',
    'key2': 'value2',
  },
})
const arrayBuffer = writer.getBuffer()
```
