# hyparquet
[npm](https://www.npmjs.com/package/hyparquet)
[workflow status](https://github.com/hyparam/hyparquet/actions)
[mit license](https://opensource.org/licenses/MIT)
[dependencies](https://www.npmjs.com/package/hyparquet?activeTab=dependencies)

Dependency free since 2023!
## What is hyparquet?
Hyparquet is a lightweight, pure JavaScript library for parsing [Apache Parquet](https://parquet.apache.org) files. Apache Parquet is a popular columnar storage format that is widely used in data engineering, data science, and machine learning applications for efficiently storing and processing large datasets.
Hyparquet allows you to read and extract data from Parquet files directly in JavaScript environments, both in Node.js and in the browser. It is designed to be fast, memory-efficient, and easy to use.
## Demo
Online parquet file reader demo available at:
https://hyparam.github.io/hyparquet/
## Features
1. **Performant**: Designed to efficiently process large datasets by only loading the required data, making it suitable for big data and machine learning applications.
2. **Browser-native**: Built to work seamlessly in the browser, opening up new possibilities for web-based data applications and visualizations.
3. **Dependency-free**: Hyparquet has zero dependencies, making it lightweight and easy to install and use in any JavaScript project.
4. **TypeScript support**: The library is written in jsdoc-typed JavaScript and provides TypeScript definitions out of the box.
5. **Flexible data access**: Hyparquet allows you to read specific subsets of data by specifying row and column ranges, giving fine-grained control over what data is fetched and loaded.
## Why hyparquet?
Why make a new parquet parser?
First, existing libraries like [parquetjs](https://github.com/ironSource/parquetjs) are officially "inactive".
Importantly, they do not support the kind of stream processing needed to make a really performant parser in the browser.
And finally, no dependencies means that hyparquet is lean, and easy to package and deploy.
## Usage
Install the hyparquet package from npm:
```bash
npm install hyparquet
```
## Reading Data
### Node.js
To read the entire contents of a parquet file in a Node.js environment:
```js
const { parquetRead } = await import('hyparquet')
const { createReadStream, promises: fs } = await import('fs')

const filename = 'example.parquet'
const { size: byteLength } = await fs.stat(filename)

// Collect a readable stream into an ArrayBuffer
function readStreamToArrayBuffer(readStream) {
  return new Promise((resolve, reject) => {
    const chunks = []
    readStream.on('data', chunk => chunks.push(chunk))
    readStream.on('error', reject)
    readStream.on('end', () => {
      const buffer = Buffer.concat(chunks)
      resolve(buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength))
    })
  })
}

const file = { // AsyncBuffer
  byteLength,
  async slice(start, end) {
    // read file slice (createReadStream's end is inclusive, slice's end is exclusive)
    const readStream = createReadStream(filename, { start, end: end - 1 })
    return await readStreamToArrayBuffer(readStream)
  },
}

await parquetRead({
  file,
  onComplete: data => console.log(data),
})
```
### Browser
Hyparquet supports asynchronous fetching of parquet files over a network.
You can provide an `AsyncBuffer`, which is like a js `ArrayBuffer` but whose `slice` method returns `Promise<ArrayBuffer>`.
```js
const { parquetRead } = await import('https://cdn.jsdelivr.net/npm/hyparquet/src/hyparquet.min.js')

const url = 'https://hyperparam-public.s3.amazonaws.com/wiki-en-00000-of-00041.parquet'
const byteLength = 420296449 // size of the remote file in bytes

const file = { // AsyncBuffer
  byteLength,
  async slice(start, end) {
    // fetch byte range from url
    const headers = new Headers()
    headers.set('Range', `bytes=${start}-${end - 1}`)
    const res = await fetch(url, { headers })
    if (!res.ok || !res.body) throw new Error('fetch failed')
    return res.arrayBuffer()
  },
}

await parquetRead({
  file,
  onComplete: data => console.log(data),
})
```
## Metadata
You can read just the metadata, including schema and data statistics, using the `parquetMetadata` function:
```js
const { parquetMetadata } = await import('hyparquet')
const fs = await import('fs')

const buffer = fs.readFileSync('example.parquet')
const arrayBuffer = new Uint8Array(buffer).buffer
const metadata = parquetMetadata(arrayBuffer)
```
If you're in a browser environment, you'll probably get parquet file data either from a file drag-and-dropped by the user, or fetched from the web.
To load parquet data in the browser from a remote server using `fetch`:
```js
import { parquetMetadata } from 'hyparquet'

const res = await fetch(url)
const arrayBuffer = await res.arrayBuffer()
const metadata = parquetMetadata(arrayBuffer)
```
To parse parquet files from a user drag-and-drop action, see the example in [index.html](index.html).
## Filtering
To read large parquet files, it is recommended that you filter by row and column.
Hyparquet is designed to load only the minimal amount of data needed to fulfill a query.
You can filter rows by range, or select columns by name:
```js
import { parquetRead } from 'hyparquet'

await parquetRead({
  file,
  columns: ['colA', 'colB'], // include columns colA and colB
  rowStart: 100,
  rowEnd: 200,
  onComplete: data => console.log(data),
})
```
## Advanced Usage
### AsyncBuffer
An `AsyncBuffer` is like a js `ArrayBuffer`, but the `slice` method returns `Promise<ArrayBuffer>`:
```typescript
interface AsyncBuffer {
  byteLength: number
  slice(start: number, end?: number): Promise<ArrayBuffer>
}
```
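Since `slice` may be called repeatedly for overlapping metadata and column reads, it can pay to memoize fetches. Here is a minimal sketch (the `cachedAsyncBuffer` helper is hypothetical, not part of hyparquet's API) that wraps any `AsyncBuffer` with a per-range cache:

```javascript
// Hypothetical helper: wrap an AsyncBuffer so that repeated slice()
// calls for the same byte range reuse the first request's result.
function cachedAsyncBuffer(file) {
  const cache = new Map() // 'start-end' -> Promise<ArrayBuffer>
  return {
    byteLength: file.byteLength,
    slice(start, end) {
      const key = `${start}-${end}`
      if (!cache.has(key)) {
        cache.set(key, file.slice(start, end))
      }
      return cache.get(key)
    },
  }
}
```

Note this only deduplicates exact-range repeats; a more thorough cache might also merge overlapping ranges.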
You can read parquet files asynchronously using HTTP Range requests so that only the necessary byte ranges from a `url` will be fetched:
```js
import { parquetRead } from 'hyparquet'

const url = 'https://hyperparam-public.s3.amazonaws.com/wiki-en-00000-of-00041.parquet'
const byteLength = 420296449

await parquetRead({
  file: { // AsyncBuffer
    byteLength,
    async slice(start, end) {
      const headers = new Headers()
      headers.set('Range', `bytes=${start}-${end - 1}`)
      const res = await fetch(url, { headers })
      return res.arrayBuffer()
    },
  },
  onComplete: data => console.log(data),
})
```
## Supported Parquet Files
The parquet format is a sprawling format that includes options for a wide array of compression schemes, encoding types, and data structures.
Supported parquet encodings:
- [X] PLAIN
- [X] PLAIN_DICTIONARY
- [X] RLE_DICTIONARY
- [X] RLE
- [X] BIT_PACKED
- [X] DELTA_BINARY_PACKED
- [X] DELTA_BYTE_ARRAY
- [X] DELTA_LENGTH_BYTE_ARRAY
- [X] BYTE_STREAM_SPLIT
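For a sense of what the simplest of these encodings looks like, PLAIN stores fixed-width values back to back; for INT32 that is just consecutive little-endian 32-bit integers. A standalone sketch for illustration (not hyparquet's internal reader):

```javascript
// Decode a PLAIN-encoded run of INT32 values: the parquet PLAIN
// encoding stores them back to back as little-endian 32-bit ints.
function decodePlainInt32(arrayBuffer, count) {
  const view = new DataView(arrayBuffer)
  const values = []
  for (let i = 0; i < count; i++) {
    values.push(view.getInt32(i * 4, true)) // true = little-endian
  }
  return values
}
```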
## Compression
Supporting every possible compression codec available in parquet would blow up the size of the hyparquet library. In practice, most parquet files use snappy compression.
Parquet compression types supported by default:
- [X] Uncompressed
- [X] Snappy
- [ ] GZip
- [ ] LZO
- [ ] Brotli
- [ ] LZ4
- [ ] ZSTD
- [ ] LZ4_RAW
You can provide custom compression codecs using the `compressors` option.
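For example, a GZIP codec could be supplied from node's built-in zlib. This is a sketch: it assumes the codec signature used by hyparquet-compressors, a function from compressed bytes (plus the expected output length) to decompressed bytes, so check the actual `Compressors` type before relying on it:

```javascript
import { gunzipSync } from 'node:zlib'

// Assumed codec signature: (input: Uint8Array, outputLength: number) => Uint8Array
const compressors = {
  GZIP: (input, outputLength) => new Uint8Array(gunzipSync(input)),
}
// then: await parquetRead({ file, compressors, onComplete: console.log })
```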
## hysnappy
The most common compression codec used in parquet is snappy compression.
Hyparquet includes a built-in snappy decompressor written in javascript.
We developed [hysnappy](https://github.com/hyparam/hysnappy) to make parquet parsing even faster.
Hysnappy is a snappy decompression codec written in C, compiled to WASM.
To use hysnappy for faster parsing of large parquet files, override the `SNAPPY` compressor for hyparquet:
```js
import { parquetRead } from 'hyparquet'
import { snappyUncompressor } from 'hysnappy'

await parquetRead({
  file,
  compressors: {
    SNAPPY: snappyUncompressor(),
  },
  onComplete: console.log,
})
```
Parsing a [420MB wikipedia parquet file](https://huggingface.co/datasets/wikimedia/wikipedia/resolve/main/20231101.en/train-00000-of-00041.parquet) using hysnappy reduces parsing time by 40% (4.1s to 2.3s).
## hyparquet-compressors
You can include support for ALL parquet compression codecs using the [hyparquet-compressors](https://github.com/hyparam/hyparquet-compressors) library.
```js
import { parquetRead } from 'hyparquet'
import { compressors } from 'hyparquet-compressors'
await parquetRead({ file, compressors, onComplete: console.log })
```
## References
- https://github.com/apache/parquet-format
- https://github.com/apache/parquet-testing
- https://github.com/apache/thrift
- https://github.com/apache/arrow
- https://github.com/dask/fastparquet
- https://github.com/duckdb/duckdb
- https://github.com/google/snappy
- https://github.com/ironSource/parquetjs
- https://github.com/zhipeng-jia/snappyjs
## Contributions
Contributions are welcome!
Hyparquet development is supported by an open-source grant from Hugging Face :hugs: