refactor: Refactor Parquet reader to avoid loading entire file in memory at once#184
Conversation
stm >= 2.5 && < 3,
filepath >= 1.4 && < 2,
Glob >= 0.10 && < 1,
streamly-core,
Cabal check fails without version bounds. @adithyaov please advise on bounds.
src/DataFrame/IO/Parquet.hs
{ selectedColumns = Nothing
, predicate = Nothing
, rowRange = Nothing
, forceNonSeekable = Nothing
Rather than putting this in the public API we'd much rather make a separate testing endpoint.
Something like:
-- production path
readParquetWithOpts opts path = withSeekable path ReadMode (readHelper opts path)
-- This would be the testing function.
_readParquetWithOpts opts path = withFilebuffer path ReadMode (readHelper opts path)

I actually don't know if there is a good way to inject behaviour into Haskell tests this way. Let me ask around and get back to this, but separating the functions seems like a good first step to me.
Removed that option and, as you suggested, I'm now using a function + partial application instead.
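For illustration, a minimal sketch of that shape (all names below are hypothetical, not the PR's actual API): the byte-fetching behaviour becomes a function parameter, and the production and testing variants are just partial applications.

import System.IO
import qualified Data.ByteString as BS

-- Fetch `len` bytes at `offset` from some source.
type FetchBytes = Integer -> Int -> IO BS.ByteString

-- Production path: seek within the file handle.
seekFetch :: Handle -> FetchBytes
seekFetch h offset len = hSeek h AbsoluteSeek offset >> BS.hGet h len

-- Testing / non-seekable path: slice a fully buffered copy of the file.
bufferedFetch :: BS.ByteString -> FetchBytes
bufferedFetch buf offset len = pure (BS.take len (BS.drop (fromIntegral offset) buf))

Both seekFetch h and bufferedFetch buf have type FetchBytes, so the decoding code stays identical while tests swap in the buffered variant.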
Fixed.
Resolved a branch conflict due to the newly introduced littleEndian helper. Since readMetadataSizeFromFooterSlice now assumes it receives exactly the 8-byte footer slice, it can be simplified further using the new littleEndianWord32 helper.
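For reference, a minimal sketch of that simplification, assuming the slice is exactly the last 8 bytes of the file (a 4-byte little-endian metadata length followed by the 4-byte "PAR1" magic); the helper bodies here are assumptions, not the PR's code:

import Data.Bits (shiftL, (.|.))
import Data.Word (Word32)
import qualified Data.ByteString as BS

-- Assumed helper: decode a little-endian Word32 from the first four bytes.
littleEndianWord32 :: BS.ByteString -> Word32
littleEndianWord32 = BS.foldr (\b acc -> (acc `shiftL` 8) .|. fromIntegral b) 0 . BS.take 4

-- Given exactly the 8-byte footer slice, the metadata size is simply the
-- first little-endian word; the remaining four bytes are the magic.
readMetadataSizeFromFooterSlice :: BS.ByteString -> Word32
readMetadataSizeFromFooterSlice = littleEndianWord32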
Read Parquet metadata from the footer and fetch column chunk bytes by seeking, instead of loading the entire file into memory up front. This keeps the current page decoding path intact while reducing peak memory usage for normal file reads: only the column chunks that are needed are loaded into memory, one column chunk at a time, so extra memory is bounded by the size of a single column chunk. This is also the first step towards a streaming reader.
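A hedged sketch of that flow on a seekable handle, reusing readMetadataSizeFromFooterSlice as sketched above (names and structure are illustrative, not the PR's code):

import System.IO
import qualified Data.ByteString as BS

-- Read the 8-byte footer from the end of the file, take the metadata
-- length from it, then seek back and read just the metadata block;
-- the file as a whole is never buffered.
readFooterAndMetadata :: Handle -> IO BS.ByteString
readFooterAndMetadata h = do
  hSeek h SeekFromEnd (-8)
  footer <- BS.hGet h 8
  let metaLen = fromIntegral (readMetadataSizeFromFooterSlice footer)
  hSeek h SeekFromEnd (negate (fromIntegral metaLen + 8))
  BS.hGet h metaLen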
…ication to inject testing behavior while preserving readParquetWithOpts API
force-pushed from 5994fa5 to 6d972f4
Read Parquet metadata from the footer and fetch column chunk bytes by seeking, instead of loading the entire file into memory up front.
This keeps the current page decoding path intact while reducing peak memory usage for normal file reads, ensuring that only the column chunks that are needed are loaded into memory, one column chunk at a time, so extra memory is bounded by the size of a single column chunk.
(For the extra-memory <= 1 chunk bound to hold completely, it still depends on the page decoding behavior: unevaluated thunks must not retain references to a chunk's ByteString. I've switched Parquet.hs to Data.Map.Strict; once we move to a fully streamed reader this should no longer be a problem.)
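A small illustration of the thunk issue (not code from the PR): with a lazy map the inserted value is an unevaluated thunk that retains the whole chunk, while Data.Map.Strict forces the value at insert time, so the chunk's ByteString can be freed.

import qualified Data.ByteString as BS
import qualified Data.Map.Strict as Map

-- With Data.Map.Lazy, `BS.length chunk` would be stored as a thunk that
-- keeps the entire chunk ByteString alive until the value is demanded.
-- Data.Map.Strict evaluates the value on insert, dropping that reference.
recordChunkSize :: String -> BS.ByteString -> Map.Map String Int -> Map.Map String Int
recordChunkSize name chunk = Map.insert name (BS.length chunk)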
This is also the first step towards a streaming reader.
Compatibility with non-seekable sources is also maintained.
Tests added and passing.
Ready for feedback.
Related issue: #133