RMarcus 4 days ago

This is awesome, thanks for creating this. I've had to write some absolutely wonky scripts to dump a PostgreSQL database into Parquet, or read a Parquet file into PostgreSQL. Normally some terrible combination of psycopg and pyarrow, which worked, but it was ad-hoc and slightly different every time.
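
For reference, the extension seems to turn that whole workflow into plain COPY. A rough sketch of the round trip (table and path names here are made up; check the pg_parquet README for the exact options):

    -- one-time setup
    CREATE EXTENSION pg_parquet;

    -- dump a table (or any query) to a Parquet file
    COPY (SELECT * FROM measurements) TO '/tmp/measurements.parquet' WITH (format 'parquet');

    -- load a Parquet file back into a table with a matching schema
    COPY measurements FROM '/tmp/measurements.parquet' WITH (format 'parquet');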

A lot of other commenters are talking about `pg_duckdb` which maybe also could've solved my problem, but this looks quite simple and clean.

I hope for some kind of near-term future where there's some standardish analytics-friendly data archival format. I think Parquet is the closest thing we have now.

linuxhansl 4 days ago

Parquet by itself is actually not that interesting. The extension should be able to read (and even write) Iceberg tables.

Also, how does it compare to pg_duckdb (which adds DuckDB execution to Postgres, including reading Parquet and Iceberg) or duckdb_fdw (which wraps a DuckDB database that can be in-memory and contain only pass-through Iceberg/Parquet tables)?

  • mslot 4 days ago

    (Marco from Crunchy Data)

    With PostgreSQL extensions, we find it's most effective to have single-purpose modular extensions.

    For instance, I created pg_cron a few years ago, and it's on basically every PostgreSQL service because it does one thing and does it well.

    We wanted to create a lightweight implementation of Parquet support that does not pull a multi-threaded library into every Postgres process.

    When you get to more complex features, a lot of questions around trade-offs, user experience, and deployment model start appearing. For instance, when querying an Iceberg table, caching becomes quite important, but that raises lots of other questions around cache management. Also, how do you deal with that memory-hungry, multi-threaded query engine running in every process without things constantly falling over?

    It's easier to answer those questions in the context of a managed service where you control the environment, so we have a product that can query Iceberg/Parquet/CSV/etc. in S3, does automatic caching, figures out the region of your bucket, can create tables directly from files, and uses DuckDB to accelerate queries in a reliable manner. This is partially powered by a set of custom extensions, partially by other things running on the managed service. https://docs.crunchybridge.com/analytics

    However, some components can be neatly extracted and shared broadly like COPY TO/FROM Parquet. We find it very useful for archiving old partitions, importing public and private data sets, preparing data for analytics, and moving data between PostgreSQL servers.
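
    For the archiving case specifically, a rough sketch (partition, table, and bucket names are hypothetical; exact COPY options as documented in the pg_parquet docs):

        -- write an old partition out to object storage as Parquet
        COPY events_2023_q1 TO 's3://my-archive-bucket/events/2023_q1.parquet' WITH (format 'parquet');

        -- once the archive is verified, detach and drop the partition
        ALTER TABLE events DETACH PARTITION events_2023_q1;
        DROP TABLE events_2023_q1;

        -- restoring later is the reverse direction (into a recreated, empty partition)
        COPY events_2023_q1 FROM 's3://my-archive-bucket/events/2023_q1.parquet' WITH (format 'parquet');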

  • fulafel 4 days ago

    Having the famously crashy DuckDB share a process and memory with PostgreSQL doesn't seem like the most robust setup.

    • memhole 3 days ago

      Famously crashy? Any incidents you can share? I ask because I've only had good experiences.

      • fulafel 3 days ago

        I haven't used it, but I have gone through their issue tracker and seen a lot of comments to this effect, both here and in other places on the internet. (Unverifiable: also some hearsay from colleagues.)

        Not saying they're doing it wrong; it just seems they have made different stability vs. performance tradeoffs than PG has.

    • skeptrune 4 days ago

      I had the exact same reaction

  • AdamProut 4 days ago

    Had a similar thought. Azure Postgres has something similar to pg_parquet (pg_azure_storage), but we're looking into replacing it with pg_duckdb assuming the extension continues to mature.

    It would be great if the Postgres community could get behind one good open-source extension for the various columnstore data use cases (querying data stored in an open columnstore format such as Delta or Iceberg being one of them). pg_duckdb seems to have the best chance of becoming the go-to extension for this.

    • mslot 4 days ago

      Fun fact, I created pg_azure_storage :)

      • brinox 4 days ago

        I was just wondering if pg_parquet could be combined with pg_azure_storage to write Parquet files to Azure Storage.

        I had problems with pg_azure_storage in the past because the roles pg_read_server_files and pg_write_server_files are unassignable on Azure PostgreSQL databases, which makes using `COPY {FROM,TO}` impossible.

        • mslot 3 days ago

          Azure is not supported as a backend in pg_parquet right now, but it shouldn't be hard to add (contributions welcome!).

          https://github.com/CrunchyData/pg_parquet

          It would not be safe to let every user access object storage. Therefore, pg_parquet has two roles, parquet_object_store_read and parquet_object_store_write, that grant permission to COPY FROM/TO object storage (but not the local file system).
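
          Concretely, that means an administrator opts users in explicitly, something like this (the user names are placeholders):

              -- without these grants, COPY to or from an object storage URI is rejected
              GRANT parquet_object_store_read TO reporting_user;
              GRANT parquet_object_store_write TO etl_user;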

          In pg_azure_storage there is a comparable azure_storage_admin role that needs to be granted to users who need Azure Blob Storage permissions.

whalesalad 4 days ago

I wish RDS made it easy to add custom extensions like this.

  • wdb 4 days ago

    or Google Cloud

  • treefarmer 4 days ago

    Yeah, I'm still surprised they haven't added a list of unsupported extensions (ones you can install but whose performance they're not responsible for).

    • bastawhiz 3 days ago

      Amazingly, their own aws_s3 extension isn't even supported in their multi-AZ cluster configuration.

oulipo 4 days ago

Cool, would this be better than using a ClickHouse / DuckDB extension that reads Postgres and saves to Parquet?

What would be recommended for regularly exporting old data to S3 as Parquet files? A cron job that launches a second Postgres process to connect to the database and extract the data, or using the regular database instance? Doesn't that slow down the instance too much?

  • craigkerstiens 4 days ago

    This alone wouldn't be a full replacement. We do have a full product that does that, with customers seeing great performance in production. Crunchy Bridge for Analytics does something similar by embedding DuckDB inside Postgres, though for users that is largely an implementation detail. We support Iceberg as well and have a lot more coming, basically to allow for seamless analytics on Postgres: building on what Postgres is good at, Iceberg for storage, and DuckDB for vectorized execution.

    That isn't fully open source at this time, but it has been production grade for some time. This was one piece that makes getting there easier for folks and felt like a good standalone bit to open source and share with the broader community. We can also see where this makes sense by itself for certain use cases. As you sort of point out, if you had time-series partitioned data, leveraged pg_partman for new partitions, and used pg_cron (which this same set of people authored), you could automatically archive old partitions to Parquet but still have them available for analysis if needed, roughly along the lines of the sketch below.
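
    As an untested sketch (partition name, bucket, and schedule are all made up; COPY options per the pg_parquet docs):

        -- run at 03:00 on the 1st of each month; a real job would compute the
        -- partition name dynamically instead of hard-coding it
        SELECT cron.schedule(
          'archive-old-partition',
          '0 3 1 * *',
          $$COPY events_2024_09 TO 's3://archive-bucket/events/2024_09.parquet' WITH (format 'parquet')$$
        );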

aamederen 4 days ago

Congratulations! I'm happy to see the PostgreSQL license.

jakozaur a day ago

It's good for small data, but the Iceberg format would be nicer for bigger data sets.

jeadie 4 days ago

Why not just federate Postgres and Parquet files? That way the query planner can push down as much of the query as possible and reduce how much data has to move around.

fforflo 3 days ago

I can even see myself using this as an alternative to foreign data wrappers and/or pg_dump.