```sh
curl -sfL https://install.pgx.sh | sh -
```

Or, with Homebrew:

```sh
brew install pgxman/tap/pgxman
```

Step 2: Install the Hydra Package

```sh
pgxman install hydra_pg_duckdb
```
Step 3: Add configuration for Hydra to your Postgres config file at `/etc/postgresql/{version}/main/postgresql.conf`

Add the following settings:

```ini
shared_preload_libraries = 'pg_duckdb'
duckdb.hydra_token = 'fetch token at http://start.hydra.so/get-started'
```
Fetch an access token for free from the URL above and paste it in.

Step 4: Restart Postgres and Create Extension

```sh
sudo service postgresql restart
```

```sql
create extension pg_duckdb;
```
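Optionally, you can verify that the extension was created by checking the Postgres catalog:

```sql
-- Optional sanity check: confirm pg_duckdb is registered in this database.
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'pg_duckdb';
```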
🎉 Installation Complete! Next, check out the Quick Start guide to start using Serverless Analytics on Postgres.

| Database Name | Relative time (lower is better) |
|---|---|
| Hydra | 1.51 |
| ClickHouse (c6a.4XL) | 1.93 |
| Snowflake (128x4XL) | 2.71 |
| Redshift (serverless) | 2.80 |
| TimescaleDB | 16.76 |
| AlloyDB | 34.52 |
| AWS Aurora | 277.78 |
| PostgreSQL (indexes) | 35.20 |
| Postgres (no indexes) | 2283 |
- The `AS (id bigint, name text)` syntax is no longer supported when using `read_parquet`, `iceberg_scan`, etc. The new syntax is shown in the sketch below.
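A minimal sketch of the new access style, assuming a Parquet file with `id` and `name` columns (the file path and column names are illustrative):

```sql
-- Columns are now inferred automatically; individual columns are referenced
-- through the row variable instead of an AS (...) column list.
SELECT * FROM read_parquet('file.parquet');
SELECT r['id'], r['name'] FROM read_parquet('file.parquet') r;
```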
- A new `duckdb.query` function, which allows using DuckDB query syntax in Postgres (see the sketch after this list).
- Support for the `approx_count_distinct` DuckDB aggregate.
- Support for the `bytea` (aka blob), `uhugeint`, `jsonb`, `timestamp_ns`, `timestamp_ms`, `timestamp_s` & `interval` types.
- A new `duckdb.allow_community_extensions` setting.
- Support for using `duckdb.raw_query`, `duckdb.cache_info`, `duckdb.cache_delete` and `duckdb.recycle_db` as non-superusers.
- Expanded `COPY` command support, which allows using `PARTITION_BY` as one of the `COPY` options.
- Correct handling of `timestamp with timezone` fields by enabling DuckDB's `icu` extension by default.
- The `read_parquet` functions can now be used when not using superuser privileges.
- A new `duckdb.delta_scan(...)` function.
- A new `duckdb.read_json(...)` function.
- Support for `float`, `numeric` and `uuid` arrays.
- New `duckdb.cache_info()` and `duckdb.cache_delete()` functions (see the sketch after this list).
- A new `scope` column in the `duckdb.secrets` table.
- Known DuckDB extensions are now installed automatically, so `duckdb.install_extension()` is usually not necessary anymore.
- Support for `CREATE SCHEMA AUTHORIZATION`.
- Support for `DEFAULT` values.
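For illustration, a brief sketch of how a few of these additions can be used (the `events` table and `user_id` column are hypothetical):

```sql
-- Run a query written directly in DuckDB's SQL dialect.
SELECT * FROM duckdb.query('SELECT 42 AS answer');

-- Use the approx_count_distinct aggregate; the query needs to run on the
-- DuckDB engine, e.g. by forcing DuckDB execution for the session.
SET duckdb.force_execution = true;
SELECT approx_count_distinct(user_id) FROM events;

-- Inspect locally cached remote files.
SELECT * FROM duckdb.cache_info();
```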
- `SELECT` queries executed by the DuckDB engine can directly read Postgres tables. (If you only query Postgres tables you need to run `SET duckdb.force_execution TO true`; see the IMPORTANT section above for details.)
- Read Parquet and CSV files from object storage:
  - `SELECT n FROM read_parquet('s3://bucket/file.parquet') AS (n int)`
  - `SELECT n FROM read_csv('s3://bucket/file.csv') AS (n int)`
- Run `SELECT duckdb.install_extension('iceberg')` and read Iceberg files with `iceberg_scan`.
- Write query results, or entire tables, to object storage:
  - `COPY (SELECT foo, bar FROM baz) TO 's3://...'`
  - `COPY table TO 's3://...'`
- `JOIN` data in object storage/MotherDuck with Postgres tables, views, and materialized views (see the sketch below).
- Create temporary tables backed by DuckDB using `CREATE TEMP TABLE ... USING duckdb`.
- Install DuckDB extensions using `SELECT duckdb.install_extension('extension_name');`
- Toggle DuckDB execution on or off with `SET duckdb.force_execution = true|false`.
- Cache remote objects locally using `SELECT duckdb.cache('path', 'type');` where `path` is a remote object (e.g. HTTPFS/S3/GCS/R2) and `type` is either `parquet` or `csv`.
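To illustrate the `JOIN` and temporary-table features above, here is a minimal, hypothetical sketch (the `users` table and all column names are invented):

```sql
-- Create a temporary table stored in DuckDB's columnar format.
CREATE TEMP TABLE recent_events (
    user_id bigint,
    event_type text
) USING duckdb;

INSERT INTO recent_events VALUES (1, 'login'), (2, 'purchase');

-- Join the DuckDB-backed temp table with an ordinary Postgres table;
-- because a DuckDB table is involved, the query runs on the DuckDB engine.
SELECT u.name, e.event_type
FROM users u
JOIN recent_events e ON e.user_id = u.id;
```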