To enable serverless analytics, simply create a table. Hydra uses its analytics-optimized columnstore as the default table type. Columnstore tables have full read and write capabilities and can be joined with standard Postgres (rowstore) tables. Serverless processing scales compute automatically to maximize performance. To scale reads further, see the read-scaling replicas documentation.
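As a minimal sketch, assuming tables created in the analytics schema default to the columnstore type (all table and column names here are hypothetical):

```sql
-- Tables in the analytics schema use the analytics-optimized
-- columnstore by default.
CREATE TABLE analytics.page_views (
    viewed_at timestamptz,
    user_id   bigint,
    url       text
);

-- Columnstore tables support full reads and writes...
INSERT INTO analytics.page_views VALUES (now(), 42, '/pricing');

-- ...and can be joined with standard rowstore tables.
SELECT u.email, count(*)
FROM analytics.page_views pv
JOIN public.users u ON u.id = pv.user_id
GROUP BY u.email;
```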
Hydra uses DuckDB to perform isolated serverless processing on these tables in Postgres. DuckDB is a fast, in-process SQL database optimized for analytics. Hydra integrates DuckDB execution and features with Postgres using pg_duckdb, an open-source project we co-developed with the creators of DuckDB.
No, using Hydra is identical to using standard Postgres; Hydra abstracts away the DuckDB execution details. However, your friends will be impressed when you can skillfully explain Hydra internals during game night! 🎲

Serverless analytical processing is enabled automatically when needed. It runs whenever you read or write to your analytics tables.

If you'd like our help or have questions, post a quick question in Discord! That's the easiest place to find our engineering, sales, and founders.

If you'd like to see how DuckDB executed a query, run a normal EXPLAIN query, such as EXPLAIN SELECT * FROM foo, on a table in the analytics schema. The DuckDB execution plan will be present anytime DuckDB is in use. Here's an example Hydra explain plan:
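A hedged sketch of what such a plan can look like (the table name is hypothetical, and the exact node names, costs, and layout vary by pg_duckdb version); the key signal is a DuckDB custom-scan node appearing in the plan whenever DuckDB executes the query:

```sql
EXPLAIN SELECT count(*) FROM analytics.page_views;
--                QUERY PLAN
-- ----------------------------------------
--  Custom Scan (DuckDBScan)
--    ...DuckDB execution details...
```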
Hydra runs pg_duckdb, an extension that enables DuckDB's high-performance analytics engine in Postgres. DuckDB stands on the shoulders of giants, drawing components and inspiration from open-source projects and scientific publications.
To efficiently support this workload, it is critical to reduce the number of CPU cycles expended per individual value. The state of the art in data management for achieving this is either vectorized or just-in-time query execution engines. DuckDB contains a columnar-vectorized query execution engine: queries are still interpreted, but a large batch of values (a "vector") is processed in one operation. This greatly reduces the overhead present in traditional systems such as standard PostgreSQL, MySQL, or SQLite, which process each row sequentially. Vectorized query execution leads to far better performance on OLAP queries.

Here is an overview of components and scientific publications that have inspired DuckDB's design:
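To illustrate, an aggregation like the following (table and column names are hypothetical) is exactly the shape of query that benefits: under vectorized execution the engine evaluates count() and avg() over batches of column values at a time, rather than interpreting the expressions once per row.

```sql
-- Scans only the referenced columns and processes their
-- values in vectors (batches), not row by row.
SELECT url, count(*) AS views, avg(duration_ms) AS avg_duration
FROM analytics.page_views
GROUP BY url;
```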
SQL inequality joins: DuckDB's inequality join implementation uses the IEJoin algorithm, as described in the paper "Lightning Fast and Space Efficient Inequality Joins" by Zuhair Khayyat, William Lucia, Meghna Singh, Mourad Ouzzani, Paolo Papotti, Jorge-Arnulfo Quiané-Ruiz, Nan Tang, and Panos Kalnis.
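For example, a join whose predicate consists of inequality conditions (table and column names here are hypothetical) is a candidate for IEJoin:

```sql
-- Match each event to the session whose time window contains it;
-- the join predicate is two inequalities, with no equality condition.
SELECT e.event_id, s.session_id
FROM events e
JOIN sessions s
  ON e.ts >= s.started_at
 AND e.ts <  s.ended_at;
```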
Compression of floating-point values: DuckDB supports multiple algorithms for compressing floating-point values:
Chimp by Panagiotis Liakos, Katia Papakonstantinopoulou and Yannis Kotidis