To unlock realtime, serverless analytics, simply run CREATE SCHEMA analytics; on Hydra. Behind the scenes, Hydra uses DuckDB to run isolated, serverless processing on Postgres’ analytics schema. DuckDB is a fast, in-process SQL database for analytics. Hydra is built on pg_duckdb, an open source project we built to automatically integrate DuckDB’s execution engine and features with Postgres.
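For example, the full setup on a fresh Hydra database is just the schema plus a table inside it (the orders table below is illustrative, not part of Hydra):

```sql
-- Any table created in the analytics schema is processed by DuckDB.
CREATE SCHEMA analytics;

-- Hypothetical example table; use your own columns.
CREATE TABLE analytics.orders (
    id         bigint,
    customer   text,
    amount     numeric,
    created_at timestamptz
);

-- Queries against tables in the analytics schema run on DuckDB automatically.
SELECT customer, sum(amount)
FROM analytics.orders
GROUP BY customer;
```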

Do I have to know how to use DuckDB?

No. Using Hydra is identical to using standard Postgres — Hydra abstracts away the DuckDB execution details. However, your friends will be impressed when you can skillfully explain Hydra internals during game night! 🎲

Serverless analytical processing is enabled automatically when needed: it runs whenever you use your analytics schema.

If you’d like our help or have questions, post in our Discord! That’s the easiest place to reach our engineers, sales team, and founders.

If you’d like to see how DuckDB executed a query, run a normal EXPLAIN query, such as EXPLAIN SELECT * FROM foo, on a table in the analytics schema. The DuckDB execution plan will be present anytime DuckDB is in use.
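A minimal sketch, assuming a table named foo already exists in your analytics schema (the actual plan output depends on your schema and data):

```sql
-- EXPLAIN works the same as in standard Postgres.
EXPLAIN SELECT * FROM analytics.foo;
-- The plan includes DuckDB execution nodes whenever DuckDB handles the query.
```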

Read Scaling Replicas

Hydra employs lightweight compute orchestration so read-only users can avoid resource contention. Read scaling tokens transparently access up to 4X the compute resources to better handle concurrent query workloads. The result is an improved end-user experience on your analytics schema.

Hydra accounts now support scaling up to 4 replicas of your database. When connecting with a read scaling token, each concurrent end user connects to a read scaling replica of the database that is served by its own process. Beyond this limit, applications can gracefully degrade by having multiple end users be served by the same process.

Create read scaling replicas

To add read scaling replicas to your project, please email support@hydra.so.

Permissions and limitations

A read scaling token grants permission for read operations, such as querying tables, but restricts write operations, including:

  • Updating tables

  • Creating new databases

  • Attaching or detaching databases
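In practice, a connection made with a read scaling token can query freely but will be refused on writes; a sketch (analytics.orders is a hypothetical table):

```sql
-- Allowed: read operations.
SELECT count(*) FROM analytics.orders;

-- Rejected: write operations such as updates.
UPDATE analytics.orders SET amount = 0;  -- fails under a read scaling token
```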

How a client uses read scaling replicas

When a client connects to Hydra with a read scaling token:

  • It is assigned to one of the read scaling replicas for the user account.

  • This is in addition to the standard read-write processing that is used normally.

These replicas are eventually consistent, meaning data from read operations may briefly lag behind the latest writes.

Why is Hydra so ducking fast?

Hydra runs pg_duckdb, an extension that enables DuckDB’s high-performance analytics engine in Postgres. DuckDB stands on the shoulders of giants, drawing components and inspiration from open source projects and scientific publications.

To efficiently support this workload, it is critical to reduce the number of CPU cycles expended per individual value. The state of the art in data management for achieving this is either vectorized or just-in-time query execution. DuckDB contains a columnar-vectorized query execution engine, where queries are still interpreted, but a large batch of values (a “vector”) is processed in one operation. This greatly reduces the overhead present in traditional systems such as standard PostgreSQL, MySQL, or SQLite, which process each row sequentially. Vectorized query execution leads to far better performance on OLAP queries.
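Aggregation-heavy OLAP queries are where vectorized execution pays off most; a hedged example (the analytics.orders table is illustrative):

```sql
-- A typical OLAP aggregation: DuckDB processes column values in batches
-- (vectors) rather than row by row, cutting per-value CPU overhead.
SELECT customer,
       count(*)    AS order_count,
       sum(amount) AS total_amount
FROM analytics.orders
GROUP BY customer
ORDER BY total_amount DESC;
```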

Here is an overview of components and scientific publications which have inspired DuckDB’s design:

Limitations

Navigate to the known limitations section of the pg_duckdb repo.