Next steps for Postgres pluggable storage

2023-05-01

You might have recently read “The Part of PostgreSQL We Hate the Most”, which highlights some of the deficiencies of Postgres storage. Fortunately, there is ongoing work in the Postgres ecosystem to solve this using “pluggable storage”.

This article explores the history of Postgres pluggable storage and the possibility of landing it in the Postgres core.

What is pluggable storage?

Pluggable storage gives developers the ability to use different storage engines for different tables within the same database.

Developers will be able to choose a storage method that is optimized for their specific needs. Some tables could be configured for high transactional loads, others for analytics workloads, and still others for archiving.
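
In SQL terms, per-table storage selection already has a syntax: the USING clause on CREATE TABLE. A minimal sketch follows; note that heap is the only table access method that ships with Postgres today, and the “columnar” method in the second statement is a placeholder for whatever an extension might provide.

```sql
-- Postgres 12+ lets you choose a table access method per table with USING.
CREATE TABLE orders (
  id       bigint GENERATED ALWAYS AS IDENTITY,
  customer text,
  total    numeric
) USING heap;  -- "heap" is the only built-in table access method

-- Hypothetical: only works if an extension registers a "columnar" method.
CREATE TABLE orders_analytics (
  order_day date,
  revenue   numeric
) USING columnar;
```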

Something like this is already available in MySQL, which has used InnoDB as its default storage engine since MySQL 5.5. InnoDB replaced MyISAM, which had its own set of problems.
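
For comparison, MySQL exposes the same idea through the ENGINE clause. A rough sketch (table and column names are illustrative):

```sql
-- MySQL: choose a storage engine per table with ENGINE.
CREATE TABLE orders (
  id    BIGINT PRIMARY KEY,
  total DECIMAL(10, 2)
) ENGINE = InnoDB;   -- the default since MySQL 5.5

CREATE TABLE old_logs (
  line TEXT
) ENGINE = MyISAM;   -- the older default, still selectable per table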

The “pluggable” part refers to the ability for developers to build their own storage methods, similar to Postgres extensions. While some databases offer multiple built-in storage methods by default, Postgres follows a core philosophy of flexibility and extensibility, and a pluggable approach makes a lot of sense.
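
To show what “pluggable” looks like in practice, here is a sketch of how an extension exposes a new table access method. The function and extension names are made up; the handler would be a C function in the extension’s shared library that returns a TableAmRoutine struct.

```sql
-- Register the handler function shipped in the extension's shared library.
CREATE FUNCTION my_tableam_handler(internal)
  RETURNS table_am_handler
  AS 'my_extension', 'my_tableam_handler'
  LANGUAGE C STRICT;

-- Expose it as a table access method.
CREATE ACCESS METHOD my_storage
  TYPE TABLE
  HANDLER my_tableam_handler;

-- From then on, any table can opt in.
CREATE TABLE events (payload jsonb) USING my_storage;
```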

Current progress for pluggable storage

In version 12, PostgreSQL introduced basic support for pluggable storage, with the goal of adding ZHeap. Andres Freund and several contributors enabled that feature by refactoring the storage layer and introducing the table access method (TAM) API.
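
You can see the result of that work in any Postgres 12+ instance by inspecting the pg_am catalog. On a stock install this returns only the built-in heap method:

```sql
-- List the table access methods known to this instance.
SELECT amname, amhandler, amtype
FROM pg_am
WHERE amtype = 't';  -- 't' = table access method, 'i' = index access method
```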

An early version of ZHeap was released with promising results, providing an undo log and addressing some long-standing table bloat issues in Postgres.

While it was a great start, work on ZHeap and the TAM API unfortunately appears to have stalled. This leaves a few holes that need to be fixed before the community can use the TAM API for pluggable storage. In particular:

  • The TAM API doesn’t provide much flexibility for working with indexes. For example, when you update a row, updating its indexes is an all-or-nothing procedure.
  • Row versions must be identified by the pairing of a block number and an offset number. This rules out some useful features, like index-organized tables or undo-based row versioning (the ctid example below shows this block-and-offset addressing).
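
To make the second limitation concrete: every heap row version is addressed by its ctid, a (block number, offset number) pair. A storage engine that doesn’t lay rows out in heap pages, such as an index-organized table, has no natural value to put there. A minimal sketch (table and data are illustrative):

```sql
CREATE TABLE t (id int, val text);
INSERT INTO t VALUES (1, 'a'), (2, 'b');

SELECT ctid, id, val FROM t;
--  ctid  | id | val
-- -------+----+-----
--  (0,1) |  1 | a     -- block 0, offset 1
--  (0,2) |  2 | b     -- block 0, offset 2
```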

Even without a full API for pluggable storage, there are a couple of solutions available on the market today:

  • Citus’s columnar storage, which powers Microsoft’s Hyperscale offering (basic usage is sketched after this list). Citus has a few limitations due to the restrictions of the current API.
  • EDB’s Advanced Storage Pack, which appears to provide some enhancements to heap storage, although the source is not available for us to confirm.
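
As a sketch of the Citus approach: the extension registers a “columnar” table access method, and individual tables opt in with USING. Depending on the Citus version, the access method ships with the citus extension or the standalone citus_columnar extension; the table definition here is illustrative.

```sql
CREATE EXTENSION citus_columnar;

CREATE TABLE page_views (
  viewed_at  timestamptz,
  page       text,
  visitor_id bigint
) USING columnar;
```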

The future of pluggable storage

Right now, pluggable storage requires a fork of Postgres. But there are several initiatives in the Postgres community that we’re excited about.

  • Custom resource managers, added in Postgres 15, expand write-ahead log (WAL) support for table access methods (you can inspect the registered resource managers with the query after this list).
  • The OrioleDB team started with a fork of Postgres and has been quietly upstreaming the code that would make pluggable storage a reality. Against Postgres 14, their patchset was ~5,000 lines of code, representing the total changes required in their fork. Against the upcoming Postgres 16 release it is ~2,000 lines, meaning around 60% of those changes are already committed to Postgres core. (Disclosure: Supabase started sponsoring OrioleDB in 2022.)
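
If you are running Postgres 15 or later, you can list the WAL resource managers registered in your instance. A table access method that defines its own WAL records through a custom resource manager would appear here with rm_builtin = false; on a stock install only the built-in managers are listed.

```sql
SELECT rm_id, rm_name, rm_builtin
FROM pg_get_wal_resource_managers()
ORDER BY rm_id;
```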

Alexander Korotkov, the maintainer of OrioleDB, will be talking at PGCon 2023 about some of the changes they are making. They are tackling many "wicked Postgres problems": dual pointers instead of buffer mapping, row-level WAL instead of block-level WAL, undo logs instead of bloat-prone MVCC, and index-organized tables instead of the heap.

Timelines for pluggable storage

Ambitiously, many of the changes required for pluggable storage could land in Postgres 17. If that happens, new storage engines like OrioleDB would be available as simple extensions for developers to use in any Postgres instance. This paves the way for other pluggable storage engines, much like MySQL’s support for multiple storage engines.
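
Here is what using such an engine could look like. This mirrors OrioleDB’s current usage on its patched Postgres build; until the remaining patches land, it will not work on stock Postgres, and the table definition is illustrative.

```sql
CREATE EXTENSION orioledb;

CREATE TABLE accounts (
  id      bigint PRIMARY KEY,
  balance numeric NOT NULL
) USING orioledb;
```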

As a community project, Postgres relies on community support for these initiatives. You can support pluggable storage by getting involved in the Postgres Commitfests. Postgres is already one of the best database engines in the world, and pluggable storage would make it even more attractive and maintainable.
