Catalogs and the REST catalog

GETTING STARTED

Catalogs in Apache Iceberg

The core responsibility of Iceberg is to manage a collection of files as a high-performance, scalable, and SQL-compatible table. But there are some important operations you can’t perform with files alone. Many of these are higher-level changes to a collection of tables. For example, renaming tables is a common data engineering pattern. The trouble is that renaming a huge collection of files is impractical in object stores, where renaming each file actually creates a copy.

Further, Iceberg’s metadata model is a persistent tree structure. Every table update creates a new root file and commits by atomically swapping the current root for the new one. This atomic swap can be implemented in several ways, but it is again impractical to use only file system or object store operations for it, and of course the methods differ by storage system. Distributed file systems like HDFS can provide an atomic rename, local file systems can create files exclusively, and some object stores provide a putIfAbsent operation, but Amazon S3, by far the most popular object store, supports none of these.
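
To make that contract concrete, here is a minimal in-memory sketch (in Java) of the compare-and-swap a catalog must provide. It is illustrative only, not a real Iceberg class; a real catalog implements the same check atomically in its backing store (a database transaction, putIfAbsent, and so on).

import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: the catalog's core job is an atomic
// compare-and-swap of a table's current metadata file location.
class MetadataPointerSketch {
    // table name -> location of the current root metadata file
    private final ConcurrentHashMap<String, String> locations = new ConcurrentHashMap<>();

    // The commit succeeds only if the table still points at the metadata
    // the writer read when it started; otherwise the writer must refresh
    // its view of the table and retry the commit.
    boolean commit(String table, String expectedLocation, String newLocation) {
        return locations.replace(table, expectedLocation, newLocation);
    }
}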

As is common with databases, Iceberg uses a catalog to solve these challenges. A catalog is responsible for two things:

  1. Maintaining the current metadata location via atomic swap
  2. Tracking table names and namespaces (such as Trino’s schema)

Catalogs are a useful layer for handling administrative concerns, such as determining where to store the files for a table or controlling the default format version.
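
To make the two responsibilities concrete, here is a minimal sketch using Iceberg’s Java API; the catalog name, URI, and table names are assumptions for the example.

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.CatalogUtil;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

public class CatalogSketch {
    public static void main(String[] args) {
        // Build a catalog from the common catalog properties
        // (the name and URI here are example values)
        Catalog catalog = CatalogUtil.buildIcebergCatalog(
            "hms_prod",
            Map.of("type", "hive", "uri", "thrift://hms.example.com:9083"),
            new Configuration());

        // Loading a table resolves its name to the current metadata location
        Table table = catalog.loadTable(TableIdentifier.of("db", "events"));

        // Renaming updates only the catalog entry; no data files are copied
        catalog.renameTable(
            TableIdentifier.of("db", "events"),
            TableIdentifier.of("db", "events_renamed"));
    }
}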

Clarifying different uses of catalogs

The term catalog is used in multiple contexts with very different definitions. In Iceberg, a catalog is a technical catalog or metastore: it tracks tables and the metadata that is specific to Iceberg. At a minimum, a catalog is the source of truth for a table’s current metadata location. This is similar to the role catalogs play in Postgres or Snowflake.

This is in contrast to a federated catalog, which tracks datasets across multiple data stores and typically focuses on business needs: data governance, documentation, and discovery. A federated catalog may point to tables from Cassandra, Postgres, Hive, and other systems, and track additional lineage and governance metadata.

If you’re familiar with Hive’s Metastore (HMS), you may have noticed it doesn’t cleanly fit either definition. HMS is the technical catalog for Hive tables, but it may also track JDBC tables, Elasticsearch datasets, and metadata for a huge variety of other sources. HMS is actually a mix of both categories. Its support for these other sources evolved over time into secondary references because Hive and Spark could only interact with a single HMS instance. More recently, multi-catalog support in Spark has made it possible to work with other data stores directly.

Pluggable catalogs

When we first created Iceberg at Netflix, the Hive Metastore was the most popular catalog, though there were already hosted alternatives like AWS Glue. Scale challenges led Netflix and other big tech companies to develop their own more scalable private HMS forks.

This influenced the design of Iceberg in two important ways:

  1. The metastore catalog’s responsibility is minimal — it just handles the atomic swap — to avoid creating a central metadata bottleneck. Iceberg distributes metadata work to clients so that it scales along with compute resources.
  2. Catalogs are easily pluggable to support the many available metastores, and the options have only grown since then.

Iceberg has built-in support for REST (covered in more detail below), HMS, Glue, DynamoDB, JDBC, Nessie, and Snowflake. You can also build your own catalog.

Configuring Iceberg catalogs

The built-in catalogs are selected and configured using a common set of catalog properties:

Property        Description
type            One of rest, hive, glue, jdbc, or snowflake
uri             URI to connect to the catalog, for example http://host/api/
warehouse       A warehouse location or name
clients         Size of a client connection pool (if used); defaults to 2
cache-enabled   Whether to cache tables when loaded; defaults to true

Catalog implementations may also expose their own catalog-specific properties.

Different engines and environments have slightly different ways to pass these common configuration properties. For example, Spark configures catalogs in the Spark conf, using properties that start with spark.sql.catalog. followed by the catalog name. Here’s an example Spark configuration for a Hive catalog named hms_prod:

# Create hms_prod that uses Iceberg's Spark catalog implementation
spark.sql.catalog.hms_prod=org.apache.iceberg.spark.SparkCatalog
# Configure hms_prod to use the Hive implementation
spark.sql.catalog.hms_prod.type=hive
spark.sql.catalog.hms_prod.uri=thrift://hms.example.com:9083
spark.sql.catalog.hms_prod.warehouse=s3://prod_bucket/hive/warehouse
spark.sql.catalog.hms_prod.clients=4

You can also implement your own custom catalog. In JVM frameworks, you can configure a custom catalog by omitting type and setting catalog-impl to the catalog class.
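
For example, a Spark configuration for a custom catalog might look like this; the catalog name, class, and property below are placeholders, not real classes.

# Create custom_prod using Iceberg's Spark catalog implementation
spark.sql.catalog.custom_prod=org.apache.iceberg.spark.SparkCatalog
# Omit type; point catalog-impl at the custom catalog class
spark.sql.catalog.custom_prod.catalog-impl=com.example.catalog.CustomCatalog
# Catalog-specific properties are passed through to that class
spark.sql.catalog.custom_prod.custom-property=some-value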

The REST catalog protocol

As the Iceberg project grew to support more languages and engines, pluggable catalogs started to cause some practical problems. Catalogs needed to be implemented in multiple languages, and it proved difficult for commercial offerings to support many different catalogs and clients.

To solve compatibility problems and pave a path for new features, the community created the REST catalog protocol, a common API (using the OpenAPI spec) for interacting with any Iceberg catalog. This is analogous to Hive’s Thrift protocol for HMS.

The REST protocol is important for several reasons:

  • New languages and engines can support any catalog with just one client implementation.
  • It uses change-based commits to enable server-side deconfliction and retries — fewer failures!
  • Metadata version upgrades are easier because root metadata is written by the catalog service.
  • It enables new features such as lazy snapshot loading, multi-table commits, and caching.
  • The protocol supports secure table sharing using credential vending or remote signing.

You can use the REST catalog protocol with any built-in catalog using the translation provided by the open source CatalogHandlers class, or by using this Docker image (maintained by Tabular).

To use a REST catalog client, configure type=rest and point uri to the catalog’s HTTP endpoint, as in the sketch below. For a full example, see the recipe Connecting to a REST catalog.
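
A minimal Spark configuration might look like this (the catalog name rest_prod and the endpoint URI are illustrative):

# Create rest_prod backed by a REST catalog service
spark.sql.catalog.rest_prod=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.rest_prod.type=rest
spark.sql.catalog.rest_prod.uri=https://catalog.example.com/api/catalog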