Trait datafusion::catalog::CatalogProvider

pub trait CatalogProvider: Sync + Send {
    // Required methods
    fn as_any(&self) -> &dyn Any;
    fn schema_names(&self) -> Vec<String>;
    fn schema(&self, name: &str) -> Option<Arc<dyn SchemaProvider>>;

    // Provided methods
    fn register_schema(
        &self,
        name: &str,
        schema: Arc<dyn SchemaProvider>
    ) -> Result<Option<Arc<dyn SchemaProvider>>> { ... }
    fn deregister_schema(
        &self,
        _name: &str,
        _cascade: bool
    ) -> Result<Option<Arc<dyn SchemaProvider>>> { ... }
}

Represents a catalog, comprising a number of named schemas.

§Catalog Overview

To plan and execute queries, DataFusion needs a “Catalog” that provides metadata such as which schemas and tables exist, their columns and data types, and how to access the data.

The Catalog API consists of:

  • CatalogProviderList: a collection of CatalogProviders
  • CatalogProvider: a collection of SchemaProviders (sometimes called a “database” in other systems)
  • SchemaProvider: a collection of TableProviders (often called a “schema” in other systems)
  • TableProvider: individual tables

§Implementing Catalogs

To implement a catalog, you implement at least one of the CatalogProviderList, CatalogProvider and SchemaProvider traits and register them appropriately with the SessionContext.

DataFusion comes with a simple in-memory catalog implementation, MemoryCatalogProvider, that is used by default and has no persistence. DataFusion does not include more complex Catalog implementations because catalog management is a key design choice for most data systems, and thus it is unlikely that any general-purpose catalog implementation will work well across many use cases.
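The shape of such an in-memory implementation can be sketched as follows. This is a std-only illustration using simplified stand-ins for the datafusion traits (the real CatalogProvider also requires as_any and uses DataFusion's Result type), not the actual MemoryCatalogProvider source:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Simplified stand-in for datafusion's SchemaProvider trait.
trait SchemaProvider: Sync + Send {
    fn table_names(&self) -> Vec<String>;
}

// Simplified stand-in for datafusion's CatalogProvider trait.
trait CatalogProvider: Sync + Send {
    fn schema_names(&self) -> Vec<String>;
    fn schema(&self, name: &str) -> Option<Arc<dyn SchemaProvider>>;
}

// An in-memory catalog, analogous in spirit to MemoryCatalogProvider:
// a thread-safe map from schema name to schema provider.
struct InMemoryCatalog {
    schemas: RwLock<HashMap<String, Arc<dyn SchemaProvider>>>,
}

impl CatalogProvider for InMemoryCatalog {
    fn schema_names(&self) -> Vec<String> {
        self.schemas.read().unwrap().keys().cloned().collect()
    }
    fn schema(&self, name: &str) -> Option<Arc<dyn SchemaProvider>> {
        self.schemas.read().unwrap().get(name).cloned()
    }
}

// A trivial schema with no tables, just to exercise the catalog.
struct EmptySchema;
impl SchemaProvider for EmptySchema {
    fn table_names(&self) -> Vec<String> {
        vec![]
    }
}

fn main() {
    let catalog = InMemoryCatalog {
        schemas: RwLock::new(HashMap::new()),
    };
    catalog
        .schemas
        .write()
        .unwrap()
        .insert("public".to_string(), Arc::new(EmptySchema));
    assert_eq!(catalog.schema_names(), vec!["public".to_string()]);
    assert!(catalog.schema("public").is_some());
    assert!(catalog.schema("missing").is_none());
    println!("ok");
}
```

Persistence, if needed, would be layered behind the same trait methods; the trait itself does not prescribe any storage.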

§Implementing “Remote” catalogs

Sometimes catalog information is stored remotely and requires a network call to retrieve. For example, the Delta Lake table format stores table metadata in files on S3 that must first be downloaded to discover what schemas and tables exist.

The CatalogProvider can support this use case, but it takes some care. The planning APIs in DataFusion are not async, and thus network IO cannot be performed “lazily” / “on demand” during query planning. The rationale for this design is that using remote procedure calls for all catalog accesses required for query planning would likely result in multiple network calls per plan, resulting in very poor planning performance.

To implement CatalogProvider and SchemaProvider for remote catalogs, you need to provide an in-memory snapshot of the required metadata. Most systems typically either already have this information cached locally or can batch access to the remote catalog to retrieve multiple schemas and tables in a single network call.

Note that SchemaProvider::table is an async function in order to simplify implementing SchemaProviders. For many table formats it is easy to list all available tables, but reading table details (e.g. statistics) requires additional, non-trivial access.

The pattern that DataFusion itself uses to plan SQL queries is to walk over the query in an async function to find all schema / table references, perform the required remote catalog accesses in parallel, and then plan the query using that snapshot.
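The gather-then-plan pattern can be sketched as follows. This is a std-only illustration, with a hypothetical fetch_table_metadata function standing in for the remote call; real code would use async tasks on a runtime rather than OS threads:

```rust
use std::collections::HashMap;
use std::thread;

// Hypothetical stand-in for a remote catalog call that fetches
// metadata for one table. In a real system this would be an async
// network request.
fn fetch_table_metadata(table: &str) -> String {
    format!("metadata for {table}")
}

// Step 1: collect all table references from the query (here, given
// directly). Step 2: resolve them against the remote catalog in
// parallel. Step 3: the resulting in-memory snapshot is what the
// (synchronous) planner is handed.
fn build_snapshot(table_refs: &[&str]) -> HashMap<String, String> {
    let handles: Vec<_> = table_refs
        .iter()
        .map(|t| {
            let t = t.to_string();
            // One fetch per reference, all in flight at once.
            thread::spawn(move || (t.clone(), fetch_table_metadata(&t)))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let snapshot = build_snapshot(&["orders", "customers"]);
    assert_eq!(snapshot.len(), 2);
    assert_eq!(snapshot["orders"], "metadata for orders");
    println!("ok");
}
```

The key point is that all network IO completes before planning begins, so the planner only ever consults the local snapshot.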

§Example Catalog Implementations

Here are some examples of how to implement custom catalogs:

Required Methods§


fn as_any(&self) -> &dyn Any

Returns the catalog provider as Any so that it can be downcast to a specific implementation.


fn schema_names(&self) -> Vec<String>

Retrieves the list of available schema names in this catalog.


fn schema(&self, name: &str) -> Option<Arc<dyn SchemaProvider>>

Retrieves a specific schema from the catalog by name, provided it exists.

Provided Methods§


fn register_schema( &self, name: &str, schema: Arc<dyn SchemaProvider> ) -> Result<Option<Arc<dyn SchemaProvider>>>

Adds a new schema to this catalog.

If a schema of the same name existed before, it is replaced in the catalog and returned.

By default, returns a “Not Implemented” error.


fn deregister_schema( &self, _name: &str, _cascade: bool ) -> Result<Option<Arc<dyn SchemaProvider>>>

Removes a schema from this catalog. Implementations of this method should return errors if the schema exists but cannot be dropped. For example, in DataFusion’s default in-memory catalog, MemoryCatalogProvider, a non-empty schema will only be successfully dropped when cascade is true. This is equivalent to how DROP SCHEMA works in PostgreSQL.

Implementations of this method should return Ok(None) if a schema with the given name does not exist.

By default, returns a “Not Implemented” error.
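The cascade semantics described above can be sketched as follows. This is a std-only illustration of the documented contract, not the actual MemoryCatalogProvider source:

```rust
use std::collections::HashMap;

// For this illustration a schema is just a set of table names.
type Schema = Vec<String>;

struct Catalog {
    schemas: HashMap<String, Schema>,
}

impl Catalog {
    // Mirrors the documented contract: Ok(None) if the schema does not
    // exist, an error if it is non-empty and cascade is false, and the
    // removed schema otherwise.
    fn deregister_schema(
        &mut self,
        name: &str,
        cascade: bool,
    ) -> Result<Option<Schema>, String> {
        match self.schemas.get(name) {
            None => Ok(None),
            Some(s) if !s.is_empty() && !cascade => {
                Err(format!("schema {name} is not empty; set cascade to drop it"))
            }
            Some(_) => Ok(self.schemas.remove(name)),
        }
    }
}

fn main() {
    let mut catalog = Catalog {
        schemas: HashMap::from([("sales".to_string(), vec!["orders".to_string()])]),
    };
    // Non-empty schema: refused without cascade, like DROP SCHEMA in PostgreSQL.
    assert!(catalog.deregister_schema("sales", false).is_err());
    // Unknown schema: Ok(None), not an error.
    assert_eq!(catalog.deregister_schema("missing", true).unwrap(), None);
    // With cascade, the non-empty schema is removed and returned.
    assert!(catalog.deregister_schema("sales", true).unwrap().is_some());
    println!("ok");
}
```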

Implementors§