Default storage in Databricks

This page explains the concept of default storage on Databricks and how to create catalogs and work with data on your workspace's default storage.

What is default storage?

Default storage is a fully managed storage location in your Databricks account. It serves as general workspace storage in serverless workspaces (Public Preview) and is also used as a managed storage location for serverless features such as Vector Search.

If you have a serverless workspace, it is provisioned with default storage automatically, so you can create managed tables and volumes without ever connecting to your own cloud storage account.

Requirements

Default storage is only available in serverless workspaces. Creating catalogs on default storage is only available if your account is enabled for the serverless workspace preview.

Create a catalog with default storage

note

By default, catalogs that use default storage are only accessible from the workspace where they are created. You can grant other workspaces access, but they must use serverless compute to access data in the catalog. See Limit catalog access to specific workspaces.

You must have CREATE CATALOG privileges to create a catalog with default storage. See Unity Catalog privileges and securable objects.
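For example, a metastore admin can grant this privilege with a standard Unity Catalog GRANT statement; the principal shown below is hypothetical:

SQL
-- Hypothetical example: grant the CREATE CATALOG privilege to a user
GRANT CREATE CATALOG ON METASTORE TO `user@example.com`;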

Complete the following steps to create a new catalog using default storage:

  1. Click Catalog in the sidebar. Catalog Explorer appears.
  2. Click Create catalog. The Create a new catalog dialog appears.
  3. Provide a Catalog name that is unique in your account.
  4. Select the option to Use default storage.
  5. Click Create.

In serverless workspaces, you can also use the following SQL command to create a new catalog in your default storage. You do not need to specify a location for the catalog.

SQL
CREATE CATALOG [ IF NOT EXISTS ] catalog_name
[ COMMENT comment ]
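
For example, the following statement creates a catalog on default storage; the catalog name is hypothetical:

SQL
CREATE CATALOG IF NOT EXISTS sales_analytics
COMMENT 'Catalog backed by default storage';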

Work with default storage

All interactions with default storage require serverless, Unity Catalog-enabled compute.

Resources backed by default storage use the same privilege model as other objects in Unity Catalog. You must have sufficient privileges to create, view, query, or modify data objects. See Unity Catalog privileges and securable objects.
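
For example, you can grant access to a catalog and schema on default storage with standard Unity Catalog GRANT statements; the catalog, schema, and group names below are hypothetical:

SQL
-- Hypothetical example: allow an account group to query a schema
GRANT USE CATALOG ON CATALOG sales_analytics TO `data_analysts`;
GRANT USE SCHEMA ON SCHEMA sales_analytics.reporting TO `data_analysts`;
GRANT SELECT ON SCHEMA sales_analytics.reporting TO `data_analysts`;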

You work with default storage by creating and interacting with managed tables and managed volumes backed by default storage. See Unity Catalog managed tables in Databricks for Delta Lake and Apache Iceberg and What are Unity Catalog volumes?.
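
The following is a minimal sketch, assuming a catalog named sales_analytics has already been created on default storage; the schema, table, and volume names are hypothetical:

SQL
-- Create a schema, a managed table, and a managed volume in the catalog
CREATE SCHEMA IF NOT EXISTS sales_analytics.reporting;

CREATE TABLE IF NOT EXISTS sales_analytics.reporting.orders (
  order_id BIGINT,
  order_date DATE,
  amount DECIMAL(10, 2)
);

CREATE VOLUME IF NOT EXISTS sales_analytics.reporting.raw_files;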

You can use Catalog Explorer, notebooks, the SQL editor, and dashboards to interact with data objects stored in default storage.
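
For example, you could run a query like the following from the SQL editor or a notebook attached to serverless compute; the table referenced is the hypothetical one created above:

SQL
-- Aggregate the hypothetical orders table
SELECT order_date, SUM(amount) AS total_amount
FROM sales_analytics.reporting.orders
GROUP BY order_date
ORDER BY order_date;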

Example tasks

The following are examples of tasks you can complete with default storage:

  • Create a catalog that uses default storage.
  • Create managed tables and managed volumes in that catalog.
  • Query and modify data from notebooks, the SQL editor, or dashboards using serverless compute.

Limitations

The following limitations apply:

  • Classic compute (any compute that is not serverless) cannot interact with data assets in default storage.
  • Delta Sharing supports sharing tables to any recipient, whether open or Databricks, and recipients can use classic compute to access shared tables (Beta). To use this capability, enable the Delta Sharing for Default Storage – Expanded Access feature in your account console.
  • All other shareable assets can only be Delta shared with Databricks recipients on the same cloud. Recipients must use serverless compute.
  • Tables with partitioning enabled cannot be Delta shared.
  • External readers and writers cannot access default storage.
  • In workspaces other than serverless workspaces, you cannot create a new catalog on default storage using the CREATE CATALOG catalog_name SQL command.
  • Front-end PrivateLink is not fully supported with default storage. Features such as Cloud Fetch from external clients are not supported.