Practice Test 2
Question 1
Correct
One of the foundational technologies provided by the Databricks Lakehouse Platform
is an open-source, file-based storage format that brings reliability to data lakes.
Apache Spark
Unity Catalog
Photon
Overall explanation
Delta Lake is an open source technology that extends Parquet data files with a
file-based transaction log for ACID transactions that brings reliability to data
lakes.
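For illustration (table name hypothetical), tables created with standard SQL on Databricks use the Delta Lake format by default:
CREATE TABLE demo_events (id INT, event STRING);
INSERT INTO demo_events VALUES (1, 'click');
DESCRIBE DETAIL demo_events;   -- the format column shows 'delta'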
Reference: https://docs.databricks.com/delta/index.html
Lecture
Hands-on
Domain
Databricks Lakehouse Platform
Question 2
Correct
Which of the following commands can a data engineer use to purge stale data files
of a Delta table?
DELETE
GARBAGE COLLECTION
CLEAN
Overall explanation
The VACUUM command deletes the unused data files older than a specified data
retention period.
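For illustration (table name hypothetical), VACUUM can be run with the default or an explicit retention period:
VACUUM my_delta_table;                  -- uses the default 7-day retention threshold
VACUUM my_delta_table RETAIN 720 HOURS; -- keeps files newer than 30 days
VACUUM my_delta_table DRY RUN;          -- only lists the files that would be deleted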
Reference: https://docs.databricks.com/sql/language-manual/delta-vacuum.html
Lecture
Hands-on
Domain
Databricks Lakehouse Platform
Question 3
Correct
In Databricks Repos (Git folders), which of the following operations can a data engineer use to save local changes of a repo to its remote repository?
Overall explanation
Commit & Push is used to save the changes on a local repo, then uploads this local
repo content to the remote repository.
References:
https://docs.databricks.com/repos/index.html
https://github.com/git-guides/git-push
Hands-on
Domain
Databricks Lakehouse Platform
Question 4
Correct
In Delta Lake tables, which of the following is the primary format for the
transaction log files?
Delta
Parquet
Hive-specific format
XML
Overall explanation
Delta Lake builds upon standard data formats. A Delta Lake table is stored as one or more data files in Parquet format, along with a transaction log in JSON format.
Reference: https://docs.databricks.com/delta/index.html
Lecture
Hands-on
Domain
Databricks Lakehouse Platform
Question 5
Correct
Which of the following functionalities can be performed in Databricks Repos (Git
folders)?
Delete branches
Overall explanation
Databricks Repos supports the git Pull operation, which fetches and downloads content from a remote repository and immediately updates the local repo to match that content.
References:
https://docs.databricks.com/repos/index.html
https://github.com/git-guides/git-pull
Hands-on
Domain
Databricks Lakehouse Platform
Question 6
Correct
Which of the following locations completely hosts the customer data?
Control plane
Databricks account
Databricks-managed cluster
Repos
Overall explanation
According to the Databricks Lakehouse architecture, the storage account hosting the
customer data is provisioned in the data plane in the Databricks customer's cloud
account.
Reference: https://docs.databricks.com/getting-started/overview.html
Lecture
Domain
Databricks Lakehouse Platform
Question 7
Correct
If the default notebook language is Python, which of the following options can a data engineer use to run SQL commands in this Python notebook?
This is not possible! They need to change the default language of the notebook to
SQL
Databricks detects cells language automatically, so they can write SQL syntax in
any cell
They can add %language magic command at the start of a cell to force language
detection.
Overall explanation
By default, cells use the default language of the notebook. You can override the
default language in a cell by using the language magic command at the beginning of
a cell. The supported magic commands are: %python, %sql, %scala, and %r.
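For example, in a Python notebook, a cell that starts with the %sql magic command is executed as SQL (table name hypothetical):
%sql
SELECT * FROM orders LIMIT 10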
Reference: https://docs.databricks.com/notebooks/notebooks-code.html
Hands-on
Domain
Databricks Lakehouse Platform
Question 8
Incorrect
A junior data engineer uses the built-in Databricks Notebooks versioning for source
control. A senior data engineer recommended using Databricks Repos (Git folders)
instead.
Which of the following could explain why Databricks Repos is recommended instead of
Databricks Notebooks versioning?
Correct answer
Databricks Repos supports creating and managing branches for development work.
Databricks Repos automatically tracks the changes and keeps the history.
Overall explanation
One advantage of Databricks Repos over the built-in Databricks Notebooks versioning
is that Databricks Repos supports creating and managing branches for development
work.
Reference: https://docs.databricks.com/repos/index.html
Hands-on
Domain
Databricks Lakehouse Platform
Question 9
Correct
Which of the following services provides a data warehousing experience to its
users?
Unity Catalog
Overall explanation
Databricks SQL (DB SQL) is a data warehouse on the Databricks Lakehouse Platform
that lets you run all your SQL and BI applications at scale.
Reference: https://www.databricks.com/product/databricks-sql
Hands-on
Domain
Databricks Lakehouse Platform
Question 10
Correct
A data engineer noticed that there are unused data files in the directory of a
Delta table. They executed the VACUUM command on this table; however, only some of
those unused data files have been deleted.
Which of the following could explain why only some of the unused data files have been deleted after running the VACUUM command?
The deleted data files were larger than the default size threshold. While the
remaining files are smaller than the default size threshold and can not be deleted.
The deleted data files were smaller than the default size threshold. While the
remaining files are larger than the default size threshold and can not be deleted.
The deleted data files were newer than the default retention threshold. While the
remaining files are older than the default retention threshold and can not be
deleted.
Overall explanation
Running the VACUUM command on a Delta table deletes the unused data files older
than a specified data retention period. Unused files newer than the default
retention threshold are kept untouched.
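A sketch (table name hypothetical) of checking and adjusting the retention window that VACUUM respects:
SHOW TBLPROPERTIES orders_table;    -- check delta.deletedFileRetentionDuration, if set
ALTER TABLE orders_table
  SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'interval 7 days');
VACUUM orders_table;                -- removes only unused files older than this retention window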
Reference: https://docs.databricks.com/sql/language-manual/delta-vacuum.html
Lecture
Hands-on
Domain
Databricks Lakehouse Platform
Question 11
Correct
The data engineering team has a Delta table called products that contains products’
details including the net price.
Which of the following code blocks will apply a 50% discount on all the products
where the price is greater than 1000 and save the new price to the table?
UPDATE products SET price = price * 0.5 WHERE price >= 1000;
SELECT price * 0.5 AS new_price FROM products WHERE price > 1000;
MERGE INTO products WHERE price < 1000 WHEN MATCHED UPDATE price = price * 0.5;
MERGE INTO products WHERE price > 1000 WHEN MATCHED UPDATE price = price * 0.5;
Overall explanation
The UPDATE statement is used to modify the existing records in a table that match
the WHERE condition. In this case, we are updating the products where the price is
strictly greater than 1000.
Syntax:
UPDATE table_name
SET column_name = expr
WHERE condition
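Applied to this question, using the products table described above:
UPDATE products
SET price = price * 0.5
WHERE price > 1000;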
Reference:
https://docs.databricks.com/sql/language-manual/delta-update.html
Domain
Databricks Lakehouse Platform
Question 12
Correct
A data engineer wants to create a relational object by pulling data from two tables. The relational object will only be used in the current session. In order to save on storage costs, the data engineer wants to avoid copying and storing physical data.
Which of the following relational objects should the data engineer create?
External table
Managed table
View
Overall explanation
In order to avoid copying and storing physical data, the data engineer must create a view object. A view in Databricks is a virtual table that has no physical data; it is just a saved SQL query against actual tables.
The view type should be a temporary view, since it is tied to a Spark session and dropped when the session ends.
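A minimal sketch of a temporary view pulling data from two tables (table and column names hypothetical):
CREATE TEMPORARY VIEW order_details AS
SELECT o.order_id, o.total, c.customer_name
FROM orders o
JOIN customers c
  ON o.customer_id = c.customer_id;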
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-
create-view.html
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 13
Correct
A data engineer has a database named db_hr, and they want to know where this
database was created in the underlying storage.
Which of the following commands can the data engineer use to complete this task?
DESCRIBE db_hr
There is no need for a command since all databases are created under the default
hive metastore directory
Overall explanation
The DESCRIBE DATABASE or DESCRIBE SCHEMA command returns the metadata of an existing database (schema). The metadata information includes the database’s name, comment, and location on the filesystem. If the optional EXTENDED option is specified, database properties are also returned.
Syntax:
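DESCRIBE SCHEMA [ EXTENDED ] schema_name
For example, to see the location of the db_hr database from the question:
DESCRIBE SCHEMA EXTENDED db_hr;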
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-
describe-schema.html
Hands-on
Domain
ELT with Spark SQL and Python
Question 14
Correct
Which of the following commands can a data engineer use to register the table orders from an existing SQLite database?
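A table backed by an external SQLite database can be registered with CREATE TABLE USING the JDBC data source; a sketch, with a hypothetical connection URL:
CREATE TABLE orders
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:sqlite:/path/to/my_sqlite.db",
  dbtable "orders"
);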
Reference: https://learn.microsoft.com/en-us/azure/databricks/external-data/jdbc
Lecture
Domain
ELT with Spark SQL and Python
Question 15
Correct
When dropping a Delta table, which of the following explains why both the table's metadata and the data files will be deleted?
The user running the command has the necessary permissions to delete the data files
The data files are older than the default retention period
Overall explanation
Managed tables are tables whose metadata and data are managed by Databricks. When you run DROP TABLE on a managed table, both the metadata and the underlying data files are deleted.
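For illustration (table name hypothetical):
DROP TABLE sales_managed;   -- for a managed table, deletes both the metadata and the data files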
Reference: https://docs.databricks.com/lakehouse/data-objects.html#what-is-a-
managed-table
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 16
Incorrect
Given the following commands:
USE db_hr;
CREATE TABLE employees;
Correct answer
dbfs:/user/hive/warehouse/db_hr.db
dbfs:/user/hive/warehouse/db_hr
dbfs:/user/hive/databases/db_hr.db
Overall explanation
Since we are creating the database here without specifying a LOCATION clause, the database will be created in the default warehouse directory under dbfs:/user/hive/warehouse. The database folder has the extension (.db).
And since we are also creating the table without specifying a LOCATION clause, the table becomes a managed table created under the database directory (in the db_hr.db folder).
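This can be verified with DESCRIBE commands, a sketch assuming the db_hr database and employees table from the question:
DESCRIBE SCHEMA db_hr;              -- Location: dbfs:/user/hive/warehouse/db_hr.db
DESCRIBE EXTENDED db_hr.employees;  -- Location: dbfs:/user/hive/warehouse/db_hr.db/employees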
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-
create-schema.html
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 17
Correct
Which of the following code blocks can a data engineer use to create a Python
function to multiply two integers and return the result?
Syntax:
def function_name(params):
return params
Reference: https://www.w3schools.com/python/python_functions.asp
Domain
ELT with Spark SQL and Python
Question 18
Correct
Given the following 2 tables:
Fill in the blank to make the following query return the below result:
RIGHT JOIN
INNER JOIN
ANTI JOIN
CROSS JOIN
Overall explanation
LEFT JOIN returns all values from the left table and the matched values from the
right table, or appends NULL if there is no match. In the above example, we see
NULL in the course_id of John (U0003) since he is not enrolled in any course.
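A sketch of a LEFT JOIN of this kind (table and column names hypothetical):
SELECT u.user_id, u.name, e.course_id
FROM users u
LEFT JOIN enrollments e
  ON u.user_id = e.user_id;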
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-qry-
select-join.html
Domain
ELT with Spark SQL and Python
Question 19
Correct
Which of the following SQL keywords can be used to rotate rows of a table by turning row values into multiple columns?
ROTATE
TRANSFORM
GROUP BY
ZORDER BY
Overall explanation
PIVOT transforms the rows of a table by rotating unique values of a specified column list into separate columns. In other words, it converts a table from a long format to a wide format.
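A minimal PIVOT sketch (table and column names hypothetical):
SELECT *
FROM (SELECT year, quarter, revenue FROM sales)
PIVOT (
  SUM(revenue) FOR quarter IN ('Q1', 'Q2', 'Q3', 'Q4')
);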
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-qry-
select-pivot.html
Hands-on
Domain
ELT with Spark SQL and Python
Question 20
Correct
Fill in the blank below to get the number of courses incremented by 1 for each student in the array column students.
SELECT
faculty_id,
students,
___________ AS new_totals
FROM faculties
TRANSFORM (students, total_courses + 1)
ELSE NULL
END
Overall explanation
transform(input_array, lambda_function) is a higher-order function that returns an output array from an input array by transforming each element in the array using a given lambda function.
Example:
SELECT transform(array(1, 2, 3), x -> x + 1);
output: [2, 3, 4]
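Applied to the question above, assuming each element of the students array is a struct with a total_courses field, the blank could be filled with:
TRANSFORM(students, student -> student.total_courses + 1)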
Reference:
https://docs.databricks.com/sql/language-manual/functions/transform.html
https://docs.databricks.com/optimizations/higher-order-lambda-functions.html
Hands-on
Domain
ELT with Spark SQL and Python
Question 21
Correct
Fill in the blank below to successfully create a table using data from CSV files located at /path/input
USING DELTA
AS
AS CSV
Overall explanation
CREATE TABLE USING allows you to specify an external data source type, such as the CSV format, along with any additional options. This creates an external table pointing to files stored in an external location.
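A sketch for this question (the table name and the CSV options are assumptions about the files):
CREATE TABLE input_csv_data
USING CSV
OPTIONS ('header' = 'true', 'delimiter' = ',')
LOCATION '/path/input';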
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-
create-table-using.html
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 22
Correct
Which of the following statements best describes the usage of the CREATE SCHEMA command?
It’s used to merge the schema when writing data into a target table
Overall explanation
CREATE SCHEMA is an alias for CREATE DATABASE statement. While usage of SCHEMA and
DATABASE is interchangeable, SCHEMA is preferred.
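For illustration (schema name hypothetical):
CREATE SCHEMA IF NOT EXISTS db_hr
COMMENT 'Human resources data';
-- equivalent to: CREATE DATABASE IF NOT EXISTS db_hr COMMENT 'Human resources data';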
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-
create-database.html
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 23
Correct
Which of the following statements is NOT true about CTAS statements?
With CTAS statements, data will be inserted during the table creation
Overall explanation
CREATE TABLE AS SELECT statements, or CTAS statements create and populate Delta
tables using the output of a SELECT query. CTAS statements automatically infer
schema information from query results and do not support manual schema declaration.
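A minimal CTAS sketch (table and column names hypothetical):
CREATE TABLE customers_clean
AS SELECT id, name, email
   FROM customers_raw
   WHERE email IS NOT NULL;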
Reference: (cf. AS query clause)
https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-create-table-
using.html
Lecture
Hands-on
Domain
ELT with Spark SQL and Python
Question 24
Correct
Which of the following SQL commands will append this new row to the existing Delta
table users?
Overall explanation
INSERT INTO allows inserting new rows into a Delta table. You specify the inserted
rows by value expressions or the result of a query.
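For illustration (the users table's columns and the inserted values are hypothetical):
INSERT INTO users
VALUES (42, 'user42@example.com', current_date());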
Reference: https://docs.databricks.com/sql/language-manual/sql-ref-syntax-dml-
insert-into.html
Hands-on
Domain
ELT with Spark SQL and Python
Question 25
Incorrect
Given the following Structured Streaming query:
(spark.table("orders")
.withColumn("total_after_tax", col("total")+col("tax"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("append")
.___________
.table("new_orders") )
Fill in the blank to make the query execute multiple micro-batches to process all available data, and then stop the trigger.
trigger("micro-batches")
trigger(processingTime="0 seconds")
trigger(micro-batches=True)
Correct answer
trigger(availableNow=True)
Overall explanation
In Spark Structured Streaming, we use trigger(availableNow=True) to run the stream
in batch mode where it processes all available data in multiple micro-batches. The
trigger will stop on its own once it finishes processing the available data.
Reference:
https://docs.databricks.com/structured-streaming/triggers.html#configuring-
incremental-batch-processing
Lecture
Hands-on
Domain
Incremental Data Processing
Question 26
Correct
Which of the following techniques allows Auto Loader to track the ingestion progress and store metadata of the discovered files?
mergeSchema
COPY INTO
Watermarking
Z-Ordering
Overall explanation
Auto Loader keeps track of discovered files using checkpointing in the checkpoint location. Checkpointing allows Auto Loader to provide exactly-once ingestion guarantees.
Reference: https://docs.databricks.com/ingestion/auto-loader/index.html#how-does-
auto-loader-track-ingestion-progress
Lecture
Hands-on
Domain
Incremental Data Processing
Question 27
Incorrect
A data engineer has defined the following data quality constraint in a Delta Live
Tables pipeline:
Fill in the above blank so records violating this constraint cause the pipeline to
fail.
ON VIOLATION FAIL
Correct answer
ON VIOLATION FAIL UPDATE
Overall explanation
With ON VIOLATION FAIL UPDATE, records that violate the expectation will cause the
pipeline to fail. When a pipeline fails because of an expectation violation, you
must fix the pipeline code to handle the invalid data correctly before re-running
the pipeline.
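A sketch of how such a constraint appears in a DLT table definition (table, constraint, and column names hypothetical):
CREATE OR REFRESH LIVE TABLE orders_clean (
  CONSTRAINT valid_order_id EXPECT (order_id IS NOT NULL) ON VIOLATION FAIL UPDATE
)
AS SELECT * FROM LIVE.orders_raw;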
Reference:
https://learn.microsoft.com/en-us/azure/databricks/workflows/delta-live-tables/
delta-live-tables-expectations#--fail-on-invalid-records
Hands-on
Domain
Incremental Data Processing
Question 28
Correct
In multi-hop architecture, which of the following statements best describes the
Silver layer tables?
They maintain data that powers analytics, machine learning, and production
applications
The table structure in this layer resembles that of the source system table
structure with any additional metadata columns like the load time, and input file
name.
Overall explanation
Silver tables provide a more refined view of the raw data. For example, data can be cleaned and filtered at this level. We can also join fields from various bronze tables to enrich our silver records.
Reference:
https://www.databricks.com/glossary/medallion-architecture
Lecture
Hands-on
Domain
Incremental Data Processing
Question 29
Correct
The data engineering team has a DLT pipeline that updates all the tables at defined intervals until manually stopped. The compute resources of the pipeline continue running to allow for quick testing.
Which of the following best describes the execution modes of this DLT pipeline?
The DLT pipeline executes in Continuous Pipeline mode under Production mode.
The DLT pipeline executes in Triggered Pipeline mode under Production mode.
The DLT pipeline executes in Triggered Pipeline mode under Development mode.
Overall explanation
Continuous pipelines update tables continuously as input data changes. Once an update is started, it continues to run until the pipeline is shut down.
In Development mode, the Delta Live Tables system eases the development process by:
- Reusing a cluster to avoid the overhead of restarts. The cluster runs for two hours when development mode is enabled.
- Disabling pipeline retries so you can immediately detect and fix errors.
Reference:
https://docs.databricks.com/workflows/delta-live-tables/delta-live-tables-
concepts.html
Hands-on
Domain
Incremental Data Processing
Question 30
Correct
Given the following Structured Streaming query:
(spark.readStream
.table("cleanedOrders")
.groupBy("productCategory")
.agg(sum("totalWithTax"))
.writeStream
.option("checkpointLocation", checkpointPath)
.outputMode("complete")
.table("aggregatedOrders")
)
Which of the following best describes the purpose of this query in a multi-hop architecture?
The query is performing data transfer from a Gold table into a production
application
Overall explanation
The above Structured Streaming query creates business-level aggregates from clean
orders data in the silver table cleanedOrders, and loads them in the gold table
aggregatedOrders.
Reference:
https://www.databricks.com/glossary/medallion-architecture
Lecture
Hands-on
Domain
Incremental Data Processing
Question 31
Incorrect
Given the following Structured Streaming query:
(spark.readStream
.table("orders")
.writeStream
.option("checkpointLocation", checkpointPath)
.table("Output_Table")
)
Correct answer
Every half second
The query will run in batch mode to process all available data at once, then the
trigger stops.
Overall explanation
By default, if you don't provide any trigger interval, the data will be processed every half second. This is equivalent to trigger(processingTime="500ms").
Reference: https://docs.databricks.com/structured-streaming/triggers.html#what-is-
the-default-trigger-interval
Lecture
Hands-on
Domain
Incremental Data Processing
Question 32
Correct
A data engineer has the following query in a Delta Live Tables pipeline
Which of the following changes should be made to this query to successfully start the DLT pipeline?
Overall explanation
Remember: to query another DLT table, always prepend the LIVE. keyword to the table name.
* Note that the previously used CREATE STREAMING LIVE TABLE syntax is now
deprecated; however, you may still encounter it in the current exam version.
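A sketch of a DLT query that reads another table defined in the same pipeline (table and column names hypothetical):
CREATE OR REFRESH LIVE TABLE sales_summary
AS SELECT store_id, SUM(amount) AS total_amount
   FROM LIVE.sales_cleaned
   GROUP BY store_id;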
Reference: https://docs.databricks.com/workflows/delta-live-tables/delta-live-
tables-incremental-data.html#streaming-from-other-datasets-within-a-
pipeline&language-sql
Hands-on
Domain
Incremental Data Processing
Question 33
Correct
In multi-hop architecture, which of the following statements best describes the
Gold layer tables?
The table structure in this layer resembles that of the source system table
structure with any additional metadata columns like the load time, and input file
name.
Overall explanation
The Gold layer is the final layer in the multi-hop architecture, where tables provide business-level aggregates often used for reporting and dashboarding, or even for machine learning.
Reference:
https://www.databricks.com/glossary/medallion-architecture
Lecture
Hands-on
Domain
Incremental Data Processing
Question 34
Correct
The data engineering team has a DLT pipeline that updates all the tables once and then stops. The compute resources of the pipeline terminate when the pipeline is stopped.
Which of the following best describes the execution modes of this DLT pipeline?
The DLT pipeline executes in Continuous Pipeline mode under Production mode.
The DLT pipeline executes in Continuous Pipeline mode under Development mode.
The DLT pipeline executes in Triggered Pipeline mode under Development mode.
Overall explanation
Triggered pipelines update each table once with whatever data is currently available and then stop. In Production mode, the Delta Live Tables system:
- Restarts the cluster for recoverable errors (e.g., memory leak or stale credentials).
- Retries execution in the event of specific errors.
Reference:
https://docs.databricks.com/workflows/delta-live-tables/delta-live-tables-
concepts.html
Hands-on
Domain
Incremental Data Processing
Question 35
Correct
A data engineer needs to determine whether to use Auto Loader or the COPY INTO command in order to load input data files incrementally.
In which of the following scenarios should the data engineer use Auto Loader over the COPY INTO command?
If they are going to ingest a small number of files, in the order of thousands
There is no difference between using Auto Loader and the COPY INTO command
Overall explanation
Here are a few things to consider when choosing between Auto Loader and the COPY INTO command:
If you’re going to ingest files in the order of thousands, you can use COPY INTO.
If you are expecting files in the order of millions or more over time, use Auto
Loader.
If your data schema is going to evolve frequently, Auto Loader provides better
primitives around schema inference and evolution.
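For comparison, a COPY INTO sketch (table name, source path, and options hypothetical):
COPY INTO bronze_sales
FROM '/mnt/raw/sales'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true');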
Reference: https://docs.databricks.com/ingestion/index.html#when-to-use-copy-into-
and-when-to-use-auto-loader
Lecture
Hands-on
Domain
Incremental Data Processing
Question 36
Incorrect
From which of the following locations can a data engineer set a schedule to automatically refresh a Databricks SQL query?
Correct answer
From the query's page in Databricks SQL
Overall explanation
In Databricks SQL, you can set a schedule to automatically refresh a query from the
query's page.
Reference: https://docs.databricks.com/sql/user/queries/schedule-query.html
Hands-on
Domain
Production Pipelines
Question 37
Correct
Which of the following Databricks services provides a declarative ETL framework for building reliable and maintainable data processing pipelines, while maintaining table dependencies and data quality?
Delta Lake
Databricks Jobs
Databricks SQL
Overall explanation
Delta Live Tables is a framework for building reliable, maintainable, and testable
data processing pipelines. You define the transformations to perform on your data,
and Delta Live Tables manages task orchestration, cluster management, monitoring,
data quality, and error handling.
Reference: https://docs.databricks.com/workflows/delta-live-tables/index.html
Hands-on
Domain
Production Pipelines
Question 38
Correct
Which of the following services can a data engineer use for orchestration purposes in the Databricks platform?
Cluster Pools
Data Explorer
Overall explanation
Databricks Jobs allow you to orchestrate data processing tasks. This means the ability to run and manage multiple tasks as a directed acyclic graph (DAG) in a job.
Reference: https://docs.databricks.com/workflows/jobs/jobs.html
Hands-on
Domain
Production Pipelines
Question 39
Correct
A data engineer has a Job with multiple tasks that takes more than 2 hours to
complete. In the last run, the final task unexpectedly failed.
Which of the following actions can the data engineer perform to complete this Job Run while minimizing the execution time?
They can rerun this Job Run to execute all the tasks
They need to delete the failed Run, and start a new Run for the Job
They can keep the failed Run, and simply start a new Run for the Job
They can run the Job in Production mode which automatically retries execution in
case of errors
Overall explanation
You can repair failed multi-task jobs by running only the subset of unsuccessful
tasks and any dependent tasks. Because successful tasks are not re-run, this
feature reduces the time and resources required to recover from unsuccessful job
runs.
Reference: https://docs.databricks.com/workflows/jobs/repair-job-failures.html
Hands-on
Domain
Production Pipelines
Question 40
Correct
A data engineering team has a multi-task Job in production. The team members need to be notified in the case of job failure.
Which of the following approaches can be used to send emails to the team members in the case of job failure?
They can use Job API to programmatically send emails according to each task status
Only Job owner can be configured to be notified in the case of job failure
They can configure email notifications settings per notebook in the task page
Overall explanation
Databricks Jobs support email notifications for job start, success, or failure. Simply click Edit email notifications from the details panel in the Job page. From there, you can add one or more email addresses.
Reference: https://docs.databricks.com/workflows/jobs/jobs.html#alerts-job
Hands-on
Domain
Production Pipelines
Question 41
Correct
For production jobs, which of the following cluster types is recommended to use?
All-purpose clusters
Production clusters
On-premises clusters
Serverless clusters
Overall explanation
Job Clusters are dedicated clusters for a job or task run. A job cluster auto-terminates once the job is completed, which saves costs compared to all-purpose clusters.
Reference: https://docs.databricks.com/workflows/jobs/jobs.html#choose-the-correct-
cluster-type-for-your-job
Hands-on
Domain
Production Pipelines
Question 42
Correct
In Databricks Jobs, which of the following approaches can a data engineer use to configure a linear dependency between Task A and Task B?
They can assign Task A an Order number of 1, and assign Task B an Order number of 2
They can visually drag and drop an arrow from Task A to Task B in the Job canvas
They can configure the dependency at the notebook level using the dbutils.jobs
utility
Databricks Jobs do not support linear dependency between tasks. This can only be
achieved in Delta Live Tables pipelines
Overall explanation
You can define the order of execution of tasks in a job using the Depends on
dropdown menu. You can set this field to one or more tasks in the job.
Reference: https://docs.databricks.com/workflows/jobs/jobs.html#task-dependencies
Hands-on
Domain
Production Pipelines
Question 43
Correct
Which part of the Databricks Platform can a data engineer use to revoke permissions from users on tables?
DBFS
Overall explanation
Data Explorer in Databricks SQL allows you to manage data object permissions. This
includes revoking privileges on tables and databases from users or groups of users.
Reference: https://docs.databricks.com/security/access-control/data-acl.html#data-
explorer
Hands-on
Domain
Data Governance
Question 44
Correct
A data engineer uses the following SQL query:
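For illustration, privileges on a table are granted and revoked with statements of this form (principal and table names hypothetical):
GRANT SELECT, MODIFY ON TABLE db_hr.employees TO `hr_team`;
REVOKE MODIFY ON TABLE db_hr.employees FROM `hr_team`;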
Reference: https://docs.databricks.com/security/access-control/table-acls/object-
privileges.html#privileges
Lecture
Hands-on
Domain
Data Governance
Question 45
Correct
In which of the following locations can a data engineer change the owner of a
table?
In Data Explorer, under the Permissions tab of the database's page, since owners
are set at database-level
In Data Explorer, from the Owner field in the database's page, since owners are set
at database-level
Overall explanation
From Data Explorer in Databricks SQL, you can navigate to the table's page to review and change the owner of the table. Simply click the Owner field, then Edit owner to set the new owner.
Reference: https://docs.databricks.com/security/access-control/data-
acl.html#manage-data-object-ownership
Domain
Data Governance