Red Hat OpenShift Data Foundation 4.16
Deploying OpenShift Data Foundation Using Microsoft Azure
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
Read this document for instructions about how to install and manage Red Hat OpenShift Data
Foundation using Red Hat OpenShift Container Platform on Microsoft Azure.
Table of Contents
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
PREFACE
CHAPTER 1. PREPARING TO DEPLOY OPENSHIFT DATA FOUNDATION
CHAPTER 2. DEPLOYING OPENSHIFT DATA FOUNDATION ON MICROSOFT AZURE
2.1. INSTALLING RED HAT OPENSHIFT DATA FOUNDATION OPERATOR
2.2. ENABLING CLUSTER-WIDE ENCRYPTION WITH KMS USING THE TOKEN AUTHENTICATION METHOD
2.3. ENABLING CLUSTER-WIDE ENCRYPTION WITH KMS USING THE KUBERNETES AUTHENTICATION METHOD
2.4. CREATING AN OPENSHIFT DATA FOUNDATION CLUSTER
CHAPTER 3. VERIFYING OPENSHIFT DATA FOUNDATION DEPLOYMENT
3.1. VERIFYING THE STATE OF THE PODS
3.2. VERIFYING THE OPENSHIFT DATA FOUNDATION CLUSTER IS HEALTHY
3.3. VERIFYING THE MULTICLOUD OBJECT GATEWAY IS HEALTHY
3.4. VERIFYING THAT THE SPECIFIC STORAGE CLASSES EXIST
CHAPTER 4. DEPLOY STANDALONE MULTICLOUD OBJECT GATEWAY
4.1. INSTALLING RED HAT OPENSHIFT DATA FOUNDATION OPERATOR
4.2. CREATING A STANDALONE MULTICLOUD OBJECT GATEWAY
CHAPTER 5. VIEW OPENSHIFT DATA FOUNDATION TOPOLOGY
CHAPTER 6. UNINSTALLING OPENSHIFT DATA FOUNDATION
6.1. UNINSTALLING OPENSHIFT DATA FOUNDATION IN INTERNAL MODE
MAKING OPEN SOURCE MORE INCLUSIVE
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
3. Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
PREFACE
Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container
Platform (RHOCP) Azure clusters.
NOTE
Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See
Planning your deployment for more information about deployment requirements.
To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter, and then follow the appropriate deployment process based on your requirements:
CHAPTER 1. PREPARING TO DEPLOY OPENSHIFT DATA FOUNDATION
Before you begin the deployment of OpenShift Data Foundation, follow these steps:
1. Set up a chrony server. See Configuring chrony time service and use the knowledgebase solution to create rules allowing all traffic.
2. Optional: If you want to enable cluster-wide encryption using the external Key Management
System (KMS) HashiCorp Vault, follow these steps:
Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
If you select the Token authentication method for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS.
If you select the Kubernetes authentication method for encryption, refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS.
Ensure that you are using signed certificates on your Vault servers.
NOTE
If you are using Thales CipherTrust Manager as your KMS, you will enable it
during deployment.
For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.
CHAPTER 2. DEPLOYING OPENSHIFT DATA FOUNDATION ON MICROSOFT AZURE
You can also deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.
NOTE
Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See
Planning your deployment for more information about deployment requirements.
Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps to deploy using dynamic storage devices:
2.1. INSTALLING RED HAT OPENSHIFT DATA FOUNDATION OPERATOR
Prerequisites
Access to an OpenShift Container Platform cluster using an account with cluster-admin and
operator installation permissions.
You must have at least three worker or infrastructure nodes in the Red Hat OpenShift
Container Platform cluster.
For additional resource requirements, see the Planning your deployment guide.
IMPORTANT
When you need to override the cluster-wide default node selector for OpenShift
Data Foundation, you can use the following command to specify a blank node
selector for the openshift-storage namespace (create openshift-storage
namespace in this case):
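For example, a blank node selector can be set with a command like the following; this is a sketch, assuming the standard openshift.io/node-selector annotation:
$ oc annotate namespace openshift-storage openshift.io/node-selector=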
Taint a node as infra to ensure only Red Hat OpenShift Data Foundation
resources are scheduled on that node. This helps you save on subscription costs.
For more information, see the How to use dedicated worker nodes for Red Hat
OpenShift Data Foundation section in the Managing and Allocating Storage
Resources guide.
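For example, a node can be tainted as infra with a command like the following; the taint key shown is an assumption based on common OpenShift Data Foundation configurations:
$ oc adm taint nodes <node_name> node.ocs.openshift.io/storage=true:NoSchedule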
Procedure
3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the
OpenShift Data Foundation Operator.
4. Click Install.
If you select Manual updates, then the OLM creates an update request. As a cluster
administrator, you must then manually approve that update request to update the Operator
to a newer version.
e. Ensure that the Enable option is selected for the Console plugin.
f. Click Install.
Verification steps
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up so that the console changes take effect.
Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator
shows a green tick indicating successful installation.
2.2. ENABLING CLUSTER-WIDE ENCRYPTION WITH KMS USING THE TOKEN AUTHENTICATION METHOD
Prerequisites
A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
Procedure
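1. Enable the key/value backend at the chosen path. A minimal sketch, assuming a KV-v2 secrets engine mounted at an example path named odf:
$ vault secrets enable -path=odf kv-v2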
2. Create a policy to restrict users to performing write or delete operations on the secret:
echo '
path "odf/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
capabilities = ["read"]
}'| vault policy write odf -
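With the policy in place, you can generate a Vault token bound to it for use during storage cluster creation. A sketch, assuming the odf policy defined above:
$ vault token create -policy=odf -format=json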
2.3. ENABLING CLUSTER-WIDE ENCRYPTION WITH KMS USING THE KUBERNETES AUTHENTICATION METHOD
Prerequisites
A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
The OpenShift Data Foundation operator must be installed from the Operator Hub.
Select a unique path name as the backend path that follows the naming convention carefully.
You cannot change this path name later.
Procedure
For example:
$ oc proxy &
$ proxy_pid=$!
$ issuer="$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
$ kill $proxy_pid
7. Use the information collected in the previous step to set up the Kubernetes authentication method in Vault:
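A sketch of that configuration, assuming the service account JWT, the Kubernetes API endpoint, and the CA certificate were captured into shell variables in the previous step (these variable names are illustrative):
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT" \
    issuer="$issuer"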
9. Create a policy to restrict users to performing write or delete operations on the secret:
echo '
path "odf/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
capabilities = ["read"]
}'| vault policy write odf -
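Next, create a Kubernetes authentication role for the operator service accounts. A sketch, assuming the service account names used by a typical OpenShift Data Foundation deployment:
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
    bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
    bound_service_account_namespaces=openshift-storage \
    policies=odf \
    ttl=1440h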
The role odf-rook-ceph-op is later used while you configure the KMS connection details during
the creation of the storage system.
Similarly, create a role dedicated to the rook-ceph-osd service account:
$ vault write auth/kubernetes/role/odf-rook-ceph-osd \
    bound_service_account_names=rook-ceph-osd \
    bound_service_account_namespaces=openshift-storage \
    policies=odf \
    ttl=1440h
2.4. CREATING AN OPENSHIFT DATA FOUNDATION CLUSTER
Prerequisites
The OpenShift Data Foundation operator must be installed from the Operator Hub. For more
information, see Installing OpenShift Data Foundation Operator using the Operator Hub .
If you want to use Azure Vault [Technology Preview] as the key management service provider, make sure to set up client authentication and fetch the client credentials from Azure using the following steps (a CLI sketch follows this list):
1. Create the Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in the Microsoft product documentation.
2. Create a Service Principal with certificate-based authentication. For more information, see Create an Azure service principal with Azure CLI in the Microsoft product documentation.
3. Set Azure Key Vault role-based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault.
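The following Azure CLI sketch outlines these steps; all names, identifiers, and scopes are illustrative placeholders:
$ az keyvault create --name <vault_name> --resource-group <resource_group> --location <location>
$ az ad sp create-for-rbac --name <service_principal_name> --create-cert
$ az role assignment create --role "Key Vault Crypto Officer" --assignee <app_id> --scope <key_vault_resource_id>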
Procedure
1. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed
operators.
Ensure that the Project selected is openshift-storage.
2. Click on the OpenShift Data Foundation operator, and then click Create StorageSystem.
Username
Password
Database name
ii. Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server.
e. Click Next.
a. Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
NOTE
Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage).
c. In the Configure performance section, select one of the following performance profiles:
Lean
Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.
Balanced (default)
Use this when recommended resources are available. This profile provides a balance
between resource consumption and performance for diverse workloads.
Performance
Use this in an environment with sufficient resources to get the best performance. This
profile is tailored for high performance by allocating ample memory and CPUs to
ensure optimal execution of demanding workloads.
NOTE
You have the option to configure the performance profile even after the
deployment using the Configure performance option from the options
menu of the StorageSystems tab.
IMPORTANT
For more information about resource requirements, see Resource requirement for
performance profiles.
d. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift
Data Foundation.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread
across different Locations/availability zones.
If the nodes selected do not match the OpenShift Data Foundation cluster requirements of
an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum
starting node requirements, see the Resource requirements section in the Planning guide.
e. Click Next.
5. Optional: In the Security and network page, configure the following based on your
requirements:
a. To enable encryption, select Enable data encryption for block and file storage.
Cluster-wide encryption
Encrypts the entire cluster (block and file).
StorageClass encryption
Creates an encrypted persistent volume (block only) using an encryption-enabled storage class.
ii. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
A. From the Key Management Service Provider drop-down list, select one of the
following providers and provide the necessary details:
Vault
Click Save.
I. Enter a unique Connection Name for the Key Management service within
the project.
II. In the Address and Port sections, enter the IP of Thales CipherTrust
Manager and the port where the KMIP interface is enabled. For example:
Address: 123.34.3.2
Port: 5696
III. Upload the Client Certificate, CA certificate, and Client Private Key.
V. The TLS Server field is optional and used when there is no DNS entry for
the KMIP endpoint. For example,
kmip_all_<port>.ciphertrustmanager.local.
I. Enter a unique Connection name for the key management service within
the project.
V. Upload the Certificate file in .PEM format; the certificate file must include a client certificate and a private key.
i. Select a Network.
6. In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next.
NOTE
When your deployment has five or more nodes, racks, or rooms, and when there are five
or more number of failure domains present in the deployment, you can configure Ceph
monitor counts based on the number of racks or zones. An alert is displayed in the
notification panel or Alert Center of the OpenShift Web Console to indicate the option to
increase the number of Ceph monitor counts. You can use the Configure option in the
alert to configure the Ceph monitor counts. For more information, see Resolving low
Ceph monitor count alert.
Verification steps
b. Verify that Status of StorageCluster is Ready and has a green tick mark next to it.
To verify that all components for OpenShift Data Foundation are successfully installed, see
Verifying your OpenShift Data Foundation deployment .
Additional resources
To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
CHAPTER 3. VERIFYING OPENSHIFT DATA FOUNDATION DEPLOYMENT
3.1. VERIFYING THE STATE OF THE PODS
Procedure
NOTE
If the Show default projects option is disabled, use the toggle button to list all
the default projects.
For more information on the expected number of pods for each component and how it varies
depending on the number of nodes, see the following table:
1. Set filter for Running and Completed pods to verify that the following pods are in Running and
Completed state:
odf-operator-controller-manager-* (1 pod on any storage node)
csi-addons-controller-manager-* (1 pod on any storage node)
ocs-client-operator-console-* (1 pod on any storage node)
MON: rook-ceph-mon-*
MGR: rook-ceph-mgr-*
MDS: rook-ceph-mds-ocs-storagecluster-cephfilesystem-*
CSI:
cephfs: csi-cephfsplugin-* (1 pod on each storage node) and csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes)
rbd: csi-rbdplugin-* (1 pod on each storage node) and csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes)
rook-ceph-crashcollector: rook-ceph-crashcollector-*
OSD: rook-ceph-osd-* (1 pod for each device) and rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
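You can also list these pods from the command line as a quick check:
$ oc get pods -n openshift-storage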
3.2. VERIFYING THE OPENSHIFT DATA FOUNDATION CLUSTER IS HEALTHY
Procedure
2. In the Status card of the Overview tab, click Storage System and then click the storage
system link from the pop up that appears.
3. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
For more information on the health of the OpenShift Data Foundation cluster using the Block and File
dashboard, see Monitoring OpenShift Data Foundation .
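The cluster state can also be checked from the CLI; a minimal sketch, assuming the default openshift-storage namespace:
$ oc get storagecluster -n openshift-storage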
3.3. VERIFYING THE MULTICLOUD OBJECT GATEWAY IS HEALTHY
Procedure
2. In the Status card of the Overview tab, click Storage System and then click the storage
system link from the pop up that appears.
a. In the Status card of the Object tab, verify that both Object Service and Data Resiliency
have a green tick.
For more information on the health of the OpenShift Data Foundation cluster using the object service
dashboard, see Monitoring OpenShift Data Foundation .
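If the MCG command-line tool is installed, a similar health summary is available from the CLI; a sketch:
$ noobaa status -n openshift-storage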
IMPORTANT
The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, it can result in the total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
3.4. VERIFYING THAT THE SPECIFIC STORAGE CLASSES EXIST
Procedure
1. Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
2. Verify that the following storage classes are created with the OpenShift Data Foundation
cluster creation:
ocs-storagecluster-ceph-rbd
ocs-storagecluster-cephfs
openshift-storage.noobaa.io
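Equivalently, you can confirm the storage classes from the CLI:
$ oc get storageclasses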
CHAPTER 4. DEPLOY STANDALONE MULTICLOUD OBJECT GATEWAY
IMPORTANT
The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, it can result in the total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
4.1. INSTALLING RED HAT OPENSHIFT DATA FOUNDATION OPERATOR
Prerequisites
Access to an OpenShift Container Platform cluster using an account with cluster-admin and
operator installation permissions.
You must have at least three worker or infrastructure nodes in the Red Hat OpenShift
Container Platform cluster.
For additional resource requirements, see the Planning your deployment guide.
IMPORTANT
When you need to override the cluster-wide default node selector for OpenShift
Data Foundation, you can use the following command to specify a blank node
selector for the openshift-storage namespace (create openshift-storage
namespace in this case):
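For example (a sketch, assuming the standard openshift.io/node-selector annotation):
$ oc annotate namespace openshift-storage openshift.io/node-selector=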
Taint a node as infra to ensure only Red Hat OpenShift Data Foundation
resources are scheduled on that node. This helps you save on subscription costs.
For more information, see the How to use dedicated worker nodes for Red Hat
OpenShift Data Foundation section in the Managing and Allocating Storage
Resources guide.
Procedure
3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the
OpenShift Data Foundation Operator.
4. Click Install.
If you select Manual updates, then the OLM creates an update request. As a cluster
administrator, you must then manually approve that update request to update the Operator
to a newer version.
e. Ensure that the Enable option is selected for the Console plugin.
f. Click Install.
Verification steps
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up so that the console changes take effect.
Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator
shows a green tick indicating successful installation.
4.2. CREATING A STANDALONE MULTICLOUD OBJECT GATEWAY
Prerequisites
Procedure
1. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed
operators.
2. Click OpenShift Data Foundation operator and then click Create StorageSystem.
c. Click Next.
a. From the Key Management Service Provider drop-down list, either select Vault or Thales
CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you
selected Thales CipherTrust Manager (using KMIP), go to step iii.
Enter the Key Value secret path in Backend Path that is dedicated and unique
to OpenShift Data Foundation.
Enter a unique Vault Connection Name, host Address of the Vault server
('https://<hostname or ip>'), Port number and Role name.
Enter the Key Value secret path in Backend Path that is dedicated and unique
to OpenShift Data Foundation.
c. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps
below:
i. Enter a unique Connection Name for the Key Management service within the project.
ii. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the
port where the KMIP interface is enabled. For example:
Address: 123.34.3.2
Port: 5696
iii. Upload the Client Certificate, CA certificate, and Client Private Key.
iv. If StorageClass encryption is enabled, enter the Unique Identifier to be used for
encryption and decryption generated above.
v. The TLS Server field is optional and used when there is no DNS entry for the KMIP
endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
d. Select a Network.
e. Click Next.
Verification steps
2. In the Status card of the Overview tab, click Storage System and then click the storage
system link from the pop up that appears.
a. In the Status card of the Object tab, verify that both Object Service and Data Resiliency
have a green tick.
2. Select openshift-storage from the Project drop-down list and verify that the following
pods are in Running state.
NOTE
If the Show default projects option is disabled, use the toggle button to list
all the default projects.
OpenShift Data Foundation Operator:
ocs-operator-* (1 pod on any storage node)
ocs-metrics-exporter-* (1 pod on any storage node)
Multicloud Object Gateway:
noobaa-operator-* (1 pod on any storage node)
noobaa-core-* (1 pod on any storage node)
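A quick CLI check for the same pods (the grep pattern is illustrative):
$ oc get pods -n openshift-storage | grep -E 'ocs-operator|ocs-metrics-exporter|noobaa'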
CHAPTER 5. VIEW OPENSHIFT DATA FOUNDATION TOPOLOGY
Procedure
2. Choose a node to view node details on the right-hand panel. You can also access resources or
deployments within a node by clicking on the search/preview decorator icon.
a. Click the preview decorator on a node. A modal window appears above the node that
displays all of the deployments associated with that node along with their statuses.
b. Click the Back to main view button in the modal's upper left corner to close and return to the previous view.
c. Select a specific deployment to see more information about it. All relevant data is shown in
the side panel.
4. Click the Resources tab to view the pods information. This tab provides a deeper
understanding of the problems and offers granularity that aids in better troubleshooting.
5. Click the pod links to view the pod information page on OpenShift Container Platform. The link
opens in a new window.
CHAPTER 6. UNINSTALLING OPENSHIFT DATA FOUNDATION
6.1. UNINSTALLING OPENSHIFT DATA FOUNDATION IN INTERNAL MODE
To uninstall OpenShift Data Foundation in internal mode, see the knowledgebase article on Uninstalling OpenShift Data Foundation.