Exercise 2 - Managing Clusters using Advanced Cluster Management
In this exercise you will manage clusters using Red Hat Advanced Cluster Management. You will work with local-cluster and any additional clusters provisioned in Module 01 (standard-cluster, gpu-cluster). You will attach labels to the clusters, visualize their resources, inspect the fleet, and review upgrade options.
2.1 Label and Inspect Managed Clusters
Logging into the clusters
Before you begin, make sure you are logged into the hub cluster (where ACM is installed). Use the credentials from the Red Hat Demo Platform or the token from the OpenShift console:
<hub> $ oc login --token=<your-token> --server=https://api.<hub-domain>:6443
Verify that all managed clusters are visible from the hub:
<hub> $ oc get managedclusters
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
local-cluster true https://api.<hub-domain>:6443 True True ...
standard-cluster true https://api.standard-cluster.<base-domain>:6443 True True ...
gpu-cluster true https://api.gpu-cluster.<base-domain>:6443 True True ...
Logging into Hive-provisioned clusters (standard-cluster, gpu-cluster):
Clusters provisioned via Hive in Module 01 store their kubeadmin credentials as secrets on the hub. To retrieve them:
# Get the API URL for a managed cluster:
<hub> $ oc get managedcluster standard-cluster -o jsonpath='{.spec.managedClusterClientConfigs[0].url}'
# Find the admin password secret name:
<hub> $ oc get clusterdeployment standard-cluster -n standard-cluster \
-o jsonpath='{.spec.clusterMetadata.adminPasswordSecretRef.name}'
# Extract the password (replace <secret-name> with the output above):
<hub> $ oc get secret <secret-name> -n standard-cluster \
-o jsonpath='{.data.password}' | base64 -d; echo
# Log into the managed cluster (use the API URL from the first command):
<hub> $ oc login -u kubeadmin -p <password> <standard-cluster-api-url>
Repeat the same steps for gpu-cluster (using namespace gpu-cluster).
On macOS, replace base64 -d with base64 -D.
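The three retrieval steps above can be combined into a small helper. This is a hedged sketch, not part of the official workshop material: the function name is made up, and it assumes (as in Module 01) that the ClusterDeployment lives in a namespace named after the cluster.

```shell
# acm_cluster_creds (hypothetical helper): print the API URL and the decoded
# kubeadmin password for a Hive-provisioned managed cluster.
# Assumes: `oc` is logged into the hub, and the ClusterDeployment sits in a
# namespace with the same name as the cluster (as in Module 01).
acm_cluster_creds() {
  cluster="$1"
  # API URL registered on the ManagedCluster resource
  api_url=$(oc get managedcluster "$cluster" \
    -o jsonpath='{.spec.managedClusterClientConfigs[0].url}')
  # Name of the secret that holds the kubeadmin password
  secret=$(oc get clusterdeployment "$cluster" -n "$cluster" \
    -o jsonpath='{.spec.clusterMetadata.adminPasswordSecretRef.name}')
  # Decode the password (use `base64 -D` on older macOS)
  password=$(oc get secret "$secret" -n "$cluster" \
    -o jsonpath='{.data.password}' | base64 -d)
  printf 'API URL : %s\nPassword: %s\n' "$api_url" "$password"
}
```

You can then log in with `oc login -u kubeadmin -p <password> <api-url>` using the printed values.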
Applying labels

Modify the labels of the managed clusters in Red Hat Advanced Cluster Management:

- For local-cluster:
  - environment=hub
  - owner=<your-name>
- For standard-cluster (if provisioned):
  - environment=dev
  - owner=<your-name>
- For gpu-cluster (if provisioned), in addition to the gpu/accelerator labels already set by the ClusterDeployment:
  - owner=<your-name>

Option A — Via the ACM Console:

- Navigate to Clusters → select a cluster → Actions → Edit labels.
- Add the labels in key=value format.
Option B — Via the CLI (from the hub):
<hub> $ oc label managedcluster local-cluster environment=hub owner=<your-name> --overwrite
<hub> $ oc label managedcluster standard-cluster environment=dev owner=<your-name> --overwrite
<hub> $ oc label managedcluster gpu-cluster owner=<your-name> --overwrite
Verify all labels:
<hub> $ oc get managedcluster -L environment,owner,gpu,accelerator
NAME HUB ACCEPTED ... ENVIRONMENT OWNER GPU ACCELERATOR
local-cluster true ... hub <your-name>
standard-cluster true ... dev <your-name>
gpu-cluster true ... ai <your-name> true nvidia-l4
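These labels are not only informational: ACM selects clusters by label when placing workloads and policies. As an illustrative sketch (the placement APIs are covered in later modules, and the name and namespace below are hypothetical), a Placement that targets all dev clusters might look like this:

```yaml
# Illustrative only: selects managed clusters labeled environment=dev.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: dev-clusters        # hypothetical name
  namespace: my-workloads   # hypothetical namespace; must be bound to a ManagedClusterSet
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            environment: dev
```

From the CLI, the same labels can be used as a filter, e.g. `oc get managedcluster -l environment=dev`.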
Inspecting agent pods

Log into each managed cluster (see login instructions above) and make sure the agent pods are running:
<managed cluster> $ oc get pods -n open-cluster-management-agent
NAME READY STATUS RESTARTS AGE
klusterlet-agent-7b5dcbcc8-jwnqh 1/1 Running 0 ...
klusterlet-b5c848766-h5j6p 1/1 Running 0 ...
<managed cluster> $ oc get pods -n open-cluster-management-agent-addon
NAME READY STATUS RESTARTS AGE
application-manager-85d5c7b944-v5qq9 1/1 Running 0 ...
cert-policy-controller-849bcbcf94-qhp8j 1/1 Running 0 ...
config-policy-controller-5796ffcbb5-4mhht 1/1 Running 0 ...
governance-policy-framework-645d6cdb4c-wqkd6 1/1 Running 0 ...
klusterlet-addon-workmgr-54db9ddcc-ngk2k 1/1 Running 0 ...
managed-serviceaccount-addon-agent-56cc6c7c-ztsqc 1/1 Running 0 ...
Alternatively, you can verify the klusterlet status of all clusters from the hub without logging into each one:
<hub> $ oc get managedcluster
All clusters should show JOINED=True and AVAILABLE=True.
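If you manage many clusters, scanning the table by eye does not scale. As a hedged sketch (the function name is made up; it assumes `oc` is logged into the hub), the availability condition can be filtered with jsonpath and awk:

```shell
# Hypothetical helper: list managed clusters whose
# ManagedClusterConditionAvailable condition is not True.
# Assumes `oc` is logged into the hub.
unavailable_clusters() {
  oc get managedclusters \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}{"\n"}{end}' \
    | awk '$2 != "True" { print $1 }'
}
```

An empty result means every cluster in the fleet is reporting Available.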
2.2 Analyzing the managed cluster

In this exercise you will use the Red Hat Advanced Cluster Management portal to analyze the managed cluster's resources. You may use the workshop presentation for examples and guidance.

- Using Red Hat Advanced Cluster Management, find out what the cloud provider of the managed cluster is.
- Using Red Hat Advanced Cluster Management, find out the number of nodes that make up the managed cluster. How many CPUs does each node have?
- Using Red Hat Advanced Cluster Management, check whether all users can provision new projects on local-cluster (check if the self-provisioners ClusterRoleBinding has the system:authenticated:oauth group associated with it).
- Using Red Hat Advanced Cluster Management, check which channel is associated with local-cluster (stable / candidate / fast). Search for the kind:ClusterVersion CR.
- Using Red Hat Advanced Cluster Management:
  - Check the port number that the alertmanager-main-0 pod listens on in local-cluster (can be found using the pod logs and the pod resource definition).
  - Check the full path of the alertmanager-main-0 pod's configuration file (can be found using the pod logs and the pod resource definition).
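Once you have found the answers in the console, you can cross-check the Alertmanager details from the CLI. This is a hedged sketch (the function name is invented); it assumes `oc` is logged into local-cluster and that alertmanager-main-0 runs in the openshift-monitoring namespace, as on a default OpenShift install:

```shell
# Hypothetical cross-check for the console tasks above.
# Assumes `oc` is logged into local-cluster and alertmanager-main-0
# runs in the openshift-monitoring namespace.
alertmanager_info() {
  echo "Container ports declared in the pod spec:"
  oc -n openshift-monitoring get pod alertmanager-main-0 \
    -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.ports[*].containerPort}{"\n"}{end}'
  echo "Arguments of the first container (look for the config file flag):"
  oc -n openshift-monitoring get pod alertmanager-main-0 \
    -o jsonpath='{.spec.containers[0].args}'
  echo
}
```

The same information is visible in the console under the pod's YAML tab and logs.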
2.3 Review Cluster Upgrades using Advanced Cluster Management

On a single-node cluster (SNO), performing an actual upgrade is risky and may cause extended downtime. This exercise focuses on reviewing available upgrades rather than executing them. If you have a multi-node standard-cluster provisioned, you may perform an actual upgrade on that cluster instead.

- In the RHACM console, navigate to Clusters and select a managed cluster.
- Review the available upgrade paths by checking the channel (stable / candidate / fast). Search for the kind:ClusterVersion CR.
- Examine which versions are available for upgrade without actually initiating the upgrade.
- If you have a non-SNO managed cluster (e.g., standard-cluster), you may optionally change the channel from stable-4.x to stable-4.x+1 and initiate the upgrade using Red Hat Advanced Cluster Management. The upgrade process may take up to an hour to complete.
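The console review can also be mirrored from the CLI. As a hedged sketch (the function name is invented; it assumes `oc` is logged into the cluster being reviewed), the current version, channel, and available updates can be read from the ClusterVersion resource without starting an upgrade:

```shell
# Hypothetical read-only review of upgrade options.
# Assumes `oc` is logged into the managed cluster being reviewed.
review_upgrades() {
  oc get clusterversion version \
    -o jsonpath='{"Current: "}{.status.desired.version}{"\nChannel: "}{.spec.channel}{"\nAvailable: "}{.status.availableUpdates[*].version}{"\n"}'
}
```

`oc adm upgrade` with no arguments prints a similar summary. Neither command modifies the cluster.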