Module 2: Environment Setup and Verification
🚀 Quick Start: SSH to Your Bastion Host (Recommended)

The easiest way to work with your SNO cluster is via SSH to the bastion host. The bastion comes with pre-configured tools and a kubeconfig, so once connected you can immediately run cluster commands - no additional login is required. The connection command is shown in Step 1 below.
Module Overview
This module helps you verify that your Single Node OpenShift (SNO) cluster is properly configured with all required operators and tools for the low-latency performance workshop. Your environment comes pre-configured with all necessary components, so this module focuses on verification and understanding what’s available.
Workshop Environment Pre-Configuration

Your Single Node OpenShift (SNO) cluster has been automatically provisioned with all required operators and tools, including OpenShift Virtualization, the SR-IOV Network Operator, the built-in Node Tuning Operator, and the Showroom documentation site. This module teaches you how to verify these components and understand their purpose for the workshop exercises.
Key Learning Objectives
- Understand the workshop environment architecture
- Verify your pre-configured SNO cluster is ready
- Confirm all required operators are installed and running
- Understand the purpose of each operator for low-latency workloads
- Prepare for baseline testing and performance tuning
Workshop Architecture
This workshop uses a dedicated SNO cluster for performance testing:
- Isolated Environment: Dedicated cluster for performance tuning experiments
- Pre-configured: All operators and tools installed automatically
- Bastion Access: SSH access with pre-configured tools and kubeconfig
- Safety First: Isolated from other workloads for reliable testing
- Isolation: Performance tuning won’t affect other environments
- Repeatability: Fresh cluster for each workshop run
- Simplicity: Single node reduces complexity while maintaining OpenShift capabilities
- Real-World Relevance: SNO is used in edge and resource-constrained scenarios
Prerequisites
Before starting this module, ensure you have:
- SSH access to the bastion host (credentials provided by administrator)
- Basic understanding of OpenShift and Kubernetes concepts
- Familiarity with command-line tools (oc, kubectl)
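Before starting, you can sanity-check that the CLI tools are on your PATH. Here is a minimal Python sketch (the helper name `missing_tools` is mine, not part of the workshop scripts):

```python
import shutil

def missing_tools(tools=("oc", "kubectl")):
    """Return the required CLI tools that are not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

# Report anything missing before starting the exercises
for tool in missing_tools():
    print(f"Missing required tool: {tool}")
```

On the bastion host both tools are pre-installed, so this should print nothing.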
Hands-on Exercise: Verifying Your Workshop Environment
Exercise Overview
Your SNO cluster has been automatically configured with all required operators. In this exercise, you will:
- Verify cluster connectivity and access
- Confirm all pre-installed operators are running
- Understand the purpose of each operator
- Validate the environment is ready for performance testing
Step 1: Verify Cluster Access
- Connect to your bastion host (if not already connected):

  ```bash
  ssh lab-user@bastion.student1.sandbox5466.opentlc.com
  ```

- Verify you have cluster access:

  ```bash
  # Check current user and cluster
  oc whoami
  oc whoami --show-server

  # Verify cluster nodes
  oc get nodes

  # Check cluster version
  oc version
  ```

Expected output should show:

- User: system:admin
- One node in Ready state (your SNO)
- OpenShift version 4.20 or later
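In the spirit of the workshop's Python analysis scripts, here is a minimal sketch (the function name is my own) that parses `oc get nodes -o json` output and confirms every node reports a Ready condition:

```python
import json
import subprocess

def all_nodes_ready(nodes_json):
    """True if there is at least one node and every node reports Ready=True."""
    items = nodes_json.get("items", [])
    for node in items:
        conditions = node.get("status", {}).get("conditions", [])
        if not any(c.get("type") == "Ready" and c.get("status") == "True"
                   for c in conditions):
            return False
    return bool(items)

# Usage on the bastion (requires a working kubeconfig):
#   out = subprocess.check_output(["oc", "get", "nodes", "-o", "json"])
#   print("cluster ready" if all_nodes_ready(json.loads(out)) else "not ready")
```

On a healthy SNO cluster, the single node should report Ready and the check should pass.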
Step 2: Clone Workshop Repository
The workshop includes educational Python scripts for performance analysis. Clone the repository to your home directory:
```bash
# Clone the workshop repository
cd ~
git clone https://github.com/tosin2013/low-latency-performance-workshop.git

# Install Python dependencies
pip install --user PyYAML

# Verify scripts are available
ls ~/low-latency-performance-workshop/scripts/*.py | head -5
```
These Python scripts provide color-coded analysis output and educational explanations that are much clearer than raw JSON or bash output. You’ll use them throughout Modules 3-6 for performance analysis.
Step 3: Verify Pre-Installed Operators
Your SNO cluster was provisioned with all required operators. Let’s verify they’re running.
- Verify OpenShift Virtualization is installed:

  ```bash
  oc get csv -n openshift-cnv
  ```

  Expected output:

  ```
  NAME                                      DISPLAY                    VERSION   PHASE
  kubevirt-hyperconverged-operator.v4.x.x   OpenShift Virtualization   4.x.x     Succeeded
  ```

- Verify the SR-IOV Network Operator (note: it may not yet be installed on this cluster):

  ```bash
  oc get csv -n openshift-sriov-network-operator
  ```

  Expected output (when installed):

  ```
  NAME                            DISPLAY                   VERSION   PHASE
  sriov-network-operator.v4.x.x   SR-IOV Network Operator   4.x.x     Succeeded
  ```

- Verify the built-in Node Tuning Operator (available in all OpenShift 4.11+ clusters):

  ```bash
  # Check Node Tuning Operator pods
  oc get pods -n openshift-cluster-node-tuning-operator

  # Verify Performance Profile CRD is available
  oc get crd performanceprofiles.performance.openshift.io
  ```

- Check the HyperConverged instance (OpenShift Virtualization):

  ```bash
  oc get hyperconverged -n openshift-cnv

  # Check detailed status
  oc get hyperconverged -n openshift-cnv -o jsonpath='{.items[0].status.conditions}' | jq
  ```
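If you prefer Python over `jq`, a small sketch (the helper name is mine) that pulls a single condition out of the list returned by the jsonpath query above:

```python
def condition_status(conditions, cond_type):
    """Return the status string ('True'/'False') for a condition type, or None."""
    for cond in conditions:
        if cond.get("type") == cond_type:
            return cond.get("status")
    return None

# Example with the shape used by status.conditions:
sample = [
    {"type": "Available", "status": "True"},
    {"type": "Progressing", "status": "False"},
]
print(condition_status(sample, "Available"))  # -> True
```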
All operators should show as "Succeeded" or "Running". If any operator is not yet ready, wait a few minutes - the provisioning script deploys them automatically and they may still be initializing. What was deployed automatically: OpenShift Virtualization, the SR-IOV Network Operator, and the Showroom documentation site (the Node Tuning Operator is built into OpenShift).
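To automate the CSV check above, here is a sketch (the function name is mine) that filters `oc get csv -o json` output for operators that have not reached the Succeeded phase:

```python
def unready_csvs(csv_json):
    """Names of ClusterServiceVersions whose phase is not 'Succeeded'."""
    return [
        item["metadata"]["name"]
        for item in csv_json.get("items", [])
        if item.get("status", {}).get("phase") != "Succeeded"
    ]

# Usage on the bastion:
#   oc get csv -n openshift-cnv -o json > csvs.json
# then load csvs.json with json.load() and call unready_csvs() on it.
```

An empty result means every operator in the namespace has finished installing.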
Step 4: Understanding the Performance Operator Architecture
Your SNO cluster includes these pre-configured components:
| Component | Purpose | Status |
|---|---|---|
| Node Tuning Operator | Manages TuneD profiles AND Performance Profiles. Handles CPU isolation, real-time kernels, and system-level tuning. Built-in since OpenShift 4.11+ | Built-in ✅ |
| OpenShift Virtualization | Enables running VMs with performance optimizations for low-latency virtualization scenarios | Pre-installed ✅ |
Step 5: Final Verification
- Verify built-in Node Tuning Operator:

  ```bash
  # Check Node Tuning Operator (built-in since 4.11+)
  oc get tuned -n openshift-cluster-node-tuning-operator

  # Verify Performance Profile CRD is available (managed by NTO)
  oc get crd performanceprofiles.performance.openshift.io

  # Check NTO pods are running
  oc get pods -n openshift-cluster-node-tuning-operator
  ```

- Verify pre-installed operators:

  ```bash
  # OpenShift Virtualization
  oc get csv -n openshift-cnv
  ```

- Check all operator pods are running:

  ```bash
  # Check operator pods in all namespaces
  oc get pods -n openshift-cnv --field-selector=status.phase=Running
  oc get pods -n openshift-sriov-network-operator --field-selector=status.phase=Running
  oc get pods -n openshift-cluster-node-tuning-operator --field-selector=status.phase=Running
  ```
Expected results: all operator CSVs report "Succeeded" and their pods are Running. If any operator is not ready, wait 5-10 minutes for initialization to complete.
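The "wait 5-10 minutes" advice can be scripted as a simple polling loop. A generic sketch (the names are mine) that you could combine with any of the checks in this module:

```python
import time

def wait_until(check, timeout=600, interval=10):
    """Poll check() until it returns True; give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

For example, `wait_until` could wrap a function that re-runs `oc get csv` and returns True once every phase is Succeeded.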
OpenShift 4.20+ Performance Operator Architecture
For more details, see the OpenShift 4.20 Low Latency Tuning Documentation.
Workshop Environment Verification
Before proceeding to the next module, let’s verify that the complete environment is ready for performance testing.
Verify Cluster Readiness
- Test basic cluster functionality:

  ```bash
  # Check node status
  oc get nodes

  # Check cluster operators
  oc get co | grep -v "True.*False.*False"

  # Verify you can create resources
  oc new-project test-connectivity
  oc delete project test-connectivity
  ```

- Verify OpenShift Virtualization is ready:

  ```bash
  # Check operator status
  oc get csv -n openshift-cnv | grep kubevirt

  # Check HyperConverged status
  oc get hco -n openshift-cnv

  # Verify all virtualization pods are running
  oc get pods -n openshift-cnv --field-selector=status.phase=Running
  ```

- Verify Showroom documentation is accessible:

  ```bash
  # Check Showroom deployment
  oc get deployment -n low-latency-workshop

  # Check Showroom route
  oc get route -n low-latency-workshop
  ```
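As with the CSV check, the pod verification can be scripted. A sketch (the helper name is mine) that lists pods not in the Running phase from `oc get pods -o json` output; Succeeded (Completed) pods are excluded, since they are expected to have finished:

```python
def problem_pods(pods_json):
    """(name, phase) pairs for pods whose phase is neither Running nor Succeeded."""
    return [
        (pod["metadata"]["name"], pod.get("status", {}).get("phase"))
        for pod in pods_json.get("items", [])
        if pod.get("status", {}).get("phase") not in ("Running", "Succeeded")
    ]

# Usage on the bastion:
#   oc get pods -n openshift-cnv -o json > pods.json
# then load pods.json with json.load() and call problem_pods() on it.
```

An empty list means the namespace is healthy; anything returned is a candidate for `oc describe pod`.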
Module Summary
In this module, you have:
✅ Verified your SNO cluster connectivity and access
✅ Confirmed pre-installed operators (OpenShift Virtualization, SR-IOV) are running
✅ Verified built-in Node Tuning Operator for Performance Profile management
✅ Understood the purpose of each operator for low-latency workloads
✅ Prepared for baseline testing and performance tuning
- Your workshop SNO cluster comes pre-configured with all required operators
- OpenShift 4.20+ has mature built-in performance capabilities via the Node Tuning Operator
- The workshop uses a dedicated SNO cluster for isolated performance testing
- All operators are automatically deployed during cluster provisioning
- ✅ SNO cluster ready with cluster-admin access
- ✅ OpenShift Virtualization operator deployed and configured
- ✅ SR-IOV Network Operator installed for high-performance networking
- ✅ Built-in Node Tuning Operator ready for Performance Profile management
- ✅ Showroom documentation site deployed
- ✅ Environment ready for baseline testing and performance tuning
In Module 3, you will establish baseline performance metrics on your SNO cluster using industry-standard tools like kube-burner. This will provide quantitative measurements that serve as the foundation for measuring improvements in subsequent modules.
Troubleshooting Common Issues
Operator CSV Not Showing "Succeeded"
If an operator CSV is still in "Installing" or "Pending" state:
```bash
# Check CSV status with more detail
oc get csv -n openshift-cnv -o yaml | grep -A 10 "phase:"

# Check for pod issues
oc get pods -n openshift-cnv --field-selector=status.phase!=Running

# Check operator logs
oc logs -n openshift-cnv deployment/hco-operator --tail=50
```
OpenShift Virtualization Pods Not Starting
If virtualization pods fail to start:
```bash
# Check node resources
oc describe nodes

# Verify HyperConverged status
oc get hyperconverged -n openshift-cnv -o yaml | grep -A 20 "conditions:"

# Check for pod errors
oc get pods -n openshift-cnv --field-selector=status.phase!=Running
oc describe pod <pod-name> -n openshift-cnv
```
SR-IOV Operator Not Ready
If SR-IOV operator is not fully initialized:
```bash
# Check SR-IOV operator status
oc get csv -n openshift-sriov-network-operator

# Verify config daemon is running
oc get pods -n openshift-sriov-network-operator

# Check SriovNetworkNodeState (may be empty on virtual/cloud instances)
oc get sriovnetworknodestates -n openshift-sriov-network-operator
```
SR-IOV on Cloud Instances: On AWS or other cloud providers, SR-IOV may not detect physical network adapters because the instances use virtualized networking. This is expected and won’t prevent you from completing the workshop exercises. The operator will still be functional for configuration purposes.
Best Practices
For Your Own Environments
- Use dedicated clusters or node pools for performance testing
- Verify operator versions match your OpenShift version
- Monitor operator health regularly
- Use Performance Profiles for consistent tuning across nodes
- Test operator functionality before deploying production workloads
Performance Operator Architecture (OpenShift 4.20+)
- Node Tuning Operator (Built-in): Manages TuneD profiles AND Performance Profiles
- Performance Addon Operator (Deprecated): Functionality moved to the Node Tuning Operator
- SR-IOV Network Operator (Pre-installed): High-performance networking capabilities
- OpenShift Virtualization (Pre-installed): Low-latency virtualization platform
Knowledge Check
- What is the purpose of the Node Tuning Operator in OpenShift 4.20+?
- Why was the Performance Addon Operator deprecated in OpenShift 4.11+?
- What performance capabilities are now built into the Node Tuning Operator?
- Which operators come pre-installed on your workshop SNO cluster?
- How would you verify that OpenShift Virtualization is fully initialized?
Next Steps
In the next module, you will:
- Use kube-burner to establish baseline performance metrics
- Measure pod creation latency and cluster response times
- Analyze performance data to identify optimization opportunities
- Create a performance baseline document for comparison
The pre-installed operators verified in this module will be essential for subsequent modules:
- Module 4: Performance Profiles (built-in Node Tuning Operator) for CPU isolation and real-time kernels
- Module 5: SR-IOV networking + OpenShift Virtualization for high-performance VM networking
- Module 6: TuneD profiles (built-in Node Tuning Operator) for system-level performance optimization