CKP Cluster Creation
Compass UI Cluster Creation
CKP clusters are primarily created through the Compass UI, which presents a guided workflow with the following settings:
| Setting | Options / Details |
|---|---|
| Kubernetes Version | v1.33.7 through v1.35.1 (all CNCF Certified) |
| Networking (CNI) | Calico (default) |
| Networking Version | v3.30.5 |
| Worker Host Group | Selection from pre-registered host groups |
| Worker Nodes | Number of worker nodes (minimum 1) |
The control plane is a Managed Control Plane (hosted control plane), so users configure only the worker nodes and their host group assignment.
Manual Cluster Installation (Standalone)
For standalone installations, CKP provides automated installation scripts. The installation process follows this sequence:
- Package integrity verification — PGP signature and Coredge.io maintainer validation
- System dependency installation — Required system packages and CRI tools
- Container runtime setup — Containerd with Coredge-standard configuration
- OCI runtime installation — runc runtime
- Network configuration — Kernel modules and IP forwarding settings
- CKP package installation — kubeadm, kubelet, and kubectl from the CKP distribution
- Swap disabling — Required because kubelet will not start with swap enabled by default
- Cluster initialization — Using kubeadm with CKP-specific configuration
- kubectl configuration — Admin access setup
- Helm installation — For addon management
- CNI deployment — Calico via tigera-operator with configured pod CIDR
- CSI driver deployment — OpenEBS hostpath as default storage class
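As a sketch, the network-configuration and swap steps above typically amount to writing a few standard configuration files; the file paths and values below are conventional Kubernetes-node defaults, not confirmed CKP specifics:

```shell
# Load kernel modules needed by containerd and Kubernetes networking
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Enable IP forwarding and make bridged traffic visible to iptables
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Disable swap, both now and across reboots
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```

The `sysctl --system` call reloads all sysctl configuration files so the settings take effect without a reboot.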
Kubeadm Configuration (CKP-Specific)
The key CKP-specific kubeadm configuration is that both the main image repository and the DNS (CoreDNS) image repository point to the Coredge Docker Hub registry instead of the upstream Kubernetes registry (registry.k8s.io).
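In a kubeadm `ClusterConfiguration`, that override looks roughly like the fragment below. The registry path `docker.io/coredgeio` and the pod CIDR are illustrative assumptions, not confirmed CKP values:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.33.7
imageRepository: docker.io/coredgeio     # hypothetical Coredge registry path
dns:
  imageRepository: docker.io/coredgeio   # CoreDNS image also pulled from the Coredge registry
networking:
  podSubnet: 10.244.0.0/16               # example pod CIDR, later consumed by Calico
```

The file would be passed to initialization as `kubeadm init --config <file>`.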
Worker / Additional Master Nodes
Worker and additional master nodes follow a similar dependency-installation process, then join the cluster using a join token generated during initialization of the first master node.
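The join itself is typically a single kubeadm command; the token and CA certificate hash come from the first master. The values in angle brackets below are placeholders:

```shell
# On the first master: print a fresh join command (token + CA cert hash)
kubeadm token create --print-join-command

# On each worker: run the printed command, e.g.
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Additional masters additionally pass --control-plane, along with the
# certificate key distributed via 'kubeadm init phase upload-certs --upload-certs'
```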