This page describes the installation of K3s using the OpsRamp ISO/OVA for both single-node and multi-node setups.
Overview
The OpsRamp-provided ISO and OVA images come pre-bundled with K3s, a lightweight Kubernetes distribution. This integration simplifies the Kubernetes installation process, allowing you to bring up a functional cluster using a single command with no complex configuration required.
About K3s in OpsRamp Gateway
K3s is used to host and run the NextGen Gateway as a collection of Kubernetes-native applications. These applications are deployed using Helm charts and rely on Docker container images. During installation and runtime, access to a container registry is essential.
- By default, the Gateway is configured to use the OpsRamp cloud-hosted container registry to pull the required Helm charts and images.
- Alternatively, organizations can configure their own private or public registry to meet internal security or compliance requirements.
Registry Configuration (Optional, Based on Use Case)
Before installing K3s, it is recommended to review your container registry configuration. This step is optional if you plan to use the default OpsRamp registry.
When You Can Skip Registry Configuration
If you are using the OpsRamp registry to pull Helm charts and Docker images, no additional configuration is required. The Gateway will automatically connect to OpsRamp’s registry during deployment.
When You Must Configure a Custom Registry
If you choose not to use the OpsRamp registry and instead prefer to use your own registry (public or private), you must configure it before installing K3s.
Note

If you are using a private registry, you are responsible for syncing your repository with the OpsRamp registry to pull the necessary Helm charts and Docker images. This synchronization process must be handled on your end.

Steps to Configure a Custom Registry

Note

The private registry configuration must be applied individually on each node in the cluster, using the same settings, to ensure consistency across the deployment.

- Open the registry configuration template:

  ```
  vi /var/cgw/asserts_k3s/registries.yaml.template
  ```

- Uncomment the `configs` section and update it with your registry credentials and endpoint. Ensure proper YAML indentation to avoid syntax issues during deployment.

  ```yaml
  mirrors:
    artifact.registry:
      endpoint:
        - "https://{your-private-repo}"
  configs:
    "{your-private-repo}":
      auth:
        username: "{your-username}"
        password: "{your-password}"
  ```

Example:

```yaml
mirrors:
  artifact.registry:
    endpoint:
      - "https://hub.testrepo.com"
configs:
  "hub.testrepo.com":
    auth:
      username: "test-user"
      password: "test-password"
```
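Because the registry file is indentation-sensitive, a quick pre-deployment check can catch a common YAML pitfall: tab characters. The following is a minimal sketch; the path `/tmp/registries.yaml` and the credentials are the illustrative example values from above, not real settings.

```shell
# Write a filled-in registries.yaml using the example values above.
# /tmp is used only for illustration; the real template lives at
# /var/cgw/asserts_k3s/registries.yaml.template.
cat > /tmp/registries.yaml <<'EOF'
mirrors:
  artifact.registry:
    endpoint:
      - "https://hub.testrepo.com"
configs:
  "hub.testrepo.com":
    auth:
      username: "test-user"
      password: "test-password"
EOF

# YAML forbids tab indentation; flag any tabs before deploying.
if grep -q "$(printf '\t')" /tmp/registries.yaml; then
  echo "tabs found: fix indentation"
else
  echo "indentation OK"
fi
# prints: indentation OK
```

A full YAML parser (for example, `yamllint` if installed) catches more errors, but the tab check alone avoids the most frequent deployment failure.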
Installation Steps
Step 1: Installing K3s
Once registry configuration is complete (if applicable), install K3s using the following command:

```
opsramp-collector-start setup init
```

Note

The `setup` and `init` options are only visible to users of the OpsRamp ISO/OVA. These options are not available on other platforms.

Optional: Custom Pod and Service CIDR Ranges

Note

Do not provide a /16 subnet mask or any subnet range in these flags. Enter only the base IP address; the required /16 subnet range is applied automatically by the bootstrap tool during configuration.

```
opsramp-collector-start setup init --cluster-cidr <pod-cidr> --service-cidr <service-cidr>
```

Note

If you do not specify these flags, the default CIDR ranges are:

- Cluster CIDR: 10.42.0.0/16
- Service CIDR: 10.43.0.0/16

Ensure custom CIDRs fall within a /16 subnet to maintain compatibility.
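The "base IP address only" rule above can be checked before running the command. The following is a small POSIX shell sketch, not part of the OpsRamp tooling; `is_bare_ipv4` is a hypothetical helper name.

```shell
# Sketch: verify a value intended for --cluster-cidr / --service-cidr is a
# bare IPv4 address with no subnet mask, as the note above requires.
is_bare_ipv4() {
  case $1 in
    */*) echo "do not include a subnet mask"; return 1 ;;
  esac
  # Split the dotted quad into positional parameters.
  oldIFS=$IFS; IFS=.; set -- $1; IFS=$oldIFS
  if [ $# -ne 4 ]; then echo "not a dotted quad"; return 1; fi
  for octet in "$@"; do
    case $octet in
      ''|*[!0-9]*) echo "invalid octet: $octet"; return 1 ;;
    esac
    if [ "$octet" -gt 255 ]; then echo "octet out of range: $octet"; return 1; fi
  done
  echo "ok"
}

is_bare_ipv4 10.42.0.0      # prints: ok
is_bare_ipv4 10.42.0.0/16   # prints: do not include a subnet mask
```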
Step 2: Verifying K3s Installation
Upon successful installation, you will see output similar to:

```
root@node1:/home/gateway-admin# opsramp-collector-start setup init
Installing K3s
Installed K3s
```

Check the K3s service status:

```
service k3s status
```

Expected output:

```
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2025-04-24 11:12:21 UTC; 1min 10s ago
       Docs: https://k3s.io
    Process: 1931925 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 1931926 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1931927 (k3s-server)
```

Troubleshooting
If K3s fails to install or the service is not in a running state:
- Review the installation logs.
- Check for registry access errors or YAML syntax issues in the registry configuration.
- Review the k3s service logs to identify the underlying error messages related to the issue (for example etcd timeouts, API server errors, or networking/plugin failures).
```
sudo journalctl -u k3s
```
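The journal can be long, so filtering for likely failure keywords narrows the search quickly. The filter is shown here against sample log lines (illustrative, not real k3s output); on a live node you would pipe `sudo journalctl -u k3s --no-pager` into the same `grep`.

```shell
# Surface likely error lines. On a live node:
#   sudo journalctl -u k3s --no-pager | grep -iE 'error|fail|timeout'
# The same filter, demonstrated against sample log lines:
printf '%s\n' \
  'level=info msg="Starting k3s"' \
  'level=error msg="failed to pull image: registry unreachable"' \
  'level=info msg="etcd is running"' | grep -iE 'error|fail|timeout'
# prints: level=error msg="failed to pull image: registry unreachable"
```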
Multi-Node (HA) Installation Steps

Step 1: Installing K3s
Step 1a: Initialize the First Node
To install K3s and initialize the NextGen Gateway on the first node in an HA cluster, use the following command:
```
opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp}
```

Load Balancer IP Configuration ({loadbalancerIp})

The {loadbalancerIp} parameter supports single IPs, IP ranges, and CIDR blocks. The supported formats are:
| IP range | Description | Resulting value |
|---|---|---|
| 192.25.254.45/32 | Adds a single MetalLB IP | 192.25.254.45 |
| 192.25.254.45/30 | Adds the 4 IPs of the /30 block containing the given IP | 192.25.254.44 - 192.25.254.47 |
| 192.25.254.44 - 192.25.254.49 | Adds a custom range | 192.25.254.44 - 192.25.254.49 |
Examples of {loadbalancerIp} Usage

- Single IP: `--loadbalancer-ip 192.25.254.45/32`
- CIDR block of 4 IPs: `--loadbalancer-ip 192.25.254.45/30`
- Custom IP range: `--loadbalancer-ip 192.25.254.44-192.25.254.49`
- Multiple IPs and ranges (comma-separated): `--loadbalancer-ip 192.25.251.12/32,192.25.251.23/28,192.25.251.50-192.25.251.56`
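To preview exactly which addresses a CIDR-style value covers before passing it to `--loadbalancer-ip`, the block can be expanded by hand. A small POSIX shell sketch follows; `cidr_range` and `ip4` are hypothetical helper names, not OpsRamp commands.

```shell
# Sketch: expand an IPv4 CIDR into its first and last address, to preview
# what a --loadbalancer-ip value such as 192.25.254.45/30 covers.
ip4() {
  # Convert a 32-bit integer back to dotted-quad notation.
  printf '%d.%d.%d.%d' $(( $1 >> 24 & 255 )) $(( $1 >> 16 & 255 )) \
                       $(( $1 >> 8 & 255 ))  $(( $1 & 255 ))
}

cidr_range() {
  ip=${1%/*}; prefix=${1#*/}
  # Split the address into octets and pack into one integer.
  oldIFS=$IFS; IFS=.; set -- $ip; IFS=$oldIFS
  n=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  first=$(( n & mask ))
  last=$(( first | (~mask & 0xFFFFFFFF) ))
  printf '%s - %s\n' "$(ip4 "$first")" "$(ip4 "$last")"
}

cidr_range 192.25.254.45/30   # prints: 192.25.254.44 - 192.25.254.47
cidr_range 192.25.254.45/32   # prints: 192.25.254.45 - 192.25.254.45
```

This also demonstrates why the /30 row in the table above begins at .44: the block is aligned to the subnet boundary, not to the address you typed.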
CIDR Range Configuration (Optional)

Note

Do not provide a /16 subnet mask or any subnet range in these flags. Enter only the base IP address; the required /16 subnet range is applied automatically by the bootstrap tool during configuration.

If CIDR ranges are not specified during K3s cluster setup, the following defaults apply:

- Cluster CIDR: 10.42.0.0/16
- Service CIDR: 10.43.0.0/16

To use custom CIDR ranges, include the `--cluster-cidr` and `--service-cidr` flags, ensuring the CIDRs fall within a /16 subnet:

```
opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp} --cluster-cidr <cluster-cidr-ip> --service-cidr <service-cidr-ip>
```
Step 1b: Adding Additional Nodes to the Cluster
After completing the setup of the first node, the next step is to add additional nodes to the existing Kubernetes cluster to achieve High Availability (HA).
Step i: Generate the K3s Token on the First Node
Run the following command on the first node to generate the token required for joining new nodes to the cluster:

```
opsramp-collector-start setup node token
```

This command outputs a token that is used to authenticate and securely join new nodes.

Step ii: Add a New Node to the Cluster

On each new node that you want to add, run the following command to join it to the existing cluster:

```
opsramp-collector-start setup node add -u https://{NodeIp}:6443 -t {token}
```

- Replace {NodeIp} with the IP address of the first node (the cluster master).
- Replace {token} with the token generated in Step i.

This command installs K3s on the new node and joins it to the cluster.
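When joining several nodes, it can help to generate the per-node join commands up front. The following is a sketch only: `FIRST_NODE_IP`, the node names, and `TOKEN` are illustrative placeholders; the real token comes from `opsramp-collector-start setup node token` on the first node.

```shell
# Sketch: print the join command to run on each additional node.
FIRST_NODE_IP=192.25.254.10   # hypothetical IP of the first (master) node
TOKEN=example-token           # placeholder, not a real K3s token

for node in node2 node3; do
  echo "# run on $node:"
  echo "opsramp-collector-start setup node add -u https://${FIRST_NODE_IP}:6443 -t ${TOKEN}"
done
```

Each printed command is then executed manually on the corresponding node; the join itself is not automated here.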
Step iii: Verify Node Addition

After joining the new node, verify that it has been added successfully by running this command on the first node or any node with access to the cluster:

```
kubectl get nodes
```

Sample Output:

```
NAME    STATUS   ROLES                       AGE     VERSION
node1   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
node2   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
```

Repeat Step i, Step ii, and Step iii for each additional node to build your HA cluster. For example:

- For a 3-node cluster, add the third node following the above steps.
- For a 5-node cluster, repeat the process on the third, fourth, and fifth nodes.
- Continue as needed for larger clusters.
Step 2: Verify the Cluster Status
After adding all nodes to your High Availability (HA) cluster, verify that the Kubernetes environment is healthy and all essential components are running correctly.
Step 2a: Confirm Node Readiness
Ensure all nodes in the cluster are in a Ready state:
```
kubectl get nodes
```

Sample Output:

```
NAME    STATUS   ROLES                       AGE     VERSION
nodea   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
nodeb   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
nodec   Ready    control-plane,etcd,master   4m5s    v1.23.5+k3s1
```

Step 2b: Verify Helm Chart Deployments
Ensure required Helm-based services such as Longhorn (for storage) and MetalLB (for load balancing) have been deployed successfully:

```
helm list -A
```

Sample Output:

```
NAME            NAMESPACE        REVISION  UPDATED                        STATUS    CHART                APP VERSION
ipaddresspool   kube-system      1         2025-07-09 07:10:34 +0000 UTC  deployed  ipaddresspool-2.0.0  1.16.0
longhorn        longhorn-system  1         2025-07-09 07:08:04 +0000 UTC  deployed  longhorn-1.6.2       v1.6.1
metallb         kube-system      1         2025-07-09 07:07:59 +0000 UTC  deployed  metallb-6.3.6        0.14.5
```

Note

- ipaddresspool: Manages IP allocations for MetalLB and load balancing in the Kubernetes cluster.
- longhorn: Provides distributed block storage for persistent volumes across the cluster.
- metallb: Handles external IP allocation and load balancer services.

Ensure the STATUS is `deployed` for all charts. If any chart is missing or in a failed state, re-check your deployment steps or logs for errors.
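The per-chart status check can be scripted. The sketch below demonstrates the logic against sample rows (the `failed` metallb row is deliberately fabricated to show the failure branch); on a live cluster you would run the same `grep` checks against the output of `helm list -A`.

```shell
# Confirm each expected chart reports STATUS "deployed". The sample rows
# stand in for `helm list -A` output (timestamps elided as "...").
sample='ipaddresspool  kube-system      1  ...  deployed  ipaddresspool-2.0.0  1.16.0
longhorn       longhorn-system  1  ...  deployed  longhorn-1.6.2       v1.6.1
metallb        kube-system      1  ...  failed    metallb-6.3.6        0.14.5'

for chart in ipaddresspool longhorn metallb; do
  if printf '%s\n' "$sample" | grep "^$chart " | grep -q deployed; then
    echo "$chart: deployed"
  else
    echo "$chart: NOT deployed - investigate"
  fi
done
# prints:
# ipaddresspool: deployed
# longhorn: deployed
# metallb: NOT deployed - investigate
```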
Step 2c: Check Pod Status Across All Namespaces
Verify that all pods are in a Running state and all containers within them are Ready:
```
kubectl get pods -A
```

Sample Output:

```
NAMESPACE         NAME                                  READY   STATUS    RESTARTS   AGE
default           nextgen-gw-0                          4/4     Running   0          23h
default           stan-0                                2/2     Running   0          23h
kube-system       coredns-d76bd69b-n8jhh                1/1     Running   0          23h
kube-system       metallb-controller-7954c9c84d-pm89k   1/1     Running   0          23h
kube-system       metallb-speaker-j69tp                 1/1     Running   0          23h
kube-system       metallb-speaker-mddqj                 1/1     Running   0          23h
kube-system       metallb-speaker-n45g4                 1/1     Running   0          23h
kube-system       metrics-server-7cd5fcb6b7-tvnps       1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-2jg5w         1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qhs85         1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qn9pr         1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-chjs9       1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-hbwx2       1/1     Running   0          23h
```

Note

All pods should show a STATUS of Running and READY columns like 1/1, 2/2, or 4/4, indicating that all containers within each pod are active.
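Scanning the table by eye is error-prone in larger clusters, so a filter that prints only unhealthy pods is handy. A sketch follows; `check_pods` is a hypothetical helper, demonstrated here against sample rows (the `broken-pod` row is fabricated to show a match). On a live cluster you would pipe `kubectl get pods -A --no-headers` into it instead.

```shell
# Print namespace/name for any pod that is not fully ready or not Running.
# With `kubectl get pods -A --no-headers`, $1=NAMESPACE, $2=NAME,
# $3=READY (e.g. 1/1), $4=STATUS.
check_pods() {
  awk '{ split($3, r, "/"); if (r[1] != r[2] || $4 != "Running") print $1 "/" $2 }'
}

# Sample input: two healthy pods and one stuck pod.
printf '%s\n' \
  'default      nextgen-gw-0  4/4  Running  0  23h' \
  'kube-system  coredns-x     1/1  Running  0  23h' \
  'kube-system  broken-pod    0/1  Pending  0  2m' | check_pods
# prints: kube-system/broken-pod
```

An empty result means every pod is Running with all containers ready.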