Containers · Helm chart

k3s Integration

Purpose-built monitoring for lightweight Kubernetes. Track k3s server and agent health, embedded etcd, Traefik ingress, and edge node resources with minimal overhead.

Setup

How It Works

01

Deploy TigerOps Agent via Helm

Install the TigerOps lightweight agent optimized for k3s. The Helm chart automatically detects the k3s server binary, data directory, and embedded etcd socket to configure all scrapers without manual intervention.

02

Enable k3s Metrics Server

k3s exposes metrics at /metrics on the API server. TigerOps configures its scraper to authenticate using the k3s kubeconfig and collects server, scheduler, and controller manager metrics from the combined binary.
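The generated scraper configuration looks roughly like the sketch below. The key names are illustrative, not the chart's exact schema; the kubeconfig path is the default location where k3s writes its admin kubeconfig.

```yaml
# Illustrative scraper settings for the k3s combined binary (key names are assumptions)
k3s:
  serverScrape:
    metricsPath: /metrics
    # authenticate with the admin kubeconfig written by k3s at install time
    kubeconfig: /etc/rancher/k3s/k3s.yaml
    # one endpoint serves all control plane components in k3s
    components:
      - apiserver
      - scheduler
      - controller-manager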

03

Configure Node Agent on All Nodes

Deploy the TigerOps node agent to both server and agent nodes. On agent-only nodes, the agent collects containerd, kubelet, and cgroup metrics without requiring access to the control plane.
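The node agent runs as a DaemonSet so it lands on every node. A values sketch, assuming hypothetical collector toggles; the control-plane toleration is what lets the DaemonSet schedule onto k3s server nodes as well:

```yaml
# Illustrative node agent values (collector keys are assumptions)
nodeAgent:
  # tolerate the control-plane taint so server nodes are covered too
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
  collectors:
    containerd: true
    kubelet: true
    cgroups: true
```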

04

Set Edge-Appropriate Alerting

Configure low-overhead alert rules suited for edge deployments — node unreachable thresholds, disk pressure on resource-constrained nodes, and embedded etcd raft health. TigerOps queues alerts during connectivity gaps.
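A sketch of what such rules might look like. The rule syntax and metric names are illustrative assumptions, not TigerOps' exact schema:

```yaml
# Illustrative edge alert rules (syntax and metric names are assumptions)
alerts:
  rules:
    - name: EdgeNodeUnreachable
      condition: node_heartbeat_age_seconds > 300
      severity: warning
    - name: EdgeDiskPressure
      condition: node_filesystem_free_pct < 10
      severity: critical
    - name: EmbeddedEtcdNoLeader
      condition: etcd_server_has_leader == 0
      severity: critical
  # hold alerts locally and deliver once connectivity returns
  queueDuringDisconnect: true
```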

Capabilities

What You Get Out of the Box

k3s Server & Agent Component Health

Monitor the k3s server (combined API server, scheduler, controller manager) and agent (kubelet, kube-proxy) as a single process. Track component restart counts, binary version, and feature gate configuration.

Embedded etcd Monitoring

Track k3s embedded etcd cluster health, leader election events, WAL fsync duration, and peer round-trip time. Alert when a single-node embedded etcd experiences database file growth that risks disk exhaustion.
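The metrics above are standard etcd server metrics that the embedded instance exposes. A hedged sketch of alert conditions over them; the `condition` syntax is an assumption, but the metric names (`etcd_disk_wal_fsync_duration_seconds_bucket`, `etcd_mvcc_db_total_size_in_bytes`) are real etcd metrics:

```yaml
# Illustrative embedded etcd alert conditions (rule syntax is an assumption)
embeddedEtcd:
  alerts:
    - name: EtcdSlowWalFsync
      # p99 WAL fsync latency above 500ms indicates slow storage
      condition: histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) > 0.5
    - name: EtcdDbGrowth
      # database file growing faster than ~50 MB/hour
      condition: delta(etcd_mvcc_db_total_size_in_bytes[1h]) > 50 * 1024 * 1024
```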

Traefik Ingress Metrics

Monitor k3s bundled Traefik ingress request rates, response codes, backend health, and TLS certificate expiry. Track per-IngressRoute latency percentiles and error rates without any additional configuration.
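If Prometheus metrics are not already enabled on the bundled Traefik, k3s lets you override its chart values with a `HelmChartConfig` in `kube-system`. A minimal sketch; exact values depend on the Traefik chart version your k3s release bundles:

```yaml
# Override the Traefik chart that k3s bundles to expose Prometheus metrics
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    metrics:
      prometheus:
        entryPoint: metrics
```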

Resource-Constrained Node Monitoring

Purpose-built metrics collection for edge nodes with limited RAM and CPU. TigerOps uses compressed metric shipping and adaptive scrape intervals to minimize agent overhead on ARM and low-power devices.
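A sketch of how this might be tuned in the agent values. All key names here are illustrative assumptions about the edge profile:

```yaml
# Illustrative edge profile tuning (key names are assumptions)
agent:
  profile: edge
  adaptiveScrape:
    # start at the base interval and back off under CPU pressure
    baseInterval: 30s
    maxInterval: 120s
    cpuPressureThresholdPct: 80
  shipping:
    # compress batched metrics before shipping upstream
    compression: snappy
```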

Klipper Load Balancer & ServiceLB

Monitor k3s built-in ServiceLB (Klipper) for service endpoint health, DaemonSet pod status, and iptables rule application. Track which nodes are actively serving traffic for LoadBalancer services.
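ServiceLB creates one `svclb-<service>` DaemonSet per LoadBalancer service, so the integration can track those pods directly. A hedged values sketch with hypothetical keys:

```yaml
# Illustrative ServiceLB settings (key names are assumptions)
k3s:
  serviceLB:
    enabled: true
    # watch the svclb-<service> DaemonSets that Klipper creates
    daemonSetPrefix: svclb-
    trackIptablesRules: true
```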

AI-Assisted Edge Incident Detection

TigerOps AI accounts for intermittent connectivity in edge environments. It distinguishes node unreachability from genuine outages using historical connectivity patterns, reducing false-positive alerts for edge k3s clusters.

Configuration

Helm Values for k3s Monitoring

Deploy the TigerOps lightweight agent to your k3s cluster with minimal resource overhead.

tigerops-k3s-values.yaml
# TigerOps Helm values for k3s
# helm repo add tigerops https://charts.atatus.net
# helm install tigerops tigerops/tigerops-k3s -f values.yaml

global:
  apiKey: "${TIGEROPS_API_KEY}"
  remoteWriteEndpoint: https://ingest.atatus.net/api/v1/write

k3s:
  enabled: true
  # k3s combined binary metrics endpoint
  serverMetricsPort: 10250
  # k3s data directory (for embedded etcd socket detection)
  dataDir: /var/lib/rancher/k3s

  embeddedEtcd:
    enabled: true  # set false for external etcd

  traefik:
    enabled: true
    # Scrape Traefik dashboard metrics
    dashboardPort: 9000

  serviceLB:
    enabled: true

# Lightweight agent profile for edge/constrained nodes
agent:
  profile: edge
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 128Mi

  # Buffer metrics locally during connectivity loss
  localBuffer:
    enabled: true
    maxDurationHours: 24
    storagePath: /var/lib/tigerops/buffer

  scrapeInterval: 30s   # relaxed for edge nodes

alerts:
  nodeUnreachableMinutes: 5
  embeddedEtcdDiskGrowthMBPerHour: 50
  traefikErrorRatePct: 5

FAQ

Common Questions

How does TigerOps handle k3s single-node (server + agent) deployments?

TigerOps auto-detects single-node k3s deployments where the server and agent roles run on one node. It merges the combined binary metrics into a single node view while still distinguishing control plane CPU usage from workload container usage.

Can TigerOps monitor k3s clusters that are intermittently connected to the internet?

Yes. The TigerOps agent includes a local metric buffer that stores up to 24 hours of compressed metrics on disk. When connectivity is restored, buffered metrics are replayed to TigerOps with accurate timestamps, preserving historical data across gaps.

Does TigerOps support k3s High Availability with external etcd?

Yes. TigerOps supports both embedded etcd HA and external etcd configurations. For external etcd, configure the etcd endpoint and credentials separately. TigerOps monitors both the k3s server components and the external etcd cluster independently.
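A sketch of what the external etcd configuration might look like in the Helm values. The `externalEtcd` keys and file paths are illustrative assumptions; 2379 is etcd's standard client port:

```yaml
# Illustrative external etcd configuration (key names and paths are assumptions)
k3s:
  embeddedEtcd:
    enabled: false
  externalEtcd:
    endpoints:
      - https://etcd-0.example.internal:2379
      - https://etcd-1.example.internal:2379
    tls:
      certFile: /etc/tigerops/etcd/client.crt
      keyFile: /etc/tigerops/etcd/client.key
      caFile: /etc/tigerops/etcd/ca.crt
```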

What is the agent resource footprint on a k3s edge node?

The TigerOps lightweight agent for k3s uses approximately 30–60 MB of RAM and under 1% CPU on a typical ARM64 edge node at 15-second scrape intervals. The agent can be configured with reduced scrape frequency and metric cardinality limits for highly constrained devices.
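For highly constrained devices, the footprint can be reduced further. A hedged values sketch; the cardinality keys are assumptions about the agent's configuration surface:

```yaml
# Illustrative tuning for highly constrained devices (key names are assumptions)
agent:
  # relax scrape frequency to trade freshness for CPU
  scrapeInterval: 60s
  cardinality:
    # cap series per scrape and drop high-churn labels
    maxSeriesPerScrape: 2000
    dropLabels:
      - pod_template_hash
```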

Can I use TigerOps to monitor k3s clusters running in Rancher?

Yes. k3s clusters registered in Rancher can be monitored through both the TigerOps Rancher integration (which covers fleet and project metrics) and the k3s-specific integration (which adds embedded etcd, Traefik, and Klipper metrics not covered by generic Kubernetes monitoring).

Get Started

Production-Grade Monitoring for Lightweight Kubernetes at the Edge

k3s-native metrics, embedded etcd health, and offline-resilient alerting — without adding resource pressure to constrained nodes.