
AWS EKS Integration

Full Kubernetes observability for EKS clusters with node, pod, and control plane metrics. AI root cause analysis for K8s incidents, HPA visibility, and multi-cluster dashboards.

Setup

How It Works

01

Install via Helm

Add the TigerOps Helm repository and install the monitoring stack into your EKS cluster. The chart deploys the OpenTelemetry Collector as a DaemonSet and creates the necessary RBAC resources.

02

Configure IRSA Permissions

TigerOps uses IAM Roles for Service Accounts (IRSA) to access the EKS control plane API and CloudWatch Container Insights without storing credentials in the cluster.
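As a sketch, the IRSA trust policy attached to the collector's IAM role looks like the standard EKS OIDC pattern below. The account ID, OIDC provider ID, and service account name (`tigerops-collector`) are placeholders — substitute the values for your cluster and the service account the chart creates.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLEOIDCID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLEOIDCID:sub": "system:serviceaccount:tigerops-monitoring:tigerops-collector",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLEOIDCID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The `sub` condition pins the role to a single service account in the `tigerops-monitoring` namespace, which is what lets the pod assume the role without any stored credentials.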

03

Enable Control Plane Logging

Enable EKS control plane logging (API server, audit, scheduler) and route the logs to TigerOps via CloudWatch Logs subscription. Control plane metrics are correlated with workload events.

04

Build Workload Dashboards

TigerOps auto-generates per-namespace and per-deployment dashboards. Drill from cluster overview to individual pod metrics, container logs, and distributed traces in one click.

Capabilities

What You Get Out of the Box

Node & Pod Metrics

CPU, memory, network, and filesystem metrics for every node and pod. TigerOps correlates resource pressure at the node level with pod evictions and OOMKill events.
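Because the chart enables kube-state-metrics, the OOMKill and node-pressure signals described above are also expressible as Prometheus-style rules. The sketch below uses real kube-state-metrics series; it illustrates the signals involved, not TigerOps's internal alerting implementation.

```yaml
groups:
  - name: node-pod-pressure
    rules:
      # Container's last termination was an OOMKill
      - alert: PodOOMKilled
        expr: kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} == 1
        for: 1m
      # Node is under memory pressure and may start evicting pods
      - alert: NodeMemoryPressure
        expr: kube_node_status_condition{condition="MemoryPressure", status="true"} == 1
        for: 5m
```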

Control Plane Observability

API server request latency, etcd database size, scheduler binding latency, and controller manager queue depth — critical control plane signals often invisible in other tools.

Workload Health Tracking

Deployment rollout progress, ReplicaSet availability, DaemonSet rollout status, and CronJob last success time. TigerOps alerts on workload degradation before end users are impacted.

HPA & Cluster Autoscaler Visibility

Track Horizontal Pod Autoscaler scaling decisions, current vs. desired replicas, and Cluster Autoscaler node provisioning events with timing and reason annotations.
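The scaling decisions TigerOps tracks originate from ordinary HPA objects. For reference, a minimal `autoscaling/v2` manifest looks like this (the workload and namespace names are illustrative); TigerOps compares the `minReplicas`/`maxReplicas` bounds and target utilization against observed replica counts.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-api
  namespace: shop
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```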

Persistent Volume Monitoring

PVC capacity utilization, I/O throughput, and latency for EBS-backed and EFS-backed persistent volumes. Alerts before a volume fills up and causes pod failures.

AI Root Cause for K8s Incidents

TigerOps AI correlates pod restarts, OOMKills, node pressure events, and application error rate spikes to surface the root cause of Kubernetes incidents with actionable remediation steps.

Configuration

Helm Install for EKS

Deploy the TigerOps monitoring stack into your EKS cluster with Helm.

eks-helm-install.sh
# Add TigerOps Helm repo
helm repo add tigerops https://charts.atatus.net
helm repo update

# Create namespace
kubectl create namespace tigerops-monitoring

# Create API key secret
kubectl create secret generic tigerops-secret \
  --from-literal=apiKey=${TIGEROPS_API_KEY} \
  --namespace tigerops-monitoring

# Install TigerOps monitoring stack
helm install tigerops-eks tigerops/eks-monitoring \
  --namespace tigerops-monitoring \
  --set cluster.name=production-eks \
  --set cluster.region=us-east-1 \
  --set remoteWrite.endpoint=https://ingest.atatus.net/api/v1/write \
  --set remoteWrite.apiKeySecret=tigerops-secret \
  --set controlPlane.enabled=true \
  --set nodeExporter.enabled=true \
  --set kubeStateMetrics.enabled=true \
  --set scrapeInterval=15s

# Enable EKS control plane logging (run once per cluster)
aws eks update-cluster-config \
  --name production-eks \
  --logging '{"clusterLogging":[{"types":["api","audit","scheduler","controllerManager"],"enabled":true}]}'
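The `--set` flags above can equivalently be kept in a values file, which is easier to review and version. The keys mirror the flags shown in the install command; adjust them if your chart version differs.

```yaml
# values.yaml — equivalent of the --set flags above
cluster:
  name: production-eks
  region: us-east-1
remoteWrite:
  endpoint: https://ingest.atatus.net/api/v1/write
  apiKeySecret: tigerops-secret
controlPlane:
  enabled: true
nodeExporter:
  enabled: true
kubeStateMetrics:
  enabled: true
scrapeInterval: 15s
```

Install with `helm install tigerops-eks tigerops/eks-monitoring --namespace tigerops-monitoring -f values.yaml`.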

FAQ

Common Questions

Does TigerOps support EKS managed node groups and Karpenter?

Yes. TigerOps auto-discovers nodes regardless of whether they are managed node groups, self-managed, or provisioned by Karpenter. Node labels and Karpenter provisioner annotations are used to group and filter metrics.

How does TigerOps collect EKS control plane metrics?

EKS control plane metrics are collected via CloudWatch Container Insights (enabled with one flag in the EKS console or API) and routed to TigerOps via Metric Streams. Additionally, TigerOps reads the Kubernetes metrics-server for pod-level resource usage.

Can TigerOps monitor multiple EKS clusters from a single workspace?

Yes. Install the TigerOps Helm chart in each cluster with a unique cluster label. All clusters report to the same TigerOps workspace and you can switch between them or view a unified multi-cluster overview.

Does TigerOps work with EKS Fargate profiles?

Yes. For Fargate pods, TigerOps collects metrics via the EKS Fargate metrics API and Fluent Bit log router. CPU and memory limits and requests are tracked alongside actual usage per Fargate pod.
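The Fargate log routing mentioned above relies on EKS's built-in Fluent Bit log router, which is configured through the standard `aws-logging` ConfigMap in the `aws-observability` namespace. A minimal CloudWatch-bound sketch (region and log group name are illustrative); from CloudWatch, logs can then be forwarded on to TigerOps.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name /eks/production-eks/fargate
        auto_create_group true
```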

How does TigerOps handle Kubernetes namespace isolation for multi-team clusters?

TigerOps supports namespace-scoped views so each team sees only their own workloads. Role-based access control in TigerOps mirrors your Kubernetes RBAC — admins see the full cluster, developers see their namespace.

Get Started

Complete Kubernetes Visibility for Your EKS Clusters

Node, pod, and control plane metrics with AI root cause analysis. Deploy the Helm chart in minutes.