CI/CD · Prometheus metrics + CloudEvent sink

Tekton Integration

Pipeline run metrics, task step durations, and trigger event throughput for Tekton Pipelines. Full observability into your Kubernetes-native CI/CD without leaving TigerOps.

Setup

How It Works

01

Enable Tekton Metrics Endpoint

Tekton Pipelines exposes Prometheus metrics on port 9090 of the tekton-pipelines-controller. Enable the metrics service and add a ServiceMonitor or PodMonitor so TigerOps can scrape PipelineRun and TaskRun metrics.
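Tekton reads its metrics settings from the config-observability ConfigMap in the tekton-pipelines namespace. A minimal sketch is below; the exact key names and supported values can vary between Tekton releases, so check the ConfigMap shipped with your installation before applying.

```yaml
# config-observability controls how the Tekton controller emits metrics.
# Key names may differ slightly across Tekton versions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-observability
  namespace: tekton-pipelines
data:
  # Emit metrics in Prometheus format on the controller's metrics port
  metrics.backend-destination: prometheus
  # Report PipelineRun and TaskRun durations as histograms
  # rather than last-value gauges
  metrics.pipelinerun.duration-type: histogram
  metrics.taskrun.duration-type: histogram
```

With duration-type set to histogram, the duration metrics arrive as Prometheus histogram buckets, which is what percentile-based dashboards and alerts need.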

02

Configure Remote Write

Add TigerOps as a remote-write target in your Prometheus configuration or the embedded Tekton metrics config. PipelineRun duration histograms, TaskRun counts, and reconciler work queue depths flow in immediately.

03

Deploy the TigerOps CloudEvent Sink

Configure a Tekton Triggers EventListener with a TigerOps CloudEvent sink or use the TigerOps interceptor to forward trigger event payloads. This gives you trigger event throughput and interceptor latency metrics.
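As an illustration, an EventListener with a TigerOps interceptor might look like the sketch below. The interceptor name (tigerops), its params, and the endpoint value are hypothetical placeholders; substitute the names from your TigerOps installation. The rest follows the standard Tekton Triggers v1beta1 EventListener shape.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: ci-listener
  namespace: tekton-pipelines
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      interceptors:
        # Hypothetical TigerOps interceptor; the ref name and params
        # depend on how the TigerOps interceptor is installed.
        - ref:
            name: tigerops
          params:
            - name: endpoint
              value: https://cloudevents.example.com/sink  # placeholder
      bindings:
        - ref: github-push-binding
      template:
        ref: ci-pipeline-template
```

Because the interceptor runs in the EventListener's request path, its latency shows up directly in the trigger-processing metrics described above.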

04

Correlate Pipeline Runs with Deployments

TigerOps matches completed PipelineRuns that produce deployment artifacts to downstream service metric changes, automatically flagging regressions that begin after a specific pipeline run completes.

Capabilities

What You Get Out of the Box

PipelineRun Completion Metrics

Track PipelineRun success, failure, and cancellation rates per Pipeline definition and namespace. TigerOps builds histograms of PipelineRun duration to detect build-performance regressions over time.
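Once the metrics are flowing, a failure-rate alert can be expressed as a standard PrometheusRule. The sketch below assumes the tekton_pipelines_controller_pipelinerun_count metric with a status label; metric and label names have changed across Tekton releases, so verify them against the controller's /metrics endpoint first.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: tekton-pipelinerun-alerts
  namespace: tekton-pipelines
spec:
  groups:
    - name: pipelinerun.rules
      rules:
        - alert: PipelineRunFailureRateHigh
          # Metric and label names vary by Tekton version; confirm
          # against the controller's /metrics output.
          expr: |
            sum(rate(tekton_pipelines_controller_pipelinerun_count{status="failed"}[15m]))
              /
            sum(rate(tekton_pipelines_controller_pipelinerun_count[15m])) > 0.2
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Over 20% of PipelineRuns failed in the last 15 minutes"
```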

TaskRun Step Duration Analysis

Per-Task and per-step duration tracking with percentile breakdowns. TigerOps identifies which steps within a TaskRun are the critical path and alerts when individual steps exceed configurable duration thresholds.

Trigger Event Throughput

Monitor Tekton Triggers EventListener throughput, interceptor latency, and TriggerBinding evaluation rates. TigerOps alerts when trigger processing falls behind incoming event volume.

Workspace & Volume Mount Timing

Track time spent provisioning PVC workspaces and initializing sidecars before the first step runs. TigerOps surfaces slow workspace provisioning that inflates pipeline queue-to-start latency.

Reconciler Queue & Controller Health

Monitor tekton-pipelines-controller work queue depth, reconciler add rates, and processing latency. A persistently deep queue indicates controller resource pressure that will delay scheduling of new PipelineRuns.
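A queue-backlog alert can be sketched the same way. The controller exposes standard controller-runtime/knative workqueue metrics; the exact metric name (workqueue_depth here) and the threshold of 50 are assumptions to adapt to your cluster.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: tekton-controller-health
  namespace: tekton-pipelines
spec:
  groups:
    - name: controller.rules
      rules:
        - alert: TektonControllerQueueBacklog
          # workqueue metric names depend on the controller framework
          # version; confirm on the controller's /metrics endpoint.
          expr: sum(workqueue_depth{namespace="tekton-pipelines"}) > 50
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Tekton controller work queue is backing up"
```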

Flaky Step Detection

TigerOps tracks step failure rates across multiple PipelineRuns on the same branch and flags steps that intermittently fail without a consistent triggering cause — the hallmark of flaky test infrastructure.

Configuration

Tekton Metrics Remote Write Setup

Scrape Tekton controller metrics and forward them to TigerOps via Prometheus remote write.

tekton-tigerops-monitor.yaml
# PodMonitor for Tekton Pipelines controller metrics
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
spec:
  selector:
    matchLabels:
      app: tekton-pipelines-controller
  podMetricsEndpoints:
    - port: metrics          # port 9090
      interval: 15s
      path: /metrics

---
# PodMonitor for Tekton Triggers EventListener metrics
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: tekton-triggers-eventlistener
  namespace: tekton-pipelines
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: tekton-triggers
  podMetricsEndpoints:
    - port: prometheus-metrics
      interval: 15s

---
# Prometheus remote write to TigerOps
# Add to your prometheus.yml or PrometheusSpec:
#
# remoteWrite:
#   - url: https://ingest.atatus.net/api/v1/write
#     bearerToken: ${TIGEROPS_API_KEY}
#     writeRelabelConfigs:
#       - sourceLabels: [__name__]
#         regex: "tekton_.*"
#         action: keep

FAQ

Common Questions

Which Tekton components does TigerOps monitor?

TigerOps monitors Tekton Pipelines (controller, webhook), Tekton Triggers (EventListener, interceptors), and optionally Tekton Chains for supply chain security events. Each component exposes a Prometheus metrics endpoint that TigerOps scrapes.

Does TigerOps support Tekton running on OpenShift Pipelines?

Yes. OpenShift Pipelines is built on Tekton and exposes the same Prometheus metrics endpoints. TigerOps works with the OpenShift Pipelines operator installation and can scrape metrics via the OpenShift monitoring stack remote-write configuration.
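On OpenShift, remote write for user workloads is typically configured through the user-workload-monitoring-config ConfigMap rather than a raw prometheus.yml. A sketch under that assumption, reusing the ingest URL from the Prometheus example in the Configuration section:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
        - url: https://ingest.atatus.net/api/v1/write
          # Reference the API key from a Secret rather than inlining it
          authorization:
            credentials:
              name: tigerops-api-key   # assumed Secret name
              key: token
          writeRelabelConfigs:
            - sourceLabels: [__name__]
              regex: "tekton_.*"
              action: keep
```

User workload monitoring must be enabled in the cluster-monitoring-config ConfigMap for this to take effect.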

How does TigerOps handle multi-namespace Tekton deployments?

Tekton emits metrics with namespace labels on PipelineRun and TaskRun metrics. TigerOps preserves these labels so you can filter and aggregate by namespace in dashboards, with per-namespace alerting thresholds.

Can TigerOps detect which Tekton Pipeline caused a production regression?

Yes. TigerOps records each PipelineRun that completes with a successful deployment outcome as a change event. The AI correlation engine then scans service metrics for anomalies that start within your configured post-deploy correlation window.

What is the overhead of the TigerOps CloudEvent sink on EventListeners?

The TigerOps CloudEvent sink is asynchronous and batched. It buffers trigger events in memory and flushes them to TigerOps every 5 seconds or when the batch reaches 100 events. The overhead on EventListener request processing is under 1ms.

Get Started

Complete Observability for Kubernetes-Native CI/CD

PipelineRun metrics, step duration analysis, and trigger throughput. Instrument Tekton in under 10 minutes.