
Apache Pinot Integration

Monitor segment metrics, broker query latency, and real-time ingestion health across your Pinot clusters. Per-table SLO tracking and AI anomaly detection catch query degradation before it impacts your analytics.

Setup

How It Works

01

Deploy TigerOps Pinot Agent

Install the TigerOps agent on your Pinot controller node. It auto-discovers all brokers, servers, and minion instances via the Pinot controller REST API and begins scraping JMX metrics immediately.
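The discovery step can be sketched against the standard Pinot controller REST API. A minimal illustration, not the agent's actual implementation — the controller address is an assumption, and `GET /instances` is the standard controller endpoint that lists registered instances by role-prefixed name:

```python
# Sketch of controller-based discovery: list all registered Pinot
# instances, then bucket them by role (Broker/Server/Minion/Controller).
import json
import urllib.request

CONTROLLER = "http://pinot-controller:9000"  # assumed address

def discover_instances(controller_url: str) -> list[str]:
    """Return all instance names registered with the Pinot controller."""
    with urllib.request.urlopen(f"{controller_url}/instances") as resp:
        payload = json.load(resp)
    # Response shape: {"instances": ["Broker_...", "Server_...", ...]}
    return payload.get("instances", [])

def group_by_role(instances: list[str]) -> dict[str, list[str]]:
    """Bucket instances by their role prefix (e.g. "Broker_host_port")."""
    roles: dict[str, list[str]] = {}
    for name in instances:
        prefix = name.split("_", 1)[0]
        roles.setdefault(prefix, []).append(name)
    return roles
```

Once grouped, each broker and server instance yields a JMX endpoint to scrape, as configured in the agent YAML below.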

02

Configure JMX Metric Export

Enable the Pinot JMX metrics endpoint in your pinot-controller.conf and pinot-server.conf. TigerOps reads all standard Pinot MBeans including query latency, segment counts, and ingestion row rates.
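Remote JMX is typically exposed by passing the standard JVM JMX flags to each Pinot component at launch. A minimal sketch — port 9090 matches the sample agent configuration in this guide, and disabling authentication/SSL is only appropriate on a trusted network:

```shell
# Standard JVM flags to expose remote JMX (append to JAVA_OPTS before
# starting pinot-broker / pinot-server; adjust the port per component).
export JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9090 \
  -Dcom.sun.management.jmxremote.rmi.port=9090 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```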

03

Set Up Table-Level Dashboards

TigerOps auto-generates per-table dashboards for segment availability, query SLOs, and ingestion lag. Configure p99 latency thresholds and segment freshness windows per table in your alert rules.
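Per-table thresholds might be layered over the global alert block shown later in tigerops-pinot.yaml. The `tables:` override syntax below is an illustrative assumption, not confirmed TigerOps schema:

```yaml
# Hypothetical per-table overrides on top of the global alert thresholds
# (schema is illustrative; table names match the sample config).
alerts:
  broker_query_latency_p99_ms: 1500      # global default
  tables:
    orders_REALTIME:
      broker_query_latency_p99_ms: 500   # tight SLO for a critical table
      segment_freshness_window_s: 60
    events_REALTIME:
      broker_query_latency_p99_ms: 3000  # relaxed for a lower-priority table
```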

04

Enable Cross-Service Correlation

Link Pinot query spikes to upstream Kafka ingestion lag or StarTree cache misses. TigerOps correlates Pinot performance with application-level traces to surface full request lifecycle visibility.

Capabilities

What You Get Out of the Box

Broker Query Latency Tracking

Per-table broker query latency at p50, p95, and p99 with query timeout rates, exceptions per second, and scatter-gather subquery fan-out metrics across all broker instances.
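For reference, the percentile figures here can be reproduced from a raw window of latency samples. A pure illustration of the math, not TigerOps agent code:

```python
# Compute p50/p95/p99 over a window of broker query latencies (ms)
# using inclusive quantiles from the standard library.
from statistics import quantiles

def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    """Return the 50th, 95th, and 99th percentile of the sample window."""
    cuts = quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
```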

Segment Availability Monitoring

Real-time segment counts per table and tenant, consuming vs. online segment ratios, replication factor compliance, and segment refresh age across all server nodes.

Real-Time Ingestion Health

Consuming segment row rates, ingestion lag behind Kafka offsets, record decode error rates, and LLC (Low-Level Consumer) partition assignment health per realtime table.

Server Node Resource Metrics

Per-server CPU utilization, heap memory usage, GC pause times, disk I/O rates for segment loading, and network throughput during segment fetch and query execution.

Controller & Minion Health

Controller leadership status, ideal state vs. external view divergence, minion task queue depth, segment merge task throughput, and ZooKeeper session health metrics.
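The ideal-state vs. external-view check reduces to diffing two segment-to-server-state maps, which the controller exposes via `/tables/{name}/idealstate` and `/tables/{name}/externalview` (exact response nesting varies by Pinot version). A simplified sketch of the comparison:

```python
# Flag segments whose actual placement (external view) has diverged
# from the Helix ideal state. Both inputs are {segment: {server: state}}.
import json
import urllib.request

def fetch(controller_url: str, path: str) -> dict:
    """GET a JSON document from the Pinot controller API."""
    with urllib.request.urlopen(f"{controller_url}{path}") as resp:
        return json.load(resp)

def divergent_segments(ideal: dict, external: dict) -> list[str]:
    """Return segments whose per-server states differ between the
    ideal state and the external view."""
    return sorted(
        seg for seg, want in ideal.items()
        if external.get(seg, {}) != want
    )
```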

AI Query Anomaly Detection

When broker latency spikes, TigerOps AI identifies whether the cause is a hot partition, a large fan-out query, a server GC pause, or an ingestion backlog — and routes the alert with full context.

Configuration

TigerOps Agent for Apache Pinot

Configure the TigerOps agent to scrape Pinot JMX metrics and forward them to your monitoring endpoint.

tigerops-pinot.yaml
# TigerOps Apache Pinot Agent Configuration
# Install: curl -sSL https://install.atatus.net/agent | sh

receivers:
  pinot_jmx:
    # Pinot controller for cluster-level metadata
    controller_url: "http://pinot-controller:9000"
    # JMX endpoints per component
    broker_jmx: "service:jmx:rmi:///jndi/rmi://pinot-broker:9090/jmxrmi"
    server_jmx: "service:jmx:rmi:///jndi/rmi://pinot-server:9090/jmxrmi"
    collection_interval: 15s

  pinot_api:
    controller_url: "http://pinot-controller:9000"
    # Per-table ingestion lag polling
    tables:
      - name: "orders_REALTIME"
        lag_threshold_ms: 5000
      - name: "events_REALTIME"
        lag_threshold_ms: 10000
    poll_interval: 30s

exporters:
  tigerops:
    endpoint: "https://ingest.atatus.net/api/v1/write"
    bearer_token: "${TIGEROPS_API_KEY}"
    send_interval: 15s

# Alert thresholds
alerts:
  broker_query_latency_p99_ms: 1500
  segment_missing_count: 0
  ingestion_lag_warning_ms: 5000
  server_heap_usage_pct: 85

FAQ

Common Questions

Which Apache Pinot versions does TigerOps support?

TigerOps supports Apache Pinot 0.10 and later. The agent handles both the legacy and current JMX metric namespaces. StarTree Cloud and managed Pinot deployments are supported via the controller REST API polling mode.

How does TigerOps track real-time ingestion lag per Kafka partition?

The TigerOps agent queries the Pinot controller segment metadata API to derive per-partition consuming offset vs. the upstream Kafka high watermark. This gives per-table ingestion lag in messages and estimated time-lag without requiring direct Kafka access.
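In spirit, the lag math reduces to an offset diff per partition, with time-lag estimated from the recent ingestion rate. The field and type names below are illustrative, not the actual controller response schema:

```python
# Per-partition ingestion lag: messages_behind = highWatermark - consumingOffset,
# with time-lag estimated from the observed ingestion rate.
from dataclasses import dataclass

@dataclass
class PartitionLag:
    partition: int
    messages_behind: int
    est_lag_ms: float

def compute_lag(consuming_offsets: dict[int, int],
                high_watermarks: dict[int, int],
                msgs_per_sec: float) -> list[PartitionLag]:
    """Derive lag per partition; msgs_per_sec must be > 0 for the
    time estimate and comes from the recent consumption rate."""
    lags = []
    for part, offset in sorted(consuming_offsets.items()):
        behind = max(0, high_watermarks.get(part, offset) - offset)
        lags.append(PartitionLag(part, behind, behind / msgs_per_sec * 1000.0))
    return lags
```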

Can I monitor individual Pinot tables with separate alert policies?

Yes. TigerOps allows you to define per-table alert policies for query latency, segment freshness, and ingestion lag independently. Critical tables can have tight SLOs while development tables use relaxed thresholds — all managed from a single dashboard.

Does TigerOps support multi-tenant Pinot deployments?

Yes. TigerOps discovers all tenants via the Pinot controller API and provides per-tenant segment counts, broker resource allocation, and query throughput metrics. You can create tenant-specific dashboards and alert policies.

How are Pinot segment errors linked to application impact?

TigerOps correlates Pinot segment unavailability events with application-level error rates and latency increases captured via APM traces. When a table goes partially offline, you see exactly which application endpoints were affected and for how long.

Get Started

Stop Discovering Pinot Query Degradation After the Fact

Per-table SLO tracking, ingestion health monitoring, and AI root cause analysis. Deploy in 5 minutes.