
Logstash Integration

Ingest logs from any Logstash pipeline into TigerOps. Grok parsing, field enrichment, and AI-powered anomaly detection — without replacing your existing Logstash investment.

Setup

How It Works

01

Install the TigerOps Output Plugin

Install logstash-output-tigerops from the Logstash plugin registry. It wraps the TigerOps HTTP ingest API with batching, compression, and retry logic built in.
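The plugin can be installed with Logstash's bundled plugin manager (the plugin name is taken from this guide; the commands assume you are in the Logstash home directory):

```
bin/logstash-plugin install logstash-output-tigerops

# Confirm the plugin is registered
bin/logstash-plugin list | grep tigerops
```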

02

Add the Output to Your Pipeline

Add the tigerops output block to your logstash.conf output section. Pipe any filtered log stream to TigerOps alongside or instead of your existing Elasticsearch output.

03

Map Fields with Mutate Filters

Use Logstash mutate, rename, and add_field filters to standardize field names before the TigerOps output. Service name, severity, and trace ID are auto-indexed when present.

04

Enable AI Anomaly Alerting

TigerOps AI builds a baseline of your log error rates and message patterns per service. Deviations — new error classes, volume spikes — trigger instant alerts.

Capabilities

What You Get Out of the Box

Drop-in Logstash Output

The TigerOps Logstash output plugin integrates into any existing pipeline. Route all events or a filtered subset to TigerOps without disrupting existing outputs.

Grok Pattern Support

Apply Grok patterns in your Logstash filter stage to parse unstructured logs before forwarding. TigerOps indexes the parsed fields for structured search and alerting.

Multi-Pipeline Routing

Use Logstash pipeline-to-pipeline routing to fan logs out to TigerOps, Elasticsearch, and S3 simultaneously from a single input stream. Each destination runs in its own pipeline with an independent queue, so a slow or failing destination does not stall the others.
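Pipeline-to-pipeline routing is configured in pipelines.yml. A minimal distributor sketch, reusing the endpoint and key from the Configuration section of this guide (pipeline IDs and the Elasticsearch host are illustrative):

```
# pipelines.yml — one intake pipeline fanning out to per-destination pipelines
- pipeline.id: intake
  config.string: |
    input { beats { port => 5044 } }
    output {
      pipeline { send_to => ["to_tigerops", "to_elasticsearch"] }
    }

- pipeline.id: to_tigerops
  config.string: |
    input { pipeline { address => "to_tigerops" } }
    output {
      tigerops {
        endpoint => "https://ingest.atatus.net/api/v1/logs"
        api_key  => "${TIGEROPS_API_KEY}"
      }
    }

- pipeline.id: to_elasticsearch
  config.string: |
    input { pipeline { address => "to_elasticsearch" } }
    output {
      elasticsearch { hosts => ["http://localhost:9200"] }
    }
```

Each downstream pipeline gets its own in-memory or persisted queue, which is what isolates the destinations from one another.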

Dead Letter Queue

Failed events are written to the Logstash dead letter queue (disabled by default; enable it with dead_letter_queue.enable: true) and can be replayed with the dead_letter_queue input plugin. TigerOps monitors DLQ depth and alerts when events are stuck due to ingest errors.
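A minimal DLQ setup sketch (the queue path is illustrative):

```
# logstash.yml — turn on the dead letter queue
dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dlq

# replay.conf — reprocess dead-lettered events through the pipeline again
input {
  dead_letter_queue {
    path           => "/var/lib/logstash/dlq"
    commit_offsets => true
  }
}
```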

Logstash Health Metrics

Monitor Logstash pipeline throughput, event rate, queue depth, and JVM heap from the built-in monitoring API. Correlate pipeline slowdowns with log volume changes.
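The Logstash monitoring API listens on port 9600 by default; the node stats endpoints below expose the pipeline and JVM figures described above:

```
# Pipeline event throughput and queue stats
curl -s http://localhost:9600/_node/stats/pipelines

# JVM heap usage
curl -s http://localhost:9600/_node/stats/jvm
```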

Log-to-Metric Extraction

TigerOps extracts numeric fields from parsed logs — latencies, sizes, counts — and stores them as time series metrics alongside your Prometheus data.
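On the Logstash side, the numeric fields only need to be parsed and typed before the output stage; the field name and Grok pattern here are illustrative:

```
filter {
  # Pull a latency value out of the raw message
  grok {
    match => { "message" => "completed in %{NUMBER:latency_ms} ms" }
  }
  # Grok captures are strings; convert so the value is stored as a number
  mutate {
    convert => { "latency_ms" => "float" }
  }
}
```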

Configuration

logstash.conf Output Plugin

Add TigerOps as an output destination in your Logstash pipeline configuration.

logstash.conf
# logstash.conf — Add TigerOps as an output destination

input {
  beats {
    port => 5044
  }
}

filter {
  # Parse application logs with Grok
  if [fields][log_type] == "app" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:log_message}" }
    }
  }

  # Normalize field names for TigerOps
  mutate {
    rename => {
      "log_message" => "message"
      "host"        => "hostname"
    }
    add_field => {
      "service_name" => "%{[fields][service]}"
      "environment"  => "%{[fields][env]}"
    }
  }
}

output {
  # Send to TigerOps (native plugin)
  tigerops {
    endpoint => "https://ingest.atatus.net/api/v1/logs"
    api_key  => "${TIGEROPS_API_KEY}"
    index    => "application"
    batch_size    => 500
    flush_interval => 5
  }

  # Alternative: use the built-in http output
  # http {
  #   url => "https://ingest.atatus.net/api/v1/logs"
  #   http_method => "post"
  #   headers => { "Authorization" => "Bearer ${TIGEROPS_API_KEY}" }
  #   codec => "json_lines"
  # }
}

FAQ

Common Questions

Can I send logs to TigerOps and Elasticsearch simultaneously?

Yes. Declare both outputs in the same output block and Logstash delivers every event batch to each of them. Note that outputs within a single pipeline run sequentially per batch, so a slow destination can back-pressure the other; use pipeline-to-pipeline routing when you need the destinations isolated.
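A minimal dual-destination sketch, reusing the endpoint from the Configuration section (the Elasticsearch host and index are illustrative):

```
output {
  tigerops {
    endpoint => "https://ingest.atatus.net/api/v1/logs"
    api_key  => "${TIGEROPS_API_KEY}"
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```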

Which Logstash versions are supported?

TigerOps supports Logstash 7.x and 8.x. The logstash-output-tigerops plugin requires JRuby and the Logstash plugin framework. The HTTP output plugin is an alternative for environments where installing custom plugins is restricted.

How do I handle sensitive data before sending to TigerOps?

Use Logstash's mutate remove_field, gsub, or the logstash-filter-anonymize plugin to mask or remove PII fields before the TigerOps output stage. TigerOps never receives data you remove in the filter pipeline.
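A masking sketch; the field names and the email regex are illustrative, not a complete PII policy:

```
filter {
  mutate {
    # Drop fields that must never leave the pipeline
    remove_field => ["credit_card", "ssn"]
    # Redact email addresses embedded in the message body
    gsub => ["message", "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+", "[REDACTED]"]
  }
}
```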

Can TigerOps monitor Logstash itself?

Yes. Point the TigerOps Logstash integration at the Logstash monitoring API, which is enabled by default on port 9600. Pipeline throughput, event rates, and JVM metrics flow into TigerOps dashboards.

Is the HTTP output plugin a viable alternative to the native plugin?

Yes, for simple use cases. Configure the http output plugin with the TigerOps logs endpoint, JSON codec, and your API key in the Authorization header. The native plugin adds batching and compression for high-throughput pipelines.

Get Started

Add TigerOps Intelligence to Your Logstash Pipeline

AI error pattern detection, log-to-metric extraction, and trace correlation without replacing Logstash.