Fluentd Integration
Route log streams from your Fluentd pipelines into TigerOps. Tag-based routing, field mapping, buffered delivery, and AI error pattern detection — all out of the box.
How It Works
Install the TigerOps Output Plugin
Install the fluent-plugin-tigerops gem on your Fluentd instances. The plugin handles batching, compression, and authenticated delivery to the TigerOps ingest endpoint.
Configure Output in fluent.conf
Add a match block targeting TigerOps in your fluent.conf. Route logs by tag pattern — send application logs, access logs, and system logs to separate TigerOps streams.
Map Fields to TigerOps Schema
Use the record_transformer filter to map your existing field names to TigerOps standard fields. Severity, service name, trace ID, and timestamp are auto-correlated.
Enable Log Anomaly Detection
TigerOps AI learns your log volume and error rate baselines per service. Anomalous log bursts, new error patterns, and silent services all trigger automated alerts.
What You Get Out of the Box
Tag-Based Log Routing
Route different log streams — application logs, access logs, audit logs, and infrastructure logs — to separate TigerOps indexes using Fluentd tag matching.
Field Mapping & Enrichment
Map arbitrary field names to TigerOps standard schema. Auto-enrich logs with Kubernetes pod metadata, host information, and deployment version from Fluentd filter plugins.
Buffered Delivery
The TigerOps output plugin uses Fluentd chunked buffering to handle backpressure and network interruptions without log loss. Configurable flush interval and retry logic.
AI Error Pattern Detection
TigerOps ML groups recurring error messages into patterns and alerts when a new pattern emerges or an existing pattern exceeds its baseline frequency.
Metric Extraction from Logs
Use TigerOps log-to-metric rules to extract numeric values from log fields — request durations, response sizes, queue depths — and plot them on metric dashboards.
Trace Correlation
When logs contain trace IDs (OpenTelemetry or B3 format), TigerOps automatically links log lines to the corresponding distributed trace spans for instant context.
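If your pipeline carries the trace ID under a non-standard key, a small record_transformer filter can normalize it before the match block. This is a sketch: the source field name x_b3_traceid is an assumption about your log format.

```
# Normalize a B3-style trace ID field so TigerOps can link logs to spans.
# x_b3_traceid is a hypothetical source key; adjust to your log format.
# enable_ruby is required for the || fallback expression.
<filter app.**>
  @type record_transformer
  enable_ruby true
  <record>
    trace_id ${record["x_b3_traceid"] || record["trace_id"]}
  </record>
  remove_keys x_b3_traceid
</filter>
```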
fluent.conf Output Plugin Config
Route log streams to TigerOps with buffering and field enrichment.
# fluent.conf — TigerOps output plugin configuration
# Install: gem install fluent-plugin-tigerops
# Or in td-agent: td-agent-gem install fluent-plugin-tigerops
# Enrich logs with Kubernetes metadata
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
# Map fields to TigerOps schema
<filter **>
  @type record_transformer
  # enable_ruby is required for the || fallbacks and dig() calls below
  enable_ruby true
  <record>
    # dig() returns nil instead of raising when a record has no kubernetes key
    service_name ${record.dig("kubernetes", "labels", "app") || "unknown"}
    severity ${record["level"] || record["severity"] || "info"}
    trace_id ${record["traceId"] || record["trace_id"]}
  </record>
</filter>
# Route application logs to TigerOps
<match app.**>
  @type tigerops
  endpoint https://ingest.atatus.net/api/v1/logs
  api_key "#{ENV['TIGEROPS_API_KEY']}"
  index application
  <buffer>
    @type file
    path /var/log/fluentd/tigerops-buffer
    chunk_limit_size 64mb
    total_limit_size 512mb
    flush_interval 5s
    retry_max_times 10
    retry_type exponential_backoff
  </buffer>
</match>
# Route access logs separately
<match nginx.access>
  @type tigerops
  endpoint https://ingest.atatus.net/api/v1/logs
  api_key "#{ENV['TIGEROPS_API_KEY']}"
  index access
  <buffer>
    @type memory
    flush_interval 10s
  </buffer>
</match>

Common Questions
Which Fluentd versions are supported?
TigerOps supports Fluentd v1.x (td-agent 4.x and above). The fluent-plugin-tigerops gem is compatible with Ruby 2.7 and later. The plugin is also compatible with Fluentd running inside Kubernetes via the fluentd DaemonSet.
Can I use the HTTP output plugin instead of installing a custom gem?
Yes. TigerOps exposes an HTTP ingest endpoint compatible with Fluentd's built-in out_http plugin. Use the endpoint https://ingest.atatus.net/api/v1/logs with Bearer token authentication for a zero-dependency setup.
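As a sketch of that zero-dependency setup, a match block using Fluentd's built-in out_http plugin might look like the following. The endpoint and Bearer-token header follow the description above; the exact header name your TigerOps account expects is an assumption, so verify it against your ingest settings.

```
# Gem-free delivery via Fluentd's built-in out_http plugin (Fluentd v1.7+).
<match app.**>
  @type http
  endpoint https://ingest.atatus.net/api/v1/logs
  headers {"Authorization":"Bearer #{ENV['TIGEROPS_API_KEY']}"}
  <format>
    @type json
  </format>
  <buffer>
    flush_interval 10s
  </buffer>
</match>
```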
How does TigerOps handle high-volume log pipelines?
The TigerOps output plugin uses Fluentd's file or memory buffer with configurable chunk size and flush interval. For high-throughput pipelines, we recommend file buffering with chunk_limit_size 256mb and flush_interval 5s.
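A buffer section matching that high-throughput recommendation might look like the snippet below. The path, total_limit_size, and thread count are illustrative; tune them to your disk budget and CPU headroom.

```
# High-throughput file buffering for the TigerOps output plugin.
<buffer>
  @type file
  path /var/log/fluentd/tigerops-buffer-hv
  chunk_limit_size 256mb
  total_limit_size 4gb
  flush_interval 5s
  # Parallel flush threads increase delivery throughput
  flush_thread_count 4
  retry_type exponential_backoff
  retry_max_times 10
</buffer>
```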
Can I filter out sensitive fields before logs reach TigerOps?
Yes. Use Fluentd's record_modifier or record_transformer filter to mask or remove sensitive fields before the TigerOps output match. This ensures PII never leaves your network in plaintext.
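A minimal sketch of that scrubbing step with record_transformer follows; the field names (password, credit_card, email) are assumptions about your log schema.

```
# Remove or mask sensitive fields before the TigerOps output match.
<filter app.**>
  @type record_transformer
  enable_ruby true
  # Drop these fields entirely (hypothetical field names)
  remove_keys password,credit_card
  <record>
    # Mask the local part of an email address, if present
    email ${record["email"] ? record["email"].sub(/^[^@]+/, "***") : record["email"]}
  </record>
</filter>
```

Place this filter above the TigerOps match blocks so scrubbing happens before delivery.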
Does TigerOps support Fluentd multi-worker mode?
Yes. The TigerOps output plugin is thread-safe and works with Fluentd multi-worker mode. Each worker maintains its own buffer and flush thread, providing horizontal throughput scaling within a single Fluentd process.
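Enabling multi-worker mode is a one-line system directive; the worker count below is illustrative and should match your available CPU cores.

```
# Run four Fluentd workers; each evaluates the match blocks independently
# and maintains its own buffer and flush thread.
<system>
  workers 4
</system>
```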
Your Fluentd Pipelines. TigerOps Intelligence.
AI error pattern detection, trace correlation, and log-to-metric extraction. Configure in minutes.