Standards: Loki push API + Mimir remote write

Grafana Loki & Mimir Integration

Migrate your Grafana LGTM stack to TigerOps or run in parallel with zero pipeline changes. Compatible Loki and Mimir endpoints accept your existing Promtail, Alloy, and remote_write configs.

Setup

How It Works

01

Point Loki Config at TigerOps

Update your Promtail, Fluent Bit, or Alloy pipeline to use the TigerOps Loki-compatible push endpoint. TigerOps accepts your existing label schema and log entry format, so LogQL-style queries continue to work.
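For Fluent Bit, the built-in loki output plugin can target the same endpoint. A minimal sketch, assuming the ingest.atatus.net endpoint and basic-auth credentials shown in the Configuration section; the labels line is illustrative:

```ini
[OUTPUT]
    name        loki
    match       *
    host        ingest.atatus.net
    port        443
    tls         on
    uri         /loki/api/v1/push
    http_user   tigerops
    http_passwd ${TIGEROPS_API_KEY}
    # Static labels attached to every stream (example values)
    labels      job=fluent-bit, environment=production
```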

02

Redirect Prometheus Remote Write

Update your Prometheus remote_write configuration or Grafana Alloy pipeline to send metrics to the TigerOps Mimir-compatible endpoint. Zero reconfiguration of scrape jobs required.
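For Grafana Alloy, the equivalent is a prometheus.remote_write component. A sketch assuming basic auth, mirroring the loki.write example in the Configuration section; your account may instead use bearer-token authorization as in the prometheus.yml example:

```river
// Grafana Alloy — send scraped metrics to TigerOps
prometheus.remote_write "tigerops" {
  endpoint {
    url = "https://ingest.atatus.net/api/v1/write"

    basic_auth {
      username = "tigerops"
      password = env("TIGEROPS_API_KEY")
    }
  }
}
```

Point your prometheus.scrape components' forward_to at this component and the scrape jobs themselves stay untouched.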

03

Run in Parallel (Optional)

Use Prometheus remote_write to dual-write to both your existing Mimir instance and TigerOps simultaneously. Evaluate TigerOps with zero risk before migrating fully.

04

AI Layer Activates on Your Data

Once your logs and metrics flow into TigerOps, the AI correlation engine begins analyzing patterns, detecting anomalies, and linking log entries to metric spikes automatically.

Capabilities

What You Get Out of the Box

Loki-Compatible Log Ingestion

TigerOps accepts logs via the Loki push API (/loki/api/v1/push). Promtail, Fluent Bit, Grafana Alloy, and Vector all work without configuration changes beyond the endpoint URL.
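Any client that can POST JSON can also push directly. The Loki push API body is a list of streams, each with a label set and values as [nanosecond-epoch-timestamp, line] pairs; the labels and log line below are illustrative:

```json
{
  "streams": [
    {
      "stream": { "job": "checkout", "environment": "production" },
      "values": [
        ["1700000000000000000", "level=error msg=\"payment gateway timeout\""]
      ]
    }
  ]
}
```

Timestamps must be strings of Unix epoch nanoseconds, and entries within a stream must be in ascending time order.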

Mimir-Compatible Remote Write

The TigerOps metrics endpoint accepts Prometheus remote_write protocol. Point your existing Prometheus, Thanos, or Grafana Alloy remote_write at TigerOps to start sending metrics.

Label & Stream Preservation

All Loki labels and Prometheus metric labels are preserved in TigerOps. Existing queries, dashboards, and alert rules that reference your label schema continue to work.
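Because labels survive migration unchanged, queries written against your current stack keep working as-is. A sketch using the cluster and environment labels from the Promtail config below; the metric name http_requests_total is illustrative:

```logql
# LogQL — same label selectors before and after migration
{cluster="prod-us-east-1", job="checkout"} |= "error"

# PromQL — metric labels are preserved too
sum by (environment) (rate(http_requests_total{cluster="prod-us-east-1"}[5m]))
```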

AI Log Anomaly Detection

TigerOps applies ML-based log anomaly detection on your Loki log streams. Unlike Grafana Loki, TigerOps surfaces unusual log patterns without requiring manual pattern definition.

Unified Metrics & Logs Correlation

TigerOps automatically correlates log error spikes with metric anomalies across all your services — something that requires manual Grafana Explore work today.

Grafana Dashboard Import

Import your existing Grafana dashboards into TigerOps with one click. PromQL and LogQL queries are translated automatically to TigerOps native queries.

Configuration

Loki Config Pointing to TigerOps

Update your Promtail, Prometheus remote_write, and Grafana Alloy configurations to send to the TigerOps-compatible endpoints.

promtail.yml + prometheus.yml + Alloy River config
# promtail.yml — redirect Loki push to TigerOps
clients:
  # Existing Loki (keep for parallel mode)
  - url: http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push

  # TigerOps Loki-compatible endpoint
  - url: https://ingest.atatus.net/loki/api/v1/push
    basic_auth:
      username: tigerops
      password: "${TIGEROPS_API_KEY}"
    # Optional: add environment label
    external_labels:
      environment: production
      cluster: prod-us-east-1

---
# prometheus.yml — dual remote_write for zero-risk migration
global:
  scrape_interval: 15s

remote_write:
  # Existing Mimir / Cortex (keep during migration)
  - url: https://mimir.internal/api/v1/push
    queue_config:
      max_samples_per_send: 10000

  # TigerOps Mimir-compatible endpoint
  - url: https://ingest.atatus.net/api/v1/write
    authorization:
      credentials: "${TIGEROPS_API_KEY}"
    queue_config:
      max_samples_per_send: 10000
      batch_send_deadline: 5s

---
# Grafana Alloy (River config) — loki.write to TigerOps
loki.write "tigerops" {
  endpoint {
    url = "https://ingest.atatus.net/loki/api/v1/push"
    basic_auth {
      username = "tigerops"
      password = env("TIGEROPS_API_KEY")
    }
  }
}

FAQ

Common Questions

Can I migrate from Grafana Loki and Mimir without downtime?

Yes. Use Prometheus dual remote_write and Promtail dual-client to send data to both your existing Loki/Mimir stack and TigerOps simultaneously. Once you verify TigerOps is receiving all your data correctly, cut over by updating DNS or removing the old endpoints.

Does TigerOps support LogQL queries from existing Grafana dashboards?

TigerOps supports a subset of LogQL for log filtering and label selection. Most Grafana dashboard log panels that use line filters, label filters, and json parser expressions work without modification. Complex metric queries over log streams may require translation.
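As a rough guide to the supported subset (the job label and query shapes below are illustrative):

```logql
# Typically supported without modification:
{job="api"} |= "timeout"                    # line filter
{job="api"} | json | status_code >= 500     # json parser + label filter

# May require translation (metric query over log streams):
sum by (job) (rate({job="api"} |= "error" [5m]))
```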

What happens to my Grafana alert rules when I migrate?

TigerOps provides a Grafana alert rule importer that reads your alertmanager.yaml and Grafana alert rule JSON and converts them to TigerOps alert policies. PromQL-based alert expressions are re-evaluated against TigerOps metric storage.

Does TigerOps support Grafana Alloy (the successor to Grafana Agent)?

Yes. Grafana Alloy supports configurable prometheus.remote_write and loki.write destinations. Update the endpoint URLs in your Alloy River config to point at TigerOps, and your existing collection pipelines continue working unchanged.

How does TigerOps pricing compare to self-hosting Loki and Mimir?

TigerOps eliminates the operational overhead of running Loki, Mimir, Grafana, and their underlying object storage. For most teams, TigerOps costs less than the engineering time and cloud storage costs of operating the Grafana LGTM stack at scale. Contact us for a cost comparison.

Get Started

Migrate from Grafana — Zero Reconfiguration

Compatible Loki and Mimir endpoints. Dual-write for safe migration. AI correlation the moment data arrives.