
Celery Integration

Monitor task queues, worker utilization, failure rates, and distributed execution traces across your Celery deployment. From Beat schedules to task chains, full visibility is included.

Setup

How It Works

01

Install the TigerOps SDK

Add tigerops-celery to your requirements.txt. The SDK hooks into Celery signals (task_prerun, task_postrun, task_failure, task_retry) and requires no changes to your existing task definitions.
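Assuming the package name above, the dependency is a one-line addition:

```text
# requirements.txt
tigerops-celery
```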

02

Configure celeryconfig.py

Set your TigerOps API key and ingest endpoint in celeryconfig.py or via environment variables. The SDK auto-discovers all registered task names and maps them to metric labels.
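For example, via environment variables. TIGEROPS_API_KEY matches the variable read in the configuration section below; TIGEROPS_ENDPOINT is an assumed name for illustration:

```shell
# Placeholder values — substitute your own key and ingest endpoint
export TIGEROPS_API_KEY="your-api-key"
export TIGEROPS_ENDPOINT="https://<your-ingest-endpoint>"
```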

03

Set Queue Depth Thresholds

Configure alert thresholds for queue depth per named queue, worker concurrency utilization, and task failure rate. TigerOps applies AI baselines based on your observed traffic patterns.
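A hypothetical sketch of how per-queue thresholds might be expressed and evaluated. The names and schema here are illustrative, not the actual TigerOps configuration format:

```python
# Illustrative per-queue depth thresholds (not the real SDK schema)
QUEUE_DEPTH_THRESHOLDS = {
    "default": 1000,
    "high-priority": 100,
    "email": 5000,
}

def breached(depths, thresholds):
    """Return the queues whose current depth exceeds their configured threshold.

    Queues without an explicit threshold are never reported.
    """
    return [q for q, d in depths.items() if d > thresholds.get(q, float("inf"))]
```

In practice TigerOps layers AI baselines on top of static thresholds like these, so a fixed value is only the starting point.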

04

Trace Tasks End-to-End

TigerOps propagates trace context from the originating request through to Celery task execution. You get full distributed traces showing the path from HTTP request to queued task to completion.

Capabilities

What You Get Out of the Box

Queue Depth Per Named Queue

Real-time queue depth for every named Celery queue (default, high-priority, email, etc.) backed by Redis or RabbitMQ. Alerts fire when queues back up beyond your defined thresholds.
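With a Redis broker, a queue's depth is simply the length of the Redis list backing it, so LLEN gives the pending-task count. A minimal sketch of the idea (the SDK's own collection mechanism may differ):

```python
def queue_depths(redis_client, queues):
    """Return the current depth of each named Celery queue.

    With the Redis broker, each queue is a Redis list, so LLEN
    returns the number of pending messages.
    """
    return {q: redis_client.llen(q) for q in queues}

# Usage with a real client:
#   import redis
#   r = redis.Redis.from_url("redis://localhost:6379/0")
#   queue_depths(r, ["default", "high-priority", "email"])
```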

Worker Utilization

Active vs. reserved vs. available worker slots across your worker pool. Track concurrency saturation and scale workers before task queuing latency impacts your users.

Task Duration Histograms

P50, P95, and P99 task execution duration per task type. Identify which task classes are slowest and correlate duration increases with code deploys or external dependency degradation.
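The same percentiles can be computed from raw durations with the standard library; a minimal sketch using statistics.quantiles:

```python
import statistics

def duration_percentiles(durations):
    """Compute P50/P95/P99 from a list of task durations (seconds).

    quantiles(..., n=100) returns the 99 percentile cut points, so
    index 49 is the 50th percentile, 94 the 95th, 98 the 99th.
    """
    qs = statistics.quantiles(durations, n=100)
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}
```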

Failure & Retry Rates

Task failure rate, retry count, and dead-letter queue depth per task type. TigerOps groups failures by exception class so you can triage the most impactful errors first.
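Grouping failures by exception class is straightforward to sketch. This illustrates the idea, not the SDK's internals:

```python
from collections import Counter

def group_failures(failures):
    """Count task failures by exception class.

    failures: iterable of (task_name, exception) pairs, e.g. as
    captured from Celery's task_failure signal.
    """
    return Counter(type(exc).__name__ for _, exc in failures)
```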

Distributed Trace Propagation

Trace context is injected into task headers automatically. Every Celery task execution appears as a child span in the originating request trace, giving you full end-to-end visibility.
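A minimal sketch of what header injection looks like, assuming the W3C traceparent format mentioned in the FAQ. The helper names here are hypothetical; in the real SDK this happens automatically inside Celery's before_task_publish signal handler:

```python
import os

def make_traceparent(trace_id=None, span_id=None):
    """Build a W3C traceparent header: version-traceid-spanid-flags.

    trace_id is 16 random bytes hex-encoded, span_id is 8 bytes.
    """
    trace_id = trace_id or os.urandom(16).hex()
    span_id = span_id or os.urandom(8).hex()
    return f"00-{trace_id}-{span_id}-01"

def inject_trace_headers(headers, traceparent):
    """Return a copy of a task's message headers with trace context added."""
    headers = dict(headers or {})
    headers["traceparent"] = traceparent
    return headers
```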

Beat Scheduler Monitoring

Track Celery Beat scheduled task execution times, missed schedules, and execution drift. TigerOps alerts when a scheduled task has not run within its expected window.
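The missed-window check reduces to simple arithmetic on timestamps. A hedged sketch, with illustrative interval and grace values:

```python
import time

def missed_schedule(last_run_ts, interval_s, grace_s, now=None):
    """True if a scheduled task has not fired within interval + grace.

    last_run_ts: Unix timestamp of the last observed execution.
    """
    now = time.time() if now is None else now
    return now - last_run_ts > interval_s + grace_s

# e.g. a 5-minute schedule with a 60-second grace window:
#   missed_schedule(last_seen, 300, 60)
```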

Configuration

celeryconfig.py with TigerOps

Add the TigerOps instrumentation to your Celery application with a single SDK import and configuration block.

celeryconfig.py
# celeryconfig.py — TigerOps Celery monitoring setup
import os
from tigerops.celery import TigerOpsCeleryInstrumentation

# Standard Celery broker/backend config
broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0")
result_backend = os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379/1")

# Task serialization
task_serializer = "json"
result_serializer = "json"
accept_content = ["json"]
timezone = "UTC"
enable_utc = True

# TigerOps instrumentation — attach to your Celery app
# In your app factory (e.g. celery_app.py):
#
#   from celery import Celery
#   from tigerops.celery import TigerOpsCeleryInstrumentation
#
#   app = Celery("myproject")
#   app.config_from_object("celeryconfig")
#
#   TigerOpsCeleryInstrumentation(
#       app=app,
#       api_key=os.environ["TIGEROPS_API_KEY"],
#       endpoint="https://ingest.tigerops.net/api/v1/write",
#       # Track these queues explicitly (auto-discovers others)
#       queues=["default", "high-priority", "email", "exports"],
#       # Enable distributed trace context propagation
#       trace_propagation=True,
#       # Capture task args/kwargs as span attributes (sanitized)
#       capture_args=False,
#   ).instrument()

# Beat schedule example (TigerOps monitors missed executions)
beat_schedule = {
    "generate-daily-report": {
        "task": "myapp.tasks.generate_daily_report",
        "schedule": 86400.0,  # Every 24 hours
    },
    "sync-external-data": {
        "task": "myapp.tasks.sync_external",
        "schedule": 300.0,    # Every 5 minutes
    },
}

FAQ

Common Questions

Does TigerOps support Celery with Redis and RabbitMQ brokers?

Yes. TigerOps supports all Celery broker backends including Redis, RabbitMQ, AWS SQS, and database brokers. It collects both Celery-level metrics via signals and broker-level queue depth metrics through the corresponding broker integration.

Can TigerOps monitor Celery Beat scheduled tasks?

Yes. TigerOps monitors Celery Beat scheduled task execution and alerts when a task has not fired within its expected schedule window. This catches silent failures where Beat is running but a task quietly stops executing.

How does distributed tracing work across Celery task chains?

TigerOps injects trace context into Celery task headers using the W3C TraceContext standard. When a task spawns a chain, group, or chord, each child task inherits the parent trace ID, creating a full execution graph visible in the TigerOps trace view.

Does this work with Celery 5.x and the Kombu 5.x message library?

Yes. TigerOps is tested against Celery 4.x and 5.x and supports all Kombu-compatible transports. The SDK uses Celery signal hooks rather than monkey-patching internals, ensuring compatibility with future Celery releases.

Can I monitor worker pools using prefork, gevent, and eventlet concurrency?

Yes. TigerOps monitors all Celery pool types including prefork (multiprocessing), gevent, eventlet, and the solo/threads pools. Worker slot utilization metrics are collected accurately for each concurrency model.

Get Started

Full Visibility Into Your Task Processing Layer

No credit card required. Connect in minutes. See queue depths, worker utilization, and task traces immediately.