RabbitMQ Integration
Monitor RabbitMQ with queue depth, message rates, consumer utilization, DLQ monitoring, and node-level resource metrics — via the built-in Prometheus plugin.
How It Works
Enable the Prometheus Plugin
Run rabbitmq-plugins enable rabbitmq_prometheus on each node. This activates the built-in Prometheus scrape endpoint at /metrics on port 15692 (HTTP) or 15691 (HTTPS). No additional exporter process is needed.
Configure rabbitmq.conf
Set prometheus.return_per_object_metrics = true in rabbitmq.conf to expose per-queue and per-channel metrics. This enables TigerOps to show queue-level depth and message rate breakdowns.
Add Scrape Config
Point the TigerOps Collector at each RabbitMQ node's Prometheus port. The endpoint exposes node-local metrics, so scrape every node in the cluster: each node reports the queues it hosts, and the per-node series combine into the cluster-wide view.
Queues, Rates & Nodes Appear
Within minutes TigerOps dashboards display queue depth per vhost, publish and deliver rates, unacked message counts, consumer utilization, and Erlang process/memory metrics per node.
What You Get Out of the Box
Queue Depth & Message Rates
Messages ready, messages unacked, and messages total per queue. Publish rate, deliver rate, redeliver rate, and ack rate per queue. TigerOps alerts when queue depth exceeds configurable thresholds.
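As a rough sketch of what such a threshold check does, the snippet below parses one sample series in the Prometheus text format (the queue name and value are made up for illustration) and applies a 10,000-message backup threshold:

```shell
# A sample per-queue series as it appears in the /metrics output
# (queue name and value are illustrative)
line='rabbitmq_queue_messages_ready{queue="orders",vhost="/"} 12500'

# Pull out the sample value (last whitespace-separated field)
ready=$(echo "$line" | awk '{print $NF}')

# Compare against a 10,000-message backup threshold
if [ "$ready" -gt 10000 ]; then
  echo "ALERT: queue backup (${ready} messages ready)"
fi
```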
Consumer Counts & Utilization
Consumer count per queue, consumer utilization percentage, and prefetch count. Low consumer utilization with high queue depth indicates consumer processing bottlenecks that need attention.
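To spot this pattern from the command line before dashboards are wired up, rabbitmqctl can list the same per-queue figures (requires a running broker node; note the British spelling of the utilisation column):

```shell
# List depth, consumer count, and utilisation per queue on a broker node
rabbitmqctl list_queues name messages_ready consumers consumer_utilisation
```

A queue with a high messages_ready count but utilisation well below 1.0 is waiting on slow consumers rather than on delivery.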
Dead Letter Queue Monitoring
Dead letter exchange routing, DLQ depth, and DLQ growth rate. TigerOps alerts when DLQ depth grows beyond a threshold and correlates DLQ spikes with consumer error rates.
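Dead-lettering must be configured on the broker before there is anything to monitor. A minimal sketch using a policy (the exchange name "dlx" and the queue pattern are example values, not part of TigerOps):

```shell
# Route rejected/expired messages from matching queues to a dead-letter exchange
# ("dlx" and the "^work\." pattern are example names — adjust to your topology)
rabbitmqctl set_policy dlq-routing "^work\." \
  '{"dead-letter-exchange":"dlx"}' --apply-to queues
```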
Node-Level Resources
Erlang process count, socket and file descriptor usage, memory breakdown (binaries, queues, connections, ETS), and disk free space per node. Node-level resource alarms are forwarded to TigerOps as events.
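You can spot-check the node-level series directly on a broker before alerts are configured (assumes the default Prometheus port 15692):

```shell
# Print the memory and disk series the node-level alerts are built on
curl -s http://localhost:15692/metrics | \
  grep -E '^rabbitmq_(process_resident_memory_bytes|disk_space_available_bytes)'
```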
Connection & Channel Health
Connection count, channel count, blocked connections, and channel errors per node and vhost. High blocked connection counts indicate memory or disk pressure requiring immediate attention.
Cluster Quorum & Partition
Quorum queue leader election metrics, network partition detection, and mirror synchronization status for classic mirrored queues. Partition events trigger immediate alerts via TigerOps.
Enable Prometheus Plugin
One plugin command and two config lines to start collecting RabbitMQ metrics.
# Enable the Prometheus plugin on each node
rabbitmq-plugins enable rabbitmq_prometheus
# Verify the endpoint is available
curl -s http://localhost:15692/metrics | head -5
# rabbitmq.conf — enable per-object metrics
prometheus.return_per_object_metrics = true
prometheus.path = /metrics
prometheus.tcp.port = 15692
# TigerOps Collector config (otel-collector.yaml)
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: rabbitmq
          scrape_interval: 15s
          static_configs:
            - targets:
                - rabbitmq-node-1:15692
                - rabbitmq-node-2:15692
                - rabbitmq-node-3:15692
          relabel_configs:
            - source_labels: [__address__]
              target_label: instance

exporters:
  otlphttp:
    endpoint: https://ingest.tigerops.io/v1/metrics
    headers:
      Authorization: "Bearer ${TIGEROPS_API_KEY}"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
# Key alert thresholds
# rabbitmq_queue_messages_ready > 10000 → queue backup
# rabbitmq_queue_consumers == 0 → no active consumers
# rabbitmq_process_resident_memory_bytes → node memory usage
# rabbitmq_disk_space_available_bytes < 5GB → disk alarm imminent
Common Questions
Which RabbitMQ versions support the built-in Prometheus plugin?
The rabbitmq_prometheus plugin has been available since RabbitMQ 3.8.0. RabbitMQ 3.9, 3.10, 3.11, 3.12, and 3.13 are fully supported. For older versions, use the prometheus_rabbitmq_exporter community plugin.
Will enabling per_object_metrics impact RabbitMQ performance?
Per-object metrics do increase the size and generation time of the /metrics response. For clusters with thousands of queues, consider leaving prometheus.return_per_object_metrics = false and relying on the aggregated metrics, or scoping monitoring to a dedicated vhost.
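One possible rabbitmq.conf for such a large cluster keeps the default /metrics response aggregated; on RabbitMQ 3.8.10 and later, per-object data remains available on demand at the separate /metrics/per-object endpoint:

```
# rabbitmq.conf — keep the scraped /metrics response aggregated
prometheus.return_per_object_metrics = false
```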
How does TigerOps monitor RabbitMQ on Kubernetes?
Deploy the RabbitMQ Cluster Operator and add the prometheus.io/scrape: "true" annotation to the RabbitMQ pods. The TigerOps Collector's Kubernetes SD configuration will auto-discover and scrape the pods.
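A quick way to add the annotation for an existing deployment (the namespace and label selector below are examples — match them to your cluster):

```shell
# Annotate the RabbitMQ pods so Prometheus-style service discovery scrapes them
kubectl -n rabbitmq annotate pods \
  -l app.kubernetes.io/name=rabbitmq \
  prometheus.io/scrape=true \
  prometheus.io/port=15692
```

Annotations applied directly to pods are lost when the pods are recreated; setting them in the pod template (or the RabbitmqCluster resource) is more durable.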
Can TigerOps alert when a specific queue is growing too fast?
Yes. Create a TigerOps alert with metric rabbitmq_queue_messages_ready{queue="your-queue-name"} and set a threshold. You can also alert on the rate of change to catch queues growing faster than consumers can drain them.
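For example, assuming standard PromQL-style expressions over the plugin's metric names (the queue name and thresholds are placeholders):

```
# Absolute depth: more than 10,000 messages waiting
rabbitmq_queue_messages_ready{queue="your-queue-name"} > 10000

# Rate of change: ready count growing by more than 50 msg/s over 5 minutes
deriv(rabbitmq_queue_messages_ready{queue="your-queue-name"}[5m]) > 50
```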
Does TigerOps support RabbitMQ Stream queues?
Yes. RabbitMQ Streams (introduced in 3.9) expose metrics via the same Prometheus endpoint. Stream-specific metrics including publisher confirms, consumer offsets, and stream segment counts are supported.
Full RabbitMQ Observability via Prometheus Plugin
Queue depth, message rates, consumer utilization, and node health — one plugin enable command.