BullMQ Integration
Monitor job processing rates, queue wait times, and worker concurrency for your Node.js BullMQ infrastructure. Detect worker saturation and failed job spikes before they impact your application users.
How It Works
Install the TigerOps BullMQ Plugin
Add the @tigerops/bullmq package to your Node.js application. Wrap your existing Queue and Worker instances with the TigerOps instrumentation — no changes to job logic required.
Configure the Metrics Endpoint
Set your TIGEROPS_API_KEY environment variable. The plugin automatically collects queue depths, job durations, wait times, and worker concurrency, then pushes metrics via the OpenTelemetry OTLP exporter.
Set Job SLOs per Queue
Define per-queue wait time and processing time SLOs. TigerOps alerts when p95 job wait time exceeds your threshold and predicts queue saturation from current worker throughput rates.
Correlate with Redis & Application Traces
TigerOps links BullMQ queue delays with Redis latency, Node.js event loop lag, and distributed traces from the services enqueuing jobs — providing full context for every queue slowdown.
What You Get Out of the Box
Job Processing Rate Tracking
Per-queue completed, failed, and delayed job rates with p50/p95/p99 processing duration histograms. Track throughput trends and identify queues approaching worker saturation.
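The duration percentiles above follow the usual definition; a minimal sketch of a nearest-rank percentile over raw per-job processing times looks like this (illustrative only, not TigerOps internals):

```typescript
// Nearest-rank percentile over raw job processing durations (ms).
// Illustrative sketch of how p50/p95/p99 values are derived.
function percentile(durationsMs: number[], p: number): number {
  if (durationsMs.length === 0) return 0
  const sorted = [...durationsMs].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length) // nearest-rank method
  return sorted[Math.min(rank, sorted.length) - 1]
}

// One slow outlier dominates p95 without moving the median much
const samples = [120, 95, 400, 80, 1500, 110, 90, 130, 105, 2500]
console.log(percentile(samples, 50)) // 110
console.log(percentile(samples, 95)) // 2500
```

This is why the p95/p99 split matters: a queue can look healthy at p50 while a tail of slow jobs is quietly saturating the worker pool.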
Queue Wait Time Monitoring
Measure the time jobs spend waiting in the queue before a worker picks them up. Detect wait time spikes caused by insufficient worker concurrency or slow job processing blocking the pool.
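Wait time can be derived from BullMQ's own job timestamps: `job.timestamp` is set when the job is enqueued and `job.processedOn` when a worker picks it up (both epoch milliseconds). A minimal sketch of the computation:

```typescript
// Queue wait time from BullMQ's Job timestamps.
// `timestamp` = enqueued at, `processedOn` = picked up by a worker.
interface JobTimestamps {
  timestamp: number
  processedOn?: number
}

function waitTimeMs(job: JobTimestamps): number | undefined {
  if (job.processedOn === undefined) return undefined // still waiting
  return job.processedOn - job.timestamp
}

// Enqueued at t=1000, picked up at t=6200 -> waited 5.2 seconds
console.log(waitTimeMs({ timestamp: 1000, processedOn: 6200 })) // 5200
```

A sustained rise in this number with flat processing times is the classic signature of insufficient worker concurrency rather than slow job logic.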
Worker Concurrency Metrics
Track active worker counts, concurrency utilization, and worker stall events per queue. Alert when worker pools are fully saturated and new jobs are stacking up.
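Concurrency utilization is the ratio of active jobs to the worker's configured concurrency. A sketch of the bookkeeping (how TigerOps tracks this internally is an assumption; the counters here are illustrative):

```typescript
// Tracks active jobs against a worker's configured concurrency.
// A utilization of 1.0 means the pool is saturated and new jobs queue up.
class ConcurrencyTracker {
  private active = 0
  constructor(private readonly concurrency: number) {}

  jobStarted() { this.active++ }
  jobFinished() { this.active = Math.max(0, this.active - 1) }

  utilization(): number {
    return this.active / this.concurrency
  }
}

const tracker = new ConcurrencyTracker(10) // matches concurrency: 10 below
tracker.jobStarted()
tracker.jobStarted()
console.log(tracker.utilization()) // 0.2
```

In practice the start/finish calls would be wired to the Worker's `active` and `completed`/`failed` events.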
Failed & Retry Job Tracking
Monitor failed job counts, retry attempt distributions, and jobs landing in the dead-letter queue. Correlate failure spikes with deployments or downstream service degradation.
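Whether a failed job will be retried or is terminally dead follows from BullMQ's `attemptsMade` versus the job's configured `opts.attempts`. A sketch of that bucketing (the helper and its labels are illustrative):

```typescript
// Classifies a failed job: retried again, or terminally failed.
// `attemptsMade` and max attempts (`job.opts.attempts`) are BullMQ fields.
interface FailedJobInfo {
  attemptsMade: number // attempts already consumed
  maxAttempts: number  // job.opts.attempts (defaults to 1)
}

function failureOutcome(job: FailedJobInfo): 'will-retry' | 'dead' {
  return job.attemptsMade < job.maxAttempts ? 'will-retry' : 'dead'
}

console.log(failureOutcome({ attemptsMade: 1, maxAttempts: 3 })) // will-retry
console.log(failureOutcome({ attemptsMade: 3, maxAttempts: 3 })) // dead
```

Separating these two buckets is what lets a failure-rate alert fire on terminal failures without being drowned out by transient retries.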
Delayed & Scheduled Job Health
Track the delayed job count, scheduler accuracy, and repeat job execution lag. Alert when delayed jobs pile up due to Redis sorted set performance issues or scheduler process failures.
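Scheduler lag for a delayed job is how far past its due time (`timestamp + delay`) a worker actually picked it up. A minimal sketch, with field names mirroring BullMQ's Job API:

```typescript
// Execution lag for a delayed job: time past its due point.
// A delayed job is due at timestamp + opts.delay.
interface DelayedJobInfo {
  timestamp: number   // enqueued at (epoch ms)
  delay: number       // requested delay in ms
  processedOn: number // actually picked up at
}

function schedulerLagMs(job: DelayedJobInfo): number {
  const dueAt = job.timestamp + job.delay
  return Math.max(0, job.processedOn - dueAt)
}

// Due at t=61_000, picked up at t=63_500 -> 2.5s of scheduler lag
console.log(schedulerLagMs({ timestamp: 1000, delay: 60_000, processedOn: 63_500 }))
```

Lag that grows steadily across all delayed jobs usually points at the scheduler or Redis, not at any individual job.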
AI Root Cause Analysis
When queue wait times spike, TigerOps AI correlates Redis memory pressure, worker process restarts, Node.js event loop lag, and job failure rate trends to surface the root cause.
Instrumenting BullMQ Workers
Add TigerOps instrumentation to your existing BullMQ queues and workers in minutes.
# Install the TigerOps BullMQ package
npm install @tigerops/bullmq
// worker.ts — wrap your existing Queue and Worker instances
import { Queue, Worker } from 'bullmq'
import { instrument } from '@tigerops/bullmq'
const connection = { host: 'redis', port: 6379 }
// Initialize TigerOps instrumentation before creating queues
instrument({
  apiKey: process.env.TIGEROPS_API_KEY,
  serviceName: 'payments-worker',
  // Optional: per-queue SLOs
  queues: {
    'payment-processing': {
      waitTimeWarningMs: 5000, // alert if jobs wait > 5s
      waitTimeCriticalMs: 30000,
      processingTimeWarningMs: 10000,
      failureRateWarning: 0.01, // 1%
    },
    'email-delivery': {
      waitTimeWarningMs: 60000,
      failureRateWarning: 0.05,
    },
  },
})
// Your existing code — no changes required
const paymentQueue = new Queue('payment-processing', { connection })
const paymentWorker = new Worker(
  'payment-processing',
  async (job) => {
    // your job processor
    await processPayment(job.data)
  },
  { connection, concurrency: 10 }
)
// TigerOps automatically tracks:
// - job wait time (enqueued -> active)
// - job processing time (active -> completed)
// - failed/retried/dead-letter counts
// - worker concurrency utilization
Common Questions
Which BullMQ versions does TigerOps support?
TigerOps supports BullMQ 3.x and 4.x. The @tigerops/bullmq package uses BullMQ's native event emitters (Queue events and Worker events) so no monkey-patching is required. Bull (the predecessor) is also supported via the separate @tigerops/bull package.
Does the TigerOps plugin affect BullMQ job processing performance?
No. The plugin attaches lightweight event listeners to BullMQ's existing event system and batches metrics asynchronously. Benchmarks show less than 0.1ms added latency per job event. Metric flushing is done on a background interval and never blocks job processing.
Can TigerOps monitor BullMQ queues running in multiple Node.js processes?
Yes. Each worker process running the TigerOps plugin reports metrics independently. TigerOps aggregates them at ingest time using queue name as the grouping dimension, giving you a unified view of queue health regardless of how many worker processes are running.
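The merge described above can be sketched as a fold over per-process reports keyed on queue name (the report shape here is an assumption for illustration):

```typescript
// Ingest-time aggregation: several worker processes report counts for
// the same queue, and reports are merged on the queue name.
interface ProcessReport {
  queue: string
  completed: number
  failed: number
}

function aggregateByQueue(
  reports: ProcessReport[]
): Map<string, { completed: number; failed: number }> {
  const out = new Map<string, { completed: number; failed: number }>()
  for (const r of reports) {
    const agg = out.get(r.queue) ?? { completed: 0, failed: 0 }
    agg.completed += r.completed
    agg.failed += r.failed
    out.set(r.queue, agg)
  }
  return out
}

const merged = aggregateByQueue([
  { queue: 'payment-processing', completed: 40, failed: 1 }, // process A
  { queue: 'payment-processing', completed: 55, failed: 2 }, // process B
  { queue: 'email-delivery', completed: 300, failed: 9 },
])
console.log(merged.get('payment-processing')) // { completed: 95, failed: 3 }
```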
How does TigerOps handle BullMQ flow producers and job dependencies?
TigerOps tracks flow jobs as a hierarchy. Parent job completion latency is measured from when all children complete. You can alert on flow completion SLOs and see which step in a multi-job flow is introducing the most latency.
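The parent-latency measurement above can be sketched as follows; timestamp field names mirror BullMQ's Job API (`finishedOn`), while the helper itself is illustrative:

```typescript
// Parent completion latency for a flow: time between the last child
// finishing and the parent job completing.
interface FlowChild {
  finishedOn: number // epoch ms when the child job completed
}

function parentCompletionLatencyMs(
  children: FlowChild[],
  parentFinishedOn: number
): number {
  const lastChildDone = Math.max(...children.map((c) => c.finishedOn))
  return parentFinishedOn - lastChildDone
}

// Children finish at t=1000 and t=4000; parent completes at t=4250
const latency = parentCompletionLatencyMs(
  [{ finishedOn: 1000 }, { finishedOn: 4000 }],
  4250
)
console.log(latency) // 250
```

Computing per-child `finishedOn` deltas the same way is what surfaces the slowest step in a multi-job flow.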
Can I set different alert thresholds for different queues?
Yes. TigerOps supports per-queue alert threshold configuration. Set distinct wait time, processing duration, and failure rate thresholds for your critical queues (e.g., payment processing) versus lower-priority background queues (e.g., email delivery).
Stop Discovering Failed Jobs After Users Complain
Queue wait time SLOs, worker saturation alerts, and AI root cause analysis. Two lines of code to deploy.