
Google Cloud Tasks Integration

Monitor task dispatch rates, execution latency, and retry counts across your Google Cloud Tasks queues. Detect retry spirals and worker failure patterns before they exhaust your task budgets.

Setup

How It Works

01

Connect via Google Cloud Monitoring

Create a TigerOps GCP service account with Monitoring Viewer permissions. TigerOps polls the Cloud Monitoring API for Cloud Tasks metrics across all your queues and regions using the cloudtasks.googleapis.com metric namespace.
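
The poll described above amounts to a Cloud Monitoring `timeSeries.list` request scoped to Cloud Tasks metrics. A minimal, stdlib-only sketch of how such a request body could be assembled — `build_poll_request` and its defaults are illustrative, not TigerOps code:

```python
from datetime import datetime, timedelta, timezone

def build_poll_request(project_id: str, metric_type: str, lookback_s: int = 60):
    """Assemble the name, filter, and interval fields of a Cloud
    Monitoring timeSeries.list call covering the last polling window."""
    now = datetime.now(timezone.utc)
    return {
        "name": f"projects/{project_id}",
        # Standard Monitoring filter syntax: metric type + monitored resource.
        "filter": (
            f'metric.type = "cloudtasks.googleapis.com/{metric_type}" '
            'AND resource.type = "cloud_tasks_queue"'
        ),
        "interval": {
            "startTime": (now - timedelta(seconds=lookback_s)).isoformat(),
            "endTime": now.isoformat(),
        },
    }

req = build_poll_request("my-gcp-project", "queue/task_attempt_count")
```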

02

Configure Queue Scope

Specify GCP projects and optionally filter by queue name pattern. TigerOps auto-discovers all Cloud Tasks queues and begins streaming task_count, execution_attempt_count, and request_count metrics.
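
One way a queue-name pattern like `projects/*/locations/*/queues/*` could be evaluated is segment-by-segment glob matching, so `*` never crosses a `/` boundary. A hypothetical sketch (`matches_queue_filter` is not TigerOps code):

```python
from fnmatch import fnmatch

def matches_queue_filter(queue_name: str, pattern: str) -> bool:
    # fnmatch's "*" would cross "/" boundaries, unlike resource-name
    # patterns, so match each path segment independently.
    q_parts, p_parts = queue_name.split("/"), pattern.split("/")
    return len(q_parts) == len(p_parts) and all(
        fnmatch(q, p) for q, p in zip(q_parts, p_parts)
    )

queues = [
    "projects/my-gcp-project/locations/us-central1/queues/email-dispatch",
    "projects/my-gcp-project/locations/us-central1/queues/payment-webhook",
]
monitored = [
    q for q in queues
    if matches_queue_filter(q, "projects/*/locations/*/queues/email-*")
]
```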

03

Set Dispatch & Latency Thresholds

Define per-queue task dispatch rate expectations and execution latency SLOs. TigerOps alerts when task execution latency p95 exceeds your threshold or when the queue task count grows unexpectedly.
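
The p95 threshold check above can be sketched with a nearest-rank percentile over raw latency samples — an illustrative implementation, not the product's actual aggregation:

```python
def percentile(samples, pct):
    """Nearest-rank percentile over raw latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[int(rank) - 1]

def breaches_slo(latencies_ms, p95_slo_ms):
    # True when observed p95 exceeds the configured SLO.
    return percentile(latencies_ms, 95) > p95_slo_ms
```

For example, with the 30000 ms SLO from the sample config, a batch where 10% of tasks take 35 s breaches the threshold.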

04

Correlate with Worker Services

TigerOps links Cloud Tasks execution failures with your Cloud Run, GKE, or App Engine worker service metrics — error rates, instance counts, and request latency — to identify whether failures are queue-side or worker-side.
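
The queue-side vs. worker-side distinction could be approximated by a heuristic like the following — thresholds and labels are purely illustrative, not the actual correlation logic:

```python
def classify_failure(queue_5xx_rate: float, worker_error_rate: float,
                     worker_instance_count: int) -> str:
    """Heuristic: no instances or a high handler error rate points at
    the worker service; dispatch 5xx despite healthy workers points
    queue-side (e.g. throttling or a misconfigured target)."""
    if worker_instance_count == 0:
        return "worker-side: no instances serving"
    if worker_error_rate > 0.05:
        return "worker-side: elevated handler error rate"
    if queue_5xx_rate > 0.05:
        return "queue-side: dispatch failures despite healthy workers"
    return "healthy"
```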

Capabilities

What You Get Out of the Box

Task Dispatch Rate Monitoring

Track tasks dispatched per second per queue, broken down by response code. Monitor dispatch rates against your queue's max dispatch rate configuration and alert before throttling degrades throughput.
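
Alerting before throttling reduces to comparing the observed dispatch rate against the queue's configured maximum. A minimal sketch, assuming a warn threshold of 80% utilization (the threshold is illustrative):

```python
def dispatch_utilization(dispatch_rate: float, max_dispatch_rate: float) -> float:
    """Fraction of the queue's configured max dispatch rate in use."""
    return dispatch_rate / max_dispatch_rate

def near_throttling(dispatch_rate: float, max_dispatch_rate: float,
                    warn_at: float = 0.8) -> bool:
    # Fire before Cloud Tasks starts deferring dispatches.
    return dispatch_utilization(dispatch_rate, max_dispatch_rate) >= warn_at
```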

Execution Latency Percentiles

Track p50/p95/p99 task execution latency, measured both from dispatch to first attempt and from first dispatch to final completion. Identify slow worker handlers before latency breaches your end-to-end SLOs.

Retry Count Distribution

Track the distribution of execution attempt counts across tasks. A growing proportion of high-retry tasks indicates systematic worker failures. Alert when the average attempt count per task exceeds your configured threshold.
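
The per-task attempt-count statistics described above can be sketched as follows; `retry_stats` and the high-retry cutoff of 5 attempts are illustrative assumptions:

```python
def retry_stats(attempt_counts):
    """attempt_counts: one entry per task, the number of execution
    attempts that task has seen. Returns (average attempts per task,
    fraction of tasks at or above 5 attempts)."""
    n = len(attempt_counts)
    avg = sum(attempt_counts) / n
    high_retry = sum(1 for a in attempt_counts if a >= 5) / n
    return avg, high_retry

avg, frac = retry_stats([1, 1, 1, 2, 7, 9])
```

A rising `frac` with a stable `avg` is the "growing proportion of high-retry tasks" signal: a few tasks failing systematically rather than everything slowing down.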

Queue Depth Tracking

Monitor task_count per queue and alert when queues grow beyond expected levels. Correlate queue growth spikes with producer service deployments or worker service outages.
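
A spike detector over successive `task_count` samples could look like this — the window size and growth factor are illustrative defaults, not product settings:

```python
def depth_growth_alert(task_counts, window: int = 5,
                       growth_factor: float = 2.0) -> bool:
    """Alert when the latest task_count sample exceeds growth_factor
    times the average of the preceding `window` samples."""
    if len(task_counts) <= window:
        return False  # not enough history for a baseline
    baseline = sum(task_counts[-window - 1:-1]) / window
    return baseline > 0 and task_counts[-1] > growth_factor * baseline
```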

Response Code Analysis

Break down task execution responses by HTTP status code (2xx, 4xx, 5xx) from your worker handlers. Detect when 5xx rates from worker services cause exponential retry buildup.
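
The status-class breakdown can be sketched as a simple bucketing of handler response codes — `response_breakdown` is an illustrative helper:

```python
from collections import Counter

def response_breakdown(status_codes):
    """Map raw HTTP status codes to 2xx/4xx/5xx class rates."""
    classes = Counter(f"{code // 100}xx" for code in status_codes)
    total = len(status_codes)
    return {cls: n / total for cls, n in classes.items()}

rates = response_breakdown([200, 200, 200, 503, 500, 429, 200])
```

Since Cloud Tasks retries on non-2xx responses, a climbing `5xx` rate here is the leading edge of the retry buildup the paragraph above describes.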

AI Root Cause Analysis

When Cloud Tasks retry rates spike, TigerOps AI correlates worker service error rates, database latency, cold start rates (for Cloud Run workers), and recent deployments to surface the root cause.

Configuration

GCP Service Account & TigerOps Config

Create the GCP service account and configure TigerOps to poll your Cloud Tasks metrics.

setup-gcp-tigerops.sh
# Step 1: Create a GCP service account for TigerOps
gcloud iam service-accounts create tigerops-monitoring \
  --display-name="TigerOps Monitoring" \
  --project=my-gcp-project

# Step 2: Grant Monitoring Viewer role
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:[email protected]" \
  --role="roles/monitoring.viewer"

# Step 3: Create and download a JSON key
gcloud iam service-accounts keys create tigerops-sa-key.json \
  --iam-account=tigerops-monitoring@my-gcp-project.iam.gserviceaccount.com

tigerops.yaml
integrations:
  googleCloudTasks:
    projects:
      - projectId: my-gcp-project
        credentialsFile: /etc/tigerops/tigerops-sa-key.json
      - projectId: my-other-project
        credentialsFile: /etc/tigerops/other-sa-key.json
    # Queue filter (empty = monitor all queues)
    queueFilter: "projects/*/locations/*/queues/*"
    pollInterval: 60s

    # Per-queue SLOs
    queueSLOs:
      "email-dispatch":
        p95LatencyMs: 30000
        maxRetryRatePerMinute: 10
        minThroughputPerMinute: 50
      "payment-webhook":
        p95LatencyMs: 5000
        maxRetryRatePerMinute: 2

FAQ

Common Questions

What GCP permissions does TigerOps require for Cloud Tasks monitoring?

TigerOps requires the roles/monitoring.viewer IAM role on the GCP project to read Cloud Monitoring metrics. No access to your actual task payloads or queue configurations is needed. Create a dedicated service account and download a JSON key, or use Workload Identity for keyless authentication.

Can TigerOps monitor Cloud Tasks queues across multiple GCP projects?

Yes. Add multiple GCP projects to TigerOps by providing service account credentials for each. Metrics are labeled with project_id so you can filter dashboards and alerts by project or view aggregated cross-project health.

How does TigerOps measure Cloud Tasks execution latency?

TigerOps uses the cloudtasks.googleapis.com/queue/task_attempt_latencies metric from Cloud Monitoring, which records the latency between when Cloud Tasks dispatches a task and when it receives the HTTP response from your worker handler.

Can TigerOps alert when a Cloud Tasks queue is draining too slowly?

Yes. Configure a drain rate SLO — the minimum expected task throughput per minute for a queue. TigerOps alerts when completed task rate drops below the threshold while task_count remains elevated, indicating workers are slower than producers.
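
The drain-rate condition described here — low completion rate while the queue stays deep — could be expressed as a simple predicate. A sketch with an assumed depth floor of 100 tasks (the floor and names are illustrative):

```python
def drain_too_slow(completed_per_min: float, task_count: int,
                   min_throughput: float, depth_floor: int = 100) -> bool:
    """True when the completion rate is under the SLO while the queue
    remains deep -- workers are falling behind producers."""
    return completed_per_min < min_throughput and task_count >= depth_floor
```

With the `minThroughputPerMinute: 50` SLO from the sample config, a queue completing 20 tasks/min with 5000 tasks backed up would fire.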

Does TigerOps support Cloud Tasks HTTP targets with OIDC authentication?

TigerOps monitors the queue and execution metrics regardless of whether your tasks use OIDC or service account authentication for worker invocation. Authentication failures appear as 401/403 responses in the response code breakdown and trigger high retry rate alerts.

Get Started

Stop Discovering Cloud Tasks Retry Spirals After Costs Spike

Task dispatch SLOs, retry rate monitoring, and worker failure correlation. Connect via GCP service account in minutes.