All Integrations › Databases

TigerOps agent + Helm chart

Weaviate Integration

Monitor vector database query latency, schema metrics, and replication health across your Weaviate clusters. Get per-class SLO tracking and AI query pattern analysis for your semantic search infrastructure.

Setup

How It Works

01

Enable Weaviate Prometheus Metrics

Set PROMETHEUS_MONITORING_ENABLED=true in your Weaviate environment configuration. The built-in Prometheus endpoint exposes over 80 metrics including query latency histograms, import rates, and HNSW index memory usage.
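In a Docker Compose deployment, those two variables can be set directly on the Weaviate service. A minimal sketch (service name, image tag, and port mappings are illustrative):

```yaml
# docker-compose.yml (fragment) -- service name and image tag are illustrative
services:
  weaviate:
    image: semitechnologies/weaviate:1.24.1
    environment:
      PROMETHEUS_MONITORING_ENABLED: "true"
      PROMETHEUS_MONITORING_PORT: "2112"   # metrics served at :2112/metrics
    ports:
      - "8080:8080"   # REST / GraphQL
      - "2112:2112"   # Prometheus scrape target
```

After a restart, curling :2112/metrics on any node should return the Prometheus exposition text.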

02

Deploy TigerOps Agent via Helm

Install the TigerOps Helm chart in your Weaviate namespace. The agent scrapes all Weaviate node metrics endpoints and forwards class-level query latency, object count, and replication state to your workspace.
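The install itself is a standard Helm workflow; a sketch of what it might look like (the chart repository URL and chart name below are assumptions for illustration, not confirmed names):

```shell
# Add the TigerOps chart repo and install into the Weaviate namespace
# (repo URL and chart/release names are illustrative)
helm repo add tigerops https://charts.tigerops.example
helm repo update
helm install tigerops-agent tigerops/weaviate-agent \
  --namespace weaviate \
  --set apiKey="$TIGEROPS_API_KEY"
```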

03

Configure Class-Level SLOs

TigerOps auto-discovers all Weaviate classes from the schema endpoint and creates per-class dashboards. Set p99 query latency and import rate thresholds per class for granular SLO compliance tracking.

04

Monitor Multi-Node Replication

For Weaviate multi-tenant or replication-enabled deployments, TigerOps tracks per-shard replication factor compliance, node availability, and async replication lag across all cluster nodes.

Capabilities

What You Get Out of the Box

Vector Query Latency

Per-class near-vector, near-text, and hybrid query latency at p50, p95, and p99. Track latency changes correlated with object count growth, HNSW index parameter changes, or schema updates.
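Percentiles like these are derived from cumulative Prometheus histogram buckets by linear interpolation, the same approach Prometheus's histogram_quantile uses. A sketch with invented bucket boundaries and counts:

```python
# Estimate a quantile from cumulative Prometheus histogram buckets
# via linear interpolation within the bucket that crosses the target rank.
def histogram_quantile(q, buckets):
    """buckets: list of (upper_bound_ms, cumulative_count), sorted ascending."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # Interpolate linearly within this bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Illustrative per-class latency histogram: 1000 queries total
buckets = [(10, 400), (50, 900), (100, 980), (250, 1000)]
p99 = histogram_quantile(0.99, buckets)  # -> 175.0 ms
p50 = histogram_quantile(0.50, buckets)  # -> 18.0 ms
```

Note that the estimate's resolution is bounded by the bucket boundaries, which is why latency SLO thresholds should sit near a bucket edge rather than between two wide buckets.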

Schema & Class Metrics

Per-class object counts, property counts, vector index type (HNSW vs. flat), and schema migration events. Track class growth rates and alert on unexpected schema changes in production.

Replication Health Monitoring

Shard replication factor compliance, async replication lag between nodes, repair operation rates, and node availability status for multi-node Weaviate cluster deployments.
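The compliance check itself is simple: a shard is under-replicated when its live replica count falls below the desired replication factor. A sketch of that logic, using an assumed data shape rather than Weaviate's actual API response:

```python
# Flag shards whose live replica count is below the desired replication
# factor. The {shard_name: live_replica_count} shape is an assumption
# for illustration, not Weaviate's cluster API schema.
def non_compliant_shards(shards, desired_factor):
    return sorted(
        name for name, live in shards.items() if live < desired_factor
    )

shards = {"shard-a": 3, "shard-b": 2, "shard-c": 3}  # illustrative
under = non_compliant_shards(shards, desired_factor=3)  # -> ["shard-b"]
```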

HNSW Index Memory Usage

Per-class HNSW index memory consumption, ef and efConstruction parameter impact on memory, index tombstone counts, and cleanup task completion rates for memory-efficient operation.
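For capacity planning alongside these metrics, a commonly cited rule of thumb is that an HNSW index needs roughly twice the raw vector size (object count × dimensions × 4 bytes for float32). This sketch applies that estimate; the factor of 2 is a rule of thumb, not a measurement:

```python
# Rough HNSW memory estimate: raw float32 vectors plus roughly 1x
# overhead for the graph structure (rule-of-thumb factor, not exact).
def estimate_hnsw_memory_gb(object_count, dimensions, overhead_factor=2.0):
    raw_bytes = object_count * dimensions * 4  # 4 bytes per float32
    return raw_bytes * overhead_factor / 1024**3

# e.g. 10M objects with 768-dim vectors -> roughly 57 GB
gb = estimate_hnsw_memory_gb(10_000_000, 768)
```

Comparing this estimate against the agent's measured per-class memory consumption is a quick way to spot tombstone accumulation or unexpectedly heavy ef/efConstruction settings.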

Import & Batch Performance

Object import rates, batch import latency, vectorization time per object (for auto-vectorization modules), and import queue depth to keep your data pipeline running at target throughput.
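Whether the pipeline is keeping up comes down to comparing the arrival rate against the import rate: the queue grows whenever arrivals outpace imports. A minimal sketch of projecting when the queue would breach an alert threshold, with invented numbers:

```python
# Project seconds until the import queue breaches an alert threshold.
# Queue depth grows at (arrival_rate - import_rate) objects/sec.
def seconds_until_breach(queue_depth, arrival_rate, import_rate, threshold):
    """All rates in objects/sec. Returns None if the queue is draining."""
    net = arrival_rate - import_rate
    if net <= 0:
        return None  # keeping up: queue shrinks or holds steady
    return max(0.0, (threshold - queue_depth) / net)

# Arriving at 1200 obj/s, importing at 1000 obj/s, 4000 queued, alert at 10000
eta = seconds_until_breach(4000, 1200, 1000, 10000)  # -> 30.0 seconds
```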

AI Query Pattern Analysis

TigerOps AI detects when Weaviate query latency increases correlate with class growth, identifies inefficient filter expressions causing post-filtering overhead, and surfaces index tuning recommendations.

Configuration

TigerOps Agent for Weaviate

Enable Prometheus metrics in Weaviate and configure TigerOps to forward them to your monitoring workspace.

tigerops-weaviate.yaml
# Weaviate environment config (Docker Compose or K8s)
# PROMETHEUS_MONITORING_ENABLED=true
# PROMETHEUS_MONITORING_PORT=2112

# TigerOps Weaviate Agent Configuration
receivers:
  weaviate_prometheus:
    endpoints:
      - url: "http://weaviate-node-0:2112/metrics"
        node: "node-0"
      - url: "http://weaviate-node-1:2112/metrics"
        node: "node-1"
      - url: "http://weaviate-node-2:2112/metrics"
        node: "node-2"
    collection_interval: 15s

  weaviate_schema:
    # Schema discovery for class auto-detection
    endpoint: "http://weaviate:8080"
    api_key_env: WEAVIATE_API_KEY
    schema_poll_interval: 60s

    # Per-class SLO overrides
    class_slos:
      - class: "Article"
        query_latency_p99_ms: 100
        min_object_count: 1000
      - class: "Product"
        query_latency_p99_ms: 80

exporters:
  tigerops:
    endpoint: "https://ingest.atatus.net/api/v1/write"
    bearer_token: "${TIGEROPS_API_KEY}"
    send_interval: 15s

alerts:
  hnsw_memory_usage_gb: 8
  replication_factor_compliance: true
  import_queue_depth: 10000
  tombstone_cleanup_backlog: 50000

FAQ

Common Questions

Which Weaviate versions and deployment modes does TigerOps support?

TigerOps supports Weaviate 1.19 and later with built-in Prometheus monitoring. Weaviate Cloud Services (WCS) and self-hosted Kubernetes deployments are both supported. For WCS, TigerOps uses the Weaviate gRPC health API alongside the metrics endpoint.

How does TigerOps track Weaviate query latency per class?

TigerOps collects the queries_durations_ms histogram metric, which is labeled by class_name and query_type (nearVector, nearText, hybrid, bm25). This gives per-class, per-query-type latency percentiles without any application-side changes.

Can TigerOps monitor Weaviate multi-tenancy health?

Yes. TigerOps tracks per-tenant shard metrics including tenant activation status, inactive tenant count, and per-tenant object counts. You can set alerts when tenant shards fail to activate or when a tenant reaches its object count threshold.

How does TigerOps handle Weaviate backup and restore monitoring?

TigerOps monitors Weaviate backup operations via the /v1/backups REST API and tracks backup duration, backup size, and restore operation success rates. Alerts fire when a backup job fails or when backup duration exceeds your configured SLO.
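A sketch of evaluating one such backup status check; the fields shown (a status string plus start/end timestamps) are an assumed shape for illustration, not Weaviate's exact response schema:

```python
# Decide whether a backup status response should raise an alert:
# either the backup FAILED, or it ran longer than the duration SLO.
# Field names are illustrative assumptions, not Weaviate's schema.
def backup_alert(status, started_epoch, finished_epoch, max_duration_s):
    if status == "FAILED":
        return "backup failed"
    duration = finished_epoch - started_epoch
    if duration > max_duration_s:
        return f"backup took {duration}s (SLO {max_duration_s}s)"
    return None  # healthy

alert = backup_alert("SUCCESS", 1000, 1900, max_duration_s=600)
```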

Does TigerOps support Weaviate module metrics for vectorization?

Yes. TigerOps tracks per-module vectorization latency for text2vec-openai, text2vec-cohere, text2vec-transformers, and other modules. You can see exactly how much of your import time is spent on vectorization versus HNSW index insertion.

Get Started

Stop Discovering Weaviate Query Degradation in Your Semantic Search

Per-class SLO tracking, replication health monitoring, and AI query pattern analysis. Deploy in 5 minutes.