
Rust Integration

Instrument Rust services with a single crate dependency. Async trace propagation across await boundaries, Tokio runtime metrics, and tracing-subscriber integration — with zero overhead on unsampled paths.

Setup

How It Works

01

Add to Cargo.toml

Run cargo add tigerops or add tigerops = "0.1" to your [dependencies]. The crate bundles the opentelemetry and opentelemetry-otlp crates with a pre-configured TigerOps exporter and Tokio-compatible async runtime.

02

Init the Tracing Subscriber

Call tigerops::init() at the top of main() before spawning any Tokio tasks. It registers a tracing-subscriber layer that exports spans and metrics to TigerOps via OTLP/gRPC in the background.

03

Set Environment Variables

Export TIGEROPS_API_KEY, TIGEROPS_SERVICE_NAME, and TIGEROPS_ENVIRONMENT. The SDK reads these at init time. No config files are required — all settings can also be passed programmatically via the builder API.
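A reasonable precedence (assumed here, not stated by the crate docs) is that a value passed to the builder overrides the corresponding environment variable. A std-only sketch of that resolution — the `resolve_setting` helper is illustrative, not part of the tigerops API:

```rust
use std::env;

/// Resolve one setting: an explicit builder value wins, otherwise fall
/// back to the environment. (Illustrative helper, not the tigerops API.)
fn resolve_setting(builder_value: Option<&str>, env_key: &str) -> Option<String> {
    builder_value
        .map(str::to_string)
        .or_else(|| env::var(env_key).ok())
}

fn main() {
    // A programmatic value always takes precedence over the environment…
    assert_eq!(
        resolve_setting(Some("my-rust-service"), "TIGEROPS_SERVICE_NAME").as_deref(),
        Some("my-rust-service")
    );
    // …and an unset key with no builder value yields None.
    assert_eq!(resolve_setting(None, "TIGEROPS_KEY_THAT_IS_NOT_SET"), None);
}
```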

04

Spans & Tokio Metrics Flow

Traces annotated with #[tracing::instrument] and manual span! calls are exported automatically. Tokio runtime metrics (task poll counts, scheduler steal counts, worker thread counts) are reported every 10 seconds.

Capabilities

What You Get Out of the Box

Tokio Runtime Metrics

Worker thread count, active task count, scheduler steal counts, task poll durations, and I/O driver event counts from the tokio-metrics crate. TigerOps surfaces Tokio runtime bottlenecks before they affect latency.

tracing-subscriber Integration

TigerOps registers as a tracing-subscriber Layer, meaning all existing #[tracing::instrument] annotations and span!/event! macros are automatically exported. No code changes required for already-instrumented services.
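Because it is just a Layer, you can also compose it with an existing subscriber stack instead of calling tigerops::init(). A hedged sketch — the `tigerops::layer()` constructor is assumed here; check the crate docs for the exact name:

```rust
use tracing_subscriber::prelude::*; // brings `.with()` and `.init()` into scope

fn main() {
    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer()) // keep local stdout logging
        .with(tigerops::layer())                // assumed constructor: export to TigerOps
        .init();

    // Existing #[instrument] / span! instrumentation is picked up unchanged.
    tracing::info!("subscriber stack initialized");
}
```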

Async Context Propagation

Trace context is propagated correctly across .await boundaries, tokio::spawn tasks, and rayon thread pools using the OpenTelemetry context API. Child tasks correctly link back to their parent span.
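For hand-spawned tasks the standard tracing-crate pattern applies: attach the span to the future so the child task links back to its parent. This uses the tracing crate's Instrument trait, not anything TigerOps-specific:

```rust
use tracing::Instrument;

async fn handle_request() {
    let span = tracing::info_span!("request.fanout");

    // A spawned task would otherwise start with no active span;
    // `.instrument` re-enters `span` every time the future is polled.
    let task = tokio::spawn(
        async {
            tracing::info!("runs inside request.fanout");
        }
        .instrument(span),
    );

    task.await.unwrap();
}
```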

Axum & Actix Middleware

Drop-in tower middleware for Axum and Actix-web that creates root spans for every HTTP request with route, method, status, and server-timing headers. W3C TraceContext extraction from incoming requests is automatic.

SQLx & Diesel Query Spans

Database query spans for SQLx (PostgreSQL, MySQL, SQLite) and Diesel via custom execute hooks. Each query span includes normalized SQL, affected rows, and connection acquisition time.
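"Normalized SQL" here means literal values are replaced with placeholders so identical query shapes group together in the UI. A minimal std-only illustration of the idea — not the crate's actual normalizer, and deliberately naive (it also rewrites digits inside identifiers):

```rust
/// Replace single-quoted string literals and integer literals with `?`
/// so query shapes can be grouped. (Illustrative only.)
fn normalize_sql(sql: &str) -> String {
    let mut out = String::with_capacity(sql.len());
    let mut chars = sql.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\'' {
            // swallow a single-quoted string literal
            for q in chars.by_ref() {
                if q == '\'' { break; }
            }
            out.push('?');
        } else if c.is_ascii_digit() {
            // swallow the rest of the integer literal
            while chars.peek().is_some_and(|n| n.is_ascii_digit()) {
                chars.next();
            }
            out.push('?');
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(
        normalize_sql("SELECT * FROM orders WHERE id = 42 AND state = 'paid'"),
        "SELECT * FROM orders WHERE id = ? AND state = ?"
    );
}
```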

Zero-Overhead in Hot Paths

When sampling is disabled for a trace, span creation is a single atomic check — no allocation occurs. TigerOps uses Rust's tracing crate disabled-span optimization to ensure zero overhead in non-sampled hot paths.
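The "single atomic check" pattern is easy to sketch in plain std Rust: one relaxed atomic load guards all span work, so the unsampled path allocates nothing. This is illustrative of the technique, not the crate's internals:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static SAMPLING_ENABLED: AtomicBool = AtomicBool::new(false);

/// Returns span data only when sampling is on; the disabled path
/// is a single relaxed atomic load with no allocation.
fn record_span(name: &str) -> Option<String> {
    if !SAMPLING_ENABLED.load(Ordering::Relaxed) {
        return None; // hot path: one atomic check, nothing allocated
    }
    Some(format!("span:{name}")) // sampled path pays the allocation cost
}

fn main() {
    assert_eq!(record_span("checkout"), None);

    SAMPLING_ENABLED.store(true, Ordering::Relaxed);
    assert_eq!(record_span("checkout").as_deref(), Some("span:checkout"));
}
```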

Configuration

Install & Initialize

One crate. One init call. Full Rust async observability.

Cargo.toml + src/main.rs
# Add TigerOps to your project
cargo add tigerops

# Cargo.toml
[dependencies]
tigerops = "0.1"
tokio = { version = "1", features = ["full"] }
tracing = "0.1"
axum = "0.7"

# Set environment variables
export TIGEROPS_API_KEY="your-api-key"
export TIGEROPS_SERVICE_NAME="my-rust-service"
export TIGEROPS_ENVIRONMENT="production"

# src/main.rs
use axum::{routing::get, Router};
use tigerops::TigerOpsBuilder;
use tracing::instrument;

#[tokio::main]
async fn main() {
    // Initialize tracing + metrics + OTLP export
    let _guard = TigerOpsBuilder::new()
        .service_name("my-rust-service")
        .tokio_metrics(true)          // Enable Tokio runtime metrics
        .sample_rate(1.0)             // 100% in dev
        .init()
        .expect("TigerOps init failed");

    let app = Router::new()
        .route("/orders/:id", get(get_order));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    // Run until Ctrl-C so the shutdown call below is actually reached
    axum::serve(listener, app)
        .with_graceful_shutdown(async { tokio::signal::ctrl_c().await.unwrap() })
        .await
        .unwrap();

    // Flush remaining spans before exit
    tigerops::shutdown().await;
}

// Automatic span from #[instrument]
#[instrument(fields(order.id = %id))]
async fn get_order(
    axum::extract::Path(id): axum::extract::Path<String>,
) -> axum::Json<serde_json::Value> {
    let order = fetch_order_from_db(&id).await;
    axum::Json(order)
}

// Manual span for custom operations
async fn process_payment(order_id: &str, amount: u64) {
    use tracing::Instrument;
    let span = tracing::info_span!("payment.process",
        "order.id" = order_id,
        "payment.amount" = amount,
    );

    // Don't hold `span.enter()` across an .await — instrument the
    // future instead so the span context survives suspension points.
    async {
        // business logic here
        charge_card(order_id, amount).await;
    }
    .instrument(span)
    .await;
}
FAQ

Common Questions

Does tigerops work with both tokio and async-std runtimes?

The default tigerops crate targets tokio 1.x. An async-std feature flag (tigerops = { version = "0.1", features = ["async-std"] }) is available for async-std 1.x environments. The underlying OpenTelemetry spans and metrics APIs are runtime-agnostic.

How does TigerOps handle graceful shutdown without dropping spans?

Call tigerops::shutdown().await before exiting main(). This flushes the in-memory span buffer and waits for the OTLP exporter to confirm delivery. A configurable timeout (default 5s) prevents indefinite blocking.
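Wiring that into a typical Tokio binary might look like the following sketch — tigerops::init() and tigerops::shutdown() are as described above, the Ctrl-C handling is standard Tokio, and run_server is a stand-in for your service loop:

```rust
#[tokio::main]
async fn main() {
    let _guard = tigerops::init().expect("TigerOps init failed");

    tokio::select! {
        _ = run_server() => {}            // service loop exited on its own
        _ = tokio::signal::ctrl_c() => {} // SIGINT received
    }

    // Flush buffered spans; returns once the OTLP exporter confirms
    // delivery or the (default 5s) timeout elapses.
    tigerops::shutdown().await;
}

async fn run_server() { /* your accept loop */ }
```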

Can I use TigerOps with existing OpenTelemetry Rust instrumentation?

Yes. tigerops::init() returns an OtelGuard that wraps the standard opentelemetry GlobalTracerProvider. If you already call opentelemetry::global::set_tracer_provider(), you can instead add TigerOps as a second SpanExporter to your existing pipeline.

Does the crate support no_std or embedded environments?

No. The tigerops crate requires std and a Tokio or async-std runtime for its background export task. For constrained environments, use the opentelemetry crate directly with a custom synchronous exporter.

How do I instrument gRPC services built with tonic?

Add the tigerops-tonic feature flag and call TigerOpsInterceptor::new() as a tonic interceptor. It automatically extracts incoming W3C TraceContext headers and creates server spans. Client-side interceptors inject propagation headers on outbound calls.

Get Started

Full Rust Observability in One Cargo Add

Tokio metrics, async trace propagation, tracing-subscriber integration — zero overhead when sampling is off.