C / C++ Integration
Native OpenTelemetry C++ SDK integration for C and C++ applications. Manual RAII span instrumentation, gRPC interceptor tracing, system-level process metrics, and a pure C API for legacy codebases.
How It Works
Add via CMake or vcpkg
Add the tigerops-cpp package via vcpkg (vcpkg install tigerops-cpp) or fetch it with CMake FetchContent. The package bundles the OpenTelemetry C++ SDK, OTLP gRPC and HTTP exporters, and TigerOps-specific initialization helpers for both C and C++ projects.
Initialize the Tracer Provider
Call tigerops::init() at program startup before creating any spans. The function accepts a TigerOpsConfig struct with api_key, service_name, and environment fields. It configures the OTLP exporter, sampler, and batch span processor with TigerOps defaults.
Set Environment Variables
Set TIGEROPS_API_KEY, TIGEROPS_SERVICE_NAME, and TIGEROPS_ENVIRONMENT. The SDK reads these at tigerops::init() time if the corresponding config struct fields are empty. For embedded or containerized deployments, prefer the config struct to avoid depending on environment variables.
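A typical shell setup for the fallback path might look like this (the key value is a placeholder):

```shell
# Read by tigerops::init() when the corresponding config struct fields are empty
export TIGEROPS_API_KEY="<your-api-key>"
export TIGEROPS_SERVICE_NAME="my-cpp-service"
export TIGEROPS_ENVIRONMENT="production"
```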
Manual Spans & gRPC Traces Flow
TigerOps begins receiving manually created spans from your C++ code, gRPC interceptor spans for client and server calls, system metrics (CPU, RSS, page faults), and custom metrics created via the OpenTelemetry C++ Meter API.
What You Get Out of the Box
OpenTelemetry C++ SDK Integration
TigerOps wraps the official OpenTelemetry C++ SDK with TigerOps-specific defaults: OTLP/HTTP exporter, 512-span batch processor, and a parent-based sampler. The tracer provider is a global singleton compatible with all OTel C++ instrumentation libraries.
Manual Span Instrumentation
Use the tigerops::Span RAII wrapper to create spans around critical code sections. tigerops::Span span("db.query") starts a span; it ends automatically when the object goes out of scope. SetAttribute, AddEvent, and SetStatus are available on the span object.
gRPC Client & Server Interceptors
Register TigerOps gRPC interceptors for both client channels and server builders. Client interceptors inject a W3C traceparent header into outgoing metadata; server interceptors extract the context and create server spans. Unary, client-streaming, server-streaming, and bidirectional-streaming RPCs are all traced.
System & Process Metrics
TigerOps C++ SDK emits process-level metrics via procfs (Linux) and getrusage: RSS memory, virtual memory, CPU user/system time, page fault count, voluntary and involuntary context switch rates. Metrics are exported on a configurable interval.
Async & Thread-Safe Span Propagation
Trace context is stored in thread_local storage for synchronous code. For async frameworks (Boost.Asio, libev, libuv), use tigerops::Context::attach() and tigerops::Context::detach() to propagate context across I/O callbacks and completion handlers.
Custom Metrics via OTel Meter API
Create counters, histograms, and gauges using the standard OpenTelemetry C++ Meter API. TigerOps exports all custom metrics alongside traces to the same OTLP endpoint. Prometheus-compatible metric names are supported via the semantic conventions helper.
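Assuming the opentelemetry-cpp SDK bundled by the package, creating custom instruments follows the standard Meter API; the instrument names and attributes below are illustrative:

```cpp
#include <map>
#include <string>

#include <opentelemetry/context/context.h>
#include <opentelemetry/metrics/provider.h>

void recordOrderMetrics(double query_ms) {
  // The global meter provider is assumed to be installed by tigerops::init().
  auto provider = opentelemetry::metrics::Provider::GetMeterProvider();
  auto meter = provider->GetMeter("my-cpp-service", "1.0.0");

  // Counter: monotonically increasing count of processed orders,
  // using a Prometheus-compatible name.
  auto counter = meter->CreateUInt64Counter("orders_processed_total");
  std::map<std::string, std::string> attrs{{"region", "us-east-1"}};
  counter->Add(1, attrs);

  // Histogram: query latency distribution in milliseconds.
  auto histogram = meter->CreateDoubleHistogram("db_query_duration_ms");
  histogram->Record(query_ms, attrs, opentelemetry::context::Context{});
}
```

In production code, create each instrument once at startup and reuse it; creating instruments per call as in this sketch adds avoidable lookup overhead.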
Install & Initialize
CMakeLists.txt integration, tigerops::init(), and RAII span instrumentation for C++.
# Install via vcpkg
vcpkg install tigerops-cpp
# CMakeLists.txt
cmake_minimum_required(VERSION 3.14)
project(my_cpp_app)
find_package(tigerops-cpp CONFIG REQUIRED)
add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE tigerops::tigerops-cpp)
// main.cpp
#include <tigerops/tigerops.h>
#include <tigerops/span.h>
#include <cstdlib>

int main() {
    // Initialize TigerOps at startup
    tigerops::TigerOpsConfig config;
    if (const char* key = std::getenv("TIGEROPS_API_KEY")) {
        config.api_key = key;  // std::getenv returns nullptr if the variable is unset
    }
    config.service_name = "my-cpp-service";
    config.environment = "production";
    tigerops::init(config);

    // RAII span — ends automatically at scope exit
    {
        tigerops::Span span("request.process");
        span.SetAttribute("request.id", "req-123");
        span.SetAttribute("user.id", 42);

        // Nested child span; the lambda keeps rows in scope after dbSpan ends
        auto rows = [&] {
            tigerops::Span dbSpan("db.query");
            dbSpan.SetAttribute("db.statement", "SELECT * FROM orders WHERE user_id = ?");
            auto result = db.execute("SELECT * FROM orders WHERE user_id = ?", 42);  // db: your application's database handle
            dbSpan.SetAttribute("db.rows_affected", (int64_t)result.size());
            return result;
        }();  // dbSpan ends when the lambda returns

        processResults(rows);
        span.SetStatus(tigerops::StatusCode::Ok);
    }  // span ends here

    // Flush remaining spans before exit
    tigerops::shutdown();
    return 0;
}

Common Questions
Which C++ standards and compilers are supported?
C++14 and later are supported. GCC 9+, Clang 10+, and MSVC 2019+ are tested. The library uses C++17 features when available (if constexpr, std::string_view) with C++14 fallbacks. CMake 3.14+ and Bazel 5+ are supported as build systems.
How do I instrument C code without C++ wrappers?
The tigerops-c header (tigerops/tigerops.h) provides a pure C API: tigerops_init(), tigerops_span_start(), tigerops_span_end(), tigerops_span_set_attr_string(), and tigerops_span_set_status(). The C API is a thin wrapper over the C++ implementation with a stable ABI.
Does TigerOps support cross-compilation for embedded and RTOS targets?
The full SDK requires dynamic allocation and is not suitable for bare-metal targets. For Linux-based embedded systems (Yocto, Buildroot) with glibc or musl, the SDK compiles normally. A stripped-down no_std variant that writes spans to a serial port or UDP socket is available for constrained environments.
How does gRPC interceptor tracing handle TLS and mTLS connections?
The interceptors inject trace context into gRPC metadata, not into TLS handshake data. TLS and mTLS configurations are orthogonal to TigerOps tracing. The interceptor works with all gRPC channel credential types including InsecureChannelCredentials, SslCredentials, and CompositeChannelCredentials.
What is the overhead of the RAII Span wrapper?
The tigerops::Span constructor and destructor each take under 200 nanoseconds when tracing is enabled at the default 10% sampling rate. At 0% sampling, the overhead is under 5 nanoseconds due to the parent-based sampler short-circuit. Span creation does not allocate on the heap in the common path.
Native C / C++ Observability via OpenTelemetry
RAII span instrumentation, gRPC interceptor tracing, system metrics, and a pure C API — built for performance-critical native applications.