journald Integration
Ingest systemd journal logs into TigerOps with structured metadata fully preserved. Native Journal API integration with cursor-based position tracking for zero log loss.
How It Works
Install TigerOps Agent
Install the TigerOps agent on your systemd-based Linux host. The agent includes a journald reader that uses the native Journal API (sd_journal_open) for reliable cursor-based log tracking.
Configure Journal Reader
Enable the journald integration in the TigerOps agent config. Specify which systemd units to monitor, set the cursor save path for crash recovery, and configure maximum batch size.
Preserve Structured Fields
TigerOps reads all journal fields: MESSAGE, SYSLOG_IDENTIFIER, _PID, _UID, _SYSTEMD_UNIT, _COMM, PRIORITY, and user-defined fields set by journald-aware applications via sd_journal_send.
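The split between journald-added ("trusted") fields and application-set fields is visible in the JSON export format `journalctl -o json` produces. A minimal sketch, with illustrative entry contents, showing how the two groups separate on the leading underscore:

```python
import json

# A single journal entry as emitted by `journalctl -o json` (one JSON
# object per line). Trusted fields added by journald start with "_";
# the rest are supplied by the logging application.
raw_entry = '''{"MESSAGE": "accepted connection",
    "PRIORITY": "6",
    "SYSLOG_IDENTIFIER": "myapp",
    "_PID": "4242",
    "_UID": "1000",
    "_SYSTEMD_UNIT": "myapp.service",
    "_COMM": "myapp"}'''

entry = json.loads(raw_entry)

# Split trusted (journald-added) fields from application-set fields.
trusted = {k: v for k, v in entry.items() if k.startswith("_")}
user = {k: v for k, v in entry.items() if not k.startswith("_")}

print(sorted(trusted))   # ['_COMM', '_PID', '_SYSTEMD_UNIT', '_UID']
print(user["MESSAGE"])   # accepted connection
```

Trusted fields cannot be forged by the logging process, which is why they make reliable filter and alert keys.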
Alert on Unit Failures
Configure alerts for systemd unit state changes (failed, activating, deactivating) and OOM kills. TigerOps correlates journal events with host metrics for root cause analysis.
What You Get Out of the Box
Native Journal API Integration
TigerOps agent uses sd_journal_open and sd_journal_next directly — no journalctl subprocess required. Cursor-based position tracking ensures no log duplication or loss across agent restarts.
Full Structured Field Preservation
All Journal entry fields are preserved as indexed labels — including _SYSTEMD_UNIT, _HOSTNAME, _BOOT_ID, _MACHINE_ID, _TRANSPORT, SYSLOG_FACILITY, and application-defined journal fields.
Systemd Unit State Monitoring
TigerOps parses systemd state-transition events from the journal and tracks unit-level state history (activating, active, deactivating, failed). Alert on unexpected unit failures or repeated restart loops.
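One way a restart-loop detector might work: count restarts inside a sliding time window and flag units that exceed a threshold. A sketch with illustrative thresholds (the function name and defaults are not part of the TigerOps API):

```python
from datetime import datetime, timedelta

def is_restart_loop(restart_times, max_restarts=5, window=timedelta(minutes=10)):
    """Return True if any window-sized span contains more than
    max_restarts restarts (a likely crash loop)."""
    times = sorted(restart_times)
    for i, start in enumerate(times):
        # Count restarts falling inside [start, start + window].
        in_window = sum(1 for t in times[i:] if t - start <= window)
        if in_window > max_restarts:
            return True
    return False

base = datetime(2024, 1, 1, 12, 0)
# Seven restarts within three minutes: a crash loop.
looping = [base + timedelta(seconds=30 * i) for i in range(7)]
# Three restarts spread over a day: routine deploys.
healthy = [base + timedelta(hours=8 * i) for i in range(3)]

print(is_restart_loop(looping))  # True
print(is_restart_loop(healthy))  # False
```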
Kernel and OOM Message Parsing
Capture kernel messages (_TRANSPORT=kernel) and OOM killer events from the journal. TigerOps correlates OOM events with container and process memory metrics for precise root cause analysis.
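Kernel OOM-killer lines follow a recognizable pattern that can be parsed with a regular expression. A sketch against the message shape recent kernels emit (suffix fields vary by kernel version, so the optional group is hedged):

```python
import re

# Kernel OOM-killer lines (_TRANSPORT=kernel) look like this on
# recent kernels; the trailing memory fields vary by version.
line = ("Out of memory: Killed process 4242 (myapp) "
        "total-vm:1048576kB, anon-rss:524288kB, file-rss:0kB")

OOM_RE = re.compile(
    r"Out of memory: Killed process (?P<pid>\d+) \((?P<comm>[^)]+)\)"
    r"(?:.*?anon-rss:(?P<anon_rss_kb>\d+)kB)?"
)

m = OOM_RE.search(line)
if m:
    event = {
        "pid": int(m.group("pid")),
        "comm": m.group("comm"),
        "anon_rss_kb": int(m.group("anon_rss_kb") or 0),
    }
    print(event)  # {'pid': 4242, 'comm': 'myapp', 'anon_rss_kb': 524288}
```

The extracted pid and resident-set size are what make correlation with process memory metrics possible.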
Coredump Journal Ingestion
Ingest systemd-coredump journal entries with full executable path, signal number, and process metadata. Link coredump events to application error spikes in your observability timeline.
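systemd-coredump attaches its crash metadata as COREDUMP_* journal fields. A sketch of turning one such entry into a human-readable summary (field values are illustrative, and the real entries carry many more COREDUMP_* fields):

```python
import signal

# A subset of the journal fields systemd-coredump attaches to a
# crash entry (values illustrative).
entry = {
    "MESSAGE": "Process 4242 (myapp) of user 1000 dumped core.",
    "COREDUMP_PID": "4242",
    "COREDUMP_UID": "1000",
    "COREDUMP_SIGNAL": "11",
    "COREDUMP_EXE": "/usr/local/bin/myapp",
    "COREDUMP_COMM": "myapp",
}

# Map the raw signal number to its symbolic name for readability.
sig = int(entry["COREDUMP_SIGNAL"])
summary = (f"{entry['COREDUMP_EXE']} (pid {entry['COREDUMP_PID']}) "
           f"crashed with {signal.Signals(sig).name}")
print(summary)  # /usr/local/bin/myapp (pid 4242) crashed with SIGSEGV
```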
Multi-Boot Log Access
TigerOps agent can read historical journal entries from previous boot IDs on restart. Capture logs from the boot cycle before a crash to reconstruct the sequence of events leading to a failure.
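Because every entry carries a `_BOOT_ID`, logs from the boot before a crash can be isolated by grouping on that field. A minimal sketch with illustrative boot IDs and messages:

```python
# Journal entries are time-ordered and tagged with _BOOT_ID, so the
# boot before the current one can be recovered by grouping (IDs here
# are illustrative; real boot IDs are 128-bit hex strings).
entries = [
    {"_BOOT_ID": "aaa111", "MESSAGE": "service started"},
    {"_BOOT_ID": "aaa111", "MESSAGE": "disk errors detected"},
    {"_BOOT_ID": "bbb222", "MESSAGE": "system booted after crash"},
]

# Boot IDs in order of first appearance.
boot_order = list(dict.fromkeys(e["_BOOT_ID"] for e in entries))
previous_boot = boot_order[-2]  # the boot before the current one

crash_context = [e["MESSAGE"] for e in entries if e["_BOOT_ID"] == previous_boot]
print(crash_context)  # ['service started', 'disk errors detected']
```

This is the same view `journalctl -b -1` gives interactively: the tail of the previous boot often holds the cause of the crash.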
TigerOps Agent journald Config
Configure the TigerOps agent to read from systemd journal with cursor-based tracking.
integrations:
journald:
enabled: true
# Journal path (default: system journal)
journal_path: /var/log/journal
# Cursor file for restart recovery
cursor_path: /var/lib/tigerops-agent/journald.cursor
# Read from current position on first run
# Options: head | tail | cursor
seek_position: tail
# Batch and flush settings
max_batch_size: 1000
flush_interval: 5s
# Filter to specific units (empty = all)
unit_filters:
- nginx.service
- postgresql.service
- myapp.service
# Include kernel messages (_TRANSPORT=kernel)
include_kernel: true
# Include coredump events
include_coredumps: true
# Additional fields to extract as labels
extra_fields:
- SYSLOG_IDENTIFIER
- _SYSTEMD_UNIT
- _COMM
- _EXE
    - _CMDLINE
Common Questions
How does TigerOps avoid duplicating journal logs after an agent restart?
The TigerOps agent saves the Journal cursor position to disk after each batch flush. On restart, the agent resumes reading from the saved cursor, so no entries are lost; in the worst case (a crash between a flush and the cursor save), the last batch may be re-sent, giving at-least-once delivery.
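The cursor-file discipline described above can be sketched in a few lines. The key detail is the atomic write (temp file plus rename), so a crash mid-write never corrupts the resume point. Paths and the cursor string here are illustrative; real journal cursors are opaque strings returned by the Journal API:

```python
import os
import tempfile

def save_cursor(path, cursor):
    # Write atomically: temp file + rename, so a crash mid-write
    # never leaves a truncated cursor file behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        f.write(cursor)
    os.replace(tmp, path)

def load_cursor(path):
    # Return the saved cursor, or None on first run.
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

cursor_path = os.path.join(tempfile.mkdtemp(), "journald.cursor")
print(load_cursor(cursor_path))            # None: first run, seek per config
save_cursor(cursor_path, "s=abc123;i=1f")  # saved after a successful flush
print(load_cursor(cursor_path))            # s=abc123;i=1f
```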
Can TigerOps monitor only specific systemd units?
Yes. Configure unit_filters in the TigerOps agent journald section to include or exclude specific unit names (e.g., nginx.service, postgresql.service). Filtering happens at the Journal API level for efficiency.
Does TigerOps support journald field encryption (SEALING)?
Forward Secure Sealing (FSS) provides tamper detection, not encryption: sealed journal entries remain readable through the standard Journal API, so TigerOps ingests them normally. Run journalctl --verify to confirm journal integrity before trusting sealed entries.
How does TigerOps handle very high journald write rates?
The TigerOps agent batches journal entries up to a configurable max_batch_size (default 1000 entries) and flushes every flush_interval (default 5s). High-throughput hosts can increase batch size to reduce HTTP overhead.
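The batching behavior described above (flush when the buffer is full or the interval elapses) can be sketched as follows; the class and defaults mirror the config keys but are not the agent's actual implementation:

```python
import time

class Batcher:
    """Buffer entries and flush when the batch is full or the flush
    interval has elapsed (mirrors max_batch_size / flush_interval)."""

    def __init__(self, flush, max_batch_size=1000, flush_interval=5.0):
        self.flush_fn = flush
        self.max_batch_size = max_batch_size
        self.flush_interval = flush_interval
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, entry):
        self.buffer.append(entry)
        full = len(self.buffer) >= self.max_batch_size
        stale = time.monotonic() - self.last_flush >= self.flush_interval
        if full or stale:
            self.flush_fn(self.buffer)
            self.buffer = []
            self.last_flush = time.monotonic()

flushed = []
b = Batcher(flushed.append, max_batch_size=3, flush_interval=60.0)
for i in range(7):
    b.add({"MESSAGE": f"entry {i}"})

print([len(batch) for batch in flushed])  # [3, 3]
print(len(b.buffer))                      # 1 entry awaiting the next flush
```

Raising max_batch_size trades flush latency for fewer, larger HTTP requests, which is why it helps on high-throughput hosts.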
Can I forward journald logs to TigerOps without the TigerOps agent?
Yes. Use systemd-journal-remote with a TigerOps syslog or OTLP endpoint, or configure journald to forward to syslog and use rsyslog with the omhttp module. The TigerOps agent is recommended for full structured field support.
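For the rsyslog route, a sketch of what the omhttp forwarding config might look like. The endpoint hostname, port, and path here are hypothetical placeholders, and this assumes journald is already forwarding to syslog (ForwardToSyslog=yes in journald.conf); consult the rsyslog omhttp documentation for authentication and retry options:

```
# /etc/rsyslog.d/50-tigerops.conf -- sketch only; replace server and
# restpath with your actual TigerOps ingest endpoint.
module(load="omhttp")

template(name="tigerops_json" type="list") {
    property(name="msg" format="jsonr")
}

action(type="omhttp"
       server="ingest.tigerops.example"
       serverport="443"
       usehttps="on"
       restpath="v1/logs"
       template="tigerops_json")
```

Note that syslog forwarding flattens entries to plain messages, which is why the agent path is preferred when the structured journal fields matter.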
Unlock the Full Power of systemd Journal Logs
Native Journal API integration, structured field preservation, and AI unit failure detection. Install in minutes.