
Ingesting with OpenTelemetry

All data is ingested into ClickStack via an OpenTelemetry (OTel) collector instance, which acts as the primary entry point for logs, metrics, traces, and session data. We recommend using the official ClickStack distribution of the collector for this instance.

Users send data to this collector from language SDKs or from data collection agents that gather infrastructure metrics and logs (such as OTel collectors running in an agent role, or other technologies such as Fluentd or Vector).

Sending OpenTelemetry data

Installing the ClickStack OpenTelemetry collector

To send data to Managed ClickStack, an OTel collector should be deployed in a gateway role. OTel-compatible instrumentation sends events to this collector via OTLP over HTTP or gRPC.

We recommend using the ClickStack distribution of the OpenTelemetry collector, as it provides standardized ingestion, enforced schemas, and out-of-the-box compatibility with the ClickStack UI (HyperDX). Using the default schema enables automatic source detection and preconfigured column mappings.

For further details see "Deploying the collector".
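
As a rough sketch of the gateway deployment, the collector can be run as a container that publishes the OTLP ports; the image name below is a placeholder rather than the official one, so check "Deploying the collector" for the supported image and configuration options:

# Illustrative only: <clickstack-otel-collector-image> is a placeholder, not the official image name.
# Ports 4317 (OTLP gRPC) and 4318 (OTLP HTTP) are published so instrumentation can reach the gateway.
docker run --rm -d \
  --name clickstack-otel-collector \
  -p 4317:4317 \
  -p 4318:4318 \
  <clickstack-otel-collector-image>:latest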

Sending data to the collector

To send data to Managed ClickStack, point your OpenTelemetry instrumentation to the following endpoints made available by the OpenTelemetry collector:

  • HTTP (OTLP): http://localhost:4318
  • gRPC (OTLP): localhost:4317

For language SDKs and telemetry libraries that support OpenTelemetry, simply set the OTEL_EXPORTER_OTLP_ENDPOINT environment variable in your application:

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
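
The other standard OTLP environment variables work the same way; for example, a sketch that also selects the transport protocol and sets a service name (my-service is only an illustrative name):

# Standard OpenTelemetry SDK environment variables; my-service is an example name.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf  # use grpc together with OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=my-service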

If you are deploying a contrib distribution of the OTel collector in the agent role, you can use the OTLP exporter to forward data to the ClickStack collector. An example agent config that reads a structured log file and forwards it over OTLP is shown below.

# clickhouse-agent-config.yaml
receivers:
  filelog:
    include:
      - /opt/data/logs/access-structured.log
    start_at: beginning
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.time_local
          layout: '%Y-%m-%d %H:%M:%S'
exporters:
  # HTTP setup
  otlphttp/hdx:
    endpoint: 'http://localhost:4318'
    compression: gzip

  # gRPC setup (alternative)
  otlp/hdx:
    endpoint: 'localhost:4317'
    compression: gzip
    tls:
      insecure: true # the local gateway endpoint is plaintext; omit if it terminates TLS
processors:
  batch:
    timeout: 5s
    send_batch_size: 1000
service:
  telemetry:
    metrics:
      address: 0.0.0.0:9888 # moved off the default 8888 port, since two collectors run on this host
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlphttp/hdx]
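
With the configuration saved as clickhouse-agent-config.yaml, the agent can then be started against it; the binary name depends on how the contrib distribution was installed, but the official release binary is typically otelcol-contrib:

# Run the contrib collector in the agent role using the config above.
otelcol-contrib --config clickhouse-agent-config.yaml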