Getting Started with Managed ClickStack

Beta feature.

The easiest way to get started is by deploying Managed ClickStack on ClickHouse Cloud, which provides a fully managed, secure backend while retaining complete control over ingestion, schema, and observability workflows. This removes the need to operate ClickHouse yourself and delivers a range of benefits:

  • Automatic scaling of compute, independent of storage
  • Low-cost and effectively unlimited retention based on object storage
  • The ability to independently isolate read and write workloads with warehouses
  • Integrated authentication
  • Automated backups
  • Security and compliance features
  • Seamless upgrades

Sign up for ClickHouse Cloud

To create a Managed ClickStack service in ClickHouse Cloud, first complete the initial step of the ClickHouse Cloud quickstart guide.

Scale vs Enterprise

We recommend the Scale tier for most ClickStack workloads. Choose the Enterprise tier if you require advanced security features such as SAML, CMEK, or HIPAA compliance; it also offers custom hardware profiles for very large ClickStack deployments. In these cases, we recommend contacting support.

Select your cloud provider and region.

When specifying CPU and memory, estimate the resources based on your expected ClickStack ingestion throughput. The table below provides sizing guidance.

Monthly ingest volume    Recommended compute
< 10 TB / month          2 vCPU × 3 replicas
10–50 TB / month         4 vCPU × 3 replicas
50–100 TB / month        8 vCPU × 3 replicas
100–500 TB / month       30 vCPU × 3 replicas
1 PB+ / month            59 vCPU × 3 replicas
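
For example, 50 TB of uncompressed data per month averages roughly 1.7 TB per day, or about 20 MB/s of sustained ingest, which falls into the 50–100 TB band and maps to 8 vCPU × 3 replicas.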

These recommendations are based on the following assumptions:

  • Data volume refers to uncompressed ingest volume per month and applies to both logs and traces.
  • Query patterns are typical for observability use cases, with most queries targeting recent data, usually the last 24 hours.
  • Ingestion is relatively uniform across the month. If you expect bursty traffic or spikes, you should provision additional headroom.
  • Storage is handled separately via ClickHouse Cloud object storage and is not a limiting factor for retention. We assume data retained for longer periods is infrequently accessed.

More compute may be required for access patterns that regularly query longer time ranges, perform heavy aggregations, or support a high number of concurrent users.

Although two larger replicas can meet the CPU and memory requirements for a given ingestion throughput, we recommend spreading the same total capacity across three replicas where possible to improve service redundancy.

Note

These values are estimates only and should be used as an initial baseline. Actual requirements depend on query complexity, concurrency, retention policies, and variance in ingestion throughput. Always monitor resource usage and scale as needed.

Once you have specified the requirements, your Managed ClickStack service will take several minutes to provision. Feel free to explore the rest of the ClickHouse Cloud console whilst waiting for provisioning.

Once provisioning is complete, the 'ClickStack' option on the left menu will be enabled.

Set up ingestion

Once your service has been provisioned, ensure the service is selected and click "ClickStack" in the left menu.

Select "Start Ingestion" and you'll be prompted to select an ingestion source. Managed ClickStack supports OpenTelemetry and Vector as its main ingestion sources. However, users are also free to send data directly to ClickHouse in their own schema using any of the ClickHouse Cloud support integrations.

OpenTelemetry recommended

Using OpenTelemetry as the ingestion format is strongly recommended. It provides the simplest and most optimized experience, with out-of-the-box schemas specifically designed to work efficiently with ClickStack.

To send OpenTelemetry data to Managed ClickStack, we recommend using an OpenTelemetry Collector. The collector acts as a gateway that receives OpenTelemetry data from your applications (and other collectors) and forwards it to ClickHouse Cloud.

If you don't already have one running, start a collector using the steps below. If you have existing collectors, a configuration example is also provided.

Start a collector

The following assumes the recommended path of using the ClickStack distribution of the OpenTelemetry Collector, which includes additional processing and is optimized specifically for ClickHouse Cloud. If you're looking to use your own OpenTelemetry Collector, see "Configure existing collectors."

To get started quickly, copy and run the Docker command shown in the UI.

The command comes with your connection credentials pre-populated.

Deploying to production

While this command uses the default user to connect to Managed ClickStack, you should create a dedicated user and update your configuration accordingly before going to production.

Running this single command starts the ClickStack collector with OTLP endpoints exposed on ports 4317 (gRPC) and 4318 (HTTP). If you already have OpenTelemetry instrumentation and agents, you can immediately begin sending telemetry data to these endpoints.
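
As an illustration, the command generally takes the shape sketched below. This is a sketch only: the image name and environment variable names are assumptions based on the standalone ClickStack collector distribution, and the placeholders mirror those used in the configuration later on this page, so always prefer the pre-populated command from the console.

# Illustrative sketch only: copy the exact, pre-populated command from the
# ClickHouse Cloud console. The image name and environment variable names
# below are assumptions based on the standalone ClickStack collector distribution.
docker run --rm \
  -e CLICKHOUSE_ENDPOINT='<clickhouse_cloud_endpoint>' \
  -e CLICKHOUSE_USER='default' \
  -e CLICKHOUSE_PASSWORD='<your_password_here>' \
  -p 4317:4317 \
  -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-otel-collector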

Configure existing collectors

It's also possible to configure your own existing OpenTelemetry Collectors or use your own distribution of the collector.

ClickHouse exporter required

If you're using your own distribution, for example the contrib image, ensure that it includes the ClickHouse exporter.

To help with this, an example OpenTelemetry Collector configuration is provided that uses the ClickHouse exporter with appropriate settings and exposes OTLP receivers. It matches the interfaces and behavior expected by the ClickStack distribution.

An example of this configuration is shown below (environment variables will be pre-populated if copying from the UI):

receivers:
  otlp/hyperdx:
    protocols:
      grpc:
        include_metadata: true
        endpoint: "0.0.0.0:4317"
      http:
        cors:
          allowed_origins: ["*"]
          allowed_headers: ["*"]
        include_metadata: true
        endpoint: "0.0.0.0:4318"
processors:
  batch:
  memory_limiter:
    # 80% of maximum memory up to 2G, adjust for low memory environments
    limit_mib: 1500
    # 25% of limit up to 2G, adjust for low memory environments
    spike_limit_mib: 512
    check_interval: 5s
connectors:
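  # Routes rrweb session-replay events (logs whose attributes match "rr-web.event")
  # to a dedicated pipeline; all other logs take the default pipeline.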
  routing/logs:
    default_pipelines: [logs/out-default]
    error_mode: ignore
    table:
      - context: log
        statement: route() where IsMatch(attributes["rr-web.event"], ".*")
        pipelines: [logs/out-rrweb]
exporters:
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200
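  # Dedicated exporter for rrweb session events, written to the hyperdx_sessions table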
  clickhouse/rrweb:
    database: default
    endpoint: <clickhouse_cloud_endpoint>
    password: <your_password_here>
    username: default
    ttl: 720h
    logs_table_name: hyperdx_sessions
    timeout: 5s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
  clickhouse:
    database: default
    endpoint: <clickhouse_cloud_endpoint>
    password: <your_password_here>
    username: default
    ttl: 720h
    timeout: 5s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s

service:
  pipelines:
    traces:
      receivers: [otlp/hyperdx]
      processors: [memory_limiter, batch]
      exporters: [clickhouse]
    metrics:
      receivers: [otlp/hyperdx]
      processors: [memory_limiter, batch]
      exporters: [clickhouse]
    logs/in:
      receivers: [otlp/hyperdx]
      exporters: [routing/logs]
    logs/out-default:
      receivers: [routing/logs]
      processors: [memory_limiter, batch]
      exporters: [clickhouse]
    logs/out-rrweb:
      receivers: [routing/logs]
      processors: [memory_limiter, batch]
      exporters: [clickhouse/rrweb]

For further details on configuring OpenTelemetry collectors, see "Ingesting with OpenTelemetry."
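
If you want to try the configuration above with the contrib distribution, one option (a sketch, assuming you save it locally as otel-config.yaml and use the otel/opentelemetry-collector-contrib image, which reads its configuration from /etc/otelcol-contrib/config.yaml by default) is to mount the file into the container:

# Sketch: run the contrib collector with the example configuration above.
# Assumes the configuration is saved locally as otel-config.yaml.
docker run --rm \
  -p 4317:4317 \
  -p 4318:4318 \
  -v "$(pwd)/otel-config.yaml:/etc/otelcol-contrib/config.yaml" \
  otel/opentelemetry-collector-contrib:latest

The receivers in the example listen on 0.0.0.0, so publishing ports 4317 and 4318 exposes the same OTLP endpoints as the ClickStack distribution.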

Start ingestion (optional)

If you have existing applications or infrastructure to instrument with OpenTelemetry, navigate to the relevant guides linked from the UI.

To instrument your applications to collect traces and logs, use the supported language SDKs which send data to your OpenTelemetry Collector acting as a gateway for ingestion into Managed ClickStack.
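
For example, most OpenTelemetry language SDKs can be pointed at the collector gateway using only the standard OTLP environment variables. This is a sketch: <collector-host> is a placeholder for wherever your collector runs, and the service name is purely illustrative.

# Standard OpenTelemetry SDK environment variables; <collector-host> is a
# placeholder for the host running your ClickStack collector gateway.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://<collector-host>:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_SERVICE_NAME="my-service"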

Logs can be collected using OpenTelemetry Collectors running in agent mode, forwarding data to the same collector. For Kubernetes monitoring, follow the dedicated guide. For other integrations, see our quickstart guides.

Demo data

Alternatively, if you don't have existing data, try one of our sample datasets.

  • Example dataset - Load an example dataset from our public demo. Diagnose a simple issue.
  • Local files and metrics - Load local files and monitor the system on macOS or Linux using a local OTel collector.

Select 'Launch ClickStack' to access the ClickStack UI (HyperDX). You will be automatically authenticated and redirected.

Data sources will be pre-created for any OpenTelemetry data.


And that’s it — you’re all set. 🎉

Go ahead and explore ClickStack: start searching logs and traces, see how logs, traces, and metrics correlate in real time, build dashboards, explore service maps, uncover event deltas and patterns, and set up alerts to stay ahead of issues.

Next Steps

Record default credentials

If you have not recorded your default credentials during the above steps, navigate to the service, select Connect, and record the password and HTTP/native endpoints. Store these admin credentials securely; they can be reused in later guides.
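
As a quick sanity check, you can confirm the recorded credentials work against the HTTP interface. The command below is a sketch: the hostname is a placeholder and 8443 is the usual ClickHouse Cloud HTTPS port, so substitute the exact endpoint you recorded from Connect.

# Sketch: verify the recorded credentials against the HTTPS interface.
# Replace the host and password with the values recorded from Connect.
curl "https://<your-service-host>:8443/" \
  --user "default:<your_password_here>" \
  --data-binary "SELECT 1"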

To perform tasks such as provisioning new users or adding further data sources, see the deployment guide for Managed ClickStack.