
Going to Production

When deploying ClickStack in production, there are several additional considerations to ensure security, stability, and correct configuration. These vary depending on whether you are using the Open Source or Managed distribution.

For production deployments, Managed ClickStack is recommended. It applies industry-standard security practices by default - including enhanced encryption, authentication, secure connectivity, and managed access controls - and provides the following benefits:

  • Automatic scaling of compute independent of storage
  • Low-cost and effectively unlimited retention based on object storage
  • The ability to independently isolate read and write workloads with Warehouses
  • Integrated authentication
  • Automated backups
  • Seamless upgrades

Follow these best practices for ClickHouse Cloud when using Managed ClickStack.

Secure ingestion

When deployed outside of the Open Source distributions, the ClickStack OpenTelemetry Collector is not secured by default and does not require authentication on its OTLP ports.

To secure ingestion, specify an authentication token when deploying the collector using the OTLP_AUTH_TOKEN environment variable. See "Securing the collector" for further details.
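As a sketch, assuming the collector runs via Docker with the default OTLP ports (4317 gRPC, 4318 HTTP) - the image name, endpoint, and header shown here are illustrative and should be matched to your deployment:

```shell
# Illustrative only: set an auth token when starting the collector.
# Image name, endpoint variable, and ports are assumptions.
docker run -d \
  -e OTLP_AUTH_TOKEN='<strong-random-token>' \
  -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-otel-collector

# Clients must then present the same token with each OTLP request,
# e.g. as an authorization header on the HTTP endpoint:
curl -s http://localhost:4318/v1/logs \
  -H 'authorization: <strong-random-token>' \
  -H 'content-type: application/json' \
  -d '{"resourceLogs":[]}'
```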

Create an ingestion user

It's recommended to create a dedicated user for the OTel collector to use when ingesting into Managed ClickHouse, and to ensure ingestion is sent to a specific database, e.g. otel. See "Creating an ingestion user" for further details.
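A minimal sketch of such a user, run with clickhouse-client against your Cloud service - the database name, user name, password, and granted privileges are illustrative assumptions and may need adjusting to your schema:

```shell
# Illustrative sketch: create a dedicated database and a least-privilege
# ingestion user for the collector. All names and passwords are placeholders.
clickhouse-client --host <your-service>.clickhouse.cloud --secure \
  --user default --password '<admin-password>' --multiquery <<'SQL'
CREATE DATABASE IF NOT EXISTS otel;
CREATE USER IF NOT EXISTS otel_ingest
  IDENTIFIED WITH sha256_password BY '<strong-password>';
-- The collector creates its tables on first run, then inserts into them.
GRANT CREATE TABLE, INSERT, SELECT ON otel.* TO otel_ingest;
SQL
```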

Configure Time To Live (TTL)

Ensure the Time To Live (TTL) has been appropriately configured for your Managed ClickStack deployment. This controls how long data is retained; the default of 3 days often needs to be modified.
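For example, a sketch of extending retention on a log table to 30 days - the database, table, and timestamp column names here assume a default-style ClickStack schema and should be verified against your own tables first:

```shell
# Illustrative sketch: extend log retention to 30 days.
# otel.otel_logs and the TimestampTime column are assumptions;
# check your actual schema before running.
clickhouse-client --host <your-service>.clickhouse.cloud --secure \
  --user default --password '<admin-password>' --query \
  "ALTER TABLE otel.otel_logs MODIFY TTL TimestampTime + toIntervalDay(30)"
```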

Estimating Resources

When deploying Managed ClickStack, it is important to provision sufficient compute resources to handle both ingestion and query workloads. The estimates below provide a baseline starting point based on the volume of observability data you plan to ingest.

These recommendations are based on the following assumptions:

  • Data volume refers to uncompressed ingest volume per month and applies to both logs and traces.
  • Query patterns are typical for observability use cases, with most queries targeting recent data, usually the last 24 hours.
  • Ingestion is relatively uniform across the month. If you expect bursty traffic or spikes, you should provision additional headroom.
  • Storage is handled separately via ClickHouse Cloud object storage and is not a limiting factor for retention. We assume data retained for longer periods is infrequently accessed.

More compute may be required for access patterns that regularly query longer time ranges, perform heavy aggregations, or support a high number of concurrent users.

Monthly ingest volume    Recommended compute
< 10 TB / month          2 vCPU × 3 replicas
10–50 TB / month         4 vCPU × 3 replicas
50–100 TB / month        8 vCPU × 3 replicas
100–500 TB / month       30 vCPU × 3 replicas
1 PB+ / month            59 vCPU × 3 replicas
Note: These values are estimates only and should be used as an initial baseline. Actual requirements depend on query complexity, concurrency, retention policies, and variance in ingestion throughput. Always monitor resource usage and scale as needed.
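For scripting capacity checks, the tiers above can be expressed as a small lookup - a sketch that takes whole TB ingested per month and returns the per-replica vCPU tier, rounding the unspecified 500 TB–1 PB range up to the top tier:

```shell
# Map monthly uncompressed ingest volume (whole TB) to the recommended
# per-replica vCPU count from the sizing table (all tiers use 3 replicas).
recommended_vcpu() {
  tb="$1"
  if [ "$tb" -lt 10 ]; then echo 2
  elif [ "$tb" -lt 50 ]; then echo 4
  elif [ "$tb" -lt 100 ]; then echo 8
  elif [ "$tb" -lt 500 ]; then echo 30
  else echo 59   # table leaves 500 TB-1 PB unspecified; round up
  fi
}

recommended_vcpu 25   # prints 4
```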

Isolating observability workloads

If you are adding ClickStack to an existing ClickHouse Cloud service that already supports other workloads, such as real-time application analytics, isolating observability traffic is strongly recommended.

Use Managed Warehouses to create a child service dedicated to ClickStack. This allows you to:

  • Isolate ingest and query load from existing applications
  • Scale observability workloads independently
  • Prevent observability queries from impacting production analytics
  • Share the same underlying datasets across services when needed

This approach ensures your existing workloads remain unaffected while allowing ClickStack to scale independently as observability data grows.

For larger deployments or custom sizing guidance, please contact support for a more precise estimate.