· One min read

To execute the same query on all nodes of a ClickHouse Cloud service, you can use clusterAllReplicas.

For example, to get entries from a (node-local) system table from all nodes, you can use:

SELECT ... FROM clusterAllReplicas(default, system.TABLE) ...;
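
For example, a minimal sketch that lists the currently running queries on every node (hostName() identifies which replica each row came from):

SELECT hostName() AS node, query_id, elapsed, query
FROM clusterAllReplicas(default, system.processes)
ORDER BY elapsed DESC;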

Similarly, you can execute the same SYSTEM statement on all nodes with a single statement, by using the ON CLUSTER clause:

SYSTEM ... ON CLUSTER default;

For example for dropping the filesystem cache from all nodes, you can use:

SYSTEM DROP FILESYSTEM CACHE ON CLUSTER default;
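
The same pattern applies to other SYSTEM statements. For example, to flush the in-memory buffers of the system log tables (such as system.query_log) on every node:

SYSTEM FLUSH LOGS ON CLUSTER default;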

· One min read

ClickHouse provides a simple and intuitive way to write filtered aggregates. For example, compare the standard SQL way to write filtered aggregates (which work fine in ClickHouse) with the shorthand syntax using the -If aggregate function combinator, which can be appended to any aggregate function:

--standard SQL
SELECT
    avg(number) FILTER (WHERE number > 50)
FROM numbers(100)

--ClickHouse using an aggregate combinator
SELECT
    avgIf(number, number > 50)
FROM numbers(100)

Similarly, there is a -Distinct aggregate combinator:

--standard SQL
SELECT avg(DISTINCT number)

--ClickHouse using an aggregate combinator
SELECT avgDistinct(number)

Why are filtered aggregates important? Because they allow you to implement the "segment comparison" feature in web analytics services. For example (using a hypothetical hits table of page views):

WITH
    Region = 'us' AS segment1,
    Browser = 'Chrome' AS segment2
SELECT
    uniqIf(UserID, segment1),
    uniqIf(UserID, segment2)
FROM hits -- hypothetical table
WHERE segment1 OR segment2

Check out the aggregate function combinator page in the docs for more details.

· One min read

ClickHouse uses threads from the Global Thread pool to process queries and also perform background operations like merges and mutations. If there is no idle thread to process a query, then a new thread is created in the pool.

The maximum size of the global thread pool is determined by the max_thread_pool_size setting, which defaults to 10,000. You can modify this value in your config - here we set it to 20,000:

<max_thread_pool_size>20000</max_thread_pool_size>

If you modify max_thread_pool_size, we recommend changing thread_pool_queue_size to be the same value. The thread_pool_queue_size setting is the maximum number of jobs that can be scheduled on the Global Thread pool:

<thread_pool_queue_size>20000</thread_pool_queue_size>

You can also free up resources if your server has a lot of idle threads - using the max_thread_pool_free_size setting. The default is 1,000, which means your Global Thread pool will never have more than 1,000 idle threads. The following example increases the value to 2,000:

<max_thread_pool_free_size>2000</max_thread_pool_free_size>
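
To observe the pool at runtime, you can query the GlobalThread metrics in system.metrics, which report the total and active thread counts of the Global Thread pool:

SELECT metric, value
FROM system.metrics
WHERE metric LIKE 'GlobalThread%';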

Check out the docs for more details on the settings above and other settings that affect the Global Thread pool.

· One min read

When a user-level setting is specified in the wrong place, the server won't start and an exception message is sent to the log. However, you can tell ClickHouse to ignore the incorrect setting using the skip_check_for_incorrect_settings setting:

Add the following to config.xml:

<skip_check_for_incorrect_settings>1</skip_check_for_incorrect_settings>
note

User-level settings should be specified in users.xml, inside a <profiles> section for the specific user profile (or in <default> for default settings).
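
For reference, a minimal users.xml sketch with a user-level setting placed in the default profile (max_threads is used here purely as an example):

<clickhouse>
    <profiles>
        <default>
            <max_threads>8</max_threads>
        </default>
    </profiles>
</clickhouse>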

· One min read

A client can view the server logs - even at a different level than the one the server is configured to log at - by setting the send_logs_level client setting.

For example, suppose the client runs:

SET send_logs_level = 'trace';

The client will receive trace logs even if the server has log level set to info.

One useful scenario is to use send_logs_level to monitor the insertion of rows into a Distributed table:

  • Enable logs in clickhouse-client using SET send_logs_level = 'trace';
  • Run your INSERT query
  • Inserts into a distributed table are asynchronous by default. The data is written into a local buffer on disk, then sent to remote servers in the background.
  • Logs will be sent from all nodes participating in the query processing (distributed tracing)

To check the status of distributed inserts, check the system.distribution_queue table. This table contains information about local files that are in the queue to be sent to the shards. These local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode.
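
For example, a minimal status check against that table might look like this (showing the pending files and any errors per destination):

SELECT database, table, is_blocked, error_count, data_files, data_compressed_bytes, last_exception
FROM system.distribution_queue;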

· One min read

Pricing

For pricing information see the ClickHouse Cloud Pricing page. To understand what can affect your bill, and ways that you can manage your spend, keep reading.

Examples

note

Prices reflect AWS us-east-1 pricing.

Development: From $51 per month

Best for: Starter projects & staging

  • Development service
  • 16 GiB RAM, 2 vCPU
  • 1 TB Data

Pricing breakdown for this example:

             10% active    50% active    Always on
Compute      $16           $79           $158
Storage      $35           $35           $35
Total        $51           $114          $193
note

Consumption can be even lower if less than 1 TB of disk is used.

Production (Idling, Auto-scaling): From $172 per month

Best for: Cost-sensitive ad-hoc analytics applications

  • Production Service
  • Active workload ~25% time
  • Idling on with default settings
  • Auto-scaling maximum set to prevent runaway bills

Pricing breakdown for this example:

             Small                  Medium                  Large
Compute      24 GiB RAM, 6 vCPU     192 GiB RAM, 48 vCPU    720 GiB RAM, 180 vCPU
             $125                   $1,000                  $3,750
Storage      1 TB Data              5 TB Data               10 TB Data
             $47                    $235                    $470
Total        $172                   $1,235                  $4,220

Production (Always-on, Reserved capacity): From $550 per month

Best for: Latency-sensitive applications

  • Production Service
  • Active workload ~100% time
  • Auto-scaling minimum set to reserve capacity

Pricing breakdown for this example:

             Small                  Medium                  Large
Compute      24 GiB RAM, 6 vCPU     96 GiB RAM, 24 vCPU     360 GiB RAM, 90 vCPU
             $503                   $2,012                  $7,545
Storage      1 TB Data              4 TB Data               8 TB Data
             $47                    $188                    $376
Total        $550                   $2,200                  $7,921

For help with further estimation, please contact support if you are already a ClickHouse Cloud user, or sales@clickhouse.com otherwise.

FAQs

How is compute metered?

ClickHouse Cloud meters compute on a per-minute basis, in 8 GiB RAM increments (so a 24 GiB service, for example, corresponds to three 8 GiB units).

How is storage on disk calculated?

ClickHouse Cloud uses cloud object storage and is metered on the compressed size of data stored in ClickHouse tables.

Do backups count toward total storage?

ClickHouse Cloud offers two free backups for production services, and one free backup for development services. Backups do not count toward storage.

How do I estimate compression?

Compression can vary quite a bit by dataset. It is dependent on how compressible the data is in the first place (number of high vs. low cardinality fields), and how the user sets up the schema (using optional codecs or not, for instance). It can be on the order of 10x for common types of analytical data, but it can be significantly lower or higher as well. See the optimizing documentation for guidance and this Uber blog for a detailed logging use case example. The only practical way to know exactly is to ingest your dataset into ClickHouse and compare the size of the dataset with the size stored in ClickHouse.

You can use the query SELECT formatReadableSize(total_bytes) FROM system.tables WHERE name = 'your_table_name', substituting your own table name.
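
For a per-column view of how well your data compresses, a sketch like the following can help (your_table_name is a placeholder):

SELECT
    name,
    formatReadableSize(data_compressed_bytes) AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed,
    round(data_uncompressed_bytes / data_compressed_bytes, 2) AS ratio
FROM system.columns
WHERE table = 'your_table_name'
ORDER BY data_compressed_bytes DESC;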

What tools does ClickHouse offer to estimate the cost of running a service in the cloud if I have a self-managed deployment?

The ClickHouse query log captures key metrics that can be used to estimate the cost of running a workload in ClickHouse Cloud. For details on migrating from self-managed to ClickHouse Cloud please refer to the migration documentation, and contact ClickHouse Cloud support if you have further questions.

What billing options are available for ClickHouse Cloud?

ClickHouse Cloud supports the following billing options:

  • Self-service monthly (in USD, via credit card)
  • Direct-sales annual / multi-year (through pre-paid "ClickHouse Credits", in USD, with additional payment options)

How long is the billing cycle?

Billing follows a monthly billing cycle and the start date is tracked as the date when the ClickHouse Cloud organization was created.

What controls does ClickHouse Cloud offer to manage costs for Production services?

  • Trial and Annual Commit customers are notified automatically by email when consumption hits certain thresholds (50%, 75%, and 90%) so that they can take action.
  • ClickHouse Cloud allows users to set a maximum auto-scaling limit on their compute via Advanced scaling control, a significant cost factor for analytical workloads.
  • The Advanced scaling control lets you set memory limits with an option to control the behavior of pausing/idling during inactivity.

What controls does ClickHouse Cloud offer to manage costs for Development services?

  • The Advanced scaling control lets you control the behavior of pausing/idling during inactivity. Adjusting memory allocation is not supported for Development services.
  • Note that the default setting pauses the service after a period of inactivity.

If I have multiple services, do I get an invoice per service or a consolidated invoice?

A consolidated invoice is generated for all services in a given organization for a billing period.

If I add my credit card and upgrade before my trial period and credits expire, will I be charged?

When a user converts from trial to paid before the 30-day trial period ends, but with credits remaining from the trial credit allowance, we continue to draw down from the trial credits during the initial 30-day trial period, and then charge the credit card.

How can I keep track of my spending?

The ClickHouse Cloud console includes a Usage display that gives detailed information about compute and storage usage per service. This can be used to understand the cost breakdown by metered units.

How do I access my invoice for my AWS marketplace subscription to the ClickHouse Cloud service?

All marketplace subscriptions will be billed and invoiced by AWS. You can download the invoice from the AWS Billing Dashboard.

Why do the dates on the Usage statements not match my AWS Marketplace Invoice?

AWS Marketplace billing follows the calendar month cycle. For example, for usage between December 1st, 2022 and January 1st, 2023, an invoice will be generated between January 3rd and January 5th, 2023.

ClickHouse Cloud usage statements follow a different billing cycle, where usage is metered and reported over 30 days starting from the day of sign-up.

The usage and invoice dates will differ if these dates are not the same. Since usage statements track usage by day for a given service, users can rely on statements to see the breakdown of costs.

What are the best practices?

There are several areas of optimization; some of these are:

  • Batching inserts in place of frequent small inserts (see the sketch after this list)
  • Having fewer columns in tables
  • Choosing a partition key so that inserts go into fewer partitions
  • Avoiding write-heavy operations in ClickHouse, such as mutations, OPTIMIZE FINAL, and Nullable columns
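
To illustrate the batching point, here is a sketch assuming a hypothetical events table; each small INSERT creates its own data part, so fewer, larger inserts mean less merge work:

-- avoid: one data part per statement
INSERT INTO events VALUES (1, 'click');
INSERT INTO events VALUES (2, 'view');

-- prefer: one batch with many rows (thousands or more) per statement
INSERT INTO events VALUES
    (1, 'click'),
    (2, 'view'),
    (3, 'click');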

· One min read

How can I start using ClickHouse Cloud connected to my AWS/GCP/Azure account billing?

AWS (GCP and Azure coming soon)

  • Log into the AWS console using your AWS account
  • Navigate to the ClickHouse Cloud listing on AWS Marketplace
  • Click "View purchase options"
  • In the "Contract Options" section of the page, enter any number in the Units field. This will not affect the price you pay, as the price for these units in the public offering is $0. These units are usually used when accepting a private offer from ClickHouse Cloud.
  • Click "Create contract"
  • Click "Set up your account"
  • You will be redirected to the dedicated AWS Marketplace ClickHouse Cloud login page. Please complete your sign-in / sign-up on this page so we can bind your ClickHouse Cloud organization to AWS billing:
    • If you are a new ClickHouse Cloud user, click "Register" at the bottom of the page. You will be prompted to create a new user and verify your email. After verifying your email, you can leave the ClickHouse Cloud login page and log in with the new username at https://clickhouse.cloud.
    • If you are an existing ClickHouse Cloud user, simply log in using your credentials.
  • After you successfully log in, a new ClickHouse Cloud organization will be created. This organization will be connected to your AWS billing account.

How can I confirm that my organization is connected to AWS Marketplace billing?

In the ClickHouse Cloud console, navigate to Admin -> Billing. You should see the name of the marketplace and a link in the Payment details section.

I am an existing ClickHouse Cloud user. What will happen if I subscribe to ClickHouse Cloud via AWS Marketplace?

A separate organization connected to the marketplace will be created. Your existing services and organizations will remain and they will not be connected to the marketplace billing.

You can switch between organizations in the top right corner of the ClickHouse Cloud console.

I am an existing ClickHouse Cloud user and I want my existing services to be billed via marketplace.

Please contact ClickHouse Cloud support in this case.

I subscribed as a marketplace user and now I want to unsubscribe from ClickHouse Cloud.

Note that you can simply stop using ClickHouse Cloud and delete all existing ClickHouse Cloud services. Even though the subscription will still be active, you will not pay anything, since ClickHouse Cloud has no recurring fees.

If you want to unsubscribe, please navigate to the Cloud Provider console and cancel the subscription renewal there. Once the subscription ends, all existing services will be stopped and you will be prompted to add a credit card. If no card is added, all existing services will be deleted after two weeks.

Previously I subscribed to ClickHouse Cloud as a marketplace user, then I unsubscribed, and now I want to subscribe again.

In that case, please subscribe to ClickHouse Cloud as usual (see "How can I start using ClickHouse Cloud connected to my AWS/GCP/Azure account billing?"). Note that a new ClickHouse Cloud organization will be created and connected to the marketplace.

How do I access my invoice for my AWS marketplace subscription to the ClickHouse Cloud service?

All marketplace subscriptions will be billed and invoiced by AWS. You can download the invoice from the AWS Billing Dashboard.

Why do the dates on the Usage statements not match my AWS Marketplace Invoice?

AWS Marketplace billing follows the calendar month cycle. For example, for usage between December 1st and January 1st, an invoice will be generated between January 3rd and January 5th.

ClickHouse Cloud usage statements follow a different billing cycle, where usage is metered and reported over 30 days starting from the day of sign-up.

The usage and invoice dates will differ if these dates are not the same. Since usage statements track usage by day for a given service, users can rely on statements to see the breakdown of costs.

Where can I find general billing information?

Please see the billing documentation.

· One min read

ClickHouse Cloud Troubleshooting

Unable to access a ClickHouse Cloud service

If you are seeing an error message like one of these, your IP Access List may be denying access:

curl: (35) error:02FFF036:system library:func(4095):Connection reset by peer

or

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to HOSTNAME.clickhouse.cloud:8443

or

Code: 210. DB::NetException: SSL connection unexpectedly closed (e46453teek.us-east-2.aws.clickhouse-staging.com:9440). (NETWORK_ERROR)

Check the IP Access List; if you are attempting to connect from outside the allowed list, your connection will fail.

Self-managed ClickHouse Troubleshooting

Installation

You Cannot Get Deb Packages from ClickHouse Repository with Apt-get

  • Check firewall settings.
  • If you cannot access the repository for any reason, download packages as described in the install guide article and install them manually using the sudo dpkg -i <packages> command. You will also need the tzdata package.
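
For reference, a manual installation along those lines might look like this (the .deb filenames are illustrative and depend on the version you downloaded):

sudo apt-get install tzdata
sudo dpkg -i clickhouse-common-static_*.deb clickhouse-server_*.deb clickhouse-client_*.deb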

You Cannot Update Deb Packages from ClickHouse Repository with Apt-get

  • This issue may happen when the GPG key has changed.

Please use the following commands to resolve the issue:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
sudo apt-get update

You Get Different Warnings with apt-get update

  • The complete warning messages look like one of the following:
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://packages.clickhouse.com/deb stable InRelease' doesn't support architecture 'i386'
E: Failed to fetch https://packages.clickhouse.com/deb/dists/stable/main/binary-amd64/Packages.gz  File has unexpected size (30451 != 28154). Mirror sync in progress?
E: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Origin' value from 'Artifactory' to 'ClickHouse'
E: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Label' value from 'Artifactory' to 'ClickHouse'
N: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Suite' value from 'stable' to ''
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
Err:11 https://packages.clickhouse.com/deb stable InRelease
400 Bad Request [IP: 172.66.40.249 443]

To resolve the above issue, please use the following commands:

sudo rm /var/lib/apt/lists/packages.clickhouse.com_* /var/lib/dpkg/arch /var/lib/apt/lists/partial/packages.clickhouse.com_*
sudo apt-get clean
sudo apt-get autoclean

You Can't Get Packages With Yum Because Of Wrong Signature

Possible issue: the cache is stale; it may have been broken after the GPG key was updated in September 2022.

The solution is to clean out the cache and lib directory for yum:

sudo find /var/lib/yum/repos/ /var/cache/yum/ -name 'clickhouse-*' -type d -exec rm -rf {} +
sudo rm -f /etc/yum.repos.d/clickhouse.repo

After that, follow the install guide.

Connecting to the Server

Possible issues:

  • The server is not running.
  • Unexpected or wrong configuration parameters.

Server Is Not Running

Check if the server is running

Command:

$ sudo service clickhouse-server status

If the server is not running, start it with the command:

$ sudo service clickhouse-server start

Check logs

The main log of clickhouse-server is in /var/log/clickhouse-server/clickhouse-server.log by default.

If the server started successfully, you should see the strings:

  • <Information> Application: starting up. — Server started.
  • <Information> Application: Ready for connections. — Server is running and ready for connections.

If clickhouse-server failed to start due to a configuration error, you should see an <Error> string with an error description. For example:

2019.01.11 15:23:25.549505 [ 45 ] {} <Error> ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused

If you do not see an error at the end of the file, look through the entire file starting from the string:

<Information> Application: starting up.

If you try to start a second instance of clickhouse-server on the same machine, you will see a log like the following:

2019.01.11 15:25:11.151730 [ 1 ] {} <Information> : Starting ClickHouse 19.1.0 with revision 54413
2019.01.11 15:25:11.154578 [ 1 ] {} <Information> Application: starting up
2019.01.11 15:25:11.156361 [ 1 ] {} <Information> StatusFile: Status file ./status already exists - unclean restart. Contents:
PID: 8510
Started at: 2019-01-11 15:24:23
Revision: 54413

2019.01.11 15:25:11.156673 [ 1 ] {} <Error> Application: DB::Exception: Cannot lock file ./status. Another server instance in same directory is already running.
2019.01.11 15:25:11.156682 [ 1 ] {} <Information> Application: shutting down
2019.01.11 15:25:11.156686 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2019.01.11 15:25:11.156716 [ 2 ] {} <Information> BaseDaemon: Stop SignalListener thread

See systemd logs

If you do not find any useful information in the clickhouse-server logs, or there aren't any logs, you can view the systemd logs using the command:

$ sudo journalctl -u clickhouse-server

Start clickhouse-server in interactive mode

$ sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml

This command starts the server as an interactive application with the standard parameters of the autostart script. In this mode, clickhouse-server prints all event messages to the console.

Configuration Parameters

Check:

  • Docker settings.

    If you run ClickHouse in Docker in an IPv6 network, make sure the container is started with --network=host.

  • Endpoint settings.

    Check the listen_host and tcp_port settings (see the config sketch after this list).

    ClickHouse server accepts localhost connections only by default.

  • HTTP protocol settings.

    Check protocol settings for the HTTP API.

  • Secure connection settings.

    Check the tcp_port_secure setting and the SSL certificate settings, and use the proper parameters while connecting. For example, use the --secure parameter with clickhouse-client.

  • User settings.

    You might be using the wrong user name or password.
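
If the server needs to accept connections from other hosts, a minimal sketch of the relevant config.xml entries (adjust the address and port to your environment) is:

<listen_host>0.0.0.0</listen_host>
<tcp_port>9000</tcp_port>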

Query Processing

If ClickHouse is not able to process the query, it sends an error description to the client. In the clickhouse-client you get a description of the error in the console. If you are using the HTTP interface, ClickHouse sends the error description in the response body. For example:

$ curl 'http://localhost:8123/' --data-binary "SELECT a"
Code: 47, e.displayText() = DB::Exception: Unknown identifier: a. Note that there are no tables (FROM clause) in your query, context: required_names: 'a' source_tables: table_aliases: private_aliases: column_aliases: public_columns: 'a' masked_columns: array_join_columns: source_columns: , e.what() = DB::Exception

If you start clickhouse-client with the --stacktrace parameter, ClickHouse returns the server stack trace along with the error description.
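
For example (the failing query is just an illustration):

$ clickhouse-client --stacktrace --query "SELECT a"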

You might see a message about a broken connection. In this case, you can repeat the query. If the connection breaks every time you perform the query, check the server logs for errors.

Efficiency of Query Processing

If you see that ClickHouse is working too slowly, you need to profile the load on the server resources and network for your queries.

You can use the clickhouse-benchmark utility to profile queries. It shows the number of queries processed per second, the number of rows processed per second, and percentiles of query processing times.
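
A minimal sketch of such a profiling run (the query and iteration count are just examples):

$ clickhouse-benchmark --iterations 100 --query "SELECT count() FROM numbers(10000000)"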

· 3 min read

Problem

Map lookup such as a['key'] works with linear complexity (mentioned here) and can be inefficient. This is because selecting a value with a specific key from a table would require iterating through all keys (~M) across all rows (N) in the Map column, resulting in ~MxN lookups.

A lookup in a Map column can be ~10x slower than in a String column. The experiment below also shows a ~10x slowdown for a cold query, and a difference of several orders of magnitude in the amount of data processed (7.21 MB vs 5.65 GB).

-- create table with SpanName as String and ResourceAttributes as Map
DROP TABLE IF EXISTS tbl;
CREATE TABLE tbl (
    `Timestamp` DateTime64(9) CODEC (Delta(8), ZSTD(1)),
    `TraceId` String CODEC (ZSTD(1)),
    `ServiceName` LowCardinality(String) CODEC (ZSTD(1)),
    `Duration` UInt8 CODEC (ZSTD(1)), -- Int64
    `SpanName` LowCardinality(String) CODEC (ZSTD(1)),
    `ResourceAttributes` Map(LowCardinality(String), String) CODEC (ZSTD(1))
)
ENGINE = MergeTree
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SpanName, toUnixTimestamp(Timestamp), TraceId);

-- create UDF to generate random Map data for ResourceAttributes
DROP FUNCTION IF EXISTS genmap;
CREATE FUNCTION genmap AS (n) -> arrayMap(x -> (x::String, (x * rand32())::String), range(1, n));

-- check that genmap is working as intended
SELECT genmap(10)::Map(String, String);

-- insert 1M rows
INSERT INTO tbl
SELECT
    now() - randUniform(1, 1000000.) AS Timestamp,
    randomPrintableASCII(2) AS TraceId,
    randomPrintableASCII(2) AS ServiceName,
    rand32() AS Duration,
    randomPrintableASCII(2) AS SpanName,
    genmap(rand64() % 500)::Map(String, String) AS ResourceAttributes
FROM numbers(1_000_000);

-- querying for SpanName is faster
-- [cold] 0 rows in set. Elapsed: 0.642 sec. Processed 1.00 million rows, 7.21 MB (1.56 million rows/s., 11.22 MB/s.)
-- [warm] 0 rows in set. Elapsed: 0.164 sec. Processed 1.00 million rows, 7.21 MB (6.10 million rows/s., 43.99 MB/s.)
SELECT
    COUNT(*),
    avg(Duration / 1E6) AS average,
    quantile(0.95)(Duration / 1E6) AS p95,
    quantile(0.99)(Duration / 1E6) AS p99,
    SpanName
FROM tbl
GROUP BY SpanName ORDER BY 1 DESC LIMIT 50 FORMAT Null;

-- query for ResourceAttributes is slower
-- [cold] 0 rows in set. Elapsed: 6.432 sec. Processed 1.00 million rows, 5.65 GB (155.46 thousand rows/s., 879.07 MB/s.)
-- [warm] 0 rows in set. Elapsed: 5.935 sec. Processed 1.00 million rows, 5.65 GB (168.50 thousand rows/s., 952.81 MB/s.)
SELECT
    COUNT(*),
    avg(Duration / 1E6) AS average,
    quantile(0.95)(Duration / 1E6) AS p95,
    quantile(0.99)(Duration / 1E6) AS p99,
    ResourceAttributes['1'] AS hostname
FROM tbl
GROUP BY hostname ORDER BY 1 DESC LIMIT 50 FORMAT Null;

Solution

To improve the query, we can add another column whose value defaults to a particular key in the Map column, and then materialize it to populate the value for existing rows. This way, we extract and store the needed value at insertion time, speeding up the lookup at query time.

-- solution is to add a column with value defaulting to a particular key in Map
ALTER TABLE tbl ADD COLUMN hostname LowCardinality(String) DEFAULT ResourceAttributes['1'];
ALTER TABLE tbl MATERIALIZE COLUMN hostname;

-- query for hostname (new column) is now faster
-- [cold] 0 rows in set. Elapsed: 2.215 sec. Processed 1.00 million rows, 21.67 MB (451.52 thousand rows/s., 9.78 MB/s.)
-- [warm] 0 rows in set. Elapsed: 0.541 sec. Processed 1.00 million rows, 21.67 MB (1.85 million rows/s., 40.04 MB/s.)
SELECT
    COUNT(*),
    avg(Duration / 1E6) AS average,
    quantile(0.95)(Duration / 1E6) AS p95,
    quantile(0.99)(Duration / 1E6) AS p99,
    hostname
FROM tbl
GROUP BY hostname ORDER BY 1 DESC LIMIT 50 FORMAT Null;

-- drop cache to run query cold
SYSTEM DROP FILESYSTEM CACHE;