# cluster, clusterAllReplicas
Allows accessing all shards (configured in the `remote_servers` section) of a cluster without creating a Distributed table. Only one replica of each shard is queried.

The `clusterAllReplicas` function is the same as `cluster`, but all replicas are queried. Each replica in the cluster is used as a separate shard/connection.

All available clusters are listed in the `system.clusters` table.
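To see which clusters a server knows about, the `system.clusters` table can be queried directly. A minimal sketch (column selection is illustrative; the table exposes additional columns):

```sql
-- List configured clusters with their shard/replica layout
SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters;
```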
## Syntax

```sql
cluster(['cluster_name', db.table, sharding_key])
cluster(['cluster_name', db, table, sharding_key])
clusterAllReplicas(['cluster_name', db.table, sharding_key])
clusterAllReplicas(['cluster_name', db, table, sharding_key])
```
## Arguments

- `cluster_name` – Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers; set to `default` if not specified.
- `db.table` or `db`, `table` – Name of a database and a table.
- `sharding_key` – A sharding key. Optional. Needs to be specified if the cluster has more than one shard.
## Returned value

The dataset from the clusters.
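As a sketch of basic usage, assuming a cluster named `default` is configured in `remote_servers` and a table `default.example_table` exists on its nodes:

```sql
-- Query one replica of every shard of the cluster
SELECT count() FROM cluster('default', default, example_table);

-- Query every replica of every shard
SELECT count() FROM clusterAllReplicas('default', default, example_table);
```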
## Using Macros

`cluster_name` can contain macros (substitutions in curly brackets). The substituted value is taken from the `macros` section of the server configuration file.

Example:

```sql
SELECT * FROM cluster('{cluster}', default.example_table);
```
## Usage and Recommendations

Using the `cluster` and `clusterAllReplicas` table functions is less efficient than creating a Distributed table, because the server connection is re-established for every request. When processing a large number of queries, always create the Distributed table ahead of time instead of using the `cluster` and `clusterAllReplicas` table functions.

The `cluster` and `clusterAllReplicas` table functions can be useful in the following cases:

- Accessing a specific cluster for data comparison, debugging, and testing.
- Queries to various ClickHouse clusters and replicas for research purposes.
- Infrequent distributed requests that are made manually.
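For instance, a common debugging pattern is to ask every replica to identify itself, which makes it easy to verify that all replicas are reachable. A sketch, assuming a cluster named `default`:

```sql
-- One row per reachable replica in the cluster
SELECT hostName() AS host
FROM clusterAllReplicas('default', system.one)
ORDER BY host;
```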
Connection settings like `host`, `port`, `user`, `password`, `compression`, and `secure` are taken from the `<remote_servers>` config section. See details in the Distributed engine documentation.
## See Also