
Performance Tuning

Tips and tricks for improving sync performance for large cloud estates.


Last updated 1 month ago


This article describes performance tuning of syncs run on CloudQuery Platform. For CloudQuery CLI-specific options, see the CLI performance tuning documentation: https://cli-docs.cloudquery.io/docs/advanced-topics/performance-tuning

Identifying Slow Tables

The first step in improving sync performance is to identify which tables take the longest to sync. Open the sync run details to see the individual tables that were synced, then browse the tables with the highest row counts and check their run times.

Consider whether you actually need all of the tables or services that are being synced.
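For example, you can narrow the sync scope in a YAML source configuration with the tables and skip_tables fields (the skipped table name below is illustrative, not a real table):

kind: source
spec:
  name: "gcp"
  path: "cloudquery/gcp"
  registry: "cloudquery"
  version: "v18.9.2" # latest version of source gcp plugin
  # Sync only the table groups you actually query; wildcards are supported.
  tables: ["gcp_storage_*", "gcp_compute_*"]
  # skip_tables excludes tables the wildcards above would otherwise match
  # (illustrative table name).
  skip_tables: ["gcp_storage_bucket_example"]
  destinations: ["postgresql"]
  spec:
    project_ids: ...

Fewer tables means fewer API calls to make and fewer rows to write, which directly shortens sync time.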

Tune Concurrency to Work Around Rate Limiting

There is currently one main lever to control the rate at which CloudQuery fetches resources from cloud providers: the concurrency option, available in most source integrations. It can be specified as part of the source integration configuration when using YAML, or as an independent input when configuring a new Integration.

The concurrency option provides rough control over the number of concurrent requests that will be made while performing a sync. Setting this to a low number will reduce the number of concurrent requests, reducing the memory used and making the sync less likely to hit rate limits. The trade-off is that syncs will take longer to complete.
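As a sketch, the option can be tuned down in a YAML source configuration (the value below is illustrative; defaults and exact placement can vary by integration, so check your integration's documentation):

kind: source
spec:
  name: "gcp"
  path: "cloudquery/gcp"
  registry: "cloudquery"
  version: "v18.9.2" # latest version of source gcp plugin
  tables: ["gcp_storage_*", "gcp_compute_*"]
  destinations: ["postgresql"]
  # Lower concurrency reduces memory use and the chance of hitting rate
  # limits, at the cost of a longer sync (illustrative value).
  concurrency: 1000
  spec:
    project_ids: ...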

Adjust Batch Size

Most destination integrations have batching-related settings that can be adjusted to improve performance. Tuning these can improve performance, but it can also increase the memory usage of the sync process. These are the batching-related settings you will come across:

  • batch_size: The number of rows inserted into the destination in a single write. The default is usually between 1,000 and 10,000 rows, depending on the destination integration.

  • batch_size_bytes: The maximum total size, in bytes, of the rows grouped into a single write. This is useful for limiting the memory usage of the sync process. The default varies between 4 MB and 100 MB, depending on the destination integration.

  • batch_timeout: The maximum interval between batch writes. Even if data stops coming in, the batch will be written after this interval. The default is usually between 10 seconds and 1 minute, depending on the destination integration.

Some destination integrations (such as file or S3 destinations) start a new object or file for every batch, and some simply buffer the data in memory to be written at once.

You should check the documentation for the destination integration you are using to see what the default values are and consider how they can be adjusted to suit your use case.

Here's a conservative example for the PostgreSQL destination integration that reduces the overall memory usage, but may also increase the time it takes to sync:

kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  registry: "cloudquery"
  version: "v8.8.5" # latest version of destination postgresql plugin
  spec:
    connection_string: "postgres://user:pass@localhost:5432/mydb?sslmode=disable" # replace with your connection string
    batch_size: 10000 # 10000 rows, default
    batch_size_bytes: 4194304 # 4 MB, dramatically tuned down from the 100 MB default
    batch_timeout: "30s" # 30 seconds, tuned down from 60 seconds

With this configuration, the PostgreSQL destination integration will write 10,000 rows at a time, or 4 MB of data at a time, or every 30 seconds, whichever comes first.

Use a Different Scheduler

This option is available only when setting up an integration via the API or using YAML configuration.

By default, CloudQuery syncs will fetch all tables in parallel, writing data to the destination(s) as they come in. However, the concurrency setting, mentioned above, places a limit on how many table-clients can be synced at a time. What "table-client" means depends on the source integration and the table. In AWS, for example, a client is usually a combination of account and region. Get all the combinations of accounts and regions for all tables, and you have all the table-clients for a sync. For the GCP source integration, clients generally map to projects.

The default CloudQuery scheduler, known as dfs, will sync up to concurrency / 100 table-clients at a time (we are ignoring child relations for the purposes of this discussion). Let's take an example GCP cloud estate with 5000 projects, syncing 100 tables. This makes for approximately 500,000 table-client pairs, and a concurrency of 10,000 will allow 100 table-client pairs to be synced at a time. The dfs scheduler will start with the first table and its first 100 projects, and then move on to finish all projects for that table before moving on to the next table. This means, in practice, only one table is really being synced at a time!

Usually this works out fine, as long as the cloud platform's rate limits are aligned with the clients. But if rate limits are applied per-table, rather than per-project, dfs can be suboptimal. A better strategy in this case would be to choose the first client for every table before moving on to the next client. This is what the round-robin scheduler does.

Only some integrations support this setting. The following example configuration enables round-robin scheduling for the GCP source integration:

kind: source
spec:
  name: "gcp"
  path: "cloudquery/gcp"
  registry: "cloudquery"
  version: "v18.9.2" # latest version of source gcp plugin
  tables: ["gcp_storage_*", "gcp_compute_*"]
  destinations: ["postgresql"]
  spec:
    scheduler: "round-robin"
    project_ids: ...

Finally, the shuffle strategy aims to strike a balance between dfs and round-robin by randomizing the order in which table-client pairs are chosen. The following example enables shuffle for the GCP integration. This can help reduce the likelihood of hitting rate limits by randomly mixing the underlying services to which concurrent API calls are made, rather than hitting a single API with all calls at once:

kind: source
spec:
  name: "gcp"
  path: "cloudquery/gcp"
  registry: "cloudquery"
  version: "v18.9.2" # latest version of source gcp plugin
  tables: ["gcp_storage_*", "gcp_compute_*"]
  destinations: ["postgresql"]
  spec:
    project_ids: ...
    scheduler: "shuffle"
    # ...

The shuffle scheduler is the default for the AWS source integration.
