Confluent Cloud integration for Grafana Cloud

The Confluent Cloud integration enables you to quickly pull Confluent Cloud metrics into Grafana Cloud. The integration provides prebuilt dashboards to help you monitor your Confluent Cloud service. No agent is required, and you can create multiple configurations, called scrape jobs, to organize your data.

Install Confluent Cloud integration for Grafana Cloud

  1. In your Grafana instance, click Connections.

  2. Navigate to the Confluent Cloud tile and review the prerequisites. When you’re ready, click Install at the bottom of the page.

  3. Create a Cloud API key in Confluent Cloud. To do so, click Open Confluent Cloud to navigate to Confluent and create the key there.

  4. Enter a name for this specific connection under Scrape job name.

  5. Copy and paste the API Key from Confluent Cloud into API Key.

  6. Copy and paste the API Secret from Confluent Cloud into API Secret.

  7. Click Test Connection. If you encounter an error, be sure that you’ve entered the right API credentials from your Confluent Cloud account.
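
If the connection test fails, you can sanity-check the key and secret outside Grafana by calling the Confluent Cloud Metrics API directly. The following is a minimal sketch, assuming Python with the requests library; the descriptors endpoint and the placeholder credential values are assumptions to adapt to your account.

    import requests

    # Placeholder values; use the Cloud API key and secret created in step 3.
    API_KEY = "YOUR_CONFLUENT_CLOUD_API_KEY"
    API_SECRET = "YOUR_CONFLUENT_CLOUD_API_SECRET"

    # Metrics API descriptors endpoint (assumed here as a lightweight way to
    # confirm that the credentials authenticate successfully).
    URL = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics"

    resp = requests.get(URL, auth=(API_KEY, API_SECRET), timeout=10)
    if resp.status_code == 200:
        print("Credentials accepted; sample metric descriptors:")
        for descriptor in resp.json().get("data", [])[:5]:
            print(" -", descriptor.get("name"))
    else:
        # 401 or 403 usually means a wrong key or secret, or missing permissions.
        print(f"Request failed with HTTP {resp.status_code}: {resp.text}")

A 200 response with a list of metric descriptors indicates the credentials can authenticate against the Metrics API.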

Configure resource settings

You must add at least one resource to the Confluent Cloud integration in order to save the scrape job.

  1. Select the desired Confluent Cloud resource under Resource Type.

  2. Copy and paste the associated Confluent Cloud resource ID into Resource ID. See the sketch below for typical ID formats.

  3. When you’ve added all of the resources you wish to monitor, click Save Scrape Job.

You’ll see a success page and can navigate to the dashboards that have been installed.
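
Each resource type expects its own resource ID from the Confluent Cloud console. The following is a small illustration, assuming Python and the typical Confluent Cloud ID prefixes (lkc- for Kafka clusters, lcc- for connectors, lksqlc- for ksqlDB, and lsrc- for Schema Registry); treat the prefixes as assumptions and confirm the exact IDs in Confluent Cloud.

    import re

    # Typical Confluent Cloud resource ID prefixes per resource type (assumed);
    # confirm the exact IDs in the Confluent Cloud console before saving the job.
    ID_PATTERNS = {
        "Kafka Cluster": re.compile(r"^lkc-[a-z0-9]+$"),
        "Kafka Connector": re.compile(r"^lcc-[a-z0-9]+$"),
        "KsqlDB": re.compile(r"^lksqlc-[a-z0-9]+$"),
        "Schema Registry": re.compile(r"^lsrc-[a-z0-9]+$"),
    }

    def check_resource_id(resource_type: str, resource_id: str) -> bool:
        """Return True if the ID matches the expected pattern for its type."""
        pattern = ID_PATTERNS.get(resource_type)
        return bool(pattern and pattern.match(resource_id))

    # Hypothetical example IDs.
    print(check_resource_id("Kafka Cluster", "lkc-abc123"))   # True
    print(check_resource_id("Kafka Cluster", "lsrc-abc123"))  # False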

Dashboards

After you have successfully configured the Confluent Cloud integration, prebuilt dashboards will be installed in your Grafana instance to help you monitor your Confluent Cloud services.

Managing your integration

After you’ve successfully configured a scrape job, no other management is needed. Grafana Cloud will manage the scraping of metrics from Confluent Cloud into Grafana Cloud.

You can view, edit, or delete your existing scrape jobs at any time by navigating back to the integration from the Connections section and selecting the Confluent Cloud tile.

Resources

The Confluent Cloud integration allows you to pull in metrics from the following Confluent Cloud resources:

  • Kafka Cluster
  • Kafka Connector
  • KsqlDB
  • Schema Registry

Metrics

The following tables list the metrics per resource that are automatically written to your Grafana Cloud instance when you select a resource to connect to. All metric names share the confluent_ prefix; metrics that carry no labels are shown with (none) in the Labels column.

Kafka Cluster

Metric Name | Labels
confluent_kafka_server_active_connection_count | principal_id
confluent_kafka_server_cluster_link_count | mode, link_name, link_state
confluent_kafka_server_cluster_link_destination_response_bytes | link_name
confluent_kafka_server_cluster_link_mirror_topic_bytes | link_name, topic
confluent_kafka_server_cluster_link_mirror_topic_count | link_name, link_mirror_topic_state
confluent_kafka_server_cluster_link_mirror_topic_offset_lag | link_name, topic
confluent_kafka_server_cluster_link_mirror_transition_in_error | mode, link_name, link_mirror_topic_state, link_mirror_topic_reason
confluent_kafka_server_cluster_link_source_response_bytes | (none)
confluent_kafka_server_cluster_link_task_count | mode, link_name, link_task_name, link_task_reason, link_task_state
confluent_kafka_server_cluster_load_percent | (none)
confluent_kafka_server_consumer_lag_offsets | topic, consumer_group_id
confluent_kafka_server_hot_partition_egress | topic
confluent_kafka_server_hot_partition_ingress | topic
confluent_kafka_server_partition_count | (none)
confluent_kafka_server_received_bytes | topic
confluent_kafka_server_received_records | topic
confluent_kafka_server_request_bytes | type, principal_id
confluent_kafka_server_request_count | type, principal_id
confluent_kafka_server_response_bytes | type, principal_id
confluent_kafka_server_rest_produce_request_bytes | (none)
confluent_kafka_server_retained_bytes | topic
confluent_kafka_server_sent_bytes | topic
confluent_kafka_server_sent_records | topic
confluent_kafka_server_successful_authentication_count | principal_id

Kafka Connector

Metric Name | Labels
confluent_kafka_connect_dead_letter_queue_records | (none)
confluent_kafka_connect_received_bytes | (none)
confluent_kafka_connect_received_records | (none)
confluent_kafka_connect_sent_bytes | (none)
confluent_kafka_connect_sent_records | (none)

KsqlDB

Metric Name | Labels
confluent_kafka_ksql_committed_offset_lag | task_id, query_id, topic
confluent_kafka_ksql_consumed_total_bytes | query_id
confluent_kafka_ksql_offsets_processed_total | task_id, query_id, topic
confluent_kafka_ksql_processing_errors_total | query_id
confluent_kafka_ksql_produced_total_bytes | query_id
confluent_kafka_ksql_query_restarts | query_id
confluent_kafka_ksql_query_saturation | query_id
confluent_kafka_ksql_storage_utilization | (none)
confluent_kafka_ksql_streaming_unit_count | (none)
confluent_kafka_ksql_task_stored_bytes | ksql_node_id, task_id, query_id

Schema Registry

Metric Name | Labels
confluent_kafka_schema_registry_num_deks | (none)
confluent_kafka_schema_registry_num_keks | (none)
confluent_kafka_schema_registry_num_keks_shared | (none)
confluent_kafka_schema_registry_request_count | (none)
confluent_kafka_schema_registry_schema_count | (none)
confluent_kafka_schema_registry_schema_operations_count | method, status
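
Once the scrape job is writing these metrics, you can query them from your Grafana Cloud stack’s Prometheus-compatible endpoint as well as from the prebuilt dashboards. The following is a minimal sketch, assuming Python with the requests library; the endpoint URL, instance ID, and token are placeholders for the values shown in your Grafana Cloud stack details.

    import requests

    # Placeholders: copy the Prometheus endpoint, instance ID (basic-auth user),
    # and an API token from your Grafana Cloud stack details. These values are
    # assumptions for illustration only.
    PROM_URL = "https://prometheus-prod-XX-prod-us-central-0.grafana.net/api/prom"
    INSTANCE_ID = "123456"
    API_TOKEN = "YOUR_GRAFANA_CLOUD_TOKEN"

    # Standard Prometheus HTTP API instant query, summing one of the Kafka
    # Cluster metrics listed above per topic.
    query = "sum by (topic) (confluent_kafka_server_received_bytes)"

    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": query},
        auth=(INSTANCE_ID, API_TOKEN),
        timeout=10,
    )
    resp.raise_for_status()

    for result in resp.json()["data"]["result"]:
        topic = result["metric"].get("topic", "<none>")
        _, value = result["value"]
        print(f"{topic}: {value} bytes received")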