Confluent Cloud integration for Grafana Cloud
The Confluent Cloud integration lets you quickly pull Confluent Cloud metrics into Grafana Cloud. It provides prebuilt dashboards to help you monitor your Confluent Cloud services. No agent is required, and you can create multiple configurations, called scrape jobs, to organize your data.
Install Confluent Cloud integration for Grafana Cloud
1. In your Grafana instance, click Connections.
2. Navigate to the Confluent Cloud tile and review the prerequisites. When you’re ready, click Install at the bottom of the page.
3. Create a Cloud API key in Confluent Cloud. Click Open Confluent Cloud to go to Confluent Cloud and create the key.
4. Under Scrape job name, enter a name for this connection.
5. Copy and paste the API Key from Confluent Cloud into API Key.
6. Copy and paste the API Secret from Confluent Cloud into API Secret.
7. Click Test Connection. If you see an error, verify that you’ve entered the correct API credentials from your Confluent Cloud account.
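If Test Connection fails and you want to check the credentials outside of Grafana, you can call the Confluent Cloud Metrics API directly, since that is the API the scrape job reads from. Below is a minimal sketch in Python, assuming the public Metrics API v2 descriptors endpoint and the requests library; the key and secret values are placeholders:

```python
# Minimal sketch: verify a Confluent Cloud API key/secret against the
# Confluent Cloud Metrics API (the same API the scrape job reads from).
# Assumes the public Metrics API v2 base URL and the `requests` library.
import requests

API_KEY = "YOUR_CLOUD_API_KEY"        # placeholder
API_SECRET = "YOUR_CLOUD_API_SECRET"  # placeholder

# The descriptors endpoint lists the metrics available in the "cloud" dataset.
url = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics"

resp = requests.get(url, auth=(API_KEY, API_SECRET), timeout=30)
if resp.status_code == 401:
    print("Authentication failed: check the API key and secret.")
else:
    resp.raise_for_status()
    names = [m.get("name") for m in resp.json().get("data", [])]
    print(f"Credentials OK; {len(names)} metrics available, e.g. {names[:3]}")
```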
Configure resource settings
You must add at least one resource to the Confluent Cloud integration before you can save the scrape job.
1. Select the desired Confluent Cloud resource under Resource Type.
2. Copy and paste the associated Confluent Cloud resource ID into Resource ID.
3. When you’ve added all of the resources you wish to monitor, click Save Scrape Job.
You’ll see a success page and can navigate to the dashboards that have been installed.
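You can also confirm that a specific resource is actually exporting metrics by querying the Metrics API export endpoint, which returns Prometheus-format text. The following is a minimal sketch, assuming a Kafka cluster and the resource.kafka.id query parameter (other resource types use their own resource.* parameters); the IDs and credentials are placeholders:

```python
# Minimal sketch: check that a Confluent Cloud resource is exporting metrics.
# The export endpoint returns Prometheus text format, which is what the
# Grafana Cloud scrape job ingests. The resource ID, key, and secret below
# are placeholders; adjust the query parameter for your resource type.
import requests

API_KEY = "YOUR_CLOUD_API_KEY"        # placeholder
API_SECRET = "YOUR_CLOUD_API_SECRET"  # placeholder
CLUSTER_ID = "lkc-xxxxxx"             # placeholder Kafka cluster ID

url = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export"
resp = requests.get(
    url,
    params={"resource.kafka.id": CLUSTER_ID},
    auth=(API_KEY, API_SECRET),
    timeout=30,
)
resp.raise_for_status()

# Collect the metric names present in the Prometheus exposition output.
names = {
    line.split("{")[0].split(" ")[0]
    for line in resp.text.splitlines()
    if line and not line.startswith("#")
}
print(f"{len(names)} metric families exported, e.g. {sorted(names)[:3]}")
```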
Dashboards
After you have configured the Confluent Cloud integration, prebuilt dashboards are installed in your Grafana instance to help you monitor your Confluent Cloud services.
Managing your integration
After you’ve configured a scrape job, no further management is needed; Grafana Cloud handles scraping metrics from Confluent Cloud into your Grafana Cloud instance.
You can view, edit, or delete your existing scrape jobs at any time by navigating back to the integration from the Connections section and selecting the Confluent Cloud tile.
Resources
The Confluent Cloud integration allows you to pull in metrics from the following Confluent Cloud resources:
- Kafka Cluster
- Kafka Connector
- ksqlDB
- Schema Registry
Metrics
Below is a list of the metrics per resource that are automatically written to your Grafana Cloud instance when you select a resource to connect to. Metric names are prefixed with confluent_ and use underscores as separators, for example confluent_kafka_server_received_bytes.
Kafka Cluster
Metric Name | Labels |
---|---|
confluent_kafka_server_active_connection_count | principal_id |
confluent_kafka_server_cluster_link_count | mode, link_name, link_state |
confluent_kafka_server_cluster_link_destination_response_bytes | link_name |
confluent_kafka_server_cluster_link_mirror_topic_bytes | link_name, topic |
confluent_kafka_server_cluster_link_mirror_topic_count | link_name, link_mirror_topic_state |
confluent_kafka_server_cluster_link_mirror_topic_offset_lag | link_name, topic |
confluent_kafka_server_cluster_link_mirror_transition_in_error | mode, link_name, link_mirror_topic_state, link_mirror_topic_reason |
confluent_kafka_server_cluster_link_source_response_bytes | |
confluent_kafka_server_cluster_link_task_count | mode, link_name, link_task_name, link_task_reason, link_task_state |
confluent_kafka_server_cluster_load_percent | |
confluent_kafka_server_consumer_lag_offsets | topic, consumer_group_id |
confluent_kafka_server_hot_partition_egress | topic |
confluent_kafka_server_hot_partition_ingress | topic |
confluent_kafka_server_partition_count | |
confluent_kafka_server_received_bytes | topic |
confluent_kafka_server_received_records | topic |
confluent_kafka_server_request_bytes | type, principal_id |
confluent_kafka_server_request_count | type, principal_id |
confluent_kafka_server_response_bytes | type, principal_id |
confluent_kafka_server_rest_produce_request_bytes | |
confluent_kafka_server_retained_bytes | topic |
confluent_kafka_server_sent_bytes | topic |
confluent_kafka_server_sent_records | topic |
confluent_kafka_server_successful_authentication_count | principal_id |
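Once these Kafka Cluster metrics are flowing into your Grafana Cloud stack, you can query them with PromQL, either in dashboards or programmatically through the standard Prometheus HTTP API. Here is a minimal sketch that reports consumer lag per consumer group; the query endpoint and credentials are placeholders for your own Grafana Cloud Prometheus-compatible datasource:

```python
# Minimal sketch: consumer lag per consumer group, queried through the
# standard Prometheus HTTP API. PROM_URL, USER, and TOKEN are placeholders
# for your Grafana Cloud Prometheus-compatible endpoint and credentials.
import requests

PROM_URL = "https://<your-grafana-cloud-prometheus-endpoint>/api/v1/query"  # placeholder
USER = "<prometheus-instance-id>"    # placeholder
TOKEN = "<grafana-cloud-api-token>"  # placeholder

query = "sum by (consumer_group_id) (confluent_kafka_server_consumer_lag_offsets)"

resp = requests.get(PROM_URL, params={"query": query}, auth=(USER, TOKEN), timeout=30)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    group = result["metric"].get("consumer_group_id", "<none>")
    print(f"{group}: {result['value'][1]} offsets behind")
```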
Kafka Connector
Metric Name | Labels |
---|---|
confluent_kafka_connect_dead_letter_queue_records | |
confluent_kafka_connect_received_bytes | |
confluent_kafka_connect_received_records | |
confluent_kafka_connect_sent_bytes | |
confluent_kafka_connect_sent_records | |
ksqlDB
Metric Name | Labels |
---|---|
confluent_kafka_ksql_committed_offset_lag | task_id, query_id, topic |
confluent_kafka_ksql_consumed_total_bytes | query_id |
confluent_kafka_ksql_offsets_processed_total | task_id, query_id, topic |
confluent_kafka_ksql_processing_errors_total | query_id |
confluent_kafka_ksql_produced_total_bytes | query_id |
confluent_kafka_ksql_query_restarts | query_id |
confluent_kafka_ksql_query_saturation | query_id |
confluent_kafka_ksql_storage_utilization | |
confluent_kafka_ksql_streaming_unit_count | |
confluent_kafka_ksql_task_stored_bytes | ksql_node_id, task_id, query_id |
Schema Registry
Metric Name | Labels |
---|---|
confluent_kafka_schema_registry_num_deks | |
confluent_kafka_schema_registry_num_keks | |
confluent_kafka_schema_registry_num_keks_shared | |
confluent_kafka_schema_registry_request_count | |
confluent_kafka_schema_registry_schema_count | |
confluent_kafka_schema_registry_schema_operations_count | method, status |
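As with the Kafka Cluster example above, the Schema Registry metrics can be queried with PromQL once the scrape job writes them to your stack. Here is a minimal sketch, reusing the same placeholder endpoint and credentials, that breaks schema operations down by method and status:

```python
# Minimal sketch: schema operations broken down by method and status.
# PROM_URL and AUTH are placeholders for your Grafana Cloud
# Prometheus-compatible query endpoint and credentials.
import requests

PROM_URL = "https://<your-grafana-cloud-prometheus-endpoint>/api/v1/query"  # placeholder
AUTH = ("<prometheus-instance-id>", "<grafana-cloud-api-token>")            # placeholder

query = "sum by (method, status) (confluent_kafka_schema_registry_schema_operations_count)"

resp = requests.get(PROM_URL, params={"query": query}, auth=AUTH, timeout=30)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(f'{labels.get("method")} {labels.get("status")}: {result["value"][1]}')
```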