Self-hosted Grafana Loki integration for Grafana Cloud
Send self-monitoring metrics and logs from Grafana Loki or Grafana Enterprise Logs (GEL) running in your Kubernetes cluster to Grafana Cloud. The integration comes with dashboards to help you monitor the health of your Loki or GEL cluster and understand per-tenant usage and behavior.
This integration includes 4 useful alerts and 10 pre-built dashboards to help monitor and visualize Self-hosted Grafana Loki metrics and logs.
Before you begin
This integration works with the Loki Helm chart deployed on Kubernetes.
Install Self-hosted Grafana Loki integration for Grafana Cloud
- In your Grafana Cloud stack, click Connections in the left-hand menu.
- Find Self-hosted Grafana Loki and click its tile to open the integration.
- Review the prerequisites in the Configuration Details tab and set up Grafana Agent to send Self-hosted Grafana Loki metrics and logs to your Grafana Cloud instance.
- Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and start monitoring your Self-hosted Grafana Loki setup.
Configuration snippets for Grafana Alloy
Simple mode
This integration does not officially support Grafana Alloy yet, but you can use the following snippet as a reference for scraping Loki deployed with the Helm chart using Grafana Alloy.
Metrics snippets
discovery.relabel "loki" {
targets = discovery.kubernetes.services.targets
rule {
source_labels = ["__meta_kubernetes_service_label_<loki_pod_label>"]
regex = "<loki_pod_label_value>"
action = "keep"
}
rule {
source_labels = ["__meta_kubernetes_service_port_number"]
regex = "<loki_prometheus_port_number>"
action = "keep"
}
rule {
source_labels = ["__meta_kubernetes_pod_name"]
target_label = "instance"
}
}
prometheus.scrape "loki" {
job_name = "integrations/loki"
targets = discovery.relabel.loki.output
honor_labels = true
forward_to = [prometheus.relabel.metrics_service.receiver]
}
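The snippet above references two components that are not shown: a `discovery.kubernetes` component named `services` that discovers the Loki Services in your cluster, and a `metrics_service` pipeline that ships the scraped samples to your Grafana Cloud Prometheus endpoint. A minimal sketch of those assumed components, reusing the same `<your_prom_*>` placeholders as the Grafana Agent example further below, could look like this:
discovery.kubernetes "services" {
  // Discover Kubernetes Services so the __meta_kubernetes_service_* labels above are available.
  role = "service"
}

prometheus.relabel "metrics_service" {
  // Pass-through relabel stage; add metric_relabel-style rules here if needed.
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
  endpoint {
    // Replace the placeholders with your Grafana Cloud Prometheus remote write URL and credentials.
    url = "<your_prom_url>"

    basic_auth {
      username = "<your_prom_user>"
      password = "<your_prom_pass>"
    }
  }
}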
Grafana Agent static configuration (deprecated)
The following section shows the configuration for running Grafana Agent in static mode, which is deprecated. Use Grafana Alloy for all new deployments.
Before you begin
This integration works with the Loki Helm chart deployed on Kubernetes.
Install Self-hosted Grafana Loki integration for Grafana Cloud
- In your Grafana Cloud stack, click Connections in the left-hand menu.
- Find Self-hosted Grafana Loki and click its tile to open the integration.
- Review the prerequisites in the Configuration Details tab and set up Grafana Agent to send Self-hosted Grafana Loki metrics and logs to your Grafana Cloud instance.
- Click Install to add this integration’s pre-built dashboards and alerts to your Grafana Cloud instance, and start monitoring your Self-hosted Grafana Loki setup.
Post-install configuration for the Self-hosted Grafana Loki integration
This integration comes with pre-built dashboards to help monitor the health of your Loki or GEL cluster and understand per-tenant usage and behavior.
Configuration snippets for Grafana Agent
Full example configuration for Grafana Agent
Refer to the following Grafana Agent configuration for a complete example that contains all the snippets used for the Self-hosted Grafana Loki integration. This example also includes metrics that are sent to monitor your Grafana Agent instance.
integrations:
  prometheus_remote_write:
  - basic_auth:
      password: <your_prom_pass>
      username: <your_prom_user>
    url: <your_prom_url>
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_sync_length_seconds_sum|prometheus_target_scrapes_.*|prometheus_target_interval.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__
# Add here any snippet that belongs to the `integrations` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
logs:
  configs:
  - clients:
    - basic_auth:
        password: <your_loki_pass>
        username: <your_loki_user>
      url: <your_loki_url>
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
# Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: <your_prom_pass>
        username: <your_prom_user>
      url: <your_prom_url>
    scrape_configs:
# Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
# For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal
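For reference, a metrics snippet pasted under the `metrics.configs.scrape_configs` section could look like the following sketch, which mirrors the Grafana Alloy example above. It is not the exact snippet shown in Grafana Cloud, and the `<loki_pod_label>`, `<loki_pod_label_value>`, and `<loki_prometheus_port_number>` placeholders must be replaced with the values from your Helm deployment.
    # Example scrape job for Loki discovered through Kubernetes Services (sketch, not the official snippet).
    - job_name: integrations/loki
      honor_labels: true
      kubernetes_sd_configs:
        - role: service
      relabel_configs:
        # Keep only Services carrying the Loki label and the Prometheus metrics port.
        - source_labels: [__meta_kubernetes_service_label_<loki_pod_label>]
          regex: <loki_pod_label_value>
          action: keep
        - source_labels: [__meta_kubernetes_service_port_number]
          regex: <loki_prometheus_port_number>
          action: keep
        # Use the pod name as the instance label.
        - source_labels: [__meta_kubernetes_pod_name]
          target_label: instance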
Dashboards
The Self-hosted Grafana Loki integration installs the following dashboards in your Grafana Cloud instance to help monitor your system.
- Loki / Chunks
- Loki / Deletion
- Loki / Logs
- Loki / Operational
- Loki / Reads
- Loki / Reads Resources
- Loki / Recording Rules
- Loki / Retention
- Loki / Writes
- Loki / Writes Resources
Alerts
The Self-hosted Grafana Loki integration includes the following useful alerts:
| Alert | Description |
| --- | --- |
| LokiRequestErrors | Critical: {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf "%.2f" $value }}% errors. |
| LokiRequestPanics | Critical: {{ $labels.job }} is experiencing {{ printf "%.2f" $value }}% increase of panics. |
| LokiRequestLatency | Critical: {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf "%.2f" $value }}s 99th percentile latency. |
| LokiTooManyCompactorsRunning | Warning: {{ $labels.namespace }} has had {{ printf "%.0f" $value }} compactors running for more than 5m. Only one compactor should run at a time. |
Metrics
The most important metrics provided by the Self-hosted Grafana Loki integration, which are used in the pre-built dashboards and Prometheus alerts, are as follows:
- container_cpu_usage_seconds_total
- container_fs_writes_bytes_total
- container_memory_working_set_bytes
- container_network_receive_bytes_total
- container_network_transmit_bytes_total
- container_spec_cpu_period
- container_spec_cpu_quota
- container_spec_memory_limit_bytes
- cortex_dynamo_consumed_capacity_total
- cortex_dynamo_dropped_requests_total
- cortex_dynamo_failures_total
- cortex_dynamo_query_pages_count
- cortex_dynamo_request_duration_seconds_bucket
- cortex_dynamo_request_duration_seconds_count
- cortex_dynamo_throttled_total
- cortex_ingester_flush_queue_length
- go_gc_duration_seconds
- go_goroutines
- go_memstats_heap_inuse_bytes
- job:loki_request_duration_seconds_bucket:sum_rate
- job:loki_request_duration_seconds_count:sum_rate
- job:loki_request_duration_seconds_sum:sum_rate
- job_route:loki_request_duration_seconds_bucket:sum_rate
- job_route:loki_request_duration_seconds_count:sum_rate
- job_route:loki_request_duration_seconds_sum:sum_rate
- kube_deployment_created
- kube_persistentvolumeclaim_labels
- kube_pod_container_info
- kube_pod_container_status_last_terminated_reason
- kube_pod_container_status_restarts_total
- kubelet_volume_stats_capacity_bytes
- kubelet_volume_stats_used_bytes
- loki_azure_blob_request_duration_seconds_bucket
- loki_azure_blob_request_duration_seconds_count
- loki_bigtable_request_duration_seconds_bucket
- loki_bigtable_request_duration_seconds_count
- loki_boltdb_shipper_compact_tables_operation_duration_seconds
- loki_boltdb_shipper_compact_tables_operation_last_successful_run_timestamp_seconds
- loki_boltdb_shipper_compact_tables_operation_total
- loki_boltdb_shipper_compactor_running
- loki_boltdb_shipper_query_readiness_duration_seconds
- loki_boltdb_shipper_request_duration_seconds_bucket
- loki_boltdb_shipper_request_duration_seconds_count
- loki_boltdb_shipper_request_duration_seconds_sum
- loki_boltdb_shipper_retention_marker_count_total
- loki_boltdb_shipper_retention_marker_table_processed_duration_seconds_bucket
- loki_boltdb_shipper_retention_marker_table_processed_duration_seconds_count
- loki_boltdb_shipper_retention_marker_table_processed_duration_seconds_sum
- loki_boltdb_shipper_retention_marker_table_processed_total
- loki_boltdb_shipper_retention_sweeper_chunk_deleted_duration_seconds_bucket
- loki_boltdb_shipper_retention_sweeper_chunk_deleted_duration_seconds_count
- loki_boltdb_shipper_retention_sweeper_chunk_deleted_duration_seconds_sum
- loki_boltdb_shipper_retention_sweeper_marker_file_processing_current_time
- loki_boltdb_shipper_retention_sweeper_marker_files_current
- loki_build_info
- loki_chunk_store_deduped_chunks_total
- loki_chunk_store_index_entries_per_chunk_count
- loki_chunk_store_index_entries_per_chunk_sum
- loki_compactor_delete_requests_processed_total
- loki_compactor_delete_requests_received_total
- loki_compactor_deleted_lines
- loki_compactor_load_pending_requests_attempts_total
- loki_compactor_oldest_pending_delete_request_age_seconds
- loki_compactor_pending_delete_requests_count
- loki_consul_request_duration_seconds_bucket
- loki_discarded_samples_total
- loki_distributor_bytes_received_total
- loki_distributor_ingester_append_failures_total
- loki_distributor_lines_received_total
- loki_gcs_request_duration_seconds_bucket
- loki_gcs_request_duration_seconds_count
- loki_ingester_chunk_age_seconds_bucket
- loki_ingester_chunk_age_seconds_count
- loki_ingester_chunk_age_seconds_sum
- loki_ingester_chunk_bounds_hours_bucket
- loki_ingester_chunk_bounds_hours_count
- loki_ingester_chunk_bounds_hours_sum
- loki_ingester_chunk_entries_bucket
- loki_ingester_chunk_entries_count
- loki_ingester_chunk_entries_sum
- loki_ingester_chunk_size_bytes_bucket
- loki_ingester_chunk_utilization_bucket
- loki_ingester_chunk_utilization_count
- loki_ingester_chunk_utilization_sum
- loki_ingester_chunks_flushed_total
- loki_ingester_memory_chunks
- loki_ingester_memory_streams
- loki_ingester_streams_created_total
- loki_memcache_request_duration_seconds_bucket
- loki_memcache_request_duration_seconds_count
- loki_panic_total
- loki_request_duration_seconds_bucket
- loki_request_duration_seconds_count
- loki_request_duration_seconds_sum
- loki_ruler_wal_appender_ready
- loki_ruler_wal_disk_size
- loki_ruler_wal_prometheus_remote_storage_highest_timestamp_in_seconds
- loki_ruler_wal_prometheus_remote_storage_queue_highest_sent_timestamp_seconds
- loki_ruler_wal_prometheus_remote_storage_samples_pending
- loki_ruler_wal_prometheus_remote_storage_samples_total
- loki_ruler_wal_samples_appended_total
- loki_ruler_wal_storage_created_series_total
- loki_s3_request_duration_seconds_bucket
- loki_s3_request_duration_seconds_count
- namespace_job_route:loki_request_duration_seconds:99quantile
- node_disk_read_bytes_total
- node_disk_written_bytes_total
- node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
- promtail_custom_bad_words_total
- up
Changelog
# 0.0.2 - August 2023
* Add regex filter for logs datasource
# 0.0.1 - September 2022
* Initial Release
Cost
By connecting your Self-hosted Grafana Loki instance to Grafana Cloud, you might incur charges. To view information on the number of active series that your Grafana Cloud account uses for metrics included in each Cloud tier, see Active series and DPM usage and Cloud tier pricing.