Migrate a Kube-Prometheus Helm stack to Grafana Cloud
With the following instructions, you set up the Kube-Prometheus stack in your Kubernetes Cluster, then configure it to send its core set of metrics to Grafana Cloud for long-term storage, querying, visualization, and alerting. You can also migrate the stack's core assets (recording rules and alerting rules) to Grafana Cloud. This takes advantage of Grafana Cloud's scalability, availability, and performance, and reduces load on your local Prometheus instances.
Note
Consider sending metrics to Grafana Cloud using Grafana Alloy, an open source OpenTelemetry collector with built-in Prometheus pipelines and support for metrics, logs, traces, and profiles. To get started with Alloy and Grafana Cloud, refer to configuration for Kubernetes Monitoring. Kubernetes Monitoring bundles a set of preconfigured Kubernetes manifests to deploy Alloy into your Clusters.
You can find additional deployment manifests for Grafana Agent in its GitHub repository. Refer to the Agent documentation for more information.
Migrate a Kube-Prometheus Helm stack and send metrics to Grafana Cloud with these steps:
- Install the Kube-Prometheus stack Helm chart into a Kubernetes Cluster using the Helm package manager.
- Configure your local Prometheus instance to send metrics to Grafana Cloud using `remote_write`.
Optionally, you can complete any of these steps:
- Import the Kube-Prometheus recording and alerting rules into your Cloud Prometheus instance.
- Limit which metrics you send from your local Cluster to reduce your active series usage.
- Turn off local stack components such as Grafana and Alertmanager.
Before you begin
Before you begin, have the following available:
- A Kubernetes Cluster with role-based access control (RBAC) enabled
- A Grafana Cloud Pro account or trial. To create an account, refer to Grafana Cloud. You can use a free tier account with these instructions if you meet the conditions detailed on the web page. Otherwise, a Cloud Pro account is necessary to import more dashboards, rules, and metrics from Kube-Prometheus.
- The `kubectl` command-line tool installed on your local machine and configured to connect to your Cluster. For more about installing `kubectl`, refer to the official documentation.
- The `helm` Kubernetes package manager installed on your local machine. To install Helm, refer to Installing Helm.
Install the Kube-Prometheus stack into your Cluster
Use Helm to install the Kube-Prometheus stack into your Kubernetes Cluster. The Kube-Prometheus stack installs the following observability components:
- Prometheus Operator
- Highly-available Prometheus (with 1 replica by default)
- Highly-available Alertmanager (with 1 replica by default)
- Prometheus `node_exporter`
- Prometheus Adapter for Kubernetes Metrics APIs
- kube-state-metrics
- Grafana
The Kube-Prometheus stack scrapes several endpoints in your Cluster by default, such as:
- `cadvisor`, `kubelet`, and `node-exporter` `/metrics` endpoints on Kubernetes Nodes
- Kubernetes API server metrics endpoint
- `kube-state-metrics` endpoints
To get a full list of configured scrape targets, refer to the Kube-Prometheus Helm chart's `values.yaml`. To find scrape targets, search for `serviceMonitor` objects. Configuration of the Kube-Prometheus stack's scrape targets is beyond the scope of these instructions. To learn more, refer to the `ServiceMonitor` spec in the Prometheus Operator GitHub repository.
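For context, a `ServiceMonitor` tells the Prometheus Operator which Kubernetes Services to scrape. The following is a minimal sketch only; the names, labels, and port shown are hypothetical and must match your own Services:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app            # hypothetical name
  labels:
    release: foo               # must match the Operator's ServiceMonitor selector
                               # (the Helm chart defaults to the release label)
spec:
  selector:
    matchLabels:
      app: example-app         # scrape Services carrying this label
  endpoints:
    - port: http-metrics      # named Service port exposing /metrics
      interval: 30s
```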
The Kube-Prometheus stack also provisions several monitoring mixins. A mixin is a collection of prebuilt Grafana dashboards, Prometheus recording rules, and Prometheus alerting rules. In particular, it includes:
- The kubernetes-mixin, which includes several useful dashboards and alerts for monitoring Kubernetes Clusters and their workloads
- The Node Mixin, which does the same for `node_exporter` metrics
- The Prometheus Mixin
Mixins are written in Jsonnet, a data templating language. They generate JSON dashboard files and rules YAML files. Configuration and modification of the underlying mixins goes beyond the scope of these instructions. Mixins are imported as-is into Grafana Cloud. To learn more, refer to:
- Generate config files
- Grizzly, a tool for working with Jsonnet-defined assets against the Grafana Cloud API.
To install the Kube-Prometheus stack into your Cluster:
1. Add the `prometheus-community` Helm repository and update Helm:

   ```bash
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm repo update
   ```
2. Install the `kube-prometheus-stack` chart using the following Helm command, replacing `foo` with your desired release name:

   ```bash
   helm install foo prometheus-community/kube-prometheus-stack
   ```
   Note
   This command installs the Kube-Prometheus stack into the `default` Namespace. To modify this, use a `values.yaml` file to override the defaults or pass in a `--set` flag. To learn more, refer to Values Files.
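   For example, a sketch that instead installs the release into a hypothetical `monitoring` Namespace (both flags are standard Helm options):

   ```bash
   # Install into a dedicated Namespace, creating it if it doesn't exist yet
   helm install foo prometheus-community/kube-prometheus-stack \
     --namespace monitoring --create-namespace
   ```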
   After Helm has finished installing the chart, you should see the following:

   ```
   NAME: foo
   LAST DEPLOYED: Fri Jun 25 15:30:30 2021
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   NOTES:
   kube-prometheus-stack has been installed. Check its status by running:
     kubectl --namespace default get pods -l "release=foo"

   Refer to https://github.com/prometheus-operator/kube-prometheus for instructions on how to create and configure Alertmanager and Prometheus instances using the Operator.
   ```
3. Use `kubectl` to inspect what is installed in the Cluster:

   ```bash
   kubectl get pod
   ```

   ```
   alertmanager-foo-kube-prometheus-stack-alertmanager-0   2/2   Running   0   7m3s
   foo-grafana-8547c9db6-vp8pf                             2/2   Running   0   7m6s
   foo-kube-prometheus-stack-operator-6888bf88f9-26c42    1/1   Running   0   7m6s
   foo-kube-state-metrics-76fbc7d6ff-vj872                1/1   Running   0   7m6s
   foo-prometheus-node-exporter-8qbrz                     1/1   Running   0   7m6s
   foo-prometheus-node-exporter-d4dk4                     1/1   Running   0   7m6s
   foo-prometheus-node-exporter-xplv4                     1/1   Running   0   7m6s
   prometheus-foo-kube-prometheus-stack-prometheus-0      2/2   Running   1   7m3s
   ```
   This example shows Alertmanager, Grafana, Prometheus Operator, kube-state-metrics, node-exporter, and Prometheus running in the Cluster. In addition to these Pods, the stack installs several Kubernetes custom resource definitions (CRDs). To see them, run `kubectl get crd`.
4. To access your Prometheus instance, use the `kubectl port-forward` command to forward a local port into the Cluster:

   ```bash
   kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
   ```
   Replace `foo-kube-prometheus-stack-prometheus` with the appropriate Service name.

5. Enter `http://localhost:9090` in your browser.

   You should see the Prometheus web interface. Click Status, then Targets to see a list of preconfigured scrape targets. You can use a similar procedure to access the Grafana and Alertmanager web interfaces.
Send metrics to Grafana Cloud
Configure Prometheus to send scraped metrics to Grafana Cloud.
Warning
When you send your Kubernetes Prometheus metrics to Grafana Cloud using `remote_write`, this can result in a significant increase in your active series usage and monthly bill. To estimate the number of series you will be sending, go to the Prometheus web UI in your Cluster. Click Status, then TSDB Status to see your Prometheus instance's statistics. Number of series describes the rough number of active series you'll be sending to Grafana Cloud. In a later step, you can configure Prometheus to drop many of these to control your active series usage. Since you are only billed at the 95th percentile of active series usage, temporary spikes should not result in any cost increase. To learn more, refer to 95th percentile billing.
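As a quick cross-check, you can also read the head series count from Prometheus' own HTTP API. A minimal sketch, assuming the port-forward from the previous section is still running:

```bash
# Query the built-in prometheus_tsdb_head_series metric for a rough
# count of currently active series
curl -s 'http://localhost:9090/api/v1/query?query=prometheus_tsdb_head_series'
```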
Configure Prometheus using the `remoteWrite` configuration section of the Helm chart's `values.yaml` file. Then update the release using `helm upgrade`.
To send metrics to Grafana Cloud:
1. Create a Kubernetes Secret to store your Grafana Cloud Prometheus username and password.

   To find your username, navigate to your stack in the Cloud portal, and click Details next to the Prometheus panel.

   Your password corresponds to a Cloud Access Policy token that you can generate by clicking Generate now in this same panel. To create a Cloud Access Policy, refer to Create a Grafana Cloud Access Policy.
   You can create a Secret by using a manifest file or create it directly using `kubectl`. In these instructions, you create it directly using `kubectl`. To learn more about Kubernetes Secrets, consult Secrets.

   Run the following command to create a Secret called `kubepromsecret`:

   ```bash
   kubectl create secret generic kubepromsecret \
     --from-literal=username=<your_grafana_cloud_prometheus_username> \
     --from-literal=password='<your_grafana_cloud_access_policy_token>' \
     -n default
   ```
   If you deployed your monitoring stack in a Namespace other than `default`, change the `-n default` flag to the appropriate Namespace in the above command. To learn more about this command, refer to Managing Secrets using kubectl.
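   If you prefer the manifest-file approach mentioned above, the following is an equivalent sketch (the placeholder values are yours to fill in); apply it with `kubectl apply -f secret.yaml`:

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: kubepromsecret
     namespace: default
   type: Opaque
   stringData:   # stringData accepts plain strings; Kubernetes base64-encodes them for you
     username: <your_grafana_cloud_prometheus_username>
     password: <your_grafana_cloud_access_policy_token>
   ```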
2. Create a Helm values file named `values.yaml` in an editor, and paste in the snippet below. The snippet defines Prometheus' `remote_write` configuration and applies the new configuration to the Kube-Prometheus release.

   ```yaml
   prometheus:
     prometheusSpec:
       remoteWrite:
         - url: "<Your Cloud Prometheus instance remote_write endpoint>"
           basicAuth:
             username:
               name: kubepromsecret
               key: username
             password:
               name: kubepromsecret
               key: password
       replicaExternalLabelName: "__replica__"
       externalLabels: {cluster: "test"}
   ```
   The Helm values file lets you set configuration variables that are passed in to Helm's chart templates. To see the default values file for the Kube-Prometheus stack, refer to `values.yaml`.

   The snippet:
   - Sets the `remote_write` URL and `basic_auth` username and password using the Secret created in the previous step
   - Configures two additional parameters: `replicaExternalLabelName` and `externalLabels`
   Replace `test` with an appropriate name for your Kubernetes Cluster. Prometheus adds the `cluster: test` and `__replica__: prometheus-foo-kube-prometheus-stack-prometheus-0` labels to any samples sent to Grafana Cloud.

   When you configure these parameters, you enable automatic metric deduplication in Grafana Cloud. This means you can create additional Prometheus instances in a high-availability configuration without storing duplicate samples in your Grafana Cloud Prometheus instance. To learn more, refer to Sending data from multiple high-availability Prometheus instances.
   If you are sending data from multiple Kubernetes Clusters, set the `cluster` external label to identify the source Cluster. This takes advantage of multi-Cluster support in many of the Kube-Prometheus dashboards, recording rules, and alerting rules.
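   For example, once deduplication is in place, you could run a second Prometheus replica by adding the following to the same values file. This is a sketch under the assumption that you want two replicas; `replicas` is a standard `prometheusSpec` field:

   ```yaml
   prometheus:
     prometheusSpec:
       replicas: 2   # each replica reports a distinct __replica__ label,
                     # and Grafana Cloud deduplicates the identical samples
   ```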
3. Save and close the file.
4. Apply the changes with `helm upgrade`:

   ```bash
   helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
   ```

   Replace `your_release_name` with the name of the release you used to install Kube-Prometheus. You can get a list of installed releases using `helm list`.
5. After the changes have been applied, use `port-forward` to navigate to the Prometheus UI:

   ```bash
   kubectl port-forward svc/foo-kube-prometheus-stack-prometheus 9090
   ```
6. Navigate to `http://localhost:9090` in your browser, and then click Status and Configuration. Verify that the `remote_write` block you appended above has propagated to your running Prometheus instance.

7. Log in to your managed Grafana instance to begin querying your Cluster data. You can use the Billing/Usage dashboard to inspect incoming data rates in the last five minutes to confirm the flow of data to Grafana Cloud.
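   You can also run a quick sanity check from the command line against the standard Prometheus query API that your Cloud instance exposes. This is a sketch; the endpoint placeholder is hypothetical and you must substitute your own instance details:

   ```bash
   # Expect a JSON response listing "up" series labeled with cluster="test"
   curl -s -u "<your_username>:<your_access_policy_token>" -G \
     --data-urlencode 'query=up{cluster="test"}' \
     '<your_cloud_prometheus_query_endpoint>/api/v1/query'
   ```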
For more about the difference between Active Series and DPM, refer to Active series and DPM for billing calculations.
Import dashboards
Now that you are sending metrics to Grafana Cloud and have configured the appropriate external labels, you can import your Kube-Prometheus dashboards into your hosted Grafana instance. Import the prebuilt Kube-Prometheus dashboards from your local Grafana instance into your managed Grafana instance.
Note
To enable multi-Cluster support for Kube-Prometheus dashboards, refer to Enable multi-Cluster support.
These steps use Grafana’s HTTP API to bulk export and import dashboards, which you can also do using Grafana’s Web UI. You use a lightweight bash script to perform the dump and load. Note that the script does not preserve folder hierarchy. It naively downloads all dashboards from a source Grafana instance and uploads them to a target Grafana instance.
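Under the hood, the export and import boil down to a few Grafana HTTP API calls. The following is a minimal sketch of the idea, assuming a hypothetical API key in `$GRAFANA_API_KEY` and the endpoints used later in these steps:

```bash
# List all dashboards in the source instance (returns JSON including each dashboard's uid)
curl -s -H "Authorization: Bearer $GRAFANA_API_KEY" \
  'http://localhost:8080/api/search?type=dash-db'

# Fetch a single dashboard by uid
curl -s -H "Authorization: Bearer $GRAFANA_API_KEY" \
  'http://localhost:8080/api/dashboards/uid/<dashboard_uid>'

# Upload a dashboard to the destination instance; the payload must wrap the
# dashboard JSON as {"dashboard": {...}, "overwrite": true}
curl -s -X POST -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d @dashboard.json 'https://your_stack_name.grafana.net/api/dashboards/db'
```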
To import dashboards:
1. Navigate to Exporting and importing dashboards to hosted Grafana using the HTTP API, and save the bash script into a file called `dash_migrate.sh`.

2. Create a temporary directory called `temp_dir`:

   ```bash
   mkdir temp_dir
   ```
3. Make the script executable:

   ```bash
   chmod +x dash_migrate.sh
   ```
4. Forward a local port to the Grafana Service running in your Cluster:

   ```bash
   kubectl port-forward svc/foo-grafana 8080:80
   ```

   Replace `foo-grafana` with the name of the Grafana Service. You can find this using `kubectl get svc`.
5. With the port forwarded, log in to your Grafana instance by visiting `http://localhost:8080` and entering `admin` as the username and the value configured for the `adminPassword` parameter as the password.

   If you did not modify this value, you can find the default in the chart's `values.yaml` file.

6. To create an API key, click the cog in the left-hand navigation menu, and then click API keys.
7. Note the API key and local Grafana URL, and complete the variables at the top of the bash script with the appropriate values:

   ```bash
   SOURCE_GRAFANA_ENDPOINT='http://localhost:8080'
   SOURCE_GRAFANA_API_KEY='your_api_key_here'
   . . .
   ```
8. Repeat this process for your hosted Grafana instance.

   To access the instance, navigate to your Cloud portal. Click Details next to your stack, then click Log In in the Grafana card. Ensure the API key has the Admin role. After noting the endpoint URL and API key, modify the remaining values in the bash script:

   ```bash
   . . .
   DEST_GRAFANA_API_KEY='your_hosted_grafana_api_key_here'
   DEST_GRAFANA_ENDPOINT='https://your_stack_name.grafana.net'
   TEMP_DIR=temp_dir
   ```
9. Save and close the file.
10. Run the script:

    ```bash
    ./dash_migrate.sh -ei
    ```

    The `-e` flag exports all dashboards from the source Grafana instance and saves them in `temp_dir`, and the `-i` flag imports the dashboards in `temp_dir` into the destination Grafana instance.

11. Navigate to your managed Grafana instance, and click Dashboards in the left-hand navigation, then Manage. From here you can access the default Kube-Prometheus dashboards that you just imported.
The following open-source tools can help you manage dashboards with Grafana using the HTTP API:
- Grizzly: Also allows you to work directly with the Jsonnet source used to generate the Kube-Prometheus stack configuration, as well as the generated JSON dashboard files.
- Grafana Terraform provider
Note that these instructions use the Helm version of the Kube-Prometheus stack, which templates manifest files generated from the underlying Kube-Prometheus project.
Disable local components (optional)
After importing the Kube-Prometheus dashboards to Grafana Cloud, you might want to shut down some of the stack's components locally. In this section, you turn off the following Kube-Prometheus components:
- Alertmanager, given that Grafana Cloud provisions a hosted Alertmanager instance integrated into the Grafana UI
- Grafana
To disable Alertmanager and Grafana:
1. Add the following to your `values.yaml` Helm configuration file:

   ```yaml
   grafana:
     enabled: false

   alertmanager:
     enabled: false
   ```
2. Apply the changes with `helm upgrade`:

   ```bash
   helm upgrade -f values.yaml your_release_name prometheus-community/kube-prometheus-stack
   ```
Refer to Disable local Prometheus rules evaluation to learn how to disable recording and alerting rule evaluation.
Next steps
Your Cluster-local Prometheus instance continues to evaluate alerting rules and recording rules. You can optionally migrate these by following the steps in Import recording and alerting rules.
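One common way to upload rules to a Cloud Prometheus instance, sketched below, is Grafana's `mimirtool` CLI; the file name, endpoint, and tenant ID here are hypothetical placeholders, and the linked instructions remain the authoritative procedure:

```bash
# Load a local Prometheus rules file into your Cloud Prometheus instance
mimirtool rules load my-rules.yaml \
  --address=<your_cloud_prometheus_endpoint> \
  --id=<your_tenant_id> \
  --key=<your_access_policy_token>
```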
By default, Kube-Prometheus scrapes almost every available endpoint in your Cluster, which sends tens of thousands (possibly hundreds of thousands) of active series to Grafana Cloud.
To configure Prometheus to send only the metrics referenced in the dashboards you just uploaded, refer to Reduce your Prometheus active series usage. You lose long-term retention for the dropped series; however, they remain available locally for Prometheus' default configured retention period.
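As a flavor of what that page configures, metric filtering happens through write relabel configs on the `remote_write` section. A minimal sketch added to the same `values.yaml`, with a hypothetical allowlist regex you would replace with the metrics your dashboards actually use:

```yaml
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: "<Your Cloud Prometheus instance remote_write endpoint>"
        basicAuth:
          username:
            name: kubepromsecret
            key: username
          password:
            name: kubepromsecret
            key: password
        writeRelabelConfigs:
          # Keep only the metrics named in the regex; drop everything else
          - sourceLabels: [__name__]
            regex: "up|kube_pod_container_status_restarts_total|node_cpu_seconds_total"
            action: keep
```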