Find and use dashboards in Grafana Cloud
Grafana Cloud provisions and maintains a hosted Grafana instance from which you can visualize your metrics and logs data using dashboards.
To learn how to import a Grafana dashboard, please see Export and Import from the Grafana docs. Monitoring a Linux host using Prometheus and node_exporter also contains a Dashboard import example.
You can find the complete documentation for Grafana Dashboards in the Dashboards section of the Grafana OSS docs.
You can find hosted Grafana API documentation in the HTTP API Reference section of the Grafana OSS docs. Note that this API differs from the Grafana Cloud API and the API for hosted Prometheus metrics. To learn more about the Prometheus metrics API, please see HTTP API from the Cortex docs. Note that some functionality may differ slightly from documented Cortex features.
You can also manage dashboards programmatically using the Grafana Terraform Provider, and other open-source tools that leverage Grafana’s HTTP API.
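For example, any HTTP client can retrieve a dashboard's JSON by its UID. The following sketch uses curl; the instance URL, API key, and dashboard UID are placeholders you would replace with your own values:
$ curl -sS -H "Authorization: Bearer <your_api_key>" \
    "https://<your-grafana-instance>/api/dashboards/uid/<dashboard_uid>"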
Exporting and importing dashboards to managed Grafana using the HTTP API
This bash script demonstrates how to export a set of JSON dashboards from one Grafana instance and import them into another using the HTTP API.
To use the script you will need the following installed and available:
- bash
- curl
- jq
- An API key for the hosted Grafana instance or other source instance from which you’ll download dashboards. To learn how to create an API key, please see Create API Token from the Grafana docs. Note that you can only create this key from your Grafana instance, not the Grafana Cloud Web Portal.
- An API key for the hosted Grafana instance or other target instance to which you’ll upload dashboards. You can verify that each key works with a quick request, as shown after this list.
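For example, a quick way to confirm that a key works against its instance is to query the search API with it (a sketch using placeholders like those in the script below); a valid key returns a JSON array rather than an authentication error:
$ curl -sS -H "Authorization: Bearer <api_key>" "<grafana_endpoint>/api/search?limit=1"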
Be sure to set the appropriate variables and the TEMP_DIR directory name before running the script. TEMP_DIR will contain the downloaded dashboard files.
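For a Grafana Cloud stack, the endpoint variables typically take the form https://<stack-name>.grafana.net, so the top of the script might look like the following (the stack names and directory name here are placeholders):
SOURCE_GRAFANA_ENDPOINT='https://<source-stack-name>.grafana.net'
DEST_GRAFANA_ENDPOINT='https://<destination-stack-name>.grafana.net'
TEMP_DIR='temp_dash_dir'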
#!/usr/bin/env bash
set -eo pipefail

SOURCE_GRAFANA_ENDPOINT='source_grafana_endpoint_here'
SOURCE_GRAFANA_API_KEY='source_grafana_api_key_here'
DEST_GRAFANA_API_KEY='dest_grafana_api_key_here'
DEST_GRAFANA_ENDPOINT='dest_grafana_endpoint_here'
TEMP_DIR='TEMP_DIR_NAME_HERE'

# Verify that the required tools are installed.
if ! command -v curl &> /dev/null; then
    echo "Please install curl before running this script" && exit 1
fi

if ! command -v jq &> /dev/null; then
    echo "Please install jq before running this script" && exit 1
fi

function usage() {
    echo "Usage: ${0} -e -i" >&2
}

if [[ $# -eq 0 ]]; then
    usage
    exit 1
fi

# -e exports dashboards from the source instance, -i imports them into the destination.
while getopts "ei" opt; do
    case "${opt}" in
        e)
            export_dash="true"
            ;;
        i)
            import_dash="true"
            ;;
        *)
            usage
            exit 1
            ;;
    esac
done

if [[ "${export_dash}" == "true" ]]; then
    echo "Creating ${TEMP_DIR}"
    mkdir -p "${TEMP_DIR}" && pushd "${TEMP_DIR}" > /dev/null

    # Fetch the list of dashboards (type dash-db) from the source instance.
    echo "Downloading dash-db"
    curl -sSH "Authorization: Bearer ${SOURCE_GRAFANA_API_KEY}" \
        -X GET \
        "${SOURCE_GRAFANA_ENDPOINT}/api/search?type=dash-db" > dashboards.json
    echo "Downloaded dash-db"

    # Download each dashboard's full JSON definition by UID.
    for uid in $(jq -r ".[].uid" dashboards.json); do
        echo "Downloading ${uid}"
        curl -sSH "Authorization: Bearer ${SOURCE_GRAFANA_API_KEY}" \
            -X GET \
            "${SOURCE_GRAFANA_ENDPOINT}/api/dashboards/uid/${uid}" > "dashboard.${uid}.json"
        echo "Downloaded $(jq .dashboard.title "dashboard.${uid}.json")"
    done
    popd > /dev/null
fi

if [[ "${import_dash}" == "true" ]]; then
    if [ ! -d "${TEMP_DIR}" ]; then
        echo "${TEMP_DIR} doesn't exist" && exit 1
    fi
    pushd "${TEMP_DIR}" > /dev/null
    if ! compgen -G "dashboard.*.json" > /dev/null; then
        echo "No dashboard.*.json files in ${TEMP_DIR}" && exit 1
    fi

    # Strip the instance-specific numeric id from each dashboard and upload it
    # to the destination instance.
    for dashboard in dashboard.*.json; do
        echo "Uploading $(jq .dashboard.title "${dashboard}") to Grafana"
        dashboard_json='{"dashboard": '"$(jq '.dashboard | del(.id)' "${dashboard}")}"
        curl -sSH "Content-Type: application/json" \
            -H "Authorization: Bearer $DEST_GRAFANA_API_KEY" \
            -d "${dashboard_json}" "${DEST_GRAFANA_ENDPOINT}/api/dashboards/db"
        echo
    done
fi
Note
This script does not save or upload folder metadata and does not perform thorough error checking. It is meant as a demonstrative example for working against the Grafana HTTP API. It will also overwrite existing dashboard files with the same filename in TEMP_DIR.
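If you do need to preserve folder placement, one possible approach is sketched below. It is not part of the script above and assumes that your Grafana versions include meta.folderUid in the exported dashboard JSON and accept folderUid in the save payload, and that a folder with the same UID already exists on the destination instance:
dashboard_json="$(jq '{dashboard: (.dashboard | del(.id)), folderUid: .meta.folderUid, overwrite: true}' "${dashboard}")"
curl -sSH "Content-Type: application/json" \
    -H "Authorization: Bearer $DEST_GRAFANA_API_KEY" \
    -d "${dashboard_json}" "${DEST_GRAFANA_ENDPOINT}/api/dashboards/db"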
Make the script executable before running it:
$ chmod +x dash_migrate.sh
To download dashboards into TEMP_DIR from the source Grafana instance, run the following command:
$ ./dash_migrate.sh -e
Creating temp_dash_dir
Downloading dash-db
Downloaded dash-db
Downloading vkQ0UHxik
Downloaded "CoreDNS"
Downloading E7EZ0IeGk
Downloaded "etcd"
Downloading 09ec8aa1e996d6ffcd6817bbaff4db1b
Downloaded "Kubernetes / API server"
Downloading efa86fd1d0c121a26444b636a3f509a8
Downloaded "Kubernetes / Compute Resources / Cluster"
Downloading 85a562078cdf77779eaa1add43ccec1e
The -e (export) flag downloads dashboards from the source Grafana instance into TEMP_DIR.
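To quickly check what was exported, you can list the downloaded dashboard titles with jq, using the example temp_dash_dir directory name shown above:
$ jq -r '.dashboard.title' temp_dash_dir/dashboard.*.json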
To upload dashboards from TEMP_DIR into the destination Grafana instance, run the following command:
$ ./dash_migrate.sh -i
Uploading "Kubernetes / API server" to Grafana
{"id":114,"slug":"kubernetes-api-server","status":"success","uid":"09ec8aa1e996d6ffcd6817bbaff4db1b","url":"/d/09ec8aa1e996d6ffcd6817bbaff4db1b/kubernetes-api-server","version":2}
Uploading "Kubernetes / Compute Resources / Node (Pods)" to Grafana
{"id":115,"slug":"kubernetes-compute-resources-node-pods","status":"success","uid":"200ac8fdbfbb74b39aff88118e4d1c2c","url":"/d/200ac8fdbfbb74b39aff88118e4d1c2c/kubernetes-compute-resources-node-pods","version":2}
Uploading "Kubernetes / Scheduler" to Grafana
{"id":116,"slug":"kubernetes-scheduler","status":"success","uid":"2e6b6a3b4bddf1427b3a55aa1311c656","url":"/d/2e6b6a3b4bddf1427b3a55aa1311c656/kubernetes-scheduler","version":2}
Uploading "Kubernetes / Kubelet" to Grafana
{"id":117,"slug":"kubernetes-kubelet","status":"success","uid":"3138fa155d5915769fbded898ac09fd9","url":"/d/3138fa155d5915769fbded898ac09fd9/kubernetes-kubelet","version":2}
The -i (import) flag uploads dashboards found in TEMP_DIR to the destination Grafana instance. Note that by default the script only uploads files that match the dashboard.*.json glob pattern. You can modify the pattern to *.json to match all JSON files.
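For example, the import loop could start like the following sketch. With the broader *.json pattern, the dashboards.json search index written during export would also match, so the hypothetical guard below skips any file that does not contain a dashboard key:
for dashboard in *.json; do
    # Skip files that are not dashboard exports (for example, dashboards.json).
    jq -e '.dashboard' "${dashboard}" > /dev/null 2>&1 || continue
    # ...upload the dashboard as in the import loop above...
done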