Deploy GEM on Kubernetes
This guide lays out a step-by-step approach to deploying a Grafana Enterprise Metrics (GEM) cluster in an existing Kubernetes namespace. It assumes that you have a working Kubernetes cluster and can deploy to it using the kubectl tool. At the end of this guide you should have a functional GEM cluster running on Kubernetes with Minio as its storage backend. If you do not currently have access to a Kubernetes cluster, consider following the Linux deployment guide instead. See GEM hardware requirements for guidance on the type of hardware to use when running a GEM cluster.
Deploy Minio
In this guide, Minio is used as the object storage backend. Minio is an open source, S3-compatible object storage service that is freely available and easy to run on Kubernetes. If you want to follow this guide but use a different object storage backend, refer to Grafana Enterprise Metrics configuration.
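As a rough illustration of what swapping backends involves, a hypothetical sketch of the blocks storage section pointing at an Amazon S3 bucket instead of Minio (the endpoint, bucket name, and credentials are placeholders; the field names follow the same schema as the Minio-based configuration later in this guide):

```yaml
# Hypothetical alternative to the Minio-backed storage sections below.
# Replace the placeholder endpoint, bucket, and credentials with your own.
blocks_storage:
  backend: s3
  s3:
    endpoint: s3.us-east-2.amazonaws.com
    bucket_name: your-metrics-tsdb-bucket
    access_key_id: YOUR_ACCESS_KEY_ID
    secret_access_key: YOUR_SECRET_ACCESS_KEY
```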
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. Will be used in deployment below.
name: minio-pv-claim
labels:
app: minio-storage-claim
spec:
# Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio
spec:
selector:
matchLabels:
app: minio
strategy:
type: Recreate
template:
metadata:
labels:
# Label is used as selector in the service.
app: minio
spec:
# Refer to the PVC created earlier
volumes:
- name: storage
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: minio-pv-claim
initContainers:
- name: create-buckets
image: busybox:1.28
command:
[
"sh",
"-c",
"mkdir -p /storage/grafana-metrics-tsdb && mkdir -p /storage/grafana-metrics-admin",
]
volumeMounts:
- name: storage # must match the volume name, above
mountPath: "/storage"
containers:
- name: minio
# Pulls the default Minio image from Docker Hub
image: minio/minio:latest
args:
- server
- /storage
env:
# Minio access key and secret key
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
volumeMounts:
- name: storage # must match the volume name, above
mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
name: minio
spec:
type: ClusterIP
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
Copy the above YAML into a file called minio.yaml and run the following command:
kubectl create -f minio.yaml
You can confirm that you have set up Minio correctly by port-forwarding it and navigating to it in your browser:
kubectl port-forward service/minio 9000:9000
Then navigate to the Minio admin console in your browser. The login credentials are the same as those set above: minio and minio123.
Create a license Secret
Next, take the license.jwt file associated with your cluster and run the following command to load it as a Kubernetes Secret:
kubectl create secret generic enterprise-metrics-license --from-file license.jwt
Verify that you have successfully created the Secret by running the following command:
kubectl get secret enterprise-metrics-license -o yaml
The above command should print a Kubernetes Secret object with a license.jwt field whose value is a long base64-encoded string.
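Kubernetes stores Secret data base64-encoded, so the long value is simply the encoded contents of your license.jwt file. A minimal local sketch of that round trip (the file contents here are a stand-in, not a real license):

```shell
# Kubernetes base64-encodes Secret values; decoding returns the original file.
# (Stand-in contents; a real license.jwt is a long JWT string.)
printf 'example-license-contents' > /tmp/license.jwt
encoded=$(base64 < /tmp/license.jwt)
echo "$encoded" | base64 -d
# To compare the real Secret against your file, you could decode it with:
#   kubectl get secret enterprise-metrics-license \
#     -o jsonpath='{.data.license\.jwt}' | base64 -d
```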
Create a GEM config ConfigMap
Next, create a configuration file for your cluster and deploy it as a Kubernetes ConfigMap. Copy the following text and save it as config.yaml:
auth:
type: enterprise
target: all
license:
path: /etc/enterprise-metrics/license/license.jwt
admin_client:
storage:
type: s3
s3:
endpoint: minio:9000
bucket_name: grafana-metrics-admin
access_key_id: minio
secret_access_key: minio123
insecure: true
distributor:
pool:
health_check_ingesters: true
memberlist:
abort_if_cluster_join_fails: false
bind_port: 7946
join_members:
- enterprise-metrics-discovery
ingester:
ring:
num_tokens: 512
kvstore:
store: memberlist
replication_factor: 3
compactor:
data_dir: "/data"
sharding_enabled: true
sharding_ring:
kvstore:
store: memberlist
blocks_storage:
tsdb:
dir: /data/cortex/tsdb
bucket_store:
sync_dir: /data/cortex/tsdb-sync
backend: s3
s3:
endpoint: minio:9000
bucket_name: grafana-metrics-tsdb
access_key_id: minio
secret_access_key: minio123
insecure: true
Next, run the following command to create the ConfigMap:
kubectl create configmap enterprise-metrics-config --from-file=config.yaml
Create the services for Grafana Enterprise Metrics
Two Kubernetes Services are required to run Grafana Enterprise Metrics as a StatefulSet. The first is a headless discovery Service that exposes the gRPC port used for requests between replicas, along with the gossip port that allows the replicas to join together and form a hash ring to coordinate work. The second Service exposes the HTTP API of the cluster.
Copy the following content into services.yaml:
---
apiVersion: v1
kind: Service
metadata:
labels:
name: enterprise-metrics-discovery
name: enterprise-metrics-discovery
spec:
clusterIP: None
ports:
- name: enterprise-metrics-grpc
port: 9095
targetPort: 9095
- name: enterprise-metrics-gossip
port: 7946
targetPort: 7946
publishNotReadyAddresses: true
selector:
name: enterprise-metrics
---
apiVersion: v1
kind: Service
metadata:
labels:
name: enterprise-metrics
name: enterprise-metrics
spec:
ports:
- name: enterprise-metrics-http-metrics
port: 80
targetPort: 80
selector:
name: enterprise-metrics
Create the services:
kubectl apply -f services.yaml
Deploy the Grafana Enterprise Metrics StatefulSet
Copy the following content into the statefulset.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
name: enterprise-metrics
name: enterprise-metrics
spec:
replicas: 3
selector:
matchLabels:
name: enterprise-metrics
serviceName: enterprise-metrics
template:
metadata:
labels:
name: enterprise-metrics
spec:
containers:
- args:
- -config.file=/etc/enterprise-metrics/config.yaml
image: grafana/enterprise-metrics:v2.7.1
imagePullPolicy: IfNotPresent
name: enterprise-metrics
ports:
- containerPort: 80
name: http-metrics
- containerPort: 9095
name: grpc
- containerPort: 7946
name: gossip
readinessProbe:
httpGet:
path: /ready
port: 80
initialDelaySeconds: 15
timeoutSeconds: 1
resources:
requests:
cpu: "2"
memory: 4Gi
volumeMounts:
- mountPath: /data
name: data
- mountPath: /etc/enterprise-metrics
name: enterprise-metrics-config
- mountPath: /etc/enterprise-metrics/license
name: enterprise-metrics-license
securityContext:
fsGroup: 10001
runAsUser: 10001
runAsGroup: 10001
runAsNonRoot: true
terminationGracePeriodSeconds: 300
volumes:
- name: enterprise-metrics-config
configMap:
name: enterprise-metrics-config
- name: enterprise-metrics-license
secret:
secretName: enterprise-metrics-license
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
Create the StatefulSet:
kubectl apply -f statefulset.yaml
Start the GEM compactor as a Kubernetes StatefulSet
The single binary of GEM does not enable the compactor component by default. Therefore, you need to run the compactor separately as a Kubernetes StatefulSet. For more information about what the compactor does, refer to the Compactor section of the Grafana Mimir documentation.
Copy the following content into the compactor.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
name: enterprise-metrics-compactor
name: enterprise-metrics-compactor
spec:
replicas: 1
selector:
matchLabels:
name: enterprise-metrics-compactor
serviceName: enterprise-metrics-compactor
template:
metadata:
labels:
name: enterprise-metrics-compactor
spec:
containers:
- args:
- -target=compactor
- -config.file=/etc/enterprise-metrics/config.yaml
image: grafana/enterprise-metrics:v2.7.1
imagePullPolicy: IfNotPresent
name: enterprise-metrics-compactor
ports:
- containerPort: 80
name: http-metrics
- containerPort: 9095
name: grpc
- containerPort: 7946
name: gossip
readinessProbe:
httpGet:
path: /ready
port: 80
initialDelaySeconds: 15
timeoutSeconds: 1
resources:
limits:
cpu: 1200m
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
volumeMounts:
- mountPath: /data
name: data
- mountPath: /etc/enterprise-metrics
name: enterprise-metrics-config
- mountPath: /etc/enterprise-metrics/license
name: enterprise-metrics-license
securityContext:
fsGroup: 10001
runAsUser: 10001
runAsGroup: 10001
runAsNonRoot: true
terminationGracePeriodSeconds: 300
volumes:
- name: enterprise-metrics-config
configMap:
name: enterprise-metrics-config
- name: enterprise-metrics-license
secret:
secretName: enterprise-metrics-license
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
Create the compactor StatefulSet:
kubectl apply -f compactor.yaml
Generate an admin token
To generate a token, use a Kubernetes Job.
Copy the following content into the tokengen-job.yaml file:
apiVersion: batch/v1
kind: Job
metadata:
name: enterprise-metrics-tokengen
spec:
template:
spec:
containers:
- name: enterprise-metrics-tokengen
image: grafana/enterprise-metrics:v2.7.1
args:
- --config.file=/etc/enterprise-metrics/config.yaml
- --target=tokengen
volumeMounts:
- mountPath: /etc/enterprise-metrics
name: enterprise-metrics-config
- mountPath: /etc/enterprise-metrics/license
name: enterprise-metrics-license
securityContext:
runAsUser: 10001
runAsGroup: 10001
runAsNonRoot: true
securityContext:
fsGroup: 10001
volumes:
- name: enterprise-metrics-config
configMap:
name: enterprise-metrics-config
- name: enterprise-metrics-license
secret:
secretName: enterprise-metrics-license
restartPolicy: Never
Create the tokengen Kubernetes Job:
kubectl apply -f tokengen-job.yaml
Check the status of the Pod. After it displays Completed, check the logs for the new admin token:
kubectl logs job.batch/enterprise-metrics-tokengen
The output of the above command should contain a token string in the logs:
Token created: Ym9vdHN0cmFwLXRva2VuOmA3PzkxOF0zfVx0MTlzMVteTTcjczNAPQ==
Be sure to note down this token since it will be required later when setting up your cluster.
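Decoding the example token above shows that it is the base64 encoding of a name and secret pair separated by a colon; you can inspect your own token the same way:

```shell
# Decode the example token from this guide; the result has the
# form "<token-name>:<secret>".
echo 'Ym9vdHN0cmFwLXRva2VuOmA3PzkxOF0zfVx0MTlzMVteTTcjczNAPQ==' | base64 -d
# The name portion of this example token is "bootstrap-token".
```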
Verify your cluster is working
To verify that your cluster is working, use the token that you generated in the previous step.
First, port-forward the enterprise-metrics
Service to port 9000 on your machine:
kubectl port-forward service/enterprise-metrics 9000:80
Then run the following command:
curl -u :<your_token> localhost:9000/ready
After running the above command you should see the following output:
ready
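The -u :&lt;your_token&gt; flag tells curl to use HTTP Basic authentication with an empty username, so the request carries an Authorization header whose value is the base64 encoding of ":&lt;token&gt;". A local sketch using the example token from the previous step:

```shell
# curl -u :<token> sends the header: Authorization: Basic base64(":<token>")
token='Ym9vdHN0cmFwLXRva2VuOmA3PzkxOF0zfVx0MTlzMVteTTcjczNAPQ=='  # example token
printf 'Authorization: Basic %s\n' "$(printf ':%s' "$token" | base64 | tr -d '\n')"
```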
Next steps
After you have a GEM deployment working locally, refer to Set up the GEM plugin for Grafana to integrate your metrics cluster into Grafana. Doing so gives you a UI via which you can interact with the Admin API.