Note
Fleet Management is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available. For bug reports or questions, fill out our feedback form.
Onboard collectors deployed in Kubernetes to Fleet Management
Learn how to register your collectors in Kubernetes with Grafana Fleet Management.
Note
Fleet Management support for Kubernetes deployments is under active development.
Grafana Alloy Helm chart
Onboard and configure your collector using the Alloy Helm chart.
1. Create a Kubernetes secret to store your Fleet Management access token.
kubectl create secret --namespace NAMESPACE generic gc-token --from-literal=token=VALUE
2. Create a ConfigMap from a file and add the following remotecfg block to the configuration.

remotecfg {
    url = "<URL>"
    id  = sys.env("GCLOUD_FM_COLLECTOR_ID")
    attributes = { "platform" = "kubernetes" }
    basic_auth {
        username = "<USERNAME>"
        password = sys.env("GCLOUD_RW_API_KEY")
    }
}
Replace <URL> with the base URL of the Fleet Management service and <USERNAME> with your instance ID. You can find both on the API tab in the Fleet Management interface.

3. Update your Helm chart as follows.
gc:
  secret:
    name: gc-token
alloy:
  configMap:
    create: false
    name: alloy-config
    key: config.alloy
  stabilityLevel: "public-preview"
  extraEnv:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: GCLOUD_FM_COLLECTOR_ID
      value: "clusterName-$(NAMESPACE)-$(POD_NAME)"
    - name: GCLOUD_RW_API_KEY
      valueFrom:
        secretKeyRef:
          name: gc-token
          key: token
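These values can then be applied with Helm. A sketch, assuming the values are saved as values.yaml and the release is named alloy (both names are assumptions):

```shell
# Add the Grafana Helm repository (once), then deploy Alloy with the
# values file containing the remotecfg settings.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install alloy grafana/alloy \
  --namespace NAMESPACE \
  --values values.yaml
```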
In addition to specifying the Alloy configuration file, this chart section also sets:

- The stabilityLevel to public-preview. By default, Alloy requires that its components be generally available. Because the remotecfg block is in public preview, stabilityLevel must be set to public-preview.
- The environment variables GCLOUD_FM_COLLECTOR_ID and GCLOUD_RW_API_KEY. These variables must be set so that the self-monitoring configuration pipelines are properly assigned.
  - GCLOUD_FM_COLLECTOR_ID is built from the cluster name, namespace, and pod name. Each time the pod gets a new name, the value changes and a new collector appears in the Inventory tab. You can delete old, unused collectors from your inventory.
  - GCLOUD_RW_API_KEY is read from the secret you created in step 1.
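To illustrate how the collector ID is assembled, the $(NAMESPACE) and $(POD_NAME) references in the chart expand the same way shell variables would. The namespace and pod name below are placeholders:

```shell
# Illustration only: mimic how the Helm chart assembles
# GCLOUD_FM_COLLECTOR_ID from the downward-API environment variables.
NAMESPACE="monitoring"            # placeholder namespace
POD_NAME="alloy-7f9c4d5b6-x2k8p"  # placeholder pod name
GCLOUD_FM_COLLECTOR_ID="clusterName-${NAMESPACE}-${POD_NAME}"
echo "$GCLOUD_FM_COLLECTOR_ID"
# prints clusterName-monitoring-alloy-7f9c4d5b6-x2k8p
```

Because the pod name is part of the ID, every pod restart that produces a new pod name also produces a new collector ID.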
Grafana Kubernetes Monitoring Helm chart
From v2.0.0 of the Kubernetes Monitoring Helm chart, support for Fleet Management is built in.
You can enable Fleet Management by setting remoteConfig.enabled to true:
cluster:
name: remote-config-example-cluster
alloy-metrics:
enabled: true
alloy:
stabilityLevel: public-preview
remoteConfig:
enabled: true
url: "<URL>"
auth:
type: "basic"
username: "<USERNAME>"
password: "<PASSWORD>"
Replace <URL> with the base URL of the Fleet Management service and <USERNAME> with your instance ID. You can find both on the API tab in the Fleet Management interface.
The password value is the token assigned to your Fleet Management access policy. Refer to Create an access policy for more details.
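With those values filled in, a sketch of deploying the Kubernetes Monitoring Helm chart, assuming the values are saved as values.yaml and the release is named grafana-k8s-monitoring (both names are assumptions):

```shell
# Deploy the Kubernetes Monitoring Helm chart (v2.0.0 or later)
# with Fleet Management enabled via remoteConfig.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana-k8s-monitoring grafana/k8s-monitoring \
  --namespace NAMESPACE \
  --values values.yaml
```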
Fleet Management is not recommended for use with earlier versions of the Kubernetes Monitoring Helm chart.
Self-monitoring configuration pipelines
When you visit the Fleet Management interface in Grafana Cloud after registering a collector, a set of self-monitoring configuration pipelines is automatically created and assigned to your registered collectors.
The internal telemetry collected by the self-monitoring pipelines powers the health dashboards and logs in the collector’s details view in the Fleet Management interface.
These pipelines, whose names begin with self_monitoring_*, rely on environment variables to authenticate requests and to set the collector_id labels that match telemetry to collectors.
To see a collector's health details in Fleet Management, make sure the environment variables GCLOUD_RW_API_KEY and GCLOUD_FM_COLLECTOR_ID are set wherever the collector is running.
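One way to confirm the variables are present in a running collector pod; the pod name here is a placeholder:

```shell
# Check that the Fleet Management environment variables are set
# inside a running Alloy pod (replace the pod name with your own).
kubectl exec --namespace NAMESPACE alloy-7f9c4d5b6-x2k8p -- \
  printenv GCLOUD_FM_COLLECTOR_ID GCLOUD_RW_API_KEY
```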
Development of the self-monitoring configuration pipelines is ongoing. Currently, the logs pipeline does not work out of the box with Kubernetes deployments.
Next steps
- Upgrade your collectors to make sure the new configuration takes effect.
- Add attributes to your collectors for greater control over which configurations are applied and when.