Set up a test application for a Tempo cluster
Once you’ve set up a Grafana Tempo cluster, you need to write some traces to it and then query those traces from within Grafana. This procedure uses Tempo in microservices mode. For example, if you set up Tempo using the Kubernetes with Tanka procedure, you can use this procedure to test your setup.
Before you begin
You’ll need:
- Grafana 10.0.0 or higher
- Microservice deployments require the URL of the Tempo query frontend, for example:
http://tempo-cluster-query-frontend.tempo.svc.cluster.local:3100/
- OpenTelemetry telemetrygen for generating tracing data
Refer to Deploy Grafana on Kubernetes if you are using Kubernetes. Otherwise, refer to Install Grafana for more information.
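Before continuing, it can help to confirm that the Tempo cluster itself is healthy. A minimal check, assuming Tempo was installed into a namespace named tempo (adjust the namespace to match your installation):

# All Tempo components (distributor, ingester, querier, query-frontend, and so on) should be Running.
kubectl get pods --namespace tempo

# The query frontend and distributor services referenced later in this procedure should be listed here.
kubectl get services --namespace tempo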
Configure Grafana Agent Flow to remote-write to Tempo
Caution
Grafana Alloy is the new name for our distribution of the OTel collector. Grafana Agent has been deprecated and is in Long-Term Support (LTS) through October 31, 2025. Grafana Agent will reach End-of-Life (EOL) on November 1, 2025. Read more about why we recommend migrating to Grafana Alloy.
This section uses a Grafana Agent Helm chart deployment to send traces to Tempo. To do this, you need to create a configuration that can be used by Grafana Agent to receive and export traces in OTLP protobuf format.
Create a new values.yaml file which we’ll use as part of the Agent install.

Edit the values.yaml file and add the following configuration to it:

agent:
  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
  configMap:
    create: true
    content: |-
      // Creates a receiver for OTLP gRPC.
      // You can easily add receivers for other protocols by using the correct component
      // from the reference list at: https://grafana.com/docs/agent/latest/flow/reference/components/
      otelcol.receiver.otlp "otlp_receiver" {
        // Listen on all available bindable addresses on port 4317 (which is the
        // default OTLP gRPC port) for the OTLP protocol.
        grpc {
          endpoint = "0.0.0.0:4317"
        }

        // Output straight to the OTLP gRPC exporter. We would usually do some processing
        // first, most likely batch processing, but for this example we pass it straight
        // through.
        output {
          traces = [
            otelcol.exporter.otlp.tempo.input,
          ]
        }
      }

      // Define an OTLP gRPC exporter to send all received traces to Tempo.
      // The unique label 'tempo' is added to uniquely identify this exporter.
      otelcol.exporter.otlp "tempo" {
        // Define the client for exporting.
        client {
          // Send to the locally running Tempo instance, on port 4317 (OTLP gRPC).
          endpoint = "http://tempo-cluster-distributor.tempo.svc.cluster.local:4317"

          // Disable TLS for OTLP remote write.
          tls {
            // The connection is insecure.
            insecure = true
            // Do not verify TLS certificates when connecting.
            insecure_skip_verify = true
          }
        }
      }
Ensure that you use the specific namespace you’ve installed Tempo in for the OTLP exporter. In the line:

endpoint = "http://tempo-cluster-distributor.tempo.svc.cluster.local:4317"

change tempo to reference the namespace where Tempo is installed, for example: http://tempo-cluster-distributor.my-tempo-namespaces.svc.cluster.local:4317.

Deploy the Agent using Helm:
helm install -f values.yaml grafana-agent grafana/grafana-agent
If you wish to deploy the Agent into a specific namespace, make sure to create the namespace first and specify it to Helm by appending --namespace=<grafana-agent-namespace> to the end of the command.
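If the helm install command fails because the grafana/grafana-agent chart isn’t available locally, add the Grafana Helm repository and retry. After the install, you can confirm that the Agent exposes the OTLP gRPC port. A sketch, assuming the release and namespace names used in the examples above:

# Add the Grafana Helm repository (skip if it's already configured).
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Verify that the Agent pod is running and that its service exposes port 4317.
kubectl get pods --namespace grafana-agent
kubectl get service grafana-agent --namespace grafana-agent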
Create a Grafana Tempo data source
To allow Grafana to read traces from Tempo, you must create a Tempo data source.
Navigate to Connections > Data Sources.
Click on Add data source.
Select Tempo.
Set the URL to http://<TEMPO-QUERY-FRONTEND-SERVICE>:<HTTP-LISTEN-PORT>/, filling in the path to Tempo’s query frontend service and the configured HTTP API prefix. If you have followed the Deploy Tempo with Helm installation example, the query frontend service’s URL will look something like this:

http://tempo-cluster-query-frontend.<namespace>.svc.cluster.local:3100

Click Save & Test.

You should see a message that says Data source is working.
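If you manage Grafana with provisioning files rather than the UI, the same data source can be defined declaratively. A minimal sketch, assuming the query frontend URL from the Helm example above (adjust the namespace); place it in Grafana’s provisioning/datasources directory:

apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo-cluster-query-frontend.tempo.svc.cluster.local:3100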
Visualize your data
Once you have created a data source, you can visualize your traces in the Grafana Explore page. For more information, refer to Tempo in Grafana.
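In Explore, you can use either the Search query type or a TraceQL query. For example, the empty query {} returns recent traces, and you can narrow the results by resource attributes. The query below assumes the default telemetrygen service name used in the next section:

{ resource.service.name = "telemetrygen" }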
Use OpenTelemetry telemetrygen to generate tracing data

Next, you can use OpenTelemetry telemetrygen to generate tracing data to test your Tempo installation.
In the following instructions, we assume the endpoints for both the Grafana Agent and the Tempo distributor are those described above, for example:
- grafana-agent.grafana-agent.svc.cluster.local for Grafana Agent
- tempo-cluster-distributor.tempo.svc.cluster.local for the Tempo distributor

Replace these appropriately if you have altered the endpoint targets for the following examples.
Install telemetrygen using the installation procedure. NOTE: You don’t need to configure an OpenTelemetry Collector, as we are using the Grafana Agent.

Generate traces using telemetrygen:

telemetrygen traces --otlp-insecure --rate 20 --duration 5s --otlp-endpoint grafana-agent.grafana-agent.svc.cluster.local:4317
This configuration sends traces to Grafana Agent for 5 seconds, at a rate of 20 traces per second.
Optionally, you can also send traces directly to the Tempo database, without using Grafana Agent as a collector, by using the following:
telemetrygen traces --otlp-insecure --rate 20 --duration 5s --otlp-endpoint tempo-cluster-distributor.tempo.svc.cluster.local:4317
If you’re running telemetrygen on your local machine, ensure that you first port-forward to the relevant Agent or Tempo distributor service, for example:
kubectl port-forward services/grafana-agent 4317:4317 --namespace grafana-agent
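With the port-forward in place, point telemetrygen at the forwarded local port rather than the in-cluster service name, for example:

telemetrygen traces --otlp-insecure --rate 20 --duration 5s --otlp-endpoint localhost:4317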
- Alternatively, a CronJob can be created to send traces periodically, based on this template:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sample-traces
spec:
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 2
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 0
      ttlSecondsAfterFinished: 3600
      template:
        spec:
          containers:
            - name: traces
              image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.96.0
              args:
                - traces
                - --otlp-insecure
                - --rate
                - "20"
                - --duration
                - 5s
                - --otlp-endpoint
                - grafana-agent.grafana-agent.svc.cluster.local:4317
          restartPolicy: Never
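To use the template, apply it to the cluster and, if you don’t want to wait for the hourly schedule, trigger a run manually. A sketch, assuming the template is saved as sample-traces.yaml and deployed into the default namespace (the job name sample-traces-manual is arbitrary):

# Create the CronJob from the template above.
kubectl apply -f sample-traces.yaml

# Trigger an immediate run instead of waiting for the schedule.
kubectl create job --from=cronjob/sample-traces sample-traces-manual

# Check that the generated pod completed successfully.
kubectl get pods --selector=job-name=sample-traces-manual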
To view the tracing data:
Go to Grafana and select Explore.
Select the Tempo data source from the list of data sources.
Select the Search Query type.
Select Run query.

Confirm that traces are displayed in the traces Explore panel. You should see 5 seconds worth of traces, 100 traces in total per run of telemetrygen.
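You can also confirm that Tempo received the traces without going through Grafana by querying the query frontend’s search API directly. A sketch, assuming the Helm example service name, the tempo namespace, and no additional HTTP API prefix:

kubectl port-forward services/tempo-cluster-query-frontend 3100:3100 --namespace tempo

# In another terminal: list recently received traces.
curl "http://localhost:3100/api/search?limit=20"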
Test your configuration using the Intro to MLTP application
The Intro to MLTP application provides an example five-service application that generates data for Tempo, Mimir, Loki, and Pyroscope. This procedure installs the application on your cluster so you can generate meaningful test data.
- Navigate to https://github.com/grafana/intro-to-mltp to get the Kubernetes manifests for the Intro to MLTP application.
- Clone the repository using commands similar to the ones below:
git clone git+ssh://github.com/grafana/intro-to-mltp
cp intro-to-mltp/k8s/mythical/* ~/tmp/intro-to-mltp-k8s
- Change to the cloned repository:
cd intro-to-mltp/k8s/mythical
- In the mythical-beasts-deployment.yaml manifest, alter each TRACING_COLLECTOR_HOST environment variable instance value to point to the Grafana Agent location. For example, based on a Grafana Agent Helm installation named grafana-agent running in the grafana-agent namespace:
  - env:
      ...
      - name: TRACING_COLLECTOR_HOST
        value: grafana-agent.grafana-agent.svc.cluster.local
- Deploy the Intro to MLTP application. It deploys into the default namespace.
kubectl apply -f mythical-beasts-service.yaml,mythical-beasts-persistentvolumeclaim.yaml,mythical-beasts-deployment.yaml
- Once the application is deployed, go to Grafana and select the Explore menu item.
- Select the Tempo data source from the list of data sources.
- Select the Search Query type for the data source.
- Select Run query.
- Traces from the application will be displayed in the traces Explore panel.
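If no traces appear, first confirm that the application pods started correctly; the manifests deploy into the default namespace:

kubectl get pods --namespace default

Once the pods are Running, a broad TraceQL query such as {} in Explore also returns the application’s traces.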