Get started with Explore Traces

Note

Explore Traces is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.

You can use traces to identify errors in your apps and services and then to optimize and streamline them.

When working with traces, start with the big picture. Then drill down using primary signals, RED metrics, filters, and structural or trace list tabs to explore your data. To learn more, refer to Concepts.

Note

Expand your observability journey and learn about the Explore apps suite.

Before you begin

To use Explore Traces with Grafana Cloud, you need:

  • A Grafana Cloud account
  • A Grafana stack in Grafana Cloud with a configured Tempo data source

To use Explore Traces with self-managed Grafana, you need:

  • Your own Grafana v11.2 or later instance with a configured Tempo data source
  • The Explore Traces plugin installed

For more details, refer to Access Explore Traces.

Explore your tracing data

Most investigations follow these steps:

  1. Select the primary signal.
  2. Choose the metric you want to use: rates, errors, or duration.
  3. Define filters to refine the view of your data.
  4. Use the structural or trace list tabs to drill down into the issue, as sketched below.
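
Explore Traces is built on the Tempo data source, and the selections you make in the app correspond to a TraceQL query. As a rough sketch (the exact queries the app generates may differ, and resource.service.name = "checkout" is only a hypothetical filter), the steps above combine into something like:

    { nestedSetParent = -1 && status = error && resource.service.name = "checkout" } | rate()

Here nestedSetParent = -1 selects root spans (the Full traces signal), status = error selects erroring spans (the Errors metric), and | rate() charts the selection as a rate over time. Metrics expressions such as rate() require a Tempo version with TraceQL metrics support.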

Give it a try using Grafana Play

With Grafana Play, you can explore Explore Traces and see how it works, learning from practical examples to accelerate your development. You can try this feature on the Grafana Play site.

Example: Investigate source of errors

As an example, suppose you want to uncover the source of errors in your spans. To do this, you compare the errors across traces to locate the problem trace. Here’s how it works.

Choose a signal type and metric

First, select Full traces as the signal type, then choose the Errors metric. Use Full traces to gain insight into errors at the root of your traces, at the edge of your application. If you’re interested in the entry points to your services, use the Server spans signal type instead; if you’re concerned about databases, use Database calls.

Select the signal type and metric type
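
In TraceQL terms, these signal types are different span selections. A minimal sketch of the Errors metric over each, assuming standard TraceQL intrinsics (the app’s generated queries may differ):

    { nestedSetParent = -1 && status = error }

selects erroring root spans (Full traces), while

    { kind = server && status = error }

selects erroring service entry points (Server spans). Pipe either selection through | rate() to chart it as an error rate over time.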

Correlate attributes

To correlate attribute values with errors, use the Breakdown tab. This tab surfaces the attribute values that correlate most heavily with erroring spans, ordered by the strength of that correlation with the highest first, so you can see what’s causing the errors immediately. In this example, when the span name was HTTP GET /api/datasources/proxy/uid/:uid/*, the span was also erroring 99.34% of the time.

Errors are immediately visible by the large red bars
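
A breakdown like this is loosely analogous to grouping a TraceQL metrics query by an attribute. A minimal sketch that groups the error rate by span name (any other attribute could stand in for name):

    { status = error } | rate() by (name)

Comparing the result against { } | rate() by (name), the overall rate per span name, shows which span names error disproportionately often; the Breakdown tab automates this kind of comparison.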

Inspect the problem

To dig deeper, select Inspect to focus on the problem. The tall red bar makes it easy to spot: the errors are happening in HTTP GET /api/datasources/proxy/uid/:uid/*. Next, use Add to filters to focus only on the erroring API call.

Add to filters to focus on the API call
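
In query terms, adding the filter narrows the span selection. The filtered view corresponds to a sketch like the following (the app generates its own query; this is only an approximation):

    { name = "HTTP GET /api/datasources/proxy/uid/:uid/*" && status = error }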

Use Root cause errors

Select the Root cause errors tab for an aggregated view of all the traces that contain errors. To view additional details, right-click a line and select HTTP Outgoing Request.

Contextual menu available in the Root cause errors tab
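
Traces that error somewhere below the root can also be expressed with TraceQL structural operators. A minimal sketch, assuming the >> descendant operator, that matches traces whose root span has erroring descendants:

    { nestedSetParent = -1 } >> { status = error }

The Root cause errors tab aggregates the erroring paths of such traces into a single view.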

To examine a single example transaction, click on an entry to open one of the individual traces used to construct that aggregate view.

Link to span data from Root cause errors