
Configure Application Observability

Admin users can configure the Application Observability plugin.

Data source

On the Data source tab, you can configure the default data source for each signal type: metrics, logs, and traces. When you select an option, the change is saved automatically.

By default, Application Observability uses the automatically provisioned data sources for the stack. This gives users a simple getting-started workflow: they only need to send traces, and Application Observability generates the necessary metrics.

If you use a custom data source for metrics, you don't need the automatic metrics generation provided by Application Observability, and you can disable it to reduce your Grafana Cloud usage and bill.

Logs query

On the Logs query tab, you can configure the default log queries and formatting Application Observability uses to query and format logs.

Default log query format

Select a log query format that matches your logs ingestion path, and customize your default log query:

The log query format needs to match the log storage and query format used by the ingestion method. If your log queries use a previous log format version, Application Observability prompts you to update the query.

Warning

Updating the default log query format overwrites changes you have made to your default log queries. Copy any queries you would like to keep before updating.

You can switch between log query formats on the Logs tab, but changes won't persist beyond the current session. To permanently change the default log query format, update the setting on the Configuration page.

Logs query, without namespace

You can configure the logs query that executes on the Logs tab when the service.namespace resource attribute isn't provided. Available variables: $job and $serviceName.

Logs query, with namespace

You can configure the logs query that executes on the Logs tab when you supply the service.namespace resource attribute. Available variables: $job, $serviceName, and $serviceNamespace.
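For illustration, minimal queries for the two cases might look like the following sketch. The label names (job, service_name, service_namespace) are assumptions and depend on how your logs are ingested:

LogQL

```
# Without a namespace: select log streams by job and service name.
{job="$job", service_name="$serviceName"}

# With a namespace: additionally match the service namespace.
{job="$job", service_name="$serviceName", service_namespace="$serviceNamespace"}
```

Application Observability substitutes the $-prefixed variables at query time with the values for the currently selected service.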

Logs query formatting

You can specify a formatting section that is automatically appended to the end of logs queries to improve readability.

For example, the default formatting appended to log queries is:

LogQL
line_format "\u001b[1m{{if .severity}}{{alignRight 5 .severity}}{{end}}\u001b[0m \u001b[90m[{{alignRight 10 .resources_service_instance_id}}{{if .attributes_thread_name}}/{{alignRight 20 .attributes_thread_name}}{{end}}]\u001b[0m \u001b[36m{{if .instrumentation_scope_name }}{{alignRight 40 .instrumentation_scope_name}}{{end}}\u001b[0m{{if .traceid}} \u001b[37m\u001b[3m[traceid={{.traceid}}]{{end}}: {{.body}}"

Important: the formatting is automatically appended to the end of the actual query. An incorrect formatting section can result in invalid queries.

Settings

You can use the settings tab to configure the behavior of different Application Observability features.

UI

By default, Application Observability does not allow you to sort services on the service inventory page. This prevents performance issues for very large lists of services.

If the number of services is not too large, you can select the toggle to enable service inventory sorting. This lets you sort by the Rate, Errors, or Duration columns, in addition to the always available Name and Namespace columns.

Deployment environment attribute and value

By default, Application Observability uses the OpenTelemetry-recommended attribute deployment.environment to distinguish environments. You can select a custom attribute from a list of attributes found in your data.

Note

When you select a custom attribute to represent your environment, ensure that all your traces include the attribute.

Group and filter attributes

To enable data grouping and filtering in Application Observability, first add the necessary attributes to traces that you would like to group and filter by, and then select them from a list of available attributes in the Application Observability configuration.

By default you can group and filter by the deployment environment.

Warning

Additional group and filter attributes contribute to your Grafana Cloud data usage and bill.

Think carefully about the cardinality of your data and select attributes with minimal variation in value. For example, geographical region and cloud provider have lower cardinality than instance ID.
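As a sketch of the first step, one standard way to add low-cardinality resource attributes to traces is the OTEL_RESOURCE_ATTRIBUTES environment variable read by OpenTelemetry SDKs; the region attribute and its value here are hypothetical examples:

```shell
# Attach resource attributes to every span the OpenTelemetry SDK exports.
# deployment.environment is the OpenTelemetry-recommended environment attribute;
# region is a hypothetical low-cardinality custom attribute.
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production,region=eu-west-1"
```

After the attributes arrive on your traces, select them in the Application Observability configuration to make them available for grouping and filtering.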

Include web applications and mobile devices

You can enable Application Observability to generate metrics for spans with a CLIENT or PRODUCER spanKind. This lets you view web applications, including services like load generators, and mobile devices in Application Observability.

If you generate metrics with the span metrics connector, you need to update your configuration so that it doesn't drop CLIENT and PRODUCER spans:

The first example below shows the Grafana Alloy (River) configuration; the second shows the equivalent OpenTelemetry Collector YAML configuration.

river
otelcol.processor.filter "drop_unneeded_span_metrics" {
	// https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.filter/
	error_mode = "ignore"

	metrics {
		datapoint = [
			"IsMatch(metric.name, \"calls|duration\") and IsMatch(attributes[\"span.kind\"], \"SPAN_KIND_INTERNAL\")",
		]
	}

	output {
		metrics = [otelcol.processor.batch.default.input]
	}
}
yaml
filter/drop_unneeded_span_metrics:
    # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor
    error_mode: ignore
    metrics:
      datapoint:
        - 'IsMatch(metric.name, "calls|duration") and IsMatch(attributes["span.kind"], "SPAN_KIND_INTERNAL")'

System

Use the system configuration tab to configure system-wide settings.

Span metrics source

Application Observability uses metrics generated from traces for most of the visualizations in the application. This data can come from different places depending on your setup. You can change which source you use from the System tab, under the Span metrics source section.

The following metric names are used by each of the options:

| Tempo (Grafana Cloud) | OpenTelemetry Collector, Grafana Alloy, Grafana Agent | Legacy |
| --- | --- | --- |
| traces_target_info | target_info | target_info |
| traces_service_graph_dropped_spans_total | n/a | traces_service_graph_dropped_spans_total |
| traces_service_graph_unpaired_spans_total | n/a | traces_service_graph_unpaired_spans_total |
| traces_service_graph_request_total | traces_service_graph_request_total | traces_service_graph_request_total |
| traces_service_graph_request_failed_total | traces_service_graph_request_failed_total | traces_service_graph_request_failed_total |
| traces_service_graph_request_client_seconds_bucket | traces_service_graph_request_client_seconds_bucket | traces_service_graph_request_client_seconds_bucket |
| traces_service_graph_request_client_seconds_count | traces_service_graph_request_client_seconds_count | traces_service_graph_request_client_seconds_count |
| traces_service_graph_request_client_seconds_sum | traces_service_graph_request_client_seconds_sum | traces_service_graph_request_client_seconds_sum |
| traces_service_graph_request_server_seconds_bucket | traces_service_graph_request_server_seconds_bucket | traces_service_graph_request_server_seconds_bucket |
| traces_service_graph_request_server_seconds_count | traces_service_graph_request_server_seconds_count | traces_service_graph_request_server_seconds_count |
| traces_service_graph_request_server_seconds_sum | traces_service_graph_request_server_seconds_sum | traces_service_graph_request_server_seconds_sum |
| traces_spanmetrics_calls_total | calls_total | traces_spanmetrics_calls_total |
| traces_spanmetrics_latency_bucket | duration_seconds_bucket | traces_spanmetrics_latency_bucket |
| traces_spanmetrics_latency_count | duration_seconds_count | traces_spanmetrics_latency_count |
| traces_spanmetrics_latency_sum | duration_seconds_sum | traces_spanmetrics_latency_sum |
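For example, with the Tempo (Grafana Cloud) source, a per-service request rate could be sketched with a PromQL query like the one below; the service label name is an assumption and may differ depending on the dimensions configured for your span metrics:

PromQL

```
# Requests per second over the last 5 minutes, grouped by service.
sum by (service) (rate(traces_spanmetrics_calls_total[5m]))
```

With one of the other sources selected, substitute the corresponding metric name from the table, such as calls_total for the OpenTelemetry Collector.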

Metrics generation

Enable or disable metrics generation for the default provisioned Traces data source for the stack.

Note

You can safely disable metrics generation if you are using a custom data source.

Sampling

To configure head or tail sampling to reduce the cost of processing and storing large volumes of data, consult the sampling documentation.