Version 2.5 release notes
Grafana Labs is excited to announce the release of Grafana Enterprise Metrics (GEM) 2.5, which is built on top of Grafana Mimir 2.5.
GEM 2.5 inherits all of the features, enhancements, and bugfixes that are in the Grafana Mimir 2.5 release. Given this, it’s best to start with the Grafana Mimir 2.5 release notes.
Features and enhancements
TLS is now more configurable for both internal and external connections
- Added `-*.tls-min-version` to configure the minimum TLS version
- Added `-*.tls-cipher-suites` to allow overriding the default cipher suites provided by the Go runtime
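As an illustration, the `*` in these flag names stands for a component- or connection-specific prefix; the prefix used below is an assumption for the sketch, so check the configuration reference for your deployment for the exact flag names:

```shell
# Illustrative only: the "-*." prefix varies per connection
# (for example, server flags or per-client prefixes).
./enterprise-metrics \
  -server.http-tls-min-version=VersionTLS12 \
  -server.http-tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```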
GEM Self Monitoring CPU metrics now include a `clustername` label to distinguish multiple installations of Grafana Enterprise Metrics, Grafana Enterprise Logs, and Grafana Enterprise Traces:
- cortex_quota_cpu_count
- cortex_quota_gomaxprocs
- cortex_quota_cgroup_cpu_max
- cortex_quota_cgroup_cpu_period
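With the new label in place, installations can be compared in a single query. For example, a PromQL sketch (assuming the self-monitoring metrics are scraped into a store you can query) that checks whether each installation's `GOMAXPROCS` matches its available CPUs:

```promql
# Ratio of configured Go processors to available CPUs, per installation.
max by (clustername) (cortex_quota_gomaxprocs)
  / max by (clustername) (cortex_quota_cpu_count)
```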
GEM Self Monitoring bundles 16 additional recording rules from the Grafana Mimir monitoring mixin.
- target:cortex_ingester_queried_exemplars:99quantile
- target:cortex_ingester_queried_exemplars:50quantile
- target:cortex_ingester_queried_exemplars:avg
- target:cortex_ingester_queried_exemplars_bucket:sum_rate
- target:cortex_ingester_queried_exemplars_sum:sum_rate
- target:cortex_ingester_queried_exemplars_count:sum_rate
- target_instance:cortex_alertmanager_alerts:sum
- target_instance:cortex_alertmanager_silences:sum
- target:cortex_alertmanager_state_replication_total:rate5m
- target:cortex_alertmanager_state_replication_failed_total:rate5m
- cortex_alertmanager_alerts_invalid_total:rate5m
- target:cortex_alertmanager_alerts_received_total:rate5m
- target:cortex_alertmanager_partial_state_merges_total:rate5m
- target:cortex_alertmanager_partial_state_merges_failed_total:rate5m
- target_integration:cortex_alertmanager_notifications_total:rate5m
- target_integration:cortex_alertmanager_notifications_failed_total:rate5m
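Like any recording rules, these can be referenced directly in dashboards and alerting rules. A hypothetical alert built on one of the bundled rules (the alert name, threshold, and labels are illustrative, not part of the bundle):

```yaml
# Illustrative alerting rule; only the expr references a bundled recording rule.
groups:
  - name: gem-alertmanager
    rules:
      - alert: AlertmanagerStateReplicationFailing
        expr: target:cortex_alertmanager_state_replication_failed_total:rate5m > 0
        for: 10m
        labels:
          severity: warning
```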
The object storage parameter `-*.azure.msi-resource` is now ignored and will be removed in GEM 2.7. This setting is now determined automatically by Azure.
The Graphite Querier's default cache TTL has been lowered to 10 minutes. This ensures consistent query results when out-of-order ingestion is enabled.
The storage aggregation method set for the Graphite Querier in storage-aggregation.conf can no longer be overridden at runtime using consolidateBy when Metrictank is used as a render engine. This matches Graphite-Web's behavior.
Fixed a bug in the Graphite Querier where render requests that failed to be processed by the native engine were not being proxied to Graphite-Web.
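To illustrate the aggregation change above (with hypothetical metric names and configuration): given a storage-aggregation.conf entry like the first block below, a render request's consolidateBy call no longer replaces the configured method when Metrictank is the render engine:

```text
# storage-aggregation.conf (illustrative entry)
[max_metrics]
pattern = \.max$
aggregationMethod = max

# Render request: the consolidateBy('avg') override is no longer honored
# by Metrictank; the configured "max" method is used instead.
/render?target=consolidateBy(servers.web01.cpu.max, 'avg')&from=-1h
```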
Helm changes
The mimir-distributed Helm chart is the best way to install GEM on Kubernetes. Notable changes follow. For the full list of changes, see the Helm chart changelog.
Zone-aware replication Helm now supports deploying the ingesters and store-gateways across different availability zones. Replication is also zone-aware, so multiple instances within one zone can fail without any service interruption. Rollouts are also faster, because the instances in each zone can be restarted together rather than all instances restarting in sequence.
This is a breaking change. For details on how to upgrade, review the Helm changelog.
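A minimal values.yaml sketch of the feature, assuming the chart exposes zone-aware replication under `zoneAwareReplication` keys; the exact key names and zone layout are assumptions, so verify them against the mimir-distributed chart version you deploy:

```yaml
# Illustrative values.yaml fragment; key names may differ between chart versions.
ingester:
  zoneAwareReplication:
    enabled: true
    zones:
      - name: zone-a
        nodeSelector:
          topology.kubernetes.io/zone: us-central1-a
      - name: zone-b
        nodeSelector:
          topology.kubernetes.io/zone: us-central1-b
      - name: zone-c
        nodeSelector:
          topology.kubernetes.io/zone: us-central1-c
store_gateway:
  zoneAwareReplication:
    enabled: true
```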
This release also inherits the Helm changes from the Grafana Mimir 2.5 release.
Upgrade considerations
- There are no breaking changes in this release of GEM.
- If you are deploying via Helm, note the Helm changes above.
This release also inherits the upgrade considerations from the Grafana Mimir 2.5 release.
After you upgrade to GEM 2.5, upgrade your GEM plugin to the latest version. For more information about the most recent enhancements and bugfixes in the GEM plugin, see the Grafana Enterprise Metrics: Changelog.
Bug fixes
v2.5.1
- Fixed empty buildinfo in GEM binary
v2.5.2
- Fixed an issue where authentication caches were not sized correctly, resulting in poor performance