Grafana Mimir version 2.11 release notes
Grafana Labs is excited to announce version 2.11 of Grafana Mimir.
The highlights that follow include the top features, enhancements, and bugfixes in this release. For the complete list of changes, see the changelog.
Features and enhancements
- Sampled logging of errors in the ingester. A high-traffic Mimir cluster can occasionally become bogged down logging high volumes of repeated errors. You can now reduce the number of errors written to the logs by setting a sample rate via the `-ingester.error-sample-rate` CLI flag (see the example after this list).
- Add a total request size instance limit for ingesters. This limit protects ingesters against requests that together could cause an out-of-memory (OOM) condition. Enable this feature by setting the `-ingester.instance-limits.max-inflight-push-requests-bytes` CLI flag in combination with the `-ingester.limit-inflight-requests-using-grpc-method-limiter` CLI flag.
- Reduce the resolution of incoming native histogram samples when a sample has too many buckets compared to `-validation.max-native-histogram-buckets`. This behavior is enabled by default and can be turned off by setting the `-validation.reduce-native-histogram-over-max-buckets` CLI flag to `false`.
- Improved query-scheduler performance under load. This is particularly apparent for clusters with large numbers of queriers.
- Ingester to querier chunks streaming reduces the memory utilization of queriers and reduces the likelihood of OOMs.
- Ingester query request minimization reduces the number of query requests to ingesters, improving performance and resource utilization for both ingesters and queriers.
- The `markblocks` tool is now released in compiled form together with the rest of Mimir. Use it to mark blocks for deletion or as "non-compactable".
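As a rough sketch of how the ingester-related options above fit together, the following invocation sets them on a single ingester process. The flag names come from this release; the `-target=ingester` invocation, the values, and the assumed sampling semantics are illustrative, not recommendations.

```bash
# Sketch only: the flag names below are from Mimir 2.11, but the values and
# the -target=ingester invocation are assumptions to adapt to your deployment.
#
#   -ingester.error-sample-rate=10
#       assumed to mean that roughly 1 out of every 10 occurrences of a
#       repeated error is logged
#   -ingester.instance-limits.max-inflight-push-requests-bytes=536870912
#       caps the combined size of in-flight push requests at a placeholder
#       512 MiB
#   -ingester.limit-inflight-requests-using-grpc-method-limiter=true
#       set in combination with the byte limit so oversized requests are
#       rejected before being read into memory

mimir \
  -target=ingester \
  -ingester.error-sample-rate=10 \
  -ingester.instance-limits.max-inflight-push-requests-bytes=536870912 \
  -ingester.limit-inflight-requests-using-grpc-method-limiter=true
```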
Experimental features
Grafana Mimir 2.11 includes new features that are considered experimental and disabled by default. Please use them with caution and report any issue you encounter:
- Block specified queries on a per-tenant basis. This is configured via the `blocked_queries` limit. See the docs for more information, and the runtime configuration sketch after this list.
- Store metadata when ingesting metrics via OTLP. This makes metric description and type available when ingesting metrics via OTLP. You can enable this feature by setting the `-distributor.enable-otlp-metadata-storage` CLI flag to `true`.
- Reject gRPC push requests that the ingester or distributor is unable to accept before reading them into memory. You can enable this feature by using the `-ingester.limit-inflight-requests-using-grpc-method-limiter` and/or the `-distributor.limit-inflight-requests-using-grpc-method-limiter` CLI flags for the ingester and the distributor, respectively (see the combined example after this list).
- Customize the memcached client write and read buffer size. The buffer allocated for each memcached connection can be configured via the following CLI flags:
  - For the blocks storage:
    - `-blocks-storage.bucket-store.chunks-cache.memcached.read-buffer-size-bytes`
    - `-blocks-storage.bucket-store.chunks-cache.memcached.write-buffer-size-bytes`
    - `-blocks-storage.bucket-store.index-cache.memcached.read-buffer-size-bytes`
    - `-blocks-storage.bucket-store.index-cache.memcached.write-buffer-size-bytes`
    - `-blocks-storage.bucket-store.metadata-cache.memcached.read-buffer-size-bytes`
    - `-blocks-storage.bucket-store.metadata-cache.memcached.write-buffer-size-bytes`
  - For the query-frontend:
    - `-query-frontend.results-cache.memcached.read-buffer-size-bytes`
    - `-query-frontend.results-cache.memcached.write-buffer-size-bytes`
  - For the ruler storage:
    - `-ruler-storage.cache.memcached.read-buffer-size-bytes`
    - `-ruler-storage.cache.memcached.write-buffer-size-bytes`
- Configure the number of long-lived workers used to process gRPC requests. This can decrease CPU usage by reducing the number of stack allocations. Configure this feature by using the `-server.grpc.num-workers` CLI flag.
- Enforce a limit in bytes on the `PostingsForMatchers` cache used by ingesters. This limit can be configured via the `-blocks-storage.tsdb.head-postings-for-matchers-cache-max-bytes` and `-blocks-storage.tsdb.block-postings-for-matchers-cache-max-bytes` CLI flags.
- Pre-allocate the pool of workers in the distributor that are used to send push requests to ingesters. This can decrease CPU usage by reducing the number of stack allocations. You can enable this feature by using the `-distributor.reusable-ingester-push-worker` flag.
- Include a `Retry-After` header in recoverable error responses from the distributor. This can protect your Mimir cluster from clients, including Prometheus, that default to retrying very quickly. Enable this feature by setting the `-distributor.retry-after-header.enabled` CLI flag.
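As a sketch only, the following shows how several of these experimental distributor-side options could be enabled together. The flag names are from this release; the values and the `-target=distributor` invocation are placeholders to adapt to your deployment.

```bash
# Sketch only: flag names are from Mimir 2.11; check the configuration
# reference before enabling experimental features in production.
#
#   -server.grpc.num-workers and -distributor.reusable-ingester-push-worker
#   are assumed to take a worker count; 100 is a placeholder, not a
#   recommendation.

mimir \
  -target=distributor \
  -distributor.enable-otlp-metadata-storage=true \
  -distributor.limit-inflight-requests-using-grpc-method-limiter=true \
  -distributor.retry-after-header.enabled=true \
  -server.grpc.num-workers=100 \
  -distributor.reusable-ingester-push-worker=100
```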
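For the per-tenant `blocked_queries` limit, a minimal sketch of a runtime configuration override might look like the following. The tenant ID, query patterns, and file path are hypothetical, and the exact field layout should be verified against the Mimir documentation before use.

```bash
# Hypothetical runtime override file; the tenant ID and query patterns are
# placeholders, and the field names should be checked against the docs.
cat > /etc/mimir/runtime.yaml <<'EOF'
overrides:
  tenant-a:
    blocked_queries:
      # Block one exact query.
      - pattern: 'rate(expensive_metric_total[1h])'
      # Block any query matching a regular expression.
      - pattern: '.*cardinality_bomb.*'
        regex: true
EOF

# Point Mimir at the runtime configuration file, which is reloaded
# periodically; in a real deployment this flag sits alongside the rest of
# your configuration.
mimir -runtime-config.file=/etc/mimir/runtime.yaml
```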
Helm chart improvements
The Grafana Mimir and Grafana Enterprise Metrics Helm chart is now released independently. See the Grafana Mimir Helm chart documentation.
Important changes
In Grafana Mimir 2.11, the following behavior has changed:
- The utilization-based read path limiter now operates on Go heap size instead of RSS from the Linux proc file system.
The following configuration options were previously deprecated and are removed in Grafana Mimir 2.11:
- The CLI flag `-querier.iterators`.
- The CLI flag `-querier.batch-iterators`.
- The CLI flag `-blocks-storage.bucket-store.bucket-index.enabled`.
- The CLI flag `-blocks-storage.bucket-store.chunk-pool-min-bucket-size-bytes`.
- The CLI flag `-blocks-storage.bucket-store.chunk-pool-max-bucket-size-bytes`.
- The CLI flag `-blocks-storage.bucket-store.max-chunk-pool-bytes`.
The following configuration options are deprecated and will be removed in Grafana Mimir 2.13:
- The CLI flag `-log.buffered`; buffered logging is now the default behavior.
The following metrics are removed:
- `cortex_query_frontend_workers_enqueued_requests_total`; use `cortex_query_frontend_enqueue_duration_seconds_count` instead.
The following configuration option defaults were changed:
- The CLI flag `-blocks-storage.bucket-store.index-header.sparse-persistence-enabled` now defaults to `true`.
- The default value for the CLI flag `-blocks-storage.bucket-store.index-header.lazy-loading-concurrency` was changed from `0` to `4`.
- The default value for the CLI flag `-blocks-storage.tsdb.series-hash-cache-max-size-bytes` was changed from `1GB` to `350MB`.
- The default value for the CLI flag `-blocks-storage.tsdb.early-head-compaction-min-estimated-series-reduction-percentage` was changed from `10` to `15`.
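If you want to check how these default changes interact with your own configuration after upgrading, one option is to compare the running configuration against the built-in defaults. The sketch below assumes a Mimir component exposing its HTTP API on `localhost:8080` and uses the `/config` endpoint.

```bash
# Sketch: assumes a Mimir component listening on localhost:8080.

# Show the built-in defaults for this Mimir version.
curl "http://localhost:8080/config?mode=defaults"

# Show only the settings in your deployment that differ from the defaults.
curl "http://localhost:8080/config?mode=diff"
```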
Bug fixes
- Ingester: Respect context cancellation during query execution. PR 6085
- Distributor: Return HTTP status code 529 when the ingestion rate limit is hit and the `distributor.service_overload_status_code_on_rate_limit_enabled` flag is active. PR 6549
- Query-scheduler: Prevent accumulation of stale querier connections. PR 6100
- Packaging: Fix the preremove script preventing upgrades on RHEL-based operating systems. PR 6067
- Ingester: Fixed possible series matcher corruption leading to wrong series being included in query results. PR 6884
- Mimirtool: Fixed the missing `.metricsUsed` field in the output of `analyze rule-file`. PR 6953