GET hardware requirements
This page outlines the current hardware requirements for running Grafana Enterprise Traces (GET). Grafana Labs reserves the right to mark a support issue as ‘unresolvable’ if these requirements are not followed. See the Grafana Labs Enterprise Support SLA for more details.
CPU and memory
GET should be deployed on machines with a 1:4 ratio of CPU to memory: for every CPU core, there should be 4 gigabytes of memory. This ratio is a good fit for the types of workloads that GET typically performs. For most clusters, Grafana Labs recommends deploying GET onto machines with at least 16 CPU cores and 64 gigabytes of memory, and all nodes in the cluster should be of the same type.
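If you deploy GET on Kubernetes, this ratio can be expressed through container resource requests. The following is a minimal sketch only; the values are illustrative, not GET's actual manifests, and should be sized for your own workload.

```yaml
# Illustrative only: container resource requests that keep the 1:4 CPU-to-memory ratio.
resources:
  requests:
    cpu: "4"        # 4 CPU cores ...
    memory: 16Gi    # ... paired with 16 GiB of memory
  limits:
    cpu: "8"
    memory: 32Gi
```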
Network
All components of GET require fast network access. The nodes that run GET should be connected by a network with a speed of at least 10 gigabits per second.
Storage
GET requires fast storage to run, and any storage solution you use must deliver adequate read and write performance for your workload. Object storage (for example, AWS S3, Google Cloud Storage (GCS), or Microsoft Azure Blob Storage) is used to persist trace data for the required retention period. Your object storage should be large enough to hold 30 days of data, which is the default block retention period for GET.
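As a sketch of how the retention requirement maps to configuration, the example below assumes GET follows Tempo-style compactor and storage options, where `block_retention` controls how long blocks are kept in object storage; verify the exact option names and backend settings against your GET version.

```yaml
# Sketch only: Tempo-style retention and object storage settings (option names assumed).
compactor:
  compaction:
    block_retention: 720h   # 30 days, matching the default block retention described above

storage:
  trace:
    backend: gcs             # or s3 / azure, depending on your provider
    gcs:
      bucket_name: my-get-traces   # placeholder bucket name
```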
However, various components of GET (for example, the ingester) also require fast, persistent block storage on the host machine. The ingester writes all incoming data to a write-ahead log (WAL), which aggregates trace data into blocks before they are stored in object storage and helps withstand unexpected node termination.
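The location of the ingester's WAL is typically configurable. The sketch below assumes Tempo-style storage options and a placeholder path; whatever path you choose should be mounted on fast, persistent local disk.

```yaml
# Sketch only: point the ingester WAL at a path backed by an SSD/NVMe persistent volume.
storage:
  trace:
    wal:
      path: /var/tempo/wal   # placeholder; mount this path on fast block storage
```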
For every deployment, you need both object storage and block storage. Block storage must be solid-state storage, such as SSD or NVMe drives. Hard drive (platter-based) disk storage is not supported.
Block storage
The following are supported configurations for several cloud providers as well as guidance for custom hardware.
Platform | Storage type | Tested with | Comments |
---|---|---|---|
Amazon Web Services | Block | io1 provisioned IOPS SSD EBS volumes | The io1 storage must be provisioned at 50 IOPS per gigabyte, with a minimum of 150Gi allocated to ensure performant I/O. See the StorageClass sketch after this table. |
Google Cloud Platform | Block | pd-ssd SSD persistent storage | |
Microsoft Azure | Block | Premium SSD persistent storage | |
Custom cluster hardware | Block | | Build your cluster with fast, locally attached SSD-based storage. |
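For Kubernetes deployments on AWS, the io1 guidance in the table can be expressed as a StorageClass and a PersistentVolumeClaim. The following is a sketch that assumes the EBS CSI driver; the object names are placeholders, and the parameters and 150Gi size mirror the table's guidance but should be checked against your driver version.

```yaml
# Sketch only: io1 volumes provisioned at 50 IOPS per GiB via the EBS CSI driver (assumed).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: get-fast-block          # placeholder name
provisioner: ebs.csi.aws.com
parameters:
  type: io1
  iopsPerGB: "50"
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: get-ingester-data       # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: get-fast-block
  resources:
    requests:
      storage: 150Gi            # minimum size from the table above
```

Binding the claim to a dedicated SSD-backed StorageClass keeps the ingester WAL on storage that meets the IOPS guidance, independent of the cluster's default class.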
Object storage
The following are supported configurations for several cloud providers as well as guidance for custom hardware.
Platform | Storage type | Tested with | Comments |
---|---|---|---|
Amazon Web Services | S3 | S3 Standard | S3 object storage service using the Standard storage class. |
Google Cloud Platform | GCS | STANDARD | GCS object storage service using the STANDARD storage class in both regional and dual regional storage locations. |
Microsoft Azure | Azure Blob | Hot or Cool tier storage class with LRS replication | Blob Storage object storage service using the Hot or Cool tier with replication type Locally Redundant Storage (LRS). |
Custom cluster hardware | S3-, GCS-, or Azure Blob-compatible API | See note below. | Build your cluster with fast, locally attached SSD-based storage and object storage compatible with the S3 API. |
GET generally works with object storage installations that support the widely used AWS S3 API. However, performance characteristics vary by vendor, solution, and installation, so test with your own storage to determine whether its performance profile works for your use case. Slower storage backed by hard disks might be acceptable for less intensive workloads, but more intensive workloads will likely require more performant, SSD-backed object storage solutions.
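As an illustration of pointing GET at an S3-compatible store, the sketch below assumes Tempo-style `s3` options and a placeholder endpoint; field names, credential handling, and TLS settings should be confirmed for your GET version and storage vendor.

```yaml
# Sketch only: S3-compatible object storage via an explicit endpoint (names and values are placeholders).
storage:
  trace:
    backend: s3
    s3:
      bucket: get-traces                  # placeholder bucket
      endpoint: minio.example.com:9000    # placeholder S3-compatible endpoint
      access_key: ${S3_ACCESS_KEY}        # supply via environment expansion or your secret tooling
      secret_key: ${S3_SECRET_KEY}
      insecure: false                     # set to true only for plain-HTTP test setups
```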