loki.source.awsfirehose
loki.source.awsfirehose receives log entries over HTTP from Amazon Data Firehose and forwards them to other loki.* components.
The HTTP API exposed is compatible with the Data Firehose HTTP Delivery API. Because the API model Data Firehose uses to deliver data over HTTP is generic, the same component can receive data from multiple origins:
- Amazon CloudWatch logs
- Amazon CloudWatch events
- Custom data through DirectPUT requests
The component uses a heuristic to try to decode as much information as possible from each log record, and it falls back to writing the raw records to Loki. The decoding process goes as follows:
- Data Firehose sends batched requests
- Each record is treated individually
- For each record received in each request:
  - If the record comes from a CloudWatch logs subscription filter, it's decoded and each logging event is written to Loki
  - All other records are written raw to Loki
The component exposes some internal labels, available for relabeling. The following table describes the internal labels available in records coming from any source.
Name | Description | Example |
---|---|---|
__aws_firehose_request_id | Data Firehose request ID. | a1af4300-6c09-4916-ba8f-12f336176246 |
__aws_firehose_source_arn | Data Firehose delivery stream ARN. | arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream |
If the source of the Data Firehose record is CloudWatch logs, the request is further decoded and enriched with even more labels, exposed as follows:
Name | Description | Example |
---|---|---|
__aws_cw_log_group | The log group name of the originating log data. | CloudTrail/logs |
__aws_cw_log_stream | The log stream name of the originating log data. | 111111111111_CloudTrail/logs_us-east-1 |
__aws_cw_matched_filters | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | Destination,Destination2 |
__aws_cw_msg_type | Data messages use the DATA_MESSAGE type. Sometimes CloudWatch Logs may emit Amazon Kinesis Data Streams records with a CONTROL_MESSAGE type, mainly for checking if the destination is reachable. | DATA_MESSAGE |
__aws_owner | The AWS Account ID of the originating log data. | 111111111111 |
Refer to the Example section for a full configuration showing how to enrich each log entry with these labels.
Usage
loki.source.awsfirehose "<LABEL>" {
http {
listen_address = "<LISTEN_ADDRESS>"
listen_port = "<PORT>"
}
forward_to = RECEIVER_LIST
}
The component starts an HTTP server on the configured port and address with the following endpoints:
- /awsfirehose/api/v1/push - accepts POST requests compatible with the Data Firehose HTTP specifications.
You can use the X-Amz-Firehose-Common-Attributes header to set extra static labels.
You can configure the header in the Parameters section of the Data Firehose delivery stream configuration.
Label names must be prefixed with lbl_.
The prefix is removed before the label is stored in the log entry.
Label names and label values must be compatible with the Prometheus data model specification.
Example of a valid X-Amz-Firehose-Common-Attributes value with two custom labels:
{
"commonAttributes": {
"lbl_label1": "value1",
"lbl_label2": "value2"
}
}
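Because the lbl_ prefix is removed before the label is stored, the example above attaches the labels label1="value1" and label2="value2" to each log entry.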
Arguments
You can use the following arguments with loki.source.awsfirehose:
Name | Type | Description | Default | Required |
---|---|---|---|---|
forward_to | list(LogsReceiver) | List of receivers to send log entries to. | | yes |
access_key | secret | If set, require Data Firehose to provide a matching key. | "" | no |
relabel_rules | RelabelRules | Relabeling rules to apply on log entries. | {} | no |
use_incoming_timestamp | bool | Whether or not to use the timestamp received from the request. | false | no |
The relabel_rules field can make use of the rules export value from a loki.relabel component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in forward_to.
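For example, here is a minimal sketch that combines these arguments. The access key value, port, and loki.write.local.receiver are hypothetical placeholders, not values from this documentation:

loki.source.awsfirehose "protected" {
  http {
    listen_port = 9999
  }

  // Require Data Firehose to send this access key with each request.
  access_key = "my-secret-key"

  // Use the timestamp carried by the request instead of the ingestion time.
  use_incoming_timestamp = true

  forward_to = [loki.write.local.receiver]
}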
Blocks
You can use the following blocks with loki.source.awsfirehose:
Name | Description | Required |
---|---|---|
grpc | Configures the gRPC server that receives requests. | no |
http | Configures the HTTP server that receives requests. | no |
grpc
The grpc block configures the gRPC server.
You can use the following arguments to configure the grpc block. Any omitted fields take their default values.
Name | Type | Description | Default | Required |
---|---|---|---|---|
conn_limit | int | Maximum number of simultaneous gRPC connections. Defaults to no limit. | 0 | no |
listen_address | string | Network address on which the server listens for new connections. It defaults to accepting all incoming connections. | "" | no |
listen_port | int | Port number on which the server listens for new connections. Defaults to a random free port. | 0 | no |
max_connection_age_grace | duration | An additive period after max_connection_age after which the connection is forcibly closed. | "infinity" | no |
max_connection_age | duration | The duration for the maximum time a connection may exist before it is closed. | "infinity" | no |
max_connection_idle | duration | The duration after which an idle connection is closed. | "infinity" | no |
server_max_concurrent_streams | int | Limit on the number of concurrent streams for gRPC calls (0 = unlimited). | 100 | no |
server_max_recv_msg_size | int | Limit on the size of a gRPC message this server can receive (bytes). | 4MB | no |
server_max_send_msg_size | int | Limit on the size of a gRPC message this server can send (bytes). | 4MB | no |
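For instance, here is a minimal sketch that adjusts a couple of the gRPC server limits. The port and loki.write.local.receiver are hypothetical placeholders:

loki.source.awsfirehose "grpc_limits" {
  grpc {
    listen_port = 9095

    // Close connections that have been idle for five minutes.
    max_connection_idle = "5m"
  }

  forward_to = [loki.write.local.receiver]
}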
http
The http block configures the HTTP server.
You can use the following arguments to configure the http block. Any omitted fields take their default values.
Name | Type | Description | Default | Required |
---|---|---|---|---|
conn_limit | int | Maximum number of simultaneous HTTP connections. Defaults to no limit. | 0 | no |
listen_address | string | Network address on which the server listens for new connections. Defaults to accepting all incoming connections. | "" | no |
listen_port | int | Port number on which the server listens for new connections. | 8080 | no |
server_idle_timeout | duration | Idle timeout for HTTP server. | "120s" | no |
server_read_timeout | duration | Read timeout for HTTP server. | "30s" | no |
server_write_timeout | duration | Write timeout for HTTP server. | "30s" | no |
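As an example, here is a minimal sketch that relaxes the HTTP timeouts. The port, timeout values, and loki.write.local.receiver are hypothetical placeholders:

loki.source.awsfirehose "http_timeouts" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9999

    // Allow slower clients more time to send and receive each request.
    server_read_timeout  = "60s"
    server_write_timeout = "60s"
  }

  forward_to = [loki.write.local.receiver]
}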
Exported fields
loki.source.awsfirehose doesn't export any fields.
Component health
loki.source.awsfirehose is only reported as unhealthy if given an invalid configuration.
Debug metrics
The following are some of the metrics that are exposed when this component is used.
Note
The metrics include labels such as status_code where relevant, which you can use to measure request success rates.
- loki_source_awsfirehose_batch_size (histogram): Number of records received per request.
- loki_source_awsfirehose_invalid_static_labels_errors (counter): Count of errors while processing Data Firehose static labels.
- loki_source_awsfirehose_record_errors (counter): Count of errors while decoding an individual record.
- loki_source_awsfirehose_records_received (counter): Count of records received.
- loki_source_awsfirehose_request_errors (counter): Count of errors while receiving a request.
Example
This example starts an HTTP server on address 0.0.0.0 and port 9999. The server receives log entries and forwards them to a loki.write component. The loki.write component sends the logs to the specified Loki instance using the basic authentication credentials provided.
loki.write "local" {
endpoint {
url = "http://loki:3100/api/v1/push"
basic_auth {
username = "<USERNAME>"
password_file = "<PASSWORD_FILE>"
}
}
}
loki.source.awsfirehose "loki_fh_receiver" {
http {
listen_address = "0.0.0.0"
listen_port = 9999
}
forward_to = [
loki.write.local.receiver,
]
}
Replace the following:
- <USERNAME>: Your username.
- <PASSWORD_FILE>: Your password file.
As another example, if you are receiving records that originated from a CloudWatch logs subscription, you can enrich each received entry by relabeling internal labels.
The following configuration builds upon the one above but keeps the origin log stream and group as log_stream and log_group, respectively.
loki.write "local" {
endpoint {
url = "http://loki:3100/api/v1/push"
basic_auth {
username = "<USERNAME>"
password_file = "<PASSWORD_FILE>"
}
}
}
loki.source.awsfirehose "loki_fh_receiver" {
http {
listen_address = "0.0.0.0"
listen_port = 9999
}
forward_to = [
loki.write.local.receiver,
]
relabel_rules = loki.relabel.logging_origin.rules
}
loki.relabel "logging_origin" {
rule {
action = "replace"
source_labels = ["__aws_cw_log_group"]
target_label = "log_group"
}
rule {
action = "replace"
source_labels = ["__aws_cw_log_stream"]
target_label = "log_stream"
}
forward_to = []
}
Replace the following:
- <USERNAME>: Your username.
- <PASSWORD_FILE>: Your password file.
Compatible components
loki.source.awsfirehose can accept arguments from the following components:
- Components that export Loki LogsReceiver
Note
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.