
Query your exported logs

Your logs are exported to your storage bucket using Loki’s open source chunk and index formats. This means there are two options available for querying and reading your archived logs:

- Query the archive using the LogCLI tool
- Query the archive using Loki in read-only mode

Note

Due to the synchronization schedule, the archive does not include log data for the most recent period. Synchronization runs at present minus N, where N is currently 7 days, so the archive will not include log data from the past week.

Query the archive using LogCLI

Download and build the latest LogCLI.
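If you don't want to build from source, prebuilt LogCLI binaries are published with each Loki release. For example (the version and platform below are illustrative; adjust them to your environment):

```bash
# Download and unpack a prebuilt LogCLI binary from the Loki releases page
curl -fSLO "https://github.com/grafana/loki/releases/download/v3.2.1/logcli-linux-amd64.zip"
unzip logcli-linux-amd64.zip
chmod +x logcli-linux-amd64
./logcli-linux-amd64 --version
```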

Create a new configuration file named `logcli-config.yaml`. LogCLI needs a writable directory to cache files downloaded from the customer bucket.
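For example, to create the writable cache directories used in the configuration examples below:

```bash
mkdir -p /tmp/loki/index /tmp/loki/index_cache /tmp/data/index /tmp/data/boltdb-cache
```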

Example for AWS S3

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  aws:
    s3: s3://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@<custom_endpoint>/<bucket_name>
    bucketnames: <name of the customer bucket where the archive is stored>
    region: <aws region of the customer bucket>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

Example for Azure Blob Storage

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  azure:
    account_name: <storage account name>
    account_key: <storage account secret key>
    container_name: <name of the customer container where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

Example for Google Cloud Storage

```yaml
storage_config:
  tsdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/loki/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/loki/index_cache'>
  boltdb_shipper:
    active_index_directory: <writable directory to store cache files, for example: '/tmp/data/index'>
    cache_location: <writable directory to store cache files, for example: '/tmp/data/boltdb-cache'>
  gcs:
    bucket_name: <name of the customer bucket where the archive is stored>

compactor:
  working_directory: <writable directory to store cache files, for example: '/tmp/loki'>

query_range:
  cache_index_stats_results: false
```

Use this configuration file in your call to LogCLI:

```bash
logcli query --remote-schema --store-config=./logcli-config.yaml \
  --schema-store="<gcp,s3,azure>" \
  --from="<start date, for example: 2022-09-21T09:00:00Z>" \
  --to="<end date, for example: 2022-09-21T20:15:00Z>" \
  --org-id=<tenant-id> \
  --output=jsonl \
  '<LogQL query, for example: {environment="prod"}>'
```

When using the `--remote-schema` parameter, LogCLI reads the `<tenant>_schemaconfig.yaml` file from the customer bucket.
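For reference, this file follows Loki's standard `schema_config` format. A representative sketch (the dates, store types, and schema versions below are illustrative; the file in your bucket reflects your tenant's actual schema history):

```yaml
schema_config:
  configs:
    - from: 2023-01-01       # date this schema took effect
      store: tsdb            # index store type
      object_store: s3       # s3, gcs, or azure, matching your bucket
      schema: v13
      index:
        prefix: index_
        period: 24h
```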

Query the archive using Loki in read-only mode

To query the logs from the target storage, you must run Loki with the CLI argument -target=querier. The querier is a Loki component that only reads data from storage and never writes, so it cannot modify the content of the archived logs.
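For example, if you run a Loki binary directly, a minimal read-only invocation looks like this (using the `loki-query-archive.yaml` configuration file created in the steps below):

```bash
# Run Loki as a read-only querier against the archive configuration
loki -config.file=./loki-query-archive.yaml -target=querier
```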

To run the querier on a local machine, you can use a Docker Compose setup that contains Loki and Grafana, or deploy a similar configuration to your Kubernetes cluster, VM, or dedicated server.

Note

This Docker Compose setup increases the timeouts: the `query_timeout` property in `limits_config` is set to `10m` on the Loki side, and `timeout: 600` on the Grafana data source side. You might need to increase these values if your query processes a large amount of data. It might also be necessary to raise the default limits if you reach them.
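For example, a minimal sketch of raised limits in the Loki configuration (the values are illustrative; tune them to your workload):

```yaml
limits_config:
  query_timeout: 10m                  # matches the timeout used in this setup
  max_query_length: 0                 # 0 disables the maximum query time range
  max_entries_limit_per_query: 10000  # default is 5000
```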
  1. Create a Loki configuration file called loki-query-archive.yaml using the following base configuration example.

    ```yaml
    auth_enabled: true
    
    server:
      http_listen_port: 3100
      http_server_read_timeout: 10m
      http_server_write_timeout: 10m
    
    memberlist:
      join_members:
        - loki:7946
    
    compactor:
      working_directory: /loki
    
    limits_config:
      query_timeout: 10m
    
    common:
      path_prefix: /loki
      ring:
        instance_addr: 127.0.0.1
        kvstore:
          store: inmemory
      replication_factor: 1
      compactor_address: loki:3100
    
    storage_config:
      tsdb_shipper:
        active_index_directory: /loki/index
        cache_location: /loki/index_cache
      boltdb_shipper:
        active_index_directory: /data/index
        cache_location: /data/boltdb-cache
      # configure the access to your bucket here. S3 example below.
      aws:
        bucketnames: <bucketname>
        s3forcepathstyle: true
        region: us-east-1
        access_key_id: <key>
        secret_access_key: <key>
        endpoint: s3.dualstack.us-east-1.amazonaws.com
    # Copy the `schema_config` block from the `schemaconfig.yaml` file that is synced to your bucket and insert it here.
    ```
    1. Copy the content of the schemaconfig.yaml file that is synced to your bucket and add it to the end of the base configuration file.
  2. Create a docker-compose.query-archive.yaml file:

    ```yaml
    services:
      loki:
        image: grafana/loki:3.2.1
        command: '-config.file=/etc/loki/config.yaml -target=querier'
        ports:
          - '3100:3100'
          - 7946
          - 9095
        volumes:
          - ./fixtures/loki-query-archive.yaml:/etc/loki/config.yaml:ro
        networks:
          - grafana-loki
    
      grafana:
        image: grafana/grafana:11.3.0
        environment:
          - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
          - GF_AUTH_ANONYMOUS_ENABLED=true
          - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
        depends_on:
          - loki
        entrypoint:
          - sh
          - -euc
          - |
            mkdir -p /etc/grafana/provisioning/datasources
            cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
            apiVersion: 1
            datasources:
              - name: Loki
                type: loki
                access: proxy
                url: http://loki:3100
                jsonData:
                  timeout: 600
                  httpHeaderName1: "X-Scope-OrgID"
                secureJsonData:
                  httpHeaderValue1: "{{TENANT_ID}}"
            EOF
            /run.sh
        ports:
          - '3000:3000'
        networks:
          - grafana-loki
    networks:
      grafana-loki: {}
    ```
    1. Update the docker-compose.query-archive.yaml file to replace `{{TENANT_ID}}` with your tenant ID.
    2. Replace ./fixtures/loki-query-archive.yaml in the Loki `volumes` section with the path to the Loki configuration file you created in Step 1.
  3. Run the following command to start the querier:

    docker-compose -f docker-compose.query-archive.yaml up -d
  4. Launch a browser and navigate to http://localhost:3000 to view Grafana.

  5. Navigate to the Explore page, select the Loki data source, and query the archived log data, for example with the query shown below.
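Before querying, you can confirm that Loki has started by checking its readiness endpoint:

```bash
curl http://localhost:3100/ready
```

Once it reports ready, you can run any LogQL query against the archive in Explore. For example (the `environment` label is illustrative; use labels that exist in your own logs):

```logql
{environment="prod"} |= "error"
```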