Because of Python's Global Interpreter Lock (GIL), web servers such as gunicorn serve requests in parallel by forking multiple worker processes. The OpenTelemetry Python SDK starts background export threads that do not survive a fork, so initializing it before the workers are spawned leads to missing telemetry.
To avoid this, register a post_fork hook that initializes the SDK in each worker process after it is forked.
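The fork problem is not specific to OpenTelemetry. The following minimal stdlib sketch (not from the guide) uses a sleeping daemon thread to stand in for the SDK's batch-export thread and shows that it does not survive a fork:

```python
import os
import threading
import time

# A daemon thread in the parent, standing in for the SDK's batch-export thread.
exporter = threading.Thread(target=lambda: time.sleep(60), daemon=True)
exporter.start()

pid = os.fork()
if pid == 0:
    # In the child, only the thread that called fork() survives: the
    # "exporter" thread is gone, so telemetry queued to it would never be sent.
    assert not exporter.is_alive()
    os._exit(0)

os.waitpid(pid, 0)
assert exporter.is_alive()  # still running in the parent
```

This is why each worker must create its own providers and exporters inside the post_fork hook rather than inheriting them from the master process.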
Note

Other web servers, such as uWSGI, can be instrumented in the same way (use the `@postfork` decorator instead of a `post_fork` hook).
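For uWSGI, a sketch of the equivalent hook could look like the following. Note that `uwsgidecorators` is provided by the uWSGI runtime and is only importable when the app actually runs under uWSGI; the module and function names here are illustrative:

```python
# uwsgi_otel.py - import this module from your uWSGI application.
# uwsgidecorators is only available when running under uWSGI.
from uwsgidecorators import postfork


@postfork
def init_telemetry():
    # Initialize the tracer, meter, and logger providers here,
    # exactly as in the gunicorn post_fork hook below.
    ...
```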
Use the following `gunicorn.conf.py` file (start gunicorn with `gunicorn app -c gunicorn.conf.py`) in addition to the Python production instrumentation guide.
```python
import logging
from uuid import uuid4

from opentelemetry import metrics, trace
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
    OTLPLogExporter,
)
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)

# support for logs is currently experimental
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import SERVICE_INSTANCE_ID, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# your gunicorn config here
# bind = "127.0.0.1:8000"

collector_endpoint = "http://localhost:4317"


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    resource = Resource.create(
        attributes={
            # each worker needs a unique service.instance.id to distinguish
            # the metrics it creates in Prometheus
            SERVICE_INSTANCE_ID: str(uuid4()),
            "worker": worker.pid,
        }
    )

    tracer_provider = TracerProvider(resource=resource)
    tracer_provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint=collector_endpoint))
    )
    trace.set_tracer_provider(tracer_provider)

    metrics.set_meter_provider(
        MeterProvider(
            resource=resource,
            metric_readers=[
                PeriodicExportingMetricReader(
                    OTLPMetricExporter(endpoint=collector_endpoint)
                )
            ],
        )
    )

    logger_provider = LoggerProvider(resource=resource)
    logger_provider.add_log_record_processor(
        BatchLogRecordProcessor(OTLPLogExporter(endpoint=collector_endpoint))
    )
    logging.getLogger().addHandler(
        LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
    )
```