Prometheus integration for Starlette.

Overview

Starlette Prometheus

Introduction

Prometheus integration for Starlette.

Requirements

  • Python 3.6+
  • Starlette 0.9+

Installation

$ pip install starlette-prometheus

Usage

A complete example that exposes a Prometheus metrics endpoint under the /metrics/ path:

from starlette.applications import Starlette
from starlette_prometheus import metrics, PrometheusMiddleware

app = Starlette()

app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics/", metrics)

Metrics for paths that do not match any Starlette route can be filtered out by passing the filter_unhandled_paths=True argument to the add_middleware method, as in the example below.
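
For example, building on the snippet above:

from starlette.applications import Starlette
from starlette_prometheus import metrics, PrometheusMiddleware

app = Starlette()

# Requests whose path does not match any registered route are not recorded.
app.add_middleware(PrometheusMiddleware, filter_unhandled_paths=True)
app.add_route("/metrics/", metrics)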

Contributing

This project is absolutely open to contributions, so if you have a nice idea, create an issue to let the community discuss it.

Comments
  • Releasing new version with updated prometheus-client dep?

    Hello! I have a dependency conflict: the current version on PyPI, 0.7.0, still lists the prometheus-client dependency as <8.0 in poetry, but another package needs prometheus-client >=8.0. I see that the updated dependency has already been merged since August; is it possible to release something like v0.7.1 with this dependency?

    opened by DMantis 4
  • Detail how to interact with visualizations of Prometheus

    I'm a new user to Starlette, and looking to monitor some Gunicorn processes for my Starlette server. This library looks promising, and I've successfully integrated and viewed the plain text stats at /metrics.

    However, I'd like a better visualization of these performance metrics. I've looked at integrating Grafana, but am having difficulty (https://prometheus.io/docs/visualization/grafana/ looks promising).

    I'm looking for the most basic level of monitoring; the console templates at https://prometheus.io/docs/visualization/consoles/ look promising.

    It'd be really nice to have the following:

    • A couple of sentences describing the configuration that Grafana needs in order to use starlette-prometheus (which I suspect is just Prometheus).
    • Basic integration with visualizations. I'd like to see some basic graphs of the stats at /metrics on a simple HTML page. I think I'd like to see an interface like this:
    from starlette.applications import Starlette
    from starlette_prometheus import metrics, metric_viz, PrometheusMiddleware
    
    app = Starlette()
    app.add_middleware(PrometheusMiddleware)
    app.add_route("/metrics/", metrics)
    app.add_route("/metric-viz/", metric_viz)
    
    opened by stsievert 4
  • [FEATURE] Path template instead of actual path in metrics

    Hi, there!

    Thanks for a great middleware! I've been using it for a while, and now I want to show response time by URL in Grafana. It works well with regular paths like /users, but not with templated paths like /users/{id}, because in /metrics they appear as actual paths (/users/1, /users/2, etc.).

    I've made a quick pull request https://github.com/perdy/starlette-prometheus/pull/6 for this. Let me know what you think of the idea, and feel free to decline it if it doesn't fit.
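
    For readers unfamiliar with the idea, here is a small illustrative sketch (close in spirit to the linked PR, but not its exact code) of resolving the matched route's template instead of the raw path:

    from starlette.requests import Request
    from starlette.routing import Match

    def get_path_template(request: Request) -> str:
        # Walk the application's route table and return the template of the
        # first fully matching route, e.g. "/users/{id}"; fall back to the
        # raw path when nothing matches.
        for route in request.app.routes:
            match, _ = route.matches(request.scope)
            if match == Match.FULL:
                return route.path
        return request.url.path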

    opened by unmade 4
  • [FEATURE] Group unhandled paths

    In order to reduce the cardinality of Prometheus metrics and labels, this pull request adds an option to group all metrics for requests that do not match any route.

    This solves the problem of random path requests generating unwanted metrics (each requested path generates around 23 lines of metrics), which could potentially be a big issue if exposed to the internet.

    opened by tsotnikov 3
  • Record exceptions as 500 responses

    This way it will be possible to count the number of 5xx responses with the query sum(rate(starlette_responses_total{status_code=~"50."}[1m])).

    Fixes https://github.com/perdy/starlette-prometheus/issues/21

    released 
    opened by matino 2
  • Error when raising exception in FastAPI: UnboundLocalError: local variable 'status_code' referenced before assignment

    Hi,

    I'm seeing an issue with FastAPI, where I am raising an exception in a route handler. I've created a small reproducer:

    from fastapi import FastAPI
    from starlette.middleware import Middleware
    from starlette_prometheus import PrometheusMiddleware
    
    
    middleware = [
        Middleware(PrometheusMiddleware)
    ]
    
    app = FastAPI(middleware=middleware)
    
    @app.get("/")
    def read_root():
        raise ValueError("Test error")
        # return {"Hello": "World"}
    

    Here's the output from running the reproducer and calling it with curl localhost:8000/:

    output
    $ uvicorn example:app                             
    INFO:     Started server process [5099]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
    ERROR:    Exception in ASGI application
    Traceback (most recent call last):
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
        result = await app(self.scope, self.receive, self.send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
        return await self.app(scope, receive, send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 208, in __call__
        await super().__call__(scope, receive, send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/applications.py", line 112, in __call__
        await self.middleware_stack(scope, receive, send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
        await self.app(scope, receive, _send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 57, in __call__
        task_group.cancel_scope.cancel()
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 572, in __aexit__
        raise ExceptionGroup(exceptions)
    anyio._backends._asyncio.ExceptionGroup: 2 exceptions were raised in the task group:
    ----------------------------
    Traceback (most recent call last):
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 30, in coro
        await self.app(scope, request.receive, send_stream.send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
        raise exc
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
        await self.app(scope, receive, sender)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 656, in __call__
        await route.handle(scope, receive, send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 259, in handle
        await self.app(scope, receive, send)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 61, in app
        response = await func(request)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 226, in app
        raw_response = await run_endpoint_function(
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
        return await run_in_threadpool(dependant.call, **values)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool
        return await anyio.to_thread.run_sync(func, *args)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/to_thread.py", line 28, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread
        return await future
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 754, in run
        result = context.run(func, *args)
      File "./example.py", line 14, in read_root
        raise ValueError("Test error")
    ValueError: Test error
    ----------------------------
    Traceback (most recent call last):
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 53, in dispatch
        response = await call_next(request)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 35, in call_next
        message = await recv_stream.receive()
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/streams/memory.py", line 89, in receive
        await receive_event.wait()
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1655, in wait
        await checkpoint()
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 440, in checkpoint
        await sleep(0)
      File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 644, in sleep
        await __sleep0()
      File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 638, in __sleep0
        yield
    asyncio.exceptions.CancelledError
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 55, in __call__
        response = await self.dispatch_func(request, call_next)
      File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 65, in dispatch
        RESPONSES.labels(method=method, path_template=path_template, status_code=status_code).inc()
    UnboundLocalError: local variable 'status_code' referenced before assignment
    
    `pip freeze` output
    anyio==3.4.0
    asgiref==3.4.1
    click==8.0.3
    fastapi==0.70.1
    h11==0.12.0
    idna==3.3
    prometheus-client==0.11.0
    pydantic==1.9.0
    sniffio==1.2.0
    starlette==0.16.0
    starlette-prometheus==0.8.0
    typing-extensions==4.0.1
    uvicorn==0.16.0
    

    It seems like Starlette has started raising asyncio.exceptions.CancelledError, which is not derived from Exception (the class caught here)

    https://github.com/perdy/starlette-prometheus/blob/672ffc363041924956e2cbc7c07bea6ec0dbd5a5/starlette_prometheus/middleware.py#L54

    but rather from BaseException.

    I believe this was introduced in version 0.15.0 of Starlette, in PR https://github.com/encode/starlette/pull/1157.

    I've tried to change the exception handling to catch both, i.e. except (Exception, asyncio.exceptions.CancelledError), and this seems to restore the expected behavior.
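
    A minimal, self-contained demonstration of the underlying point (independent of this project) is that asyncio.CancelledError derives from BaseException on Python 3.8+, so a bare except Exception never catches it:

    import asyncio

    try:
        raise asyncio.CancelledError()
    except Exception:
        # Not reached on Python 3.8+: CancelledError is a BaseException.
        print("caught by `except Exception`")
    except asyncio.CancelledError:
        print("caught only by an explicit CancelledError (or BaseException) handler")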

    opened by kbakk 1
  • Fix: "UnboundLocalError: local variable 'status_code' referenced before assignment"

    Not all of the errors thrown by asyncio inherit from Exception. I was raising an exception in a route to test my Sentry integration, and it threw an asyncio.exceptions.CancelledError, which inherits from BaseException (see https://github.com/python/cpython/blob/3.9/Lib/asyncio/exceptions.py#L9).

    opened by InsidersByte 1
  • How do I disable logging for a specific path

    I am using PrometheusMiddleware from starlette_prometheus, and every second or so it keeps generating a log line, which grows the log file.

    How do I disable this logging for this specific path?

    INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
    INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
    ............several  million times .............................
    INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
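
    One common workaround, shown here as a hedged sketch rather than an official starlette-prometheus feature, is to attach a logging filter to uvicorn's access logger that drops records for the /metrics endpoint:

    import logging

    class MetricsEndpointFilter(logging.Filter):
        """Drop access-log records for the metrics endpoint."""

        def filter(self, record: logging.LogRecord) -> bool:
            return "GET /metrics" not in record.getMessage()

    logging.getLogger("uvicorn.access").addFilter(MetricsEndpointFilter())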
    
    opened by Delvify 1
  • WIP: Add test for prometheus_multiproc_dir

    I was wondering if we could add a test for the situation in which the environment variable prometheus_multiproc_dir is set, thus reaching 100% coverage. The problem is that we get a status code 200 OK, but the content of the response is empty. It would be nice if you have suggestions on how to correctly mock the processing using the prometheus_multiproc_dir. Many thanks and best regards.

    opened by vreyespue 1
  • Make module PEP 561 compatible.

    Add py.typed to indicate that the project has inline type hints. This allows mypy to successfully import and use the type hints provided by the module.

    released 
    opened by trevora 1
  • Fix duplicated charset in the content-type header

    The CONTENT_TYPE_LATEST constant from prometheus_client already contains not only the media type but also the charset value. On the other hand, Starlette adds a charset value to whatever is passed as media_type (related place in code).

    This causes charset value duplication like: text/plain; version=0.0.4; charset=utf-8; charset=utf-8.
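
    As an illustration of the shape such a fix could take (an assumption, not the PR's actual diff), the metrics view can pass CONTENT_TYPE_LATEST as a raw header so that Starlette does not append a second charset:

    from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
    from starlette.requests import Request
    from starlette.responses import Response

    def metrics(request: Request) -> Response:
        # Setting the Content-Type header directly keeps the single charset
        # already present in CONTENT_TYPE_LATEST.
        return Response(generate_latest(), headers={"Content-Type": CONTENT_TYPE_LATEST})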

    released 
    opened by denysxftr 1
  • Bump certifi from 2021.10.8 to 2022.12.7

    Bumps certifi from 2021.10.8 to 2022.12.7.

    dependencies 
    opened by dependabot[bot] 0
  • Setting requests in progress multiprocess mode as livesum

    By default it was reported per pid, which could generate a lot of metrics in the case of many workers in multiprocess mode. These can stack up over time, since worker processes can restart and leak metrics. With this change, the pid label is removed and the metric is the same regardless of the multiprocess context.
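
    For context, a hedged sketch of what such a declaration might look like (the metric and label names here are assumptions, not necessarily the library's exact ones):

    from prometheus_client import Gauge

    REQUESTS_IN_PROGRESS = Gauge(
        "starlette_requests_in_progress",
        "Gauge of requests currently being processed",
        ["method", "path_template"],
        # "livesum" sums the gauge across live worker processes instead of
        # exporting one time series per pid in multiprocess mode.
        multiprocess_mode="livesum",
    )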

    opened by rbizos 0
  • Revamp implementation without BaseHTTPMiddleware

    Hello there 👋

    Because of the numerous limitations of the BaseHTTPMiddleware class provided by Starlette, the Starlette dev team is about to deprecate it and encourage people to write "pure" ASGI middlewares. In particular, one of these limitations causes issue #33 here.

    This PR is an attempt at converting the existing middleware so it no longer uses BaseHTTPMiddleware. The resulting code is quite similar; the only "tricky" part is where we wrap the send function with our own, as is the common way of doing things in ASGI.
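
    For readers who have not seen the pattern, a minimal sketch of a "pure" ASGI middleware that wraps send to observe the response status (not this PR's actual code) looks roughly like this:

    class MetricsASGIMiddleware:
        def __init__(self, app):
            self.app = app

        async def __call__(self, scope, receive, send):
            if scope["type"] != "http":
                await self.app(scope, receive, send)
                return

            status_code = 500  # assume a failure unless a response is started

            async def send_wrapper(message):
                nonlocal status_code
                if message["type"] == "http.response.start":
                    status_code = message["status"]
                await send(message)

            try:
                await self.app(scope, receive, send_wrapper)
            finally:
                # Metrics would be recorded here, labelled with the method,
                # the path template and status_code.
                pass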

    All existing tests are passing.

    I would be glad to discuss it and make all changes needed so we can integrate this new approach in the library.

    Cheers!

    opened by frankie567 2
  • Submounted routes use incorrect path in labels

    When routes are submounted, only the mount prefix is used in the metric labels rather than the full path template.

    Running the following app:

    from starlette.applications import Starlette
    from starlette.middleware import Middleware
    from starlette.responses import Response
    from starlette.routing import Mount, Route
    from starlette_prometheus import PrometheusMiddleware, metrics
    
    
    async def foo(request):
        return Response()
    
    
    async def bar_baz(request):
        return Response()
    
    
    routes = [
        Route("/foo", foo),
        Mount("/bar", Route("/baz", bar_baz)),
        Route("/metrics", metrics),
    ]
    middleware = [Middleware(PrometheusMiddleware)]
    app = Starlette(routes=routes, middleware=middleware)
    

    Then making the following requests:

    $ curl localhost:8000/foo
    $ curl localhost:8000/bar/baz
    $ curl localhost:8000/metrics
    

    Gives the following output (I only included one metric as an example, but it's the same for all of them). Note that the request to localhost:8000/bar/baz has a path_template label of /bar.

    starlette_requests_total{method="GET",path_template="/foo"} 1.0
    starlette_requests_total{method="GET",path_template="/bar"} 1.0
    starlette_requests_total{method="GET",path_template="/metrics"} 1.0
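
    A hedged sketch (not the library's actual code) of how the full template could be recovered for submounted routes is to descend into Mount instances and join the mount prefix with the matched child template:

    from starlette.routing import Match, Mount

    def full_path_template(scope, routes, prefix=""):
        for route in routes:
            match, child_scope = route.matches(scope)
            if match != Match.FULL:
                continue
            if isinstance(route, Mount):
                # Recurse into the mounted routes with the adjusted scope.
                nested = full_path_template(
                    dict(scope, **child_scope), route.routes, prefix + route.path
                )
                if nested is not None:
                    return nested
            else:
                return prefix + route.path
        return None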
    
    opened by ter0 2
  • respect PROMETHEUS_MULTIPROC_DIR in example metrics view

    Hey

    Since prometheus_client 0.10.x deprecated prometheus_multiproc_dir in favor of PROMETHEUS_MULTIPROC_DIR, I updated the example metrics view to also respect PROMETHEUS_MULTIPROC_DIR. What do you think?
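
    A hedged sketch of a metrics view that honours either spelling of the environment variable (an assumption about the intent of the change, not the exact patch):

    import os

    from prometheus_client import (
        CONTENT_TYPE_LATEST,
        REGISTRY,
        CollectorRegistry,
        generate_latest,
    )
    from prometheus_client.multiprocess import MultiProcessCollector
    from starlette.requests import Request
    from starlette.responses import Response

    def metrics(request: Request) -> Response:
        # Prefer the uppercase name documented by prometheus_client >= 0.10,
        # but keep accepting the deprecated lowercase one.
        multiproc_dir = os.environ.get("PROMETHEUS_MULTIPROC_DIR") or os.environ.get(
            "prometheus_multiproc_dir"
        )
        if multiproc_dir:
            registry = CollectorRegistry()
            MultiProcessCollector(registry)
        else:
            registry = REGISTRY
        return Response(generate_latest(registry), headers={"Content-Type": CONTENT_TYPE_LATEST})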

    opened by celloni 0
Releases (v0.9.0)
Owner
José Antonio Perdiguero
Artificial Intelligence Engineer & Software Architect
A watchdog providing peace of mind that your Chia farm is running smoothly 24/7.

Photo by Zoltan Tukacs on Unsplash Watchdog for your Chia farm So you've become a Chia farmer and want to maximize the probability of getting a reward

Martin Mihaylov 466 Dec 11, 2022
Development tool to measure, monitor and analyze the memory behavior of Python objects in a running Python application.

README for pympler Before installing Pympler, try it with your Python version: python setup.py try If any errors are reported, check whether your Pyt

996 Jan 01, 2023
pprofile + matplotlib = Python program profiled as an awesome heatmap!

pyheat Profilers are extremely helpful tools. They help us dig deep into code, find and understand performance bottlenecks. But sometimes we just want

Vishwas B Sharma 735 Dec 27, 2022
Monitor Memory usage of Python code

Memory Profiler This is a python module for monitoring memory consumption of a process as well as line-by-line analysis of memory consumption for pyth

Fabian Pedregosa 80 Nov 18, 2022
Sentry is cross-platform application monitoring, with a focus on error reporting.

Users and logs provide clues. Sentry provides answers. What's Sentry? Sentry is a service that helps you monitor and fix crashes in realtime. The serv

Sentry 33k Jan 04, 2023
System monitor - A python-based real-time system monitoring tool

System monitor A python-based real-time system monitoring tool Screenshots Installation Run My project with these commands pip install -r requiremen

Sachit Yadav 4 Feb 11, 2022
Prometheus integration for Starlette.

Starlette Prometheus Introduction Prometheus integration for Starlette. Requirements Python 3.6+ Starlette 0.9+ Installation $ pip install starlette-p

José Antonio Perdiguero 229 Dec 21, 2022
Was an interactive continuous Python profiler.

☠ This project is not maintained anymore. We highly recommend switching to py-spy which provides better performance and usability. Profiling The profi

What! Studio 3k Dec 27, 2022
Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.

Glances - An eye on your system Summary Glances is a cross-platform monitoring tool which aims to present a large amount of monitoring information thr

Nicolas Hennion 22k Jan 04, 2023
Watch your Docker registry project size, then monitor it with Grafana.

Nova Kwok 33 Apr 05, 2022
Cross-platform lib for process and system monitoring in Python

Home Install Documentation Download Forum Blog Funding What's new Summary psutil (process and system utilities) is a cross-platform library for retrie

Giampaolo Rodola 9k Jan 02, 2023
Prometheus instrumentation library for Python applications

Prometheus Python Client The official Python 2 and 3 client for Prometheus. Three Step Demo One: Install the client: pip install prometheus-client Tw

Prometheus 3.2k Jan 07, 2023
Tracy Profiler module for the Godot Engine

GodotTracy Tracy Profiler module for the Godot Engine git clone --recurse-submodules https://github.com/Pineapple/GodotTracy.git Copy godot_tracy fold

Pineapple Works 17 Aug 23, 2022
ScoutAPM Python Agent. Supports Django, Flask, and many other frameworks.

Scout Python APM Agent Monitor the performance of Python Django apps, Flask apps, and Celery workers with Scout's Python APM Agent. Detailed performan

Scout APM 59 Nov 26, 2022
Visual profiler for Python

vprof vprof is a Python package providing rich and interactive visualizations for various Python program characteristics such as running time and memo

Nick Volynets 3.9k Dec 19, 2022
Yet Another Python Profiler, but this time thread&coroutine&greenlet aware.

Yappi Yet Another Python Profiler, but this time thread&coroutine&greenlet aware. Highlights Fast: Yappi is fast. It is completely written in C and lo

Sümer Cip 1k Jan 01, 2023
ASGI middleware to record and emit timing metrics (to something like statsd)

timing-asgi This is a timing middleware for ASGI, useful for automatic instrumentation of ASGI endpoints. This was developed at GRID for use with our

Steinn Eldjárn Sigurðarson 99 Nov 21, 2022
Automatically monitor the evolving performance of Flask/Python web services.

Flask Monitoring Dashboard A dashboard for automatic monitoring of Flask web-services. Key Features • How to use • Live Demo • Feedback • Documentatio

663 Dec 29, 2022
Sampling profiler for Python programs

py-spy: Sampling profiler for Python programs py-spy is a sampling profiler for Python programs. It lets you visualize what your Python program is spe

Ben Frederickson 9.5k Jan 08, 2023