An open source micro framework for observability

Autometrics makes it easy to instrument any function with the most useful metrics and generates powerful queries to help you identify and debug issues in production.

[Screenshot: the Autometrics Explorer. Tabs for Functions, SLOs, and Alerts; a function list grouped by module (authentication_user_mgmt with create_user, login, logout, register, forgot_password, change_password, delete_user, view_user; plus content_mgmt, analytics, search, filtering, payment_billing, messaging); and live metrics for create_user over the last 3 hours: request rate (calls/sec), error ratio, latency (seconds), and concurrency, each with a Graph and PromQL query view. The latency panel plots the 99th and 95th percentile response time: if the 99th percentile latency is 500 milliseconds, 99% of calls to the function are handled within 500 ms or less.]

The Autometrics workflow

Function-level metrics

Use Autometrics decorators and wrappers to quickly instrument any function in your code. Autometrics creates the most important metrics for each function and records them in a uniform, Prometheus-compatible format.
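As an illustration of what such instrumentation records, here is a minimal hand-rolled sketch of a timing-and-counting decorator. This is not the autometrics implementation (which emits real Prometheus metrics); the `instrumented` decorator and its in-memory counters are hypothetical stand-ins.

```python
import time
from collections import Counter
from functools import wraps

calls = Counter()   # (function name, result) -> call count
durations = {}      # function name -> list of observed durations in seconds

def instrumented(func):
    """Hypothetical stand-in for @autometrics: count calls and time them."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            calls[(func.__name__, "ok")] += 1
            return result
        except Exception:
            calls[(func.__name__, "error")] += 1
            raise
        finally:
            durations.setdefault(func.__name__, []).append(
                time.perf_counter() - start
            )
    return wrapper

@instrumented
def say_hello():
    return "hello world"

say_hello()
print(calls[("say_hello", "ok")])   # → 1
```

From these two raw series (a call counter labeled by result, and observed durations) the request rate, error ratio, latency, and concurrency shown in the Explorer can all be derived.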

Accurate and actionable alerts

The Autometrics framework includes first-class primitives for working with Service Level Objectives and makes it easy for any developer to adopt a powerful alerting workflow.

[Screenshot: a firing alert for "My API SLO" (rule autometrics-success-rate-95), triggered 24 hours ago, with the alert marked on the request-rate timeline.]

Get actionable alerts when things go wrong

Quickly spot misbehaving functions and put out fires.


Create alerts based on your SLOs

Set up alerting workflows to get notified when you’re at risk of breaching your SLOs.

# Create an objective for low latency
API_SLO_LOW_LATENCY = Objective(
    "My API SLO",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

Group them into Service Level Objectives

Define latency and error rate goals with SLOs that your API needs to meet.
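The arithmetic behind a success-rate objective is simple. The sketch below (a hypothetical helper, not the autometrics alerting code, which typically evaluates burn rates over multiple time windows) shows the comparison an SLO check boils down to, assuming a 99% success-rate target:

```python
def slo_at_risk(ok_calls: int, error_calls: int, target: float = 0.99) -> bool:
    """Return True when the observed success rate falls below the objective.

    Illustrative only: real SLO alerting evaluates error-budget burn
    rates over sliding windows rather than a single point-in-time ratio.
    """
    total = ok_calls + error_calls
    if total == 0:
        return False  # no traffic, nothing to evaluate
    return ok_calls / total < target

print(slo_at_risk(9_900, 100))  # 99.0% success -> False, objective met
print(slo_at_risk(9_800, 200))  # 98.0% success -> True, at risk
```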

@autometrics
def say_hello():
    return "hello world"

Annotate functions in your code to instrument them.

Automatically track the most important metrics such as response time and error rate.

Meet Explorer,
your debug companion

Visualize and investigate the performance of your functions, service level objectives and alerts.

[Screenshot: the Explorer tabs for Functions, SLOs, and Alerts in the Production environment over the last 3 hours, with live metrics and PromQL query views for the create_user function.]

Local-first. Starting with your command line

Preview your metrics locally, then ship to prod. Use the Autometrics CLI to visualize and iterate on your metrics collection and alerting thresholds.

# spin up a local Prometheus instance and the UI
# scrape your app that listens on port :3000
am start :3000

Now sampling the following endpoints for metrics: http://localhost:3000/metrics
Using Prometheus version: 2.45.0
Starting prometheus
Explorer endpoint: http://127.0.0.1:6789
Prometheus endpoint: http://127.0.0.1:9090/prometheus

All features to get you started with observability

Additional capabilities to supercharge your debugging.

Useful metrics only

The Autometrics macro or decorator adds useful metrics to any function without you having to think about what is worth tracking.

Hassle-free PromQL

Generates powerful Prometheus queries to help you quickly identify and debug issues in production, straight from the IDE with the VS Code extension.
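For instance, a generated request-rate query looks roughly like the following. The metric and label names follow the autometrics conventions, but this particular query is an illustrative sketch, not verbatim generator output:

```promql
# Request rate for one instrumented function, per module
sum by (function, module) (
  rate(function_calls_total{function="create_user"}[5m])
)
```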

Define SLOs in source code

Developers can easily add Service-Level Objectives and get powerful alerts for errors and latency issues.

Spot faulty commits and deploys

Tracks your application's version to help identify commits that introduced errors or latency.

Track it in CI

Autometrics ships a GitHub Action that can help you track how well-instrumented your code is and how new commits will impact your observability.

Dashboards out of the box

Visualize your Autometrics data in Grafana dashboards with zero config.

Join the community

Create your first pull request on GitHub, help flesh out the roadmap, or discuss Autometrics on Discord.