relay_metrics/
lib.rs

//! Metric protocol, aggregation and processing for Sentry.
//!
//! Metrics are high-volume values sent from Sentry clients, integrations, or extracted from errors
//! and transactions, that can be aggregated and queried over large time windows. As opposed to rich
//! errors and transactions, metrics carry relatively little context information in tags with low
//! cardinality.
//!
//! # Protocol
//!
//! Clients submit metrics in a [text-based protocol](Bucket) based on StatsD. See the [field
//! documentation](Bucket#fields) on `Bucket` for more information on the components. A sample
//! submission looks like this:
//!
//! ```text
#![doc = include_str!("../tests/fixtures/buckets.statsd.txt")]
//! ```
//!
//! The metric type is part of a metric's signature, just like its unit. It is therefore allowed to
//! reuse a metric name for multiple metric types, which results in multiple separate metrics being
//! recorded.
//!
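//! For example, the following two lines (illustrative values, not taken from the fixtures) record
//! the same name once as a counter and once as a distribution, producing two distinct metrics:
//!
//! ```text
//! endpoint.hits:4|c
//! endpoint.hits:21|d
//! ```
//!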
//! # Metric Envelopes
//!
//! To send one or more metrics to Relay, the raw protocol is enclosed in an envelope item of type
//! `metrics`:
//!
//! ```text
//! {}
//! {"type": "statsd", ...}
#![doc = include_str!("../tests/fixtures/buckets.statsd.txt")]
//! ...
//! ```
//!
//! Note that the name format used in the statsd protocol is different from the MRI: metric names
//! are not prefixed with `<ty>:`, since the type is declared in a separate component of the statsd
//! line. If no metric namespace is specified, the `"custom"` namespace is assumed.
//!
//! Optionally, a timestamp can be added to every line of the submitted envelope. The timestamp has
//! to be a valid Unix timestamp (UTC) and must be prefixed with `T`. If it is omitted, the
//! `received` time of the envelope is assumed.
//!
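//! For example, assuming the timestamp is appended as a trailing `|`-separated component (an
//! illustrative line, not from the fixtures; see the [`Bucket`] field docs for the exact
//! position):
//!
//! ```text
//! endpoint.hits:4|c|T1615889440
//! ```
//!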
//! # Aggregation
//!
//! Relay accumulates all metrics in [time buckets](Bucket) before sending them onwards. Aggregation
//! is handled by the [`aggregator::Aggregator`], which should be created once for the entire
//! system. It flushes aggregates in regular intervals, either shortly after their original time
//! window has passed or with a debounce delay for backdated submissions.
//!
//! **Warning**: With chained Relays, submission delays accumulate.
//!
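//! As a simplified illustration of the flush timing (a sketch, not the actual
//! [`aggregator::Aggregator`] API; `bucket_interval` and `initial_delay` stand in for its
//! configuration values):
//!
//! ```
//! // Align a timestamp to the start of its bucket, then flush shortly after
//! // the bucket's original time window has passed.
//! fn flush_time(timestamp: u64, bucket_interval: u64, initial_delay: u64) -> u64 {
//!     let bucket_start = timestamp - timestamp % bucket_interval;
//!     bucket_start + bucket_interval + initial_delay
//! }
//!
//! assert_eq!(flush_time(17, 10, 3), 23);
//! ```
//!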
//! Aggregate buckets are encoded in JSON with the following schema:
//!
//! ```json
#![doc = include_str!("../tests/fixtures/buckets.json")]
//! ```
//!
//! # Ingestion
//!
//! Processing Relays write aggregate buckets into the ingestion Kafka stream. The schema is similar
//! to the aggregation payload, with the addition of scoping information. Each bucket is sent in a
//! separate message:
//!
//! ```json
#![doc = include_str!("../tests/fixtures/kafka.json")]
//! ```
#![warn(missing_docs)]
#![doc(
    html_logo_url = "https://raw.githubusercontent.com/getsentry/relay/master/artwork/relay-icon.png",
    html_favicon_url = "https://raw.githubusercontent.com/getsentry/relay/master/artwork/relay-icon.png"
)]

pub mod aggregator;
pub mod cogs;

mod bucket;
mod finite;
mod protocol;
mod statsd;
mod utils;
mod view;

pub use bucket::*;
pub use finite::*;
pub use protocol::*;
pub use utils::ByNamespace;
pub use view::*;