Collecting Prometheus metrics from multi-process web servers, the Ruby case | Michal Kazmierczak
08-Sep-2023
In Prometheus, metrics collection must follow concrete rules. For example, a counter must either increase monotonically or reset to zero; violating this rule results in collecting nonsensical data. This is a challenge with multi-process web servers (like Unicorn or Puma in Ruby, or Gunicorn in Python), where each scrape might reach a different instance of the app, each holding its own local copy of the metric[1]. These days, horizontal autoscaling and threaded web servers only add to the complexity of the problem. Typical solutions - synchronizing scrapes or adding extra labels to start a new time series for every instance of the app - can't always be implemented.

In this article I describe a rebellious solution to the problem, which combines StatsD for metric collection and aggregation with Prometheus for time series storage and data retrieval.
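To make the StatsD approach concrete, here is a minimal sketch of the idea: every web server process fires fire-and-forget UDP packets in the StatsD wire format (`name:value|c` for counters) at a single aggregator such as statsd_exporter, which sums them and exposes one consistent Prometheus counter. The class and method names below are hypothetical, and the host and port are assumptions (9125 is statsd_exporter's default StatsD UDP port); the wire format itself is the standard StatsD protocol.

```ruby
require "socket"

# Hypothetical minimal StatsD client for illustration.
# Each process sends increments over UDP; aggregation happens
# centrally, so per-process local state never reaches Prometheus.
class MiniStatsd
  def initialize(host: "127.0.0.1", port: 9125)
    @host = host
    @port = port
    @socket = UDPSocket.new
  end

  # Build the StatsD wire payload for a counter increment,
  # e.g. "http_requests_total:1|c".
  def counter_payload(name, value = 1)
    "#{name}:#{value}|c"
  end

  # Fire-and-forget: UDP send, swallow network errors so that
  # metrics emission never crashes the request path.
  def increment(name, value = 1)
    @socket.send(counter_payload(name, value), 0, @host, @port)
  rescue SystemCallError
    nil
  end
end
```

A request handler in any worker process could then call `MiniStatsd.new.increment("http_requests_total")`; because the aggregator owns the counter, scrapes see a single monotonic series regardless of which worker handled the request.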