Collecting Prometheus metrics from multi-process web servers, the Ruby case | Michal Kazmierczak

08-Sep-2023
In Prometheus, metrics collection must follow concrete rules. For example, counters must either increase monotonically or reset to zero; violating this rule results in collecting nonsensical data.

This is a challenge with multi-process web servers (like Unicorn or Puma in Ruby, or Gunicorn in Python), where each scrape might reach a different instance of the app, each holding its own local copy of the metric[1]. These days, horizontal autoscaling and threaded web servers only add to the complexity of the problem. Typical solutions - synchronizing scrapes or adding extra labels to start a new time series for every instance of the app - can't always be applied.

In this article I describe a rebellious solution to the problem: combining StatsD for metric collection and aggregation with Prometheus for time series storage and data retrieval.
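To make the setup concrete, here is a minimal sketch (not taken from the article itself) of the emitting side, assuming the statsd-ruby gem and a statsd_exporter sidecar listening for StatsD packets on UDP port 9125; Prometheus then scrapes the exporter's metrics endpoint instead of the app processes directly.

```ruby
# Minimal sketch: each web server process fires UDP packets at a
# statsd_exporter sidecar, which aggregates them in one place and exposes
# the result for Prometheus to scrape.
# Assumption: the statsd-ruby gem is installed and statsd_exporter is
# reachable on UDP port 9125 (its default StatsD listen port).
require 'statsd'

STATSD = Statsd.new('127.0.0.1', 9125)

# Somewhere in the request cycle of a Unicorn/Puma worker:
STATSD.increment('http_requests_total')        # counter-style metric
STATSD.timing('http_request_duration_ms', 42)  # timer, aggregated by the exporter
```

Because the UDP packets are fire-and-forget, every worker process can emit independently and the exporter handles aggregation, so no cross-process synchronization is needed at scrape time.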