Seasonality Explainer

Intro

Loops Root Cause Analysis (RCA) includes various types of explainers:

  • Segment explainer - Identifies which user or entity segments are contributing to the observed effects
  • Algebraic tree explainers - When a KPI consists of an algebraic combination of other signals (e.g., profit = revenue - clicks * CPC), determines each sub-signal's contribution to the KPI change
  • Population explainer - Measures how much a KPI change can be attributed to fluctuations in entity numbers (e.g., a conversion rate drop resulting from an influx of new users due to changed marketing spend)
  • Seasonality explainer - Assesses how much a KPI change is influenced by yearly seasonal patterns or recurring events

This document explains how the seasonality explainer works by identifying recurring patterns between current and previous years and using these relationships to quantify seasonality's impact on observed changes.


In practice, Loops' seasonality explainer consists of two distinct types:

  1. Periodical seasonality explainer - identifies similarities in KPI behavior between an extended period (~2 months) preceding the analyzed date and the parallel period (same date range) from the previous year.
  2. Notable events explainer (also called holiday explainer) - focuses on significant events occurring on the analyzed dates that are relevant to the user's country mix. These events often fall on different dates each year (such as Thanksgiving, the US Super Bowl, and the Brazilian Carnival) and require specific methodology.

In this document, we'll describe the methodology for both types.

Methodology

Daily data often exhibits strong weekly seasonality fluctuations, which is why we employ a different methodology to quantify the yearly seasonality component in daily KPIs compared to non-daily (weekly/monthly) KPIs.


Periodical seasonality - Weekly / Monthly KPI

The seasonality explainer methodology for weekly/monthly KPIs involves three key steps:

  1. Detecting and modeling the relationship between this year's and last year's KPI values over the same time periods.
  2. Using this model to predict this year's KPI values in both the analyzed and reference periods based on last year's data.
  3. Calculating the percentage explained by seasonality by comparing the predicted trend to the actual trend (the difference between analyzed and reference periods).

Let's examine each of these steps in detail:


Step 1 - Modeling the relationship

We use a simple regression to predict this year's values from the KPI values on the same dates last year. Since our weekly dates are always Monday-to-Sunday, we take exactly 52 weeks back for each date to find its parallel. We analyze about 2 months (8 weeks) of data to detect this relationship—enough to identify meaningful patterns while focusing on recent periods, as seasonality can vary throughout the year.

Even when the absolute values differ significantly between years (much higher or lower), a KPI with a similar pattern will produce a well-fitted model, despite the limited observations. Below is an example showing values from two consecutive years resulting in a good fit (with only 4 values available from last year). The quality of this fit and the number of available data points directly influence our statistical significance calculations—better fits and more data points produce smaller standard errors in the model.
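The fitting step can be sketched as follows. This is a minimal ordinary-least-squares illustration with hypothetical weekly values, not Loops' actual implementation; note how a KPI at exactly twice last year's scale still yields a perfect fit:

```python
# Minimal sketch of Step 1: regress this year's weekly KPI values on the
# values from the parallel weeks (exactly 52 weeks earlier) last year.
# The numbers below are hypothetical, not real Loops data.

def fit_simple_regression(last_year, this_year):
    """Ordinary least squares for y = a + b*x, returning (a, b, r_squared)."""
    n = len(last_year)
    mean_x = sum(last_year) / n
    mean_y = sum(this_year) / n
    sxx = sum((x - mean_x) ** 2 for x in last_year)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(last_year, this_year))
    b = sxy / sxx
    a = mean_y - b * mean_x
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(last_year, this_year))
    ss_tot = sum((y - mean_y) ** 2 for y in this_year)
    r_squared = 1 - ss_res / ss_tot
    return a, b, r_squared

# A KPI with a similar weekly pattern, even at twice the scale, fits well:
last_year_wau = [100, 120, 110, 130, 125, 140, 135, 150]
this_year_wau = [200, 240, 220, 260, 250, 280, 270, 300]
a, b, r2 = fit_simple_regression(last_year_wau, this_year_wau)
print(round(b, 3), round(r2, 3))  # slope ≈ 2.0, R² ≈ 1.0
```

A high R² here is what "well-fitted" means in this step; the standard errors of a and b shrink as the fit improves and as more weekly pairs are available.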


Step 2 - Predicted values and trend

Let's call the model from the previous step f(x). For every KPI value from last year ($Y_{old}$), this model predicts a value for the same date this year, which we denote with a flat over-line: $\bar{Y} = f(Y_{old})$.

We apply this model to predict values for both the analyzed period (the week or month being examined) and the reference period. Depending on the analysis mode, the reference period could be either:

  • Compared period (typically the previous week in weekly data, unless specified otherwise)
  • Expected by Loops - where the reference value is the average of the previous 4 weeks. Here, we predict each of the last 4 weeks and average those predictions.

This gives us four key values:

  1. $Y_{analyzed}$ - The actual KPI value in the analyzed period
  2. $Y_{ref}$ - The actual KPI value in the reference period (e.g., last week or last 4 weeks)
  3. $\bar{Y}_{analyzed}$ - The predicted KPI value in the analyzed period, based on the model and last year's value
  4. $\bar{Y}_{ref}$ - The predicted KPI value in the reference period, based on the model and last year's value

Step 3 - Calculate % explained as the ratio of predicted to actual trend

We now use these 4 actual and predicted values to calculate the percentage explained by seasonality as follows:

$$\%\,\text{explained} = \frac{\bar{Y}_{analyzed} - \bar{Y}_{ref}}{Y_{analyzed} - Y_{ref}}$$

In the numerator, we have the predicted trend (the change in KPI) using last year's values and the fitted model, while the denominator contains the actual trend observed this year. When this year's and last year's KPI values have very similar shapes (for example, if all of last year's WAU values are half of this year's values), this results in a perfect fit—meaning the predicted values will equal the actual values, yielding 100% explanation. Dissimilar values produce a poorly fitted model, resulting in a statistically insignificant result. Similar values with a different trend on the analyzed date can result in partial explanation, meaning there is statistically significant seasonality, but it only explains part of the observed trend.
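This ratio can be sketched in a few lines (the values below are hypothetical model outputs and observations, purely for illustration):

```python
# Sketch of Step 3: % explained by seasonality is the predicted trend
# (last year's values passed through the fitted model) divided by the
# actual trend observed this year. All values below are hypothetical.

def pct_explained(y_analyzed, y_ref, y_analyzed_pred, y_ref_pred):
    predicted_trend = y_analyzed_pred - y_ref_pred
    actual_trend = y_analyzed - y_ref
    return predicted_trend / actual_trend

# Perfect-fit case: the model predicts the actual values exactly, so
# seasonality explains 100% of the observed change.
print(pct_explained(300, 270, 300, 270))  # 1.0 → 100% explained

# Partial explanation: the model predicts only half of the observed drop.
print(pct_explained(240, 300, 270, 300))  # 0.5 → 50% explained
```

The partial case corresponds to the statistically significant but incomplete seasonality explanation described above.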


Periodical Seasonality - Daily KPI

In daily KPIs, weekly seasonality is often stronger than any yearly seasonality (weekends typically behave very differently from weekdays). Moreover, looking back to the same date last year always lands on a different day of the week (one day earlier in most cases, or two days earlier when Feb 29th falls between the dates). For these reasons, we take a different approach to calculating seasonality in daily data, utilizing our expected-values model (used mainly to detect anomalies in the data). The main idea is to calculate the deviation from the expected value last year and compare it to the deviation from the expected value this year. Depending on the analysis mode, the daily expected value can be either:

  • Compared date (typically the same day in previous week, unless specified otherwise)
  • Expected by Loops - a modeled value that takes into account both recent trends in the data and weekly seasonality.

That way, the fact that these dates fall on different days of the week is taken into account. Furthermore, we compare the lift from the expected value (rather than the absolute difference), assuming the seasonality is multiplicative rather than additive (i.e., we expect a given date to affect the KPI by a certain percentage, not by a fixed value difference). Here is the formula:

$$\%\,\text{explained} = \frac{\left(Y^{\text{last year}} - Y^{\text{last year}}_{expected}\right) / \, Y^{\text{last year}}_{expected}}{\left(Y^{\text{this year}} - Y^{\text{this year}}_{expected}\right) / \, Y^{\text{this year}}_{expected}}$$

So if, for example, we see a 20% spike in users on March 31st, 2025 (relative to the reference value determined by the analysis mode), we go one year back to the same date and check the deviation from the expected value using the same reference methodology. If the KPI showed an 18% spike relative to its reference on March 31st, 2024, this implies seasonality accounts for 18/20 = 90% of this year's trend under this analysis mode.
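The daily formula reduces to a ratio of two relative lifts; a minimal sketch, with hypothetical user counts chosen to reproduce the 20%/18% example:

```python
# Sketch of the daily-KPI seasonality formula: compare last year's lift
# from the expected value to this year's lift on the same date, using
# the same reference methodology. Assumes multiplicative seasonality.
# The user counts below are hypothetical.

def lift(actual, expected):
    """Relative deviation from the expected value."""
    return (actual - expected) / expected

def daily_pct_explained(actual_now, expected_now, actual_prev, expected_prev):
    return lift(actual_prev, expected_prev) / lift(actual_now, expected_now)

# A 20% spike this year (1200 vs. 1000 expected) against an 18% spike on
# the same date last year (590 vs. 500 expected):
print(round(daily_pct_explained(1200, 1000, 590, 500), 3))  # 0.9
```

A ratio above 1 would mean last year's seasonal effect was even stronger than this year's observed change.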


Notable events (a.k.a. holiday explainer)

The notable event explainer uses a unified methodology for both daily and weekly granularity, similar to the daily KPI methodology in periodical seasonality. While the periodical seasonality model compares approximately the same dates across consecutive years, the notable events explainer identifies the effects of known impactful events that may occur on different dates each year. This model incorporates additional metadata beyond the periodical seasonality model, including:

  • A comprehensive list of impactful global events, such as religious observances, national/international holidays, major sports/cultural events, and daylight saving time changes. Each event includes its annual date range, affected countries, and other relevant metadata.
  • Country-specific user/entity distribution in the RCA analysis to prioritize relevant events. For instance, an analysis focused on the US market would not consider events that don't affect US users (such as Brazilian Carnival).

The model operates as follows: it compiles potential notable events for the analyzed period (whether a specific day, week, or multiple weeks in custom-period mode). To select a single, unambiguous explanation, candidate events are ranked on two factors: the proportion of the affected population (events affecting countries with larger shares of the analysis population rank higher) and the extent of overlap between the event's date range and the analyzed period. The highest-ranking event that exceeds thresholds for both population share and time overlap is used for estimation via the daily periodical-seasonality formula. The key difference is that last year's values are taken from when the event occurred last year, not necessarily the same calendar date.
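The ranking step can be sketched as below. The event records, date ranges, country shares, and threshold values are illustrative assumptions, not Loops' actual event catalog or schema:

```python
from datetime import date

# Illustrative event records: name, date range, and affected countries.
EVENTS = [
    {"name": "Easter", "start": date(2025, 4, 18), "end": date(2025, 4, 21),
     "countries": {"US", "BR", "DE"}},
    {"name": "Carnival", "start": date(2025, 3, 1), "end": date(2025, 3, 4),
     "countries": {"BR"}},
]

def overlap_days(event, period_start, period_end):
    """Number of days the event's date range overlaps the analyzed period."""
    start = max(event["start"], period_start)
    end = min(event["end"], period_end)
    return max((end - start).days + 1, 0)

def rank_events(events, period_start, period_end, country_shares,
                min_share=0.1, min_overlap=1):
    """Rank candidate events by affected-population share, then overlap."""
    candidates = []
    for ev in events:
        share = sum(country_shares.get(c, 0.0) for c in ev["countries"])
        days = overlap_days(ev, period_start, period_end)
        if share >= min_share and days >= min_overlap:
            candidates.append((share, days, ev["name"]))
    candidates.sort(reverse=True)
    return candidates

# Analyzed week Apr 14-20, 2025, with a US-heavy population mix:
shares = {"US": 0.6, "DE": 0.2, "BR": 0.1}
ranked = rank_events(EVENTS, date(2025, 4, 14), date(2025, 4, 20), shares)
print(ranked[0][2])  # Easter
```

Carnival is filtered out here because its date range does not overlap the analyzed week, mirroring the thresholding described above.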


For example, suppose we're investigating a week-over-week drop in MAU during April 14-20, 2025. Easter, falling on April 20th in 2025 and affecting major markets in the user base (US, Brazil, Germany, etc.), emerges as the strongest candidate for the notable events explainer. We then compare the week-over-week drop on April 14th, 2025, to the drop during Easter week in 2024 (March 25th) versus the preceding week. If MAU dropped by 10% this year and Easter week last year was 8% lower than the week before, we conclude that Easter explains 80% of the analyzed drop.

Summary

The Seasonality Explainer in Loops’ RCA toolkit quantifies how much of an observed KPI change is driven by predictable, recurring patterns, whether they arise from year-over-year cycles or from specific high-impact events. For periodic seasonality, we model historical parallels (the same weekly or monthly window in the prior year) to predict expected KPI values, then compare predicted versus actual trends to calculate the percentage of change attributable to seasonality. For daily data, we leverage our anomaly-detection model to normalize day-of-week effects and compute multiplicative lifts from expected values, ensuring that weekend/weekday differences do not distort year-on-year comparisons.


The Notable Events Explainer (or Holiday Explainer) extends this approach by identifying country-relevant events—like major holidays or cultural milestones—that shift on the calendar each year. By matching each analysis period to its corresponding event window in the previous year and weighting by affected population share, we estimate how much these notable events account for the observed KPI fluctuation. Together, these methodologies give analysts a clear, statistically sound decomposition of seasonality’s role in KPI movements, enabling more accurate isolation of genuine anomalies and true underlying drivers.

Still need help? Contact Us