Goal Drivers

Introduction

The Goal Drivers Analysis identifies which features you should prioritize to improve a specific goal—often conversion or retention. Unlike simple correlations that can mislead, this analysis uses causal inference to highlight how changes in feature adoption might truly affect your chosen KPI, while neutralizing factors like user intent, engagement levels, and timing of adoption. This ensures that the impact you see is a genuine cause-and-effect relationship, not just a spurious correlation.

Use this as a roadmap for focusing your product optimization efforts on the features with the highest potential impact.


Inputs

Below are the essential inputs you provide to shape the Goal Drivers Analysis and tailor its findings to your specific product and KPI; a sketch of a typical data layout follows the list.

  • Feature Usage Metrics: A list of features you can potentially influence. Each feature has an associated current adoption rate among your users.
  • Goal KPI: The key metric you want to optimize (e.g., conversion rate or retention). All other outcomes in the analysis revolve around improving this metric.
  • Target Adoption Rate (Growth Simulator Mode): When using the Growth Simulator, you can specify a desired adoption rate for any feature. The system will then estimate how your goal KPI might change if that adoption rate is achieved.
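
For concreteness, the analysis works from per-user records along the lines of the hypothetical layout below. The column names are illustrative assumptions, not a required schema.

  import pandas as pd

  # Hypothetical per-user input layout; column names are illustrative only.
  users = pd.DataFrame({
      "user_id":           [1, 2, 3],
      "signup_date":       pd.to_datetime(["2024-01-05", "2024-02-10", "2024-03-01"]),
      "adopted_feature_x": [True, False, True],                        # feature usage flag
      "feature_x_date":    pd.to_datetime(["2024-01-07", None, "2024-03-04"]),
      "converted":         [True, False, False],                       # goal KPI outcome
      "conversion_date":   pd.to_datetime(["2024-01-20", None, None]),
  })

  # Current adoption rate for the feature (one of the inputs listed above).
  adoption_rate = users["adopted_feature_x"].mean()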

Outputs

The Goal Drivers Analysis interface presents two key views:

  1. Current State & Maximum Potential Impact

    This view shows each feature’s current adoption rate and the maximum potential KPI outcome, derived from our causal analysis, if that feature’s adoption were raised to 100% (or reduced toward 0% for negative drivers). It lets you identify at a glance which features offer the greatest potential impact.

  2. Simulation Mode

    Toggling “Simulation Mode” (look for the switch in the top-right area of the interface) lets you adjust a feature’s adoption rate using a slider and instantly see its projected KPI, derived from our causal analysis. Hovering over the updated KPI projection reveals the 95% confidence interval, indicating the degree of uncertainty around the estimate.

By combining these two views—Current State & Maximum Potential Impact plus Simulation Mode—you can both pinpoint your highest-leverage opportunities and experiment with more realistic adoption scenarios to see how they might affect real-world outcomes.
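
For intuition about how a simulated projection relates to the causal estimates, the hypothetical sketch below scales the estimated effect by the fraction of the remaining adoption gap you plan to close. This is only a rough mental model under a simple linear assumption, not the exact model behind the simulator.

  # Hypothetical sketch of how a projected KPI could relate to the causal estimate.
  # Assumes simple linear scaling; the actual simulator may use a more refined model.
  def project_kpi(current_kpi, max_potential_kpi, current_adoption, target_adoption):
      """Interpolate between today's KPI and the estimated KPI at 100% adoption."""
      if current_adoption >= 1.0:
          return current_kpi
      gap_fraction = (target_adoption - current_adoption) / (1.0 - current_adoption)
      return current_kpi + gap_fraction * (max_potential_kpi - current_kpi)

  # Example: KPI is 12% today, estimated 15% at full adoption, adoption goes 40% -> 70%.
  print(project_kpi(0.12, 0.15, 0.40, 0.70))  # approximately 0.135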

Categorization of Features

Each feature is grouped into one of four categories based on how it influences the KPI (a sketch of the decision logic follows the list):

  1. Positive Drivers: Increasing adoption is estimated to improve your KPI.
  2. Negative Drivers: Decreasing adoption is estimated to improve your KPI.
  3. Insignificant Drivers: The estimated effect is small or not statistically significant at the 95% confidence level.
  4. Inconclusive Drivers: Not enough data exists to provide a reliable estimate.
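
The sketch below shows one way this categorization could be expressed. The minimum-users threshold and the parameter names are illustrative assumptions, not the product’s exact rules.

  # Hypothetical sketch of the four-way categorization described above.
  def categorize(effect, ci_lower, ci_upper, n_users, min_users=200):
      if n_users < min_users:        # too little data for a reliable estimate
          return "Inconclusive Driver"
      if ci_lower > 0:               # entire 95% confidence interval above zero
          return "Positive Driver"
      if ci_upper < 0:               # entire 95% confidence interval below zero
          return "Negative Driver"
      return "Insignificant Driver"  # confidence interval spans zero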

Correlative Mode

A non-causal, naïve look at how feature usage appears to correlate with your KPI. While this mode can offer fast insights, it often includes bias from confounding factors (for example, power users adopting multiple features). We strongly recommend relying on the causal results for critical decisions.

To see these correlative results, select the “Correlative” option and use the “View” button (instead of “Causal”). This will display the naive relationships between features and your KPI.
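
Conceptually, a naive correlative estimate is something like the difference in means sketched below; the product’s correlative view may compute it differently, and the column names reuse the hypothetical layout from the Inputs section.

  import pandas as pd

  def naive_lift(users: pd.DataFrame) -> float:
      """Naive, non-causal difference in mean KPI between adopters and non-adopters.
      Assumes hypothetical boolean columns 'adopted_feature_x' and 'converted'."""
      adopters = users.loc[users["adopted_feature_x"], "converted"].mean()
      non_adopters = users.loc[~users["adopted_feature_x"], "converted"].mean()
      return adopters - non_adopters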



Methodology

Below is an overview of how the Goal Drivers Analysis uses causal inference techniques to estimate each feature’s true impact on your KPI. In particular, the analysis addresses two critical pitfalls—timing (the “temporal component”) and confounding (differences in users)—so that our estimates reflect genuine cause-and-effect relationships rather than superficial correlations.


Why Naive Approaches Fail

A straightforward (but problematic) method is to compare average KPI values between users who adopt a feature (“treated”) and those who do not (“untreated”). This can be misleading for two main reasons:

1. Ignoring Timing

  • Timing of Treatment: Users might adopt a feature after converting, causing a false impression that the feature led to the conversion when it actually followed it.
  • Follow-Up Time: Even when adoption happens before conversion, each user’s available time to convert (the “follow-up window”) can differ significantly. For example, someone who joined yesterday may have had far less opportunity to adopt or convert than someone who joined six months ago.

Why Causal Methods Help: By explicitly factoring in when the user adopts the feature and how long they have to convert, causal models avoid miscounting late feature adoption as a driver of earlier conversions.

2. Confounding Variables

  • Differences in Users: Users who adopt a feature may already be more engaged, more experienced, or belong to different demographics (e.g., certain regions or devices) than non-adopters.

Why Causal Methods Help: These user-level differences can create observed correlations, making it seem like the feature drives conversion when it may not. Causal methods adjust for these differences, comparing adopters and non-adopters who are otherwise similar.

Without accounting for these factors, naive estimates risk substantial bias, which is why our analysis relies on advanced causal techniques.
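
To make the timing pitfalls concrete, the sketch below shows the kind of pre-processing that rules out reverse causality and records each user’s follow-up window. The column names are hypothetical, and this is only an illustration of the idea, not the production pipeline.

  import pandas as pd

  def prepare_time_aware_flags(users: pd.DataFrame, cutoff: pd.Timestamp) -> pd.DataFrame:
      """Illustrative pre-processing for the timing issues described above.
      Assumes hypothetical columns: signup_date, feature_x_date, conversion_date."""
      df = users.copy()
      # Count a user as treated only if adoption happened before conversion
      # (or if they never converted), to avoid reverse causality.
      df["treated"] = df["feature_x_date"].notna() & (
          df["conversion_date"].isna() | (df["feature_x_date"] < df["conversion_date"])
      )
      # Follow-up time: observed until conversion, otherwise censored at the cutoff date.
      end = df["conversion_date"].fillna(cutoff)
      df["duration_days"] = (end - df["signup_date"]).dt.days
      df["event_observed"] = df["conversion_date"].notna()
      return df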


Our Advanced Causal Approach

To tackle those biases, we combine two powerful statistical methods:

  1. Survival Analysis (handling the temporal component, i.e., precisely when a user adopts the feature)
  2. Propensity Stratification (handling user-level confounding)

Step 1: Survival Analysis

Goal: Account for the timing of feature adoption and its relationship to conversion or other KPI events, ensuring unbiased estimates.

How It Works

  1. Time-to-Event Modeling
    • We analyze the time from user sign-up to both feature adoption and conversion, ensuring only cases where adoption happens before the outcome are included.
    • Scenarios where users adopt features after converting are excluded to prevent reverse causality.
  2. Handling Unequal Follow-Up
    • Not all users have the same amount of time to adopt or convert (e.g., newer users). Censoring techniques adjust for this, preventing bias from shorter follow-up windows.
  3. Cumulative Effect Estimation
    • Conversion probabilities are calculated for adopters (treated) and non-adopters (untreated) over time.
    • The difference in these probabilities at the final time point provides the feature’s causal impact on the KPI.

This ensures a time-aware, unbiased estimate of how feature adoption influences your KPI.
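
As a minimal illustration of the cumulative-effect step, the sketch below uses the Kaplan-Meier estimator from the lifelines library to compare conversion probabilities at a fixed horizon. The library choice, the 90-day horizon, and the column names (from the pre-processing sketch earlier) are assumptions for illustration; the production model is more involved.

  import pandas as pd
  from lifelines import KaplanMeierFitter

  def cumulative_effect(df: pd.DataFrame, horizon_days: int = 90) -> float:
      """Difference in estimated conversion probability at the horizon between
      treated (adopters) and untreated users, with censoring handled by Kaplan-Meier.
      Assumes hypothetical columns 'treated', 'duration_days', 'event_observed'."""
      conversion_prob = {}
      for is_treated, group in df.groupby("treated"):
          kmf = KaplanMeierFitter()
          kmf.fit(group["duration_days"], event_observed=group["event_observed"])
          # Kaplan-Meier estimates survival (not yet converted); conversion = 1 - S(t).
          conversion_prob[bool(is_treated)] = 1.0 - kmf.predict(horizon_days)
      return conversion_prob[True] - conversion_prob[False]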


Step 2: Propensity Stratification

Goal: Neutralize confounders—factors like user intent, demographics, or behavior that influence both feature adoption and the KPI—to isolate the true causal effect of adoption.

How It Works

  1. Calculating Propensity Scores
    • A propensity score is assigned to each user, representing their likelihood of adopting a feature based on factors such as demographics, behavior, and prior engagement.
  2. Grouping Users Into Comparable Segments
    • Users are segmented into groups with similar propensity scores. Within these segments, adopters and non-adopters are more directly comparable, as they share similar underlying characteristics.
  3. Estimating and Aggregating Effects
    • Within each segment, the difference in KPI outcomes between adopters and non-adopters is calculated to isolate the causal effect.
    • These segment-level effects are then aggregated to produce the overall impact of feature adoption on the KPI.

By addressing confounding variables, propensity stratification ensures the results reflect the true influence of feature adoption, independent of unrelated user characteristics.
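
A minimal sketch of this stratification is shown below, using a logistic-regression propensity model from scikit-learn and equal-sized score bins. For brevity it compares plain conversion rates within each stratum rather than the time-aware estimates from Step 1, and the column names, model choice, and number of strata are illustrative assumptions.

  import numpy as np
  import pandas as pd
  from sklearn.linear_model import LogisticRegression

  def stratified_effect(df: pd.DataFrame, covariate_cols: list, n_strata: int = 5) -> float:
      """Illustrative propensity stratification.
      Assumes hypothetical columns 'treated' (bool) and 'converted' (bool),
      plus pre-treatment covariates listed in covariate_cols."""
      X = df[covariate_cols].to_numpy()
      # 1. Propensity score: estimated probability of adopting the feature given covariates.
      model = LogisticRegression(max_iter=1000).fit(X, df["treated"].astype(int))
      scores = model.predict_proba(X)[:, 1]
      # 2. Group users into strata of similar propensity scores.
      df = df.assign(stratum=pd.qcut(scores, q=n_strata, labels=False, duplicates="drop"))
      # 3. Estimate the effect within each stratum, then aggregate (weighted by stratum size).
      effects, weights = [], []
      for _, seg in df.groupby("stratum"):
          if seg["treated"].nunique() < 2:
              continue  # skip strata lacking either adopters or non-adopters
          diff = (seg.loc[seg["treated"], "converted"].mean()
                  - seg.loc[~seg["treated"], "converted"].mean())
          effects.append(diff)
          weights.append(len(seg))
      return float(np.average(effects, weights=weights))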


Why Goal Drivers Analysis Works

By combining Survival Analysis and Propensity Stratification, the Goal Drivers Analysis delivers robust, unbiased estimates of how feature adoption causally impacts your KPI. Survival Analysis ensures that adoption is evaluated only in scenarios where it occurs before the outcome, preventing reverse causality, and accounts for unequal follow-up times, ensuring newer users with shorter exposure windows do not skew results. Propensity Stratification addresses confounding variables—such as user intent, engagement levels, and demographics—ensuring fair comparisons between adopters and non-adopters. This adjustment neutralizes misleading correlations and isolates the true causal effect of adoption on the KPI.

Additionally, the methodology avoids introducing new biases by carefully addressing two other kinds of variables (a sketch of the resulting covariate selection follows the list):

  • Mediators: Variables influenced by feature adoption that also affect the KPI (e.g., increased engagement). Adjusting for mediators can mask the feature’s true impact.
  • Colliders: Variables influenced by both feature adoption and the KPI. Incorrectly adjusting for colliders can create new biases in the analysis.
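
As a purely conceptual sketch (all variable names are hypothetical), only pre-adoption confounders enter the propensity model, while mediators and colliders are deliberately left out:

  # Illustrative covariate selection; all names are hypothetical.
  confounders = ["signup_channel", "region", "device_type", "pre_adoption_sessions"]
  mediators   = ["post_adoption_engagement"]  # influenced by adoption -> do NOT adjust
  colliders   = ["joined_beta_program"]       # influenced by adoption and the KPI -> do NOT adjust

  covariate_cols = confounders                # only confounders feed the propensity model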

These techniques work together to reduce biases, provide actionable insights, and help you prioritize, simulate, and make confident, data-driven decisions to improve your KPIs.

Still need help? Contact Us