Platform lift studies vs Dema
Understand the differences between platform conversion lift studies and Dema’s geo-based incrementality tests, and why Dema provides a more reliable, neutral measurement.
Understanding the difference between platform conversion lift and Dema’s geo-based testing
Both platform conversion lift studies (such as Meta's and Google's Conversion Lift) and Dema's geo-based testing aim to measure incrementality, but they do so in different ways:
- Platform conversion lift studies apply a user-level treatment: individuals are randomly assigned to a holdout group that does not see ads (the treatment), and their behavior is compared with that of users who do see ads (the control group).
- Dema’s geo-testing applies a regional treatment, where certain geographic areas have modified ad spend (either paused as holdout treatment or increased for new channel tests) while other regions continue as usual (control group).
These two approaches are compatible because one operates at the individual level and the other at the regional level. As long as the user-level random assignment is independent of the regional assignment, the methodologies do not interfere with each other.
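To make the geo-level comparison concrete, here is a minimal sketch of how an incremental lift estimate can be read off a holdout geo-test. This is illustrative only, not Dema's actual implementation, and the region names and conversion figures are entirely hypothetical.

```python
# Hypothetical daily conversions per region during the test window.
control = {"region_a": [120, 118, 125], "region_b": [98, 102, 99]}   # spend unchanged
holdout = {"region_c": [115, 90, 88], "region_d": [95, 80, 78]}      # spend paused

def avg_daily(regions):
    """Mean daily conversions across all regions in a group."""
    per_region = [sum(days) / len(days) for days in regions.values()]
    return sum(per_region) / len(per_region)

control_rate = avg_daily(control)
holdout_rate = avg_daily(holdout)

# Incremental lift: the share of baseline conversions that disappears
# when the channel is paused in the holdout regions.
lift = (control_rate - holdout_rate) / control_rate
print(f"Estimated incremental lift: {lift:.1%}")
```

In a real test the comparison would also account for pre-test baseline differences between regions, which is where the synthetic-control step discussed later comes in.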
Why should you run Dema’s geo-testing instead of platform tests?
While platform conversion lift studies can be useful, they come with limitations that make results harder to compare across platforms. Here’s why Dema’s methodology provides a more neutral and consistent measurement of incrementality:
- Different attribution windows make platform comparisons difficult. Each platform applies its own attribution rules (e.g., Meta might use a 7-day click window, while Google uses a different lookback period). This makes cross-platform comparisons inconsistent and unreliable.
- Platforms measure their own performance. Since platforms run their own lift studies, they are inherently biased in how they define attribution, conversions, and reporting, which can inflate their perceived impact.
- Dema acts as an independent, neutral measurement layer. Our methodology applies the same measurement approach across all platforms, ensuring fair comparisons without platform-specific biases.
- More control over experiment setup. With Dema, you can structure tests to align with real-world budget allocations and track both sales and profit impact (epROAS, GP2, GP3) instead of just platform-defined conversions.
By running Dema’s geo-testing, you get a standardized, unbiased view of marketing effectiveness that allows you to make truly data-driven decisions across all platforms.
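The attribution-window point above can be shown with a toy example: the same conversion is credited or ignored depending solely on each platform's click-lookback window. The timestamps and window lengths below are hypothetical, and real platform attribution rules are considerably more complex.

```python
from datetime import datetime, timedelta

# A single click and a conversion roughly five days later (hypothetical).
click_time = datetime(2024, 3, 1, 12, 0)
conversion_time = datetime(2024, 3, 6, 9, 0)

def attributed(click, conversion, window_days):
    """Credit the conversion to the click only if it falls inside the window."""
    return click <= conversion <= click + timedelta(days=window_days)

print(attributed(click_time, conversion_time, 7))  # 7-day click window: credited
print(attributed(click_time, conversion_time, 1))  # 1-day click window: not credited
```

Two platforms looking at identical user behavior would therefore report different conversion counts, which is why cross-platform comparisons of platform-run lift studies are unreliable.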
Should you run both types of tests at the same time?
This depends on your priorities. Running them in parallel is possible because both treatment groups are created randomly, meaning one test will not skew the other’s results. However, you may want to consider the opportunity cost of treatment groups: running multiple tests at once means a larger share of your audience or regions will experience modified ad spend, which could impact your total reach.
What if the results from both tests don’t match?
If user-level and geo-level tests provide conflicting results, it could indicate that one test has unintentional bias. Some potential factors include:
- Demographic or behavioral weighting differences: User-level treatment groups may accidentally overweight certain audience segments (e.g., younger users or high-frequency shoppers), leading to skewed outcomes.
- Regional differences in purchasing behavior: Even with randomization, some areas may have different baseline conversion rates due to local economic conditions, brand presence, or competitive factors.
Dema’s synthetic control methodology is designed to create a balanced and representative comparison, minimizing regional biases. In practice, most businesses find that platform lift studies and geo-based experiments produce consistent results. However, if discrepancies arise, running an additional test can help validate findings and refine marketing decisions.
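The synthetic-control idea can be sketched as follows: control regions are weighted so that their combined pre-test trajectory tracks the treated region, and that weighted mix then serves as the counterfactual during the test. This is a deliberately simplified sketch with hypothetical numbers; a production methodology such as Dema's would, among other things, constrain and regularize the weights.

```python
import numpy as np

# Hypothetical pre-test weekly sales for one treated region and two controls.
treated_pre = np.array([100.0, 104.0, 98.0, 102.0])
controls_pre = np.array([[90.0, 95.0, 88.0, 93.0],      # control region 1
                         [110.0, 112.0, 107.0, 111.0]])  # control region 2

# Fit weights so the weighted control mix tracks the treated region pre-test
# (plain least squares here; real synthetic control adds constraints).
weights, *_ = np.linalg.lstsq(controls_pre.T, treated_pre, rcond=None)

# During the test, the weighted controls provide the counterfactual.
treated_post = np.array([120.0, 123.0])   # spend was increased here
controls_post = np.array([[91.0, 92.0],
                          [109.0, 111.0]])
counterfactual = weights @ controls_post
incremental = treated_post - counterfactual
print("Estimated incremental sales per week:", incremental.round(1))
```

The gap between observed and counterfactual sales in the test period is the incremental effect, which is what makes balanced, representative comparisons possible even when regions differ in baseline behavior.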
By leveraging both approaches thoughtfully, you can gain a deeper understanding of incrementality, using geo-testing for real-world budget allocation insights and user-level lift studies for granular audience measurement.