Accessing calibrated attribution
Once calibration is configured, a new attribution model called Dema CFA (Dema causal factor attribution) becomes available as a selectable option in your reports. You can switch between attribution models to compare calibrated and uncalibrated views of your data.
Before vs after calibration
Once calibration is applied, your attribution data reflects causally adjusted contributions rather than raw MTA or ad platform-reported values. The key changes to look for:
- Channel contribution shares shift to reflect true incremental impact
- Source channels (direct and other unattributed traffic) absorb the redistribution delta
- Daily totals remain unchanged - only the distribution across channels changes
Example walkthrough
Consider a merchant with the following daily attribution (using MTA as the base in this example):

| Channel | MTA Gross Sale | Share |
|---|---|---|
| Meta | €10,000 | 40% |
| | €8,000 | 32% |
| Direct | €5,000 | 20% |
| Other | €2,000 | 8% |
| Total | €25,000 | 100% |
After calibration (Meta adjusted by a 0.7 multiplier):

| Channel | Calibrated Gross Sale | Share | Change |
|---|---|---|---|
| Meta | €7,000 | 28% | -12pp |
| | €8,000 | 32% | - |
| Direct | €7,143 | 28.6% | +8.6pp |
| Other | €2,857 | 11.4% | +3.4pp |
| Total | €25,000 | 100% | - |
The €3,000 reduction from Meta is redistributed proportionally to Direct and Other (the source channels), based on their original gross sale shares. The daily total of €25,000 is preserved exactly. What this tells you:
- Meta was previously over-credited by about 30% according to the incrementality experiment
- Direct and organic traffic were contributing more than MTA suggested
- Budget optimization decisions based on calibrated values will be more accurate
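The redistribution above can be sketched in Python. The figures mirror the example tables; the unnamed 32% channel is labeled `ChannelB` here as a placeholder, and the function is an illustrative sketch, not Dema's actual implementation.

```python
def calibrate(attribution, multipliers, source_channels):
    """Scale channels by their calibration multipliers, then redistribute
    the removed credit to the source channels in proportion to their
    original values, so the daily total is preserved."""
    calibrated = dict(attribution)
    delta = 0.0  # total credit removed from calibrated channels
    for channel, mult in multipliers.items():
        calibrated[channel] = attribution[channel] * mult
        delta += attribution[channel] - calibrated[channel]
    source_total = sum(attribution[c] for c in source_channels)
    for c in source_channels:
        calibrated[c] += delta * attribution[c] / source_total
    return calibrated

# Figures from the example; "ChannelB" stands in for the unnamed channel.
daily = {"Meta": 10_000, "ChannelB": 8_000, "Direct": 5_000, "Other": 2_000}
result = calibrate(daily, {"Meta": 0.7}, ["Direct", "Other"])
# Meta -> 7,000; Direct -> ~7,143; Other -> ~2,857; total stays 25,000
```

Note that source channels receive the delta weighted by their own original values (€5,000 : €2,000, i.e. 5:7 and 2:7 of the €3,000), which is what produces the €7,143 and €2,857 figures in the table.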
Downstream effects
Calibrated channel-level contributions flow into campaign-level attribution. This means:
- Channel-level calibration adjusts the total contribution for each channel (e.g., Meta Paid Social)
- Campaign/ad-level distribution uses MTA patterns within that channel to allocate the calibrated total across individual campaigns, ad sets, and ads
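This two-step flow can be sketched as follows. The campaign names and their MTA figures are hypothetical, chosen so the channel total matches the calibrated Meta figure from the example.

```python
def allocate_to_campaigns(calibrated_channel_total, mta_by_campaign):
    """Distribute a channel's calibrated total across its campaigns
    using each campaign's share of the channel's MTA value."""
    mta_total = sum(mta_by_campaign.values())
    return {campaign: calibrated_channel_total * mta_value / mta_total
            for campaign, mta_value in mta_by_campaign.items()}

# Hypothetical Meta campaigns; the channel's calibrated total is €7,000.
meta_campaigns_mta = {"prospecting": 6_000, "retargeting": 4_000}
allocated = allocate_to_campaigns(7_000, meta_campaigns_mta)
# The 60/40 MTA split within the channel is preserved:
# prospecting -> 4,200; retargeting -> 2,800
```

The key property is that calibration changes how much credit a channel gets, while MTA still decides how that credit is split inside the channel.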

Comparing ROAS metrics
With Causal factor attribution enabled, you can compare multiple views of channel performance:

| Metric | What it reflects | Source |
|---|---|---|
| MTA ROAS | Correlation-based return on ad spend | Multi-touch attribution model |
| Ad platform ROAS | Platform’s self-reported return | Meta, Google, TikTok dashboards |
| Calibrated ROAS | Causally adjusted return on ad spend | Causal factor attribution |
| Incremental ROAS | Experimentally measured return from a single experiment | Incrementality experiments |
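Using the Meta figures from the example, a quick sketch of how two of these views diverge for the same spend. The €5,000 Meta spend is assumed for illustration only; it does not appear in the example above.

```python
# Hypothetical Meta spend for illustration only.
meta_spend = 5_000

mta_roas = 10_000 / meta_spend        # MTA gross sale / spend -> 2.0
calibrated_roas = 7_000 / meta_spend  # calibrated gross sale / spend -> 1.4

# A budget decision made on the 2.0 figure would overstate Meta's
# incremental return by roughly 43% relative to the calibrated view.
```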
When to recalibrate
Your calibration settings should be reviewed and updated when:

New experiment results
After completing an incrementality experiment, check whether the results change the benchmark distribution. If the distribution shifts meaningfully, update your multiplier.
Market changes
Significant changes in your market (new competitors, seasonal shifts, or major campaign strategy changes) can affect how incrementally effective your channels are.
Quarterly reviews
Even without new experiments, reviewing calibrations quarterly ensures they still align with your business reality and haven’t drifted.
New channels
When you add a new marketing channel, check the benchmarked distribution and set an initial calibration. Plan an incrementality experiment to validate.
Improving accuracy over time
The more incrementality experiments you run, the more accurate your calibrations become:

Start with benchmarks
Use Dema’s platform-wide benchmarked incremental factors as your initial calibration. This is already better than uncalibrated MTA.
Run experiments on high-spend channels
Prioritize incrementality experiments on your largest channels. This is where miscalibration has the biggest budget impact.
Update calibrations with results
After each experiment, review the updated distribution and adjust your multiplier. The bell curve will narrow, reflecting increased confidence.
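As a sketch of that adjustment, assuming the multiplier is defined as the experimentally measured incremental value divided by the MTA-attributed value over the same period (consistent with the worked example, where roughly €7,000 incremental against €10,000 MTA yields 0.7):

```python
def updated_multiplier(incremental_value, mta_value):
    """Derive a calibration multiplier from an experiment result:
    the ratio of experimentally measured incremental value to the
    MTA-attributed value over the same period."""
    return incremental_value / mta_value

# Matches the worked example: the experiment measures €7,000 of
# incremental gross sale for Meta against €10,000 MTA-attributed.
multiplier = updated_multiplier(7_000, 10_000)  # 0.7
```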
The virtuous cycle: Better calibration leads to better budget allocation, which leads to better business outcomes. Running even a few well-designed incrementality experiments can significantly improve the accuracy of your entire attribution system.

