Advanced measurement and attribution for data-driven decisions
The tiles below are short, interactive simulations of the core problems we fix: how attribution models shift credit, why A/B tests reveal true incrementality versus correlation, how past holdout tests calibrate current MTA, and where returns saturate and effects decay. You’ll also find links to the causal methods toolkit and a membership-lift triangulation analysis.
📊PROBLEM
Your marketing team is fighting over credit. See how last-touch vs multi-touch changes everything.
📈PROBLEM
Is your growth real or just correlation? Interactive demonstration of confounded vs randomized data.
🎯PROBLEM
MTA shows 1,600 conversions. Past tests suggest 50% incrementality. Calculate true impact.
📉PROBLEM
When does more spend stop working? How long do effects last? Explore diminishing returns.
🔬TOOLKIT
DiD, ITS, PSM, Granger causality. Full econometric arsenal for rigorous causal inference.
💳ANALYSIS
Prove loyalty program lift. Triangulation across DiD, DoWhy, and OLS methods.
A simple change in the attribution model can dramatically shift budget decisions. Explore how different models assign value across a customer journey.
Platforms like Facebook, Google, and TikTok can claim the same sale. Watch how their claims can add up to more than actual revenue. This is why calibration exists.
Prefer a step-by-step exercise? Open our simple walkthrough ↗
Use past holdout data to calibrate MTA and reveal true campaign impact.
| Last Quarter's Holdout Test (Ground Truth) | |
|---|---|
| Actual Conversions (from Test) | 600 |
| MTA-Attributed Conversions (during Test) | 800 |

| This Quarter's Attribution (MTA) | |
|---|---|
| Campaign A | 1,800 |
| Campaign B | 1,200 |

| Your Calibration Task | |
|---|---|
| Calibrated Campaign A | |
| Calibrated Campaign B | |
| Calibrated Total | |
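To check your work, the calibration fits in a few lines. A minimal sketch, assuming the holdout-derived incrementality factor (actual ÷ MTA-attributed) carries over and applies uniformly to both campaigns:

```python
# Calibrate this quarter's MTA numbers with last quarter's holdout test.
# Assumes the incrementality factor is stable from quarter to quarter.
actual_conversions = 600        # ground truth from the holdout test
mta_conversions = 800           # what MTA claimed during the same test
factor = actual_conversions / mta_conversions   # 0.75 incrementality

mta_this_quarter = {"Campaign A": 1_800, "Campaign B": 1_200}
calibrated = {c: n * factor for c, n in mta_this_quarter.items()}

print(calibrated)                # {'Campaign A': 1350.0, 'Campaign B': 900.0}
print(sum(calibrated.values()))  # 2250.0 calibrated total
```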
When sales and ad spend both rise, it’s easy to assume causation. But is the growth real, or just correlation? This simulation reveals the expensive truth of confusing the two.
The naive, correlational view suggests a 17x return, justifying aggressive spending. However, it fails to account for customers who would have converted anyway.
By isolating a control group, the A/B test reveals the true return is 2x. This is a profitable, but vastly different, business case that prevents millions in misallocated budget.
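The gap is easy to reproduce. A sketch with hypothetical figures chosen to mirror the 17x vs 2x example: the naive view credits all revenue during the campaign to the campaign, while the experiment only credits revenue above the control group's baseline.

```python
# Naive (correlational) ROAS vs incremental ROAS from an A/B test.
# All figures are hypothetical, chosen to mirror the 17x vs 2x example.
spend = 100_000

# Naive view: credit every sale that coincided with the campaign.
revenue_exposed = 1_700_000
naive_roas = revenue_exposed / spend                       # 17.0x

# Experimental view: only credit revenue above the control baseline.
revenue_control_baseline = 1_500_000                       # would happen anyway
incremental_roas = (revenue_exposed - revenue_control_baseline) / spend  # 2.0x

print(f"Naive ROAS: {naive_roas:.1f}x, incremental ROAS: {incremental_roas:.1f}x")
```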
Find where more spend stops working and how long your marketing efforts last.
As spend increases, each additional dollar brings back less revenue. The aim is to stop before your Marginal ROAS falls too low.
A single pulse of spend decays over time. The "Half-Life" tells you how many weeks it takes for impact to fall by 50%.
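Both ideas reduce to a few lines. A sketch assuming geometric adstock (each week retains a fixed share of the prior week's effect) and a hypothetical square-root response curve; your retention rate and curve shape will differ:

```python
import math

# Geometric adstock: effect_t = retention * effect_{t-1} after a single pulse.
retention = 0.7                                 # hypothetical weekly carry-over
half_life = math.log(0.5) / math.log(retention)
print(f"Half-life: {half_life:.1f} weeks")      # ~1.9 weeks at 0.7 retention

# Diminishing returns: a concave response means marginal ROAS falls with spend.
# Hypothetical square-root response curve: revenue = k * sqrt(spend).
k = 1_000
for spend in (10_000, 40_000, 160_000):
    marginal_roas = k / (2 * math.sqrt(spend))  # d(revenue)/d(spend)
    print(f"Spend ${spend:,}: marginal ROAS {marginal_roas:.2f}")
```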
The more you want to learn, the more data you need.
MMM works best when you have time on your side, movement in budgets, and a focused set of questions. The tiles below outline the key signals that indicate readiness for a successful MMM project.
SIGNAL
≥ 2 years of weekly data (or 4-5 years monthly) allows the model to see seasonality and trends.
SIGNAL
Flat budgets hide impact. Sensible increases and decreases are required for the model to learn.
SIGNAL
Every channel, control, and seasonal factor costs data. Start with a focused scope.
SIGNAL
Use a consistent metric like revenue or conversions. Noisy or sparse data may need aggregation.
SIGNAL
While there's no magic number, MMM becomes more cost-effective as media spend grows ($1M+/yr).
SIGNAL
Markets change. Plan to retrain your model on a cadence that matches your planning cycles.
A short plan beats a hundred random events. Watch the quick explainer, then sketch your own plan with the interactive builder.
You have tracking. But no shared measurement plan.
No one agrees on goals, events, or what "good" looks like. You want a short, written plan that maps real business outcomes to GA4 and the rest of your stack.
See the difference between scattered tracking and intentional measurement. One creates confusion, the other creates clarity.
Answer five quick questions. We'll generate a mini measurement plan and a prioritized implementation table you can export.
| KPI | SUGGESTED GA4 EVENT NAME | OTHER TOOLS |
|---|---|---|

| KPI | EVENT NAME | PRIORITY | NOTES |
|---|---|---|---|
Set your efficiency goal, add your channels, and instantly see which to scale, hold, reduce, or cut.
CPA mode: enter spend and conversions. Lower CPA = better performance.
| Channel | Category | Spend | Conv. | CPA |
|---|---|---|---|---|
No channels yet. Click "+ Add" to get started.
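The underlying logic is straightforward. A minimal sketch, assuming each channel's CPA is compared against your efficiency goal and bucketed by how far it deviates (the thresholds here are illustrative, not the tool's exact rules):

```python
# Bucket channels by CPA vs an efficiency goal; thresholds are illustrative.
target_cpa = 50.0
channels = [                       # (name, spend, conversions) - hypothetical
    ("Search", 10_000, 400),
    ("Display", 8_000, 100),
    ("Email", 2_000, 60),
]

for name, spend, conversions in channels:
    cpa = spend / conversions
    if cpa <= target_cpa * 0.8:
        action = "scale"
    elif cpa <= target_cpa:
        action = "hold"
    elif cpa <= target_cpa * 1.5:
        action = "reduce"
    else:
        action = "cut"
    print(f"{name}: CPA ${cpa:.2f} -> {action}")
```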
A first-order Markov chain with removal effects—the math behind advanced attribution.
A Markov chain is a system that moves between states, where the probability of the next state depends only on the current state—not on how you got there. This is the Markov property (memorylessness).
In attribution, the states are marketing channels (Display, Search, Email, etc.) plus two absorbing states: Conversion and No Conversion. Once you enter an absorbing state, you stay there forever—the journey is over.
Removal effects are the key insight. For each channel, we ask: “What happens to the overall conversion rate if we remove this channel entirely?”
If removing Search drops conversions from 70% to 30%, Search has a large removal effect. If removing Display only drops conversions from 70% to 65%, Display is less critical. We normalize these effects so they sum to 100%, giving each channel its attribution share.
Unlike last-click or first-click models, Markov chain attribution captures interdependencies between channels. A channel that rarely gets the last click but frequently assists conversions will still receive appropriate credit.
Enter customer journeys using the format: Channel → Channel → Conversion ($value) or No Conversion
Channels: D (Display), F (Facebook), S (Search), E (Email)
This matrix shows the probability of moving from one state to another based on your journey data.
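For the curious, the whole pipeline fits in a short script: build transition probabilities from journeys, estimate the chain's conversion probability, then recompute it with each channel removed. A sketch with hypothetical journeys; the simulator's exact implementation may differ.

```python
from collections import defaultdict

# Hypothetical journeys in the simulator's format; True = converted.
journeys = [
    (["D", "S"], True),
    (["F", "S"], True),
    (["D", "F"], False),
    (["E"], True),
    (["S"], False),
]

def transition_probs(journeys, removed=None):
    """First-order transition probabilities. Removing a channel redirects
    every transition into it straight to the Null (no-conversion) state."""
    counts = defaultdict(lambda: defaultdict(int))
    for path, converted in journeys:
        states = ["Start"] + path + ["Conversion" if converted else "Null"]
        for a, b in zip(states, states[1:]):
            if a == removed:
                continue            # the removed state is never reached
            if b == removed:
                b = "Null"
            counts[a][b] += 1
    return {a: {b: n / sum(outs.values()) for b, n in outs.items()}
            for a, outs in counts.items()}

def conversion_prob(probs, iters=200):
    """P(absorbing in Conversion | Start), by fixed-point iteration."""
    p = defaultdict(float)
    p["Conversion"] = 1.0           # absorbing states: Conversion=1, Null=0
    for _ in range(iters):
        for s, outs in probs.items():
            p[s] = sum(q * p[t] for t, q in outs.items())
    return p["Start"]

base = conversion_prob(transition_probs(journeys))
removal = {}
for channel in ["D", "F", "S", "E"]:
    dropped = conversion_prob(transition_probs(journeys, removed=channel))
    removal[channel] = (base - dropped) / base

# Normalize removal effects to 100% to get each channel's attribution share.
total = sum(removal.values())
attribution = {c: r / total for c, r in removal.items()}
print({c: round(share, 3) for c, share in attribution.items()})
```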
Compare propensity targeting vs. uplift targeting. Propensity is the probability of purchase; targeting "likely buyers" often wastes money on people who would buy anyway. Uplift is the causal effect: the probability of buying if treated minus the probability of buying if not treated.
Enter your actual Test vs Control results per segment. The treatment counts come directly from your campaign data — no simulation.
| Segment | Test Conv | Test Exposed | Control Conv | Control Exposed | Uplift |
|---|---|---|---|---|---|
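Per segment, the arithmetic is just a difference in conversion rates. A sketch with hypothetical counts:

```python
# Per-segment uplift = test conversion rate minus control conversion rate.
segments = {
    # segment: (test_conv, test_exposed, control_conv, control_exposed)
    "Persuadables": (300, 2_000, 150, 2_000),
    "Sure things":  (450, 2_000, 440, 2_000),
}
for name, (tc, tn, cc, cn) in segments.items():
    uplift = tc / tn - cc / cn
    print(f"{name}: {uplift:+.1%} uplift")   # +7.5% vs +0.5%
```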
Use post-purchase surveys to triangulate walled garden attribution and build statistically rigorous incrementality estimates.
Post-purchase surveys provide a ground-truth layer for marketing measurement by asking customers directly about their purchase journey, enabling cross-platform comparison and walled garden calibration.
Traditional platform attribution (Meta, TikTok, Google) has inherent limitations; post-purchase surveys complement it in four ways:
Understand why customers purchased and which channels they actually recall
Much cheaper than MTA platforms or geo-lift studies
Compare performance across all paid and organic channels
Create multipliers to normalize platform-reported metrics
⚠️ Important: Surveys have response bias, attrition issues, and memory limitations. When properly designed and analyzed with rigorous survey methodology, they provide invaluable triangulation data.
Ask customers: "How did you hear about us?"
What matters is why someone purchased and what they remember. A customer's first touchpoint (platform-attributed) could be something they don't recall at all.
Adjust the inputs below to see how implied revenue and ROAS are calculated:
| Channel | Spend | Survey Revenue | Implied Revenue | Implied ROAS |
|---|---|---|---|---|
The multiplier translates platform-reported ROAS to comparable survey-implied ROAS:
Enter platform-reported revenue to calculate normalization multipliers:
Platform revenue fields are automatically added for each channel with spend. Add channels in the Methodology tab.
| Channel | Survey ROAS | Platform ROAS | % Multiplier | Calibrated ROAS |
|---|---|---|---|---|
Use calibrated ROAS for budget decisions. This accounts for platform attribution inflation and provides a more accurate cross-channel comparison.
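Both tables reduce to a few ratios. A sketch with hypothetical numbers, where implied revenue allocates total revenue by each channel's survey response share and the multiplier is survey-implied ROAS over platform-reported ROAS (one plausible formulation; your calibration may differ):

```python
# Survey-implied ROAS and platform calibration multipliers (hypothetical data).
total_revenue = 1_000_000
channels = {
    # channel: (spend, survey_response_share, platform_reported_revenue)
    "Meta":   (50_000, 0.20, 300_000),
    "TikTok": (30_000, 0.06,  90_000),
}
for name, (spend, share, platform_rev) in channels.items():
    implied_rev = total_revenue * share       # revenue implied by survey recall
    survey_roas = implied_rev / spend
    platform_roas = platform_rev / spend
    multiplier = survey_roas / platform_roas  # < 100% implies platform inflation
    print(f"{name}: survey {survey_roas:.1f}x, platform {platform_roas:.1f}x, "
          f"multiplier {multiplier:.0%}")
```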
Track completion rates, drop-off points, and timing patterns
Analyze which channels customers remember vs. actual touchpoints
Understand multi-touch journeys from survey responses
Break down responses by age, gender, location, etc.
Create binary variables for extreme responses and sum across items to identify respondents with systematic extreme answering patterns.
Compute proportion of middle-option selections per respondent and flag those >2 SD above mean.
Identify respondents who systematically agree regardless of content.
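A sketch of all three checks on 1-5 Likert items, with hypothetical respondents and column names:

```python
import pandas as pd

# Hypothetical 1-5 Likert responses, one row per respondent.
df = pd.DataFrame({
    "q1": [1, 5, 3, 4], "q2": [5, 5, 3, 2],
    "q3": [1, 5, 3, 4], "q4": [5, 5, 3, 3],
})
items = list(df.columns)

# Extreme response style: count of endpoint (1 or 5) answers per respondent.
df["extreme_count"] = df[items].isin([1, 5]).sum(axis=1)

# Midpoint responding: share of middle-option picks; flag > mean + 2 SD.
mid_share = (df[items] == 3).mean(axis=1)
df["midpoint_flag"] = mid_share > mid_share.mean() + 2 * mid_share.std()

# Acquiescence: share of agreement (4-5) regardless of content; in a real
# design, compare against reverse-keyed items before flagging anyone.
df["acquiescence"] = (df[items] >= 4).mean(axis=1)
print(df)
```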
For conjoint/vignette designs in post-purchase surveys (PPS):
Total attrition rate, stage-specific attrition, correlation with demographics
Page load times, drop-off probability by page, question-type friction
Weighting methods, model-based approaches, imputation
Raw vs. corrected results, robustness checks, benchmark comparisons
Define cells (age × gender × education) and adjust weights to match population totals.
Estimate probability of responding given covariates, then weight by inverse probability.
Iteratively match marginal distributions when joint distributions are sparse.
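Raking is the least obvious of the three, so here is a minimal sketch on a 2×2 age-by-gender table with hypothetical sample counts and population margins:

```python
# Raking (iterative proportional fitting): alternately scale rows and columns
# until the weighted table matches both sets of population margins.
import numpy as np

sample = np.array([[30., 20.],        # rows: age bands, cols: gender
                   [25., 25.]])
row_targets = np.array([60., 40.])    # population margins for age
col_targets = np.array([55., 45.])    # population margins for gender

w = sample.copy()
for _ in range(50):                   # iterate until margins converge
    w *= (row_targets / w.sum(axis=1))[:, None]
    w *= (col_targets / w.sum(axis=0))[None, :]

weights = w / sample                  # per-cell adjustment weights
print(weights.round(3))
```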
Random or deterministic donor selection from observed data
Predict missing values using observed covariates
Add random residuals to preserve variance
⚠️ Critical: All imputations must be conditional, multivariate, and stochastic to preserve data structure.
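One way to satisfy all three properties at once is chained-equations imputation with posterior draws. A sketch using scikit-learn's IterativeImputer (assuming scikit-learn fits your stack):

```python
# Multivariate, conditional, stochastic imputation via chained equations;
# sample_posterior=True adds draw noise so variance is preserved.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] += X[:, 0]                       # make columns related
X[rng.random(X.shape) < 0.1] = np.nan    # knock out ~10% of cells

imputer = IterativeImputer(sample_posterior=True, random_state=0)
X_imputed = imputer.fit_transform(X)
print(np.isnan(X_imputed).sum())         # 0 -- every cell filled
```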
Remove bot responses, filter failed attention checks, handle partial completes
Typical response rate: 15-30%
Below 15%: investigate design issues
Above 30%: excellent engagement
Computed from your channel data in the Methodology and Calibration tabs. Adjust inputs there to update recommendations.
| Channel | Budget | Survey ROAS | Platform ROAS | Recommendation |
|---|---|---|---|---|
SurveyMonkey, Typeform, Google Forms, Qualtrics
Excel, R, Python (pandas, statsmodels), SPSS
Zapier, Shopify apps, custom API integrations
From measurement plan to causal proof — the full stack
Start with a measurement plan that ties business goals to trackable events, then explore the core challenges: how attribution models shift credit, why A/B tests reveal true incrementality versus correlation, how past holdout tests calibrate current MTA, and where diminishing returns set in. When you're ready, build your own plan with the interactive builder.
START HERE
You have tracking but no shared plan. Map business outcomes to GA4 events and align your team on what to measure.
📊PROBLEM
Your marketing team is fighting over credit. See how last-touch vs multi-touch changes everything.
📈PROBLEM
Is your growth real or just correlation? Interactive demonstration of confounded vs randomized data.
🎯PROBLEM
MTA shows 1,600 conversions. Past tests suggest 50% incrementality. Calculate true impact.
📉PROBLEM
When does more spend stop working? How long do effects last? Explore diminishing returns.
💰OPTIMIZE
Set your efficiency goal, add your channels, and instantly see which to scale, hold, reduce, or cut.
(813) 922-8725 (8139-CAUSAL)
Whether you want to discuss potential opportunities, share insights about analytics challenges, or simply connect over shared interests in causal inference and measurement, I'd love to hear from you.
I appreciate your message and will respond as soon as possible. Looking forward to connecting with you.