Causal Science
Advanced measurement and attribution for data-driven decisions
The tiles below are short, interactive simulations of the core problems we fix: how attribution models shift credit, how past holdout tests calibrate current MTA, why A/B tests reveal true incrementality versus correlation, and where returns saturate and effects decay. You'll also find links to the causal methods toolkit and a membership-lift triangulation analysis.
PROBLEM
Attribution Trap
Your marketing team is fighting over credit. See how last-touch vs multi-touch changes everything.
PROBLEM
Incrementality Test
Is your growth real or just correlation? Interactive demonstration of confounded vs randomized data.
PROBLEM
MTA Calibration
MTA shows 1,600 conversions. Past tests suggest 50% incrementality. Calculate true impact.
PROBLEM
Saturation & Decay
When does more spend stop working? How long do effects last? Explore diminishing returns.
TOOLKIT
Causal Methods
DiD, ITS, PSM, Granger causality. Full econometric arsenal for rigorous causal inference.
ANALYSIS
Membership Impact
Prove loyalty program lift. Triangulation across DiD, DoWhy, and OLS methods.
Turn messy attribution into confident budget decisions, backed by clear, auditable models.
Baseline Paths
Model & Calibrate
Budget & Scenarios
Handoff
Answers You Get
- True contribution by channel/tactic (not just last-click).
- Assist & sequence effects captured, top/mid-funnel included.
- Defensible budget-shift recommendations with uncertainty notes.
- Stable rules for windows, decay, and grouping: no whiplash.
What I Deliver
- Documented MTA: Shapley & Markov path models with a rules fallback.
- Calibration to ground truth & tests; reconciliation with MMM.
- Scenario planner to preview reallocations before spend.
- Code + notebook + one-pager for finance & leadership.
Your Stack or Mine
- Python or R implementation; BigQuery/SQL-friendly extracts.
- Excel option for cell-by-cell transparency if preferred.
- No black boxes, no vendor lock-in: everything auditable.
A simple change in the attribution model can dramatically shift budget decisions. Explore how different models assign value across a customer journey.
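To make the shift concrete, here is a minimal sketch comparing three common credit rules on one hypothetical three-touch journey (the path and the 40/20/40 position weights are illustrative, not your data):

```python
# Sketch: how different attribution rules split credit for a single
# hypothetical journey. Path and weighting rules are illustrative only.

path = ["Search", "Social", "Email"]  # touchpoints in order; Email is last touch

def last_touch(path):
    # All credit to the final touchpoint before conversion.
    return {ch: (1.0 if i == len(path) - 1 else 0.0) for i, ch in enumerate(path)}

def linear(path):
    # Equal credit to every touchpoint.
    return {ch: 1.0 / len(path) for ch in path}

def position_based(path, first=0.4, last=0.4):
    # 40/20/40: first and last touch get 40% each; middle touches split the rest.
    mid = (1.0 - first - last) / max(len(path) - 2, 1)
    return {ch: (first if i == 0 else last if i == len(path) - 1 else mid)
            for i, ch in enumerate(path)}

for rule in (last_touch, linear, position_based):
    print(rule.__name__, rule(path))
```

Under last-touch, Search and Social earn nothing; under linear, all three channels split credit evenly. The budget implications of that swing are the whole point.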
Attributed Revenue
Reconcile walled gardens and see real contribution across channels.
| Channel | Claimed | ROAS |
|---|---|---|
| Facebook | $75k | 3.00 |
| Google | $60k | 3.00 |
| TikTok | $40k | 2.67 |
| Total Claimed | $175k | Overlap 75% |
| Channel | True | iROAS |
|---|---|---|
| Facebook | $43k | 1.71 |
| Google | $34k | 1.71 |
| TikTok | $23k | 1.52 |
| Totals | $100k | |
After calibration: channel contributions are reconciled to truth, and iROAS reveals real efficiency versus platform-reported ROAS.
Platforms like Facebook, Google, and TikTok can claim the same sale. Watch how their claims can add up to more than actual revenue. This is why calibration exists.
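The arithmetic behind the reconciliation can be sketched simply. This uses proportional scaling, which is the simplest possible method (real calibration would weight by per-channel incrementality tests); the dollar figures match the tables above:

```python
# Sketch: platform claims sum to more than actual revenue because each
# platform claims the same sale. The simplest reconciliation (illustrative,
# not the only method) scales every claim down to the measured total.

claimed = {"Facebook": 75_000, "Google": 60_000, "TikTok": 40_000}
actual_revenue = 100_000  # measured total, e.g. from finance

total_claimed = sum(claimed.values())   # 175,000 -> claims exceed reality by 75%
scale = actual_revenue / total_claimed  # ~0.571

reconciled = {ch: round(v * scale) for ch, v in claimed.items()}
print(reconciled)  # Facebook ~$43k, Google ~$34k, TikTok ~$23k
```

The reconciled figures sum back to the $100k that actually landed, which is why iROAS on the right-hand table is so much lower than platform-reported ROAS.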
A Calibration Exercise
Use past holdout data to calibrate MTA and reveal true campaign impact.
| Last Quarter's Holdout Test (Ground Truth) | |
|---|---|
| Actual Conversions (from Test) | 600 |
| MTA-Attributed Conversions (during Test) | 800 |

| This Quarter's Attribution (MTA) | |
|---|---|
| Campaign A | 1,800 |
| Campaign B | 1,200 |

| Your Calibration Task | |
|---|---|
| Calibrated Campaign A | |
| Calibrated Campaign B | |
| Calibrated Total | |
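Under the hood, the calibration is a single ratio. A minimal sketch using the holdout numbers above (one factor applied uniformly; a real engagement would estimate factors per campaign or channel where tests allow):

```python
# Calibration sketch: last quarter's holdout test gives a ground-truth
# ratio, which then scales this quarter's MTA-attributed counts.

actual_conversions = 600   # measured in the holdout test
mta_conversions = 800      # what MTA claimed during the same test window

incrementality = actual_conversions / mta_conversions  # 0.75

campaign_a = 1_800 * incrementality  # 1,350 calibrated conversions
campaign_b = 1_200 * incrementality  # 900
total = campaign_a + campaign_b      # 2,250
```

MTA's 3,000 claimed conversions calibrate down to 2,250, and that is the number budget decisions should rest on.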
You just saw how naive correlation analysis shows 17x ROAS, while proper A/B testing reveals only 2x true return. That's the difference between wasting millions and making profitable decisions. Most businesses are flying blind, mistaking correlation for causation.
- Customer-level randomization
- Geo holdouts with random assignment
- Time-based switchbacks
- Channel incrementality (randomized)
- Creative A/B/n tests
- Synthetic controls (no holdouts)
- Difference-in-Differences
- Regression Discontinuity
- Propensity Score Matching
- Interrupted Time Series
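For a flavor of the quasi-experimental toolkit, here is a two-by-two Difference-in-Differences sketch with toy numbers (real DiD work uses regression with controls and checks the parallel-trends assumption):

```python
# Difference-in-Differences sketch (toy numbers, illustrative only):
# compare the treated group's change over time against the control
# group's change, netting out the shared background trend.

treat_pre, treat_post = 100.0, 130.0  # treated market KPI before/after launch
ctrl_pre, ctrl_post = 100.0, 110.0    # control market KPI, same window

did_effect = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(did_effect)  # 20.0: lift attributable to the launch, net of trend
```

A naive before/after comparison would credit the launch with all 30 points of growth; DiD shows a third of it was the market rising anyway.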
When sales and ad spend both rise, it's easy to assume causation. But is the growth real, or just correlation? This simulation reveals the expensive truth of confusing the two.
Apparent Return on Ad Spend
This view suggests a 17x return, justifying aggressive spending. However, it fails to account for customers who would have converted anyway.
True Return on Ad Spend
By isolating a control group, the A/B test reveals the true return is 2x. This is a profitable, but vastly different, business case that prevents millions in misallocated budget.
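The gap between the two numbers comes down to subtracting the control group's baseline. A sketch with hypothetical figures chosen to match the 17x and 2x headline numbers:

```python
# Sketch of apparent vs. true (incremental) ROAS; all numbers hypothetical.
# Apparent ROAS credits ads with ALL revenue from exposed users; the A/B
# test subtracts what the control group would have spent anyway.

spend = 100_000
n_exposed = 100_000
rev_per_exposed = 17.0  # revenue per exposed user
rev_per_control = 15.0  # revenue per control (unexposed) user

apparent_roas = (n_exposed * rev_per_exposed) / spend             # 17.0
incremental_rev = n_exposed * (rev_per_exposed - rev_per_control)
true_roas = incremental_rev / spend                               # 2.0
```

Most of the "17x" revenue was going to happen with or without the ads; only the $2 of lift per ad dollar is decision-relevant.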
Marketing Mix Modeling (MMM) quantifies diminishing returns (saturation) and carryover (adstock/decay) so you can invest with confidence. If your question is "what's actually driving the business?", MMM is the tool that answers it.
Answers You Get
- Where each channel's curve saturates.
- Half-life (or decay rate) of media effects.
- True channel contributions and base vs media split.
What I Deliver
Validated model + code, a scenario planner, and a budget shift recommendation, aligned to your constraints (seasonality, floors/caps, brand vs performance).
Your Stack or Mine
We can run Meridian, Robyn, or PyMC-Marketing. For fast proofs, I can stand up a clean Excel version you can audit cell-by-cell.
Data & QA
Define KPI, align costs, de-dupe, and lock invariants.
Quickstart MMM
Spin up a base model to surface obvious wins/risks.
Calibration
Cross-checks, posterior predictive checks, and out-of-sample tests.
Budget & Scenarios
Optimize to constraints; simulate shifts and expected lift.
Handoff
Code + docs + dashboard; optional training for your team.
Find where more spend stops working and how long your marketing efforts last.
Diminishing Returns
As spend increases, each additional dollar brings back less revenue. The aim is to stop before your Marginal ROAS falls too low.
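One common way to model this is a Hill-type saturation curve; the sketch below uses hypothetical parameters (`vmax`, half-saturation point `k`, `shape`) and estimates the marginal ROAS of the next dollar numerically:

```python
# Saturation sketch: a Hill-type response curve (hypothetical parameters)
# and the marginal ROAS of the next dollar, estimated by finite difference.

def revenue(spend, vmax=500_000.0, k=100_000.0, shape=1.5):
    # Hill curve: revenue approaches vmax; k is the half-saturation spend.
    return vmax * spend**shape / (spend**shape + k**shape)

def marginal_roas(spend, step=1_000.0):
    # Revenue gained from the next $step, expressed per dollar.
    return (revenue(spend + step) - revenue(spend)) / step

for s in (25_000, 100_000, 400_000):
    print(f"spend ${s:,}: marginal ROAS {marginal_roas(s):.2f}")
```

Average ROAS can still look healthy well past the point where marginal ROAS has fallen below 1.0, which is exactly when the next dollar starts losing money.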
Carryover Effect
A single pulse of spend decays over time. The "Half-Life" tells you how many weeks it takes for impact to fall by 50%.
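Carryover is often modeled as geometric adstock. A sketch with a hypothetical weekly retention rate, showing how retention converts to a half-life:

```python
# Carryover sketch: geometric adstock with a hypothetical retention rate.
# With weekly retention theta, the half-life is ln(0.5) / ln(theta).

import math

theta = 0.7  # 70% of last week's effect carries into this week
half_life = math.log(0.5) / math.log(theta)
print(f"half-life: {half_life:.2f} weeks")  # ~1.94 weeks

# A single 100-unit pulse of spend decays week by week:
effect = [100 * theta**t for t in range(6)]
print([round(e, 1) for e in effect])  # [100.0, 70.0, 49.0, 34.3, 24.0, 16.8]
```

A half-life under two weeks means most of the impact is gone within a month, which argues for steadier pacing rather than infrequent bursts.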
The more you want to learn, the more data you need.
MMM works best when you have time on your side, movement in budgets, and a focused set of questions. The tiles below outline the key signals that indicate readiness for a successful MMM project.
SIGNAL
Historical Data
≥ 2 years of weekly data (or 4-5 years monthly) allows the model to see seasonality and trends.
SIGNAL
Budget Movement
Flat budgets hide impact. Sensible increases and decreases are required for the model to learn.
SIGNAL
Focused Questions
Every channel, control, and seasonal factor costs data. Start with a focused scope.
SIGNAL
Stable KPI
Use a consistent metric like revenue or conversions. Noisy or sparse data may need aggregation.
SIGNAL
Ad Budget
While there's no magic number, MMM becomes more cost-effective as media spend grows ($1M+/yr).
SIGNAL
Model Maintenance
Markets change. Plan to retrain your model on a cadence that matches your planning cycles.
Certificates
Download Résumé (PDF)
Contact Us
(813) 922-8725 (8139-CAUSAL)
Whether you're interested in discussing potential opportunities, sharing insights about analytics challenges, or simply want to connect over shared interests in causal inference and measurement, I'd love to hear from you.