Causal Science

Advanced measurement and attribution for data-driven decisions

The tiles below are short, interactive simulations of the core problems we fix: how attribution models shift credit, how past holdout tests calibrate current MTA, why A/B tests reveal true incrementality versus correlation, and where returns saturate and effects decay. You’ll also find links to the causal methods toolkit and a membership-lift triangulation analysis.

Turn clear, auditable attribution into confident budget allocation.

Step 1: Data & QA
Step 2: Baseline Paths
Step 3: Model & Calibrate
Step 4: Budget & Scenarios
Step 5: Handoff

Answers You Get

  • True contribution by channel/tactic (not just last-click).
  • Assist & sequence effects captured—top/mid-funnel included.
  • Defensible budget-shift recommendations with uncertainty notes.
  • Stable rules for windows, decay, and grouping—no whiplash.

What I Deliver

  • Documented MTA: Shapley & Markov path models with a rules fallback.
  • Calibration to ground truth & tests; reconciliation with MMM.
  • Scenario planner to preview reallocations before spend.
  • Code + notebook + one-pager for finance & leadership.

Your Stack or Mine

  • Python or R implementation; BigQuery/SQL-friendly extracts.
  • Excel option for cell-by-cell transparency if preferred.
  • No black boxes, no vendor lock-in—everything auditable.

A simple change in the attribution model can dramatically shift budget decisions. Explore how different models assign value across a customer journey.
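
To make the mechanics concrete, here's a minimal sketch of how three common rule-based models split credit for one illustrative journey. The channels, the $100 conversion value, and the 40/20/40 position-based weights are placeholders, not recommendations.

```python
# Minimal sketch: how rule-based attribution models split credit for one
# illustrative journey. Channels, revenue, and the 40/20/40 position-based
# weights are assumptions for illustration only.

journey = ["Display", "Paid Social", "Email", "Paid Search"]  # ordered touchpoints
revenue = 100.0                                               # value of the conversion

def last_click(path, value):
    return {path[-1]: value}

def linear(path, value):
    share = value / len(path)
    return {ch: share for ch in path}

def position_based(path, value, first=0.4, last=0.4):
    # 40% to the first touch, 40% to the last, remaining 20% spread over the middle
    credit = {ch: 0.0 for ch in path}
    credit[path[0]] += first * value
    credit[path[-1]] += last * value
    middle = path[1:-1]
    for ch in middle:
        credit[ch] += (1 - first - last) * value / len(middle)
    return credit

for name, model in [("Last-click", last_click), ("Linear", linear), ("Position-based", position_based)]:
    print(name, model(journey, revenue))
```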

Attributed Revenue

Reconcile walled gardens and see real contribution across channels.

Methodologies
  • Deduplicate: remove cross-platform overlap.
  • Anchor to Tests: apply incrementality multipliers.
  • Model Integration: MMM coefficients for macro effects.
  • Bayesian Weighting: weight by confidence & recency.
  • Path Reconstruction: rebuild calibrated journeys.
  • Validate & Monitor: compare to holdouts.
📊 Before Calibration
Channel | Claimed | ROAS
Facebook | $75k | 3.00
Google | $60k | 3.00
TikTok | $40k | 2.67
Total Claimed | $175k | Overlap 75%
ROAS = platform-reported returns (claimed ÷ spend).
✅ After Calibration
Channel | True | iROAS
Facebook | $43k | 1.71
Google | $34k | 1.71
TikTok | $23k | 1.52
Totals | $100k
iROAS = calibrated contribution (true ÷ spend).
💡 What this shows
Before: platforms collectively claim $175k against $100k in actual sales (75% overlap).
After calibration: channel contributions are reconciled to truth, and iROAS reveals real efficiency versus platform-reported ROAS.
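
For the curious, the reconciliation arithmetic behind the table above can be sketched in a few lines. A single shared multiplier (ground truth ÷ total claimed) is a simplification; in practice each channel gets its own multiplier anchored to its incrementality tests, and the spend figures below are simply the values implied by the ROAS columns.

```python
# Sketch of the reconciliation arithmetic above. One shared multiplier is a
# simplifying assumption; in practice each channel gets its own, anchored to
# its incrementality tests.

claimed = {"Facebook": 75_000, "Google": 60_000, "TikTok": 40_000}  # platform-reported revenue
spend   = {"Facebook": 25_000, "Google": 20_000, "TikTok": 15_000}  # implied by the 3.00 / 3.00 / 2.67 ROAS
ground_truth = 100_000                                               # actual sales

multiplier = ground_truth / sum(claimed.values())                    # ≈ 0.57

for channel, rev in claimed.items():
    true_rev = rev * multiplier
    print(f"{channel}: claimed ${rev:,.0f} (ROAS {rev / spend[channel]:.2f}) "
          f"-> calibrated ${true_rev:,.0f} (iROAS {true_rev / spend[channel]:.2f})")
```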

Platforms like Facebook, Google, and TikTok can claim the same sale. Watch how their claims can add up to more than actual revenue. This is why calibration exists.

Interactive counter: ground truth of $100,000 in total sales versus claimed revenue from Facebook, Google, and TikTok.

Prefer a step-by-step exercise? Open our simple walkthrough ↗

A Calibration Exercise

Use past holdout data to calibrate MTA and reveal true campaign impact.

Last Quarter's Holdout Test (Ground Truth)
  • Actual Conversions (from Test): 600
  • MTA-Attributed Conversions (during Test): 800
This Quarter's Attribution (MTA)
  • Campaign A: 1,800
  • Campaign B: 1,200
Your Calibration Task
  • Calibrated Campaign A
  • Calibrated Campaign B
  • Calibrated Total
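
If you'd like to check your answer, the calibration is a single ratio: the holdout's actual conversions divided by what MTA claimed during the test. A minimal sketch:

```python
# Check your work: the calibration factor is actual ÷ attributed from the holdout.
actual_from_test = 600
mta_during_test = 800
factor = actual_from_test / mta_during_test           # 0.75

this_quarter = {"Campaign A": 1_800, "Campaign B": 1_200}
calibrated = {name: conv * factor for name, conv in this_quarter.items()}

print(calibrated)                                     # {'Campaign A': 1350.0, 'Campaign B': 900.0}
print("Calibrated Total:", sum(calibrated.values()))  # 2250.0
```
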
Turn correlation into causation
The $200,000 Question

You just saw how naive correlation analysis shows 17x ROAS, while proper A/B testing reveals only 2x true return. That's the difference between wasting millions and making profitable decisions. Most businesses are flying blind, mistaking correlation for causation.

🧪 True Experiments (RCTs)
  • Customer-level randomization
  • Geo holdouts with random assignment
  • Time-based switchbacks
  • Channel incrementality (randomized)
  • Creative A/B/n tests
📊 Quasi-Experiments
  • Synthetic controls (no holdouts)
  • Difference-in-Differences
  • Regression Discontinuity
  • Propensity Score Matching
  • Interrupted Time Series

When sales and ad spend both rise, it’s easy to assume causation. But is the growth real, or just correlation? This simulation reveals the expensive truth of confusing the two.

Apparent Return on Ad Spend

This view suggests a 17x return, justifying aggressive spending. However, it fails to account for customers who would have converted anyway.

  • Observed Revenue: $1,700,000
  • Ad Spend: $100,000
  • Apparent ROAS: 17x
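
Here's a rough sketch of the arithmetic a holdout test adds. The observed revenue, spend, and roughly 2x true return come from the example above; the $1.5M baseline implied by the holdout is an illustrative assumption.

```python
# Sketch: why apparent ROAS and incremental ROAS diverge. The holdout baseline
# is an illustrative assumption; the observed revenue, spend, and ~2x true
# return come from the example above.

ad_spend = 100_000
revenue_exposed = 1_700_000         # revenue observed among customers who saw ads
revenue_holdout_scaled = 1_500_000  # what a comparable holdout implies they'd have spent anyway

apparent_roas = revenue_exposed / ad_spend                       # 17.0
incremental_revenue = revenue_exposed - revenue_holdout_scaled   # 200,000
true_roas = incremental_revenue / ad_spend                       # 2.0

print(f"Apparent ROAS: {apparent_roas:.1f}x, true (incremental) ROAS: {true_roas:.1f}x")
```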

Marketing Mix Modeling (MMM) quantifies diminishing returns (saturation) and carryover (adstock/decay) so you can invest with confidence. If your question is “what’s actually driving the business?”, MMM is the tool that answers it.

Answers You Get

• Where each channel’s curve saturates.
• Half-life (or decay rate) of media effects.
• True channel contributions and base vs media split.
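
If you want to see the two transforms behind those answers, here's a minimal sketch of geometric adstock (carryover with a half-life) and Hill saturation (diminishing returns). The decay rate and curve parameters are illustrative, not fitted values from any tool.

```python
import numpy as np

# Minimal sketch of the two media transforms MMM estimates: geometric adstock
# (carryover with a half-life) and Hill saturation (diminishing returns).
# The decay rate and Hill parameters below are illustrative, not fitted values.

def geometric_adstock(spend, decay=0.6):
    """Carry a fraction `decay` of each period's effect into the next period."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_sat=50.0, slope=1.5):
    """Response rises quickly at low spend and flattens past the half-saturation point."""
    return x**slope / (x**slope + half_sat**slope)

weekly_spend = np.array([0, 40, 40, 80, 0, 0, 0], dtype=float)  # $k per week
effect = hill_saturation(geometric_adstock(weekly_spend))

half_life = np.log(0.5) / np.log(0.6)   # ≈ 1.36 weeks for decay = 0.6
print("Half-life (weeks):", round(half_life, 2))
print("Saturated adstocked effect by week:", effect.round(3))
```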

What I Deliver

Validated model + code, a scenario planner, and a budget shift recommendation, aligned to your constraints (seasonality, floors/caps, brand vs performance).

Your Stack or Mine

We can run Meridian, Robyn, or PyMC-Marketing. For fast proofs, I can stand up a clean Excel version you can audit cell-by-cell.

Step 1: Data & QA. Define KPI, align costs, de-dupe, and lock invariants.
Step 2: Quickstart MMM. Spin up a base model to surface obvious wins/risks.
Step 3: Calibration. Cross-checks, posterior predictive checks, and out-of-sample tests.
Step 4: Budget & Scenarios. Optimize to constraints; simulate shifts and expected lift.
Step 5: Handoff. Code + docs + dashboard; optional training for your team.

Find where more spend stops working and how long your marketing efforts last.

Spend: $500K

Diminishing Returns

As spend increases, each additional dollar brings back less revenue. The aim is to stop before your Marginal ROAS falls too low.

  • Total Attributed Revenue: $693K
  • Overall ROAS: 1.39x
  • Marginal ROAS: 1.12x
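
As a rough sketch of the decision rule, marginal ROAS is just the slope of the response curve at your current spend. The exponential curve below and its parameters are assumptions chosen to roughly echo the figures above, not a fitted model.

```python
import math

# Sketch: overall vs marginal ROAS on a diminishing-returns curve. The curve and
# its parameters are assumptions chosen to roughly match the figures above; the
# point is the decision rule, not the exact numbers.

def revenue_k(spend_k):
    """Toy saturating revenue curve ($k in, $k out)."""
    return 2_034 * (1 - math.exp(-spend_k / 1_200))

spend = 500.0
overall_roas = revenue_k(spend) / spend                              # ≈ 1.39x

step = 1.0                                                           # the next $1k of spend
marginal_roas = (revenue_k(spend + step) - revenue_k(spend)) / step  # ≈ 1.12x

print(f"Overall ROAS at ${spend:.0f}k spend: {overall_roas:.2f}x")
print(f"Marginal ROAS of the next $1k: {marginal_roas:.2f}x")
print("Scale further" if marginal_roas >= 1.0 else "Stop: the next dollar loses money")
```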

The more you want to learn, the more data you need.

MMM works best when you have time on your side, movement in budgets, and a focused set of questions. The tiles below outline the key signals that indicate readiness for a successful MMM project.

  • 📊 Historical Data: ≥ 2 years of weekly data (or 4-5 years monthly) allows the model to see seasonality and trends.
  • 🔄 Budget Movement: Flat budgets hide impact. Sensible increases and decreases are required for the model to learn.
  • 🧩 Focused Questions: Every channel, control, and seasonal factor costs data. Start with a focused scope.
  • 🎯 Stable KPI: Use a consistent metric like revenue or conversions. Noisy or sparse data may need aggregation.
  • 💰 Ad Budget: While there's no magic number, MMM becomes more cost-effective as media spend grows ($1M+/yr).
  • 🛠️ Model Maintenance: Markets change. Plan to retrain your model on a cadence that matches your planning cycles.

A short plan beats a hundred random events. Watch the quick explainer, then sketch your own plan with the interactive builder.

You have tracking. But no shared measurement plan.

No one agrees on goals, events, or what "good" looks like. You want a short, written plan that maps real business outcomes to GA4 and the rest of your stack.

See the difference between scattered tracking and intentional measurement. One creates confusion, the other creates clarity.

Measure everything
Events everywhere, no clear objective
Tracked events
  • page_view
  • scroll_depth
  • cta_click_test
  • signup_old
  • misc_event_01
How the data behaves
page_view ⇄ scroll_depth ⇄ cta_click_test ⇄ signup_old
⇵ ⇵ ⇵ ⇵
Leads ⟻ all events (different definitions)
Bookings ⟻ all events
Revenue ⟻ all events
Everything connects to everything. No one agrees.
Without a plan, "measure everything" becomes a tangle. Different events are added "just in case," they all appear to relate to every outcome, and it's hard to agree on what success actually looks like.
Simple measurement plan
Fewer events, a clear path to one outcome
Planned events
  • view_pricing
  • generate_lead
  • booking_confirmed
Measurement objective: See how many visitors reach pricing, become qualified leads, and complete a booking.
Clear event → outcome path
view_pricing → generate_lead → booking_confirmed → Booked customer
GA4 event → GA4 event → GA4 event → Business outcome
One path. One owner. One definition of success.
  • Booked customer (primary outcome)
With a plan, events form a small, intentional graph that answers a specific question. The team agrees on the outcome, each key event has a clear name and purpose, and it's easier to see how changes affect the result.

Answer five quick questions. We'll generate a mini measurement plan and a prioritized implementation table you can export.

Step 1 — Business win
What is the main way this site creates value?
Measurement starts from the business model—not from GA4 menus.
WHAT THIS MEANS FOR YOUR MEASUREMENT PLAN
Pick 3-6 primary KPIs
Start with the few that actually move revenue. You can always add more later.
Translate KPIs into events
For each KPI, here's a suggested event name. Edit to match your naming convention.
KPI | Suggested GA4 Event Name | Other Tools
Reality check: Budget and effort
This helps prioritize which events to implement now vs. later.
We have dev time available this quarter
We already use Google Tag Manager
Perfect setup first ↔ Fast wins first
IMPLEMENTATION RECOMMENDATIONS
Your mini measurement plan
A summary you can share with your team or use as a starting point.
PLAN SUMMARY
Business Model
Primary KPIs
Implementation Priority
Balanced
EVENT IMPLEMENTATION PLAN
KPI | Event Name | Priority | Notes
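
To show what the exported table can look like, here's an illustrative sketch; the KPIs, event names, priorities, and notes are placeholders you'd replace with your own.

```python
# Illustrative sketch of the exported mini plan: a few KPIs mapped to suggested
# GA4 event names with an implementation priority. Names and priorities here
# are placeholders, not a prescription.

plan = [
    {"kpi": "Pricing interest", "event": "view_pricing",      "priority": "Now",  "notes": "Page view on the pricing page"},
    {"kpi": "Qualified leads",  "event": "generate_lead",     "priority": "Now",  "notes": "Fire on form submit, dedupe by email"},
    {"kpi": "Booked customers", "event": "booking_confirmed", "priority": "Next", "notes": "Server-side, tied to payment success"},
]

print(f"{'KPI':<18} {'EVENT NAME':<20} {'PRIORITY':<10} NOTES")
for row in plan:
    print(f"{row['kpi']:<18} {row['event']:<20} {row['priority']:<10} {row['notes']}")
```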

Set your efficiency goal, add your channels, and instantly see which to scale, hold, reduce, or cut.

CPA mode: enter spend and conversions. Lower CPA = better performance.

Decision Framework (Goal: $100)
Zones: SCALE · HOLD · REDUCE · CUT
Channel Builder
Channel | Category | Spend | Conv. | CPA

No channels yet. Click "+ Add" to get started.
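
Under the hood the framework is simple arithmetic: CPA is spend divided by conversions, and each channel is bucketed against the goal. A minimal sketch follows; the ±20% and 2x band edges are illustrative assumptions, not fixed rules.

```python
# Sketch of the decision framework: compute CPA (spend ÷ conversions) and bucket
# each channel against the goal. The ±20% / 2x band edges are illustrative
# assumptions; set them to match your own tolerance.

GOAL_CPA = 100.0

def decide(spend, conversions):
    if conversions == 0:
        return float("inf"), "CUT"
    cpa = spend / conversions
    if cpa <= GOAL_CPA * 0.8:
        return cpa, "SCALE"
    if cpa <= GOAL_CPA * 1.2:
        return cpa, "HOLD"
    if cpa <= GOAL_CPA * 2.0:
        return cpa, "REDUCE"
    return cpa, "CUT"

channels = {"Paid Search": (12_000, 150), "Paid Social": (9_000, 80), "Display": (5_000, 20)}
for name, (spend, conv) in channels.items():
    cpa, action = decide(spend, conv)
    print(f"{name}: CPA ${cpa:,.0f} -> {action}")
```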

Contact Us

(813) 922-8725 (8139-CAUSAL). Whether you're interested in discussing potential opportunities, sharing insights about analytics challenges, or simply connecting over shared interests in causal inference and measurement, I'd love to hear from you.

Thank You

I appreciate your message and will respond as soon as possible. Looking forward to connecting with you.