A simple change in the attribution model can dramatically shift budget decisions. Explore how different models assign value across a customer journey.

Attributed Revenue

Platforms like Facebook, Google, and TikTok can claim the same sale. Watch how their claims can add up to more than actual revenue. This is why calibration exists.

Ground Truth (Total Sales): $100,000
Facebook Claimed: $0 · Google Claimed: $0 · TikTok Claimed: $0

Prefer a step-by-step exercise? Open our simple walkthrough ↗

Use past holdout data to calibrate MTA and reveal true campaign impact.

Last Quarter's Holdout Test (Ground Truth)
Actual Conversions (from Test): 600
MTA-Attributed Conversions (during Test): 800
This Quarter's Attribution (MTA)
Campaign A: 1,800
Campaign B: 1,200
Your Calibration Task
Calibrated Campaign A
Calibrated Campaign B
Calibrated Total
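
One way to complete the task, assuming the exercise intends a single ratio factor (actual ÷ attributed from the holdout) applied uniformly to both campaigns, sketched in Python:

# Holdout-based ratio calibration: deflate this quarter's MTA numbers by last quarter's factor
actual_conversions = 600      # ground truth from the holdout test
mta_conversions = 800         # MTA-attributed during the same test window
calibration_factor = actual_conversions / mta_conversions   # 0.75

mta_this_quarter = {"Campaign A": 1800, "Campaign B": 1200}
calibrated = {c: v * calibration_factor for c, v in mta_this_quarter.items()}

print(calibrated)                # {'Campaign A': 1350.0, 'Campaign B': 900.0}
print(sum(calibrated.values()))  # 2250.0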

When sales and ad spend both rise, it’s easy to assume causation. But is the growth real, or just correlation? This simulation reveals the expensive truth of confusing the two.

Apparent Return on Ad Spend

This view suggests a 17x return, justifying aggressive spending. However, it fails to account for customers who would have converted anyway.

Observed Revenue: $1,700,000
Ad Spend: $100,000
Apparent ROAS: 17x
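
To make the gap concrete, a hypothetical worked example in Python; the 80% baseline share is an assumption for illustration, not a number from the simulation:

# Hypothetical: suppose a holdout test shows 80% of observed revenue would have occurred without ads
observed_revenue = 1_700_000
ad_spend = 100_000
baseline_share = 0.80          # assumed for illustration only

incremental_revenue = observed_revenue * (1 - baseline_share)    # $340,000 actually caused by ads
apparent_roas = observed_revenue / ad_spend                      # 17.0x
incremental_roas = incremental_revenue / ad_spend                # 3.4x

print(apparent_roas, incremental_roas)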

Find where more spend stops working and how long your marketing efforts last.

Diminishing Returns

As spend increases, each additional dollar brings back less revenue. The aim is to stop before your Marginal ROAS falls too low.

At $500K spend:
Total Attributed Revenue: $693K
Overall ROAS: 1.39x
Marginal ROAS: 1.12x
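
A minimal sketch of how marginal ROAS relates to a diminishing-returns curve. The functional form and parameters below are assumptions chosen only to roughly reproduce the figures shown; the widget's actual curve is not specified:

# Illustrative response curve: revenue = k * spend**a with a < 1 (diminishing returns).
def revenue(spend, k=19.1, a=0.8):
    return k * spend ** a

spend, step = 500_000, 10_000
overall_roas = revenue(spend) / spend                             # ~1.39x
marginal_roas = (revenue(spend + step) - revenue(spend)) / step   # ~1.11x: return on the next dollars spent

print(round(overall_roas, 2), round(marginal_roas, 2))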

The more you want to learn, the more data you need.

MMM works best when you have time on your side, movement in budgets, and a focused set of questions. The tiles below outline the key signals that indicate readiness for a successful MMM project.

📊 SIGNAL: Historical Data
≥ 2 years of weekly data (or 4-5 years monthly) allows the model to see seasonality and trends.

🔄 SIGNAL: Budget Movement
Flat budgets hide impact. Sensible increases and decreases are required for the model to learn.

🧩 SIGNAL: Focused Questions
Every channel, control, and seasonal factor costs data. Start with a focused scope.

🎯 SIGNAL: Stable KPI
Use a consistent metric like revenue or conversions. Noisy or sparse data may need aggregation.

💰 SIGNAL: Ad Budget
While there's no magic number, MMM becomes more cost-effective as media spend grows ($1M+/yr).

🛠️ SIGNAL: Model Maintenance
Markets change. Plan to retrain your model on a cadence that matches your planning cycles.

A short plan beats a hundred random events. Watch the quick explainer, then sketch your own plan with the interactive builder.

You have tracking. But no shared measurement plan.

No one agrees on goals, events, or what "good" looks like. You want a short, written plan that maps real business outcomes to GA4 and the rest of your stack.

See the difference between scattered tracking and intentional measurement. One creates confusion, the other creates clarity.

Measure everything
Events everywhere, no clear objective
Tracked events
  • page_view
  • scroll_depth
  • cta_click_test
  • signup_old
  • misc_event_01
How the data behaves
page_view ⇄ scroll_depth ⇄ cta_click_test ⇄ signup_old: every event feeds into Leads, Bookings, and Revenue, each with a different definition.
Everything connects to everything. No one agrees.
Outcomes: Leads, Bookings, Revenue
Without a plan, "measure everything" becomes a tangle. Different events are added "just in case," they all appear to relate to every outcome, and it's hard to agree on what success actually looks like.
Simple measurement plan
Fewer events, a clear path to one outcome
Planned events
  • view_pricing
  • generate_lead
  • booking_confirmed
Measurement objective: See how many visitors reach pricing, become qualified leads, and complete a booking.
Clear event → outcome path
view_pricing → generate_lead → booking_confirmed → Booked customer
GA4 event → GA4 event → GA4 event → Business outcome
One path. One owner. One definition of success.
  • Booked customer (primary outcome)
With a plan, events form a small, intentional graph that answers a specific question. The team agrees on the outcome, each key event has a clear name and purpose, and it's easier to see how changes affect the result.
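
As an illustration of wiring a planned event into GA4, here is a minimal server-side sketch using the GA4 Measurement Protocol; the measurement ID, API secret, and client ID are placeholders, and client-side gtag/GTM tagging is the more common route:

import json
import urllib.request

# Send the planned "generate_lead" event to GA4 via the Measurement Protocol.
# G-XXXXXXX, YOUR_API_SECRET, and the client_id below are placeholders, not real values.
MEASUREMENT_ID = "G-XXXXXXX"
API_SECRET = "YOUR_API_SECRET"

payload = {
    "client_id": "123456.7654321",   # the visitor's GA4 client ID
    "events": [{"name": "generate_lead", "params": {"currency": "USD", "value": 0}}],
}

req = urllib.request.Request(
    "https://www.google-analytics.com/mp/collect"
    f"?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)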

Answer five quick questions. We'll generate a mini measurement plan and a prioritized implementation table you can export.

Step 1 — Business win
What is the main way this site creates value?
Measurement starts from the business model—not from GA4 menus.
WHAT THIS MEANS FOR YOUR MEASUREMENT PLAN
Pick 3-6 primary KPIs
Start with the few that actually move revenue. You can always add more later.
Translate KPIs into events
For each KPI, here's a suggested event name. Edit to match your naming convention.
KPI | Suggested GA4 Event Name | Other Tools
Reality check: Budget and effort
This helps prioritize which events to implement now vs. later.
We have dev time available this quarter
We already use Google Tag Manager
Perfect setup first ↔ Fast wins first
IMPLEMENTATION RECOMMENDATIONS
Your mini measurement plan
A summary you can share with your team or use as a starting point.
PLAN SUMMARY
Business Model
Primary KPIs
Implementation Priority
Balanced
EVENT IMPLEMENTATION PLAN
KPI | Event Name | Priority | Notes

Set your efficiency goal, add your channels, and instantly see which to scale, hold, reduce, or cut.

CPA mode: enter spend and conversions. Lower CPA = better performance.

Decision Framework (Goal: $100)
SCALE
HOLD
REDUCE
CUT
Channel Builder
Channel | Category | Spend | Conv. | CPA

No channels yet. Click "+ Add" to get started.
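
A minimal sketch of the kind of rule the framework implies; the band boundaries below (80% of goal, 2× goal) are illustrative assumptions, not the calculator's exact thresholds:

# Classify a channel by its CPA relative to the efficiency goal (CPA mode: lower is better).
def classify(spend, conversions, goal_cpa=100):
    if conversions == 0:
        return "CUT"
    cpa = spend / conversions
    if cpa <= goal_cpa * 0.8:
        return "SCALE"    # well under goal: room to spend more
    if cpa <= goal_cpa:
        return "HOLD"     # at or just under goal
    if cpa <= goal_cpa * 2:
        return "REDUCE"   # over goal but possibly fixable
    return "CUT"          # far over goal

print(classify(5_000, 80))   # CPA $62.50 -> SCALE
print(classify(5_000, 20))   # CPA $250.00 -> CUT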

A first-order Markov chain with removal effects—the math behind advanced attribution.

A Markov chain is a system that moves between states, where the probability of the next state depends only on the current state—not on how you got there. This is the Markov property (memorylessness).

Transition Matrix
Every Markov chain is defined by a transition matrix. Each row represents a “from” state, each column a “to” state. The values are probabilities, so every row sums to 1.0.

In attribution, the states are marketing channels (Display, Search, Email, etc.) plus two absorbing states: Conversion and No Conversion. Once you enter an absorbing state, you stay there forever—the journey is over.

Absorbing States
An absorbing Markov chain has at least one state you can never leave. Conversion and No Conversion are absorbing: once a customer converts (or drops off), the journey ends. The math question becomes: what is the probability of being absorbed into Conversion vs. No Conversion?

Removal effects are the key insight. For each channel, we ask: “What happens to the overall conversion rate if we remove this channel entirely?”

If removing Search drops conversions from 70% to 30%, Search has a large removal effect. If removing Display only drops conversions from 70% to 65%, Display is less critical. We normalize these effects so they sum to 100%, giving each channel its attribution share.

The Formula
Removal Effect(channel) = (Baseline conversion − Conversion without channel) / Baseline conversion

Attribution(channel) = Removal Effect(channel) / Sum of all removal effects

Unlike last-click or first-click models, Markov chain attribution captures interdependencies between channels. A channel that rarely gets the last click but frequently assists conversions will still receive appropriate credit.

Why This Matters
Traditional attribution models are deterministic rules. Markov chains are probabilistic—they model the actual paths customers take and compute how much each channel contributes to the probability of conversion. This makes budget decisions less arbitrary and more defensible.
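
To make the mechanics concrete, a small Python sketch. The two-channel transition matrix is invented for illustration; absorption probabilities are found by iterating the chain, and removal effects follow the formula above:

import numpy as np

# States: Start, Display, Search, Conversion, No Conversion. Rows sum to 1 (invented probabilities).
states = ["Start", "Display", "Search", "Conv", "Null"]
P = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],   # Start
    [0.0, 0.0, 0.4, 0.2, 0.4],   # Display
    [0.0, 0.1, 0.0, 0.5, 0.4],   # Search
    [0.0, 0.0, 0.0, 1.0, 0.0],   # Conversion (absorbing)
    [0.0, 0.0, 0.0, 0.0, 1.0],   # No Conversion (absorbing)
])

def conversion_prob(M):
    dist = np.zeros(len(M))
    dist[0] = 1.0                            # all journeys begin at Start
    for _ in range(200):                     # iterate until the mass is absorbed
        dist = dist @ M
    return dist[states.index("Conv")]

def removal_effect(channel):
    Q = P.copy()
    i = states.index(channel)
    Q[:, states.index("Null")] += Q[:, i]    # journeys that would have entered the channel now drop off
    Q[:, i] = 0.0
    base = conversion_prob(P)
    return (base - conversion_prob(Q)) / base

effects = {c: removal_effect(c) for c in ["Display", "Search"]}
attribution = {c: e / sum(effects.values()) for c, e in effects.items()}
print(attribution)   # Search gets the larger share: removing it hurts conversion more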

1 Customer Journeys

Enter customer journeys using the format: Channel → Channel → Conversion ($value) or No Conversion

Channels: D (Display), F (Facebook), S (Search), E (Email)
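
A minimal sketch of parsing journeys in this format into transition counts and probabilities (the journey strings below are made-up examples):

from collections import Counter, defaultdict

# Made-up journeys in the widget's format; conversion values in "($...)" are ignored here.
journeys = [
    "D → S → Conversion ($120)",
    "F → E → S → Conversion ($80)",
    "D → F → No Conversion",
]

transitions = Counter()
for j in journeys:
    steps = ["Start"] + [s.split("(")[0].strip() for s in j.split("→")]
    for a, b in zip(steps, steps[1:]):
        transitions[(a, b)] += 1

totals = defaultdict(int)
for (a, _), n in transitions.items():
    totals[a] += n
probabilities = {(a, b): n / totals[a] for (a, b), n in transitions.items()}   # rows sum to 1
print(probabilities)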

Compare propensity targeting (propensity: the probability of purchase; targeting "likely buyers" often wastes money on people who would buy anyway) vs. uplift targeting (uplift: the causal effect, i.e. the probability of buying if treated minus the probability of buying if not treated).

Model Strategy
Campaign Budget
Reach: 30%
Qini Curve (Incremental Profit): cumulative incremental profit sorted by model score. The area between the model curve and the random line represents the value of the model.
Chart series: Uplift, Propensity, Random
Campaign Results
Conv. Rate: 0.0%
Incr. Conversions: 0 (true lift vs control)
Net Incr. Profit: $0 (Incr. Rev - Camp. Cost)
Incr. ROI: 0%
Targeted Segments
Experiment Data

Enter your actual Test vs Control results per segment. The treatment counts come directly from your campaign data — no simulation.

Segment | Test Group (Conv / Exposed) | Control Group (Conv / Exposed) | Uplift
Qini Curve (Incremental Profit): cumulative incremental profit if segments were targeted in order of observed uplift. The dot shows where your actual campaign landed.
Chart series: Optimal Ordering, Random, Your Campaign
Campaign Results
Conv. Rate (Test): 0.0%
Incr. Conversions: 0
Net Incr. Profit: $0 (Incr. Rev - Camp. Cost)
Incr. ROI: 0%
Targeted Segments
The Core Insight: Traditional models ask "Who will buy?". Uplift models ask "Who will buy because we treated them?". By distinguishing Persuadables from Sure Things and Sleeping Dogs, you stop wasting money on people who would buy anyway and avoid disturbing those who react negatively.
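
A minimal sketch of the per-segment arithmetic behind the table above (the counts are invented for illustration):

# (test conversions, test exposed, control conversions, control exposed) per segment, invented numbers
segments = {
    "High intent": (120, 1000, 110, 1000),
    "Mid intent":  (90, 1000, 60, 1000),
    "Low intent":  (30, 1000, 35, 1000),
}

for name, (tc, tn, cc, cn) in segments.items():
    uplift = tc / tn - cc / cn      # treated rate minus control rate (observed uplift)
    incremental = uplift * tn       # extra conversions caused within the test group
    print(f"{name}: uplift {uplift:+.1%}, incremental conversions {incremental:+.0f}")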

Use post-purchase surveys to triangulate walled garden attribution and build statistically rigorous incrementality estimates.

📊 Survey-Driven Attribution

Post-purchase surveys provide a ground-truth layer for marketing measurement by asking customers directly about their purchase journey, enabling cross-platform comparison and walled garden calibration.

Why Use Surveys for Attribution?

Traditional platform attribution (Meta, TikTok, Google) has inherent limitations:

  • Last-click bias: Only credits the final touchpoint
  • Walled garden opacity: Platforms report inflated performance
  • Cross-channel blindness: Cannot see the full customer journey
  • Memory gap: Attributed first touchpoint may not be what customers remember

Key Benefits

🎯 Market Insight

Understand why customers purchased and which channels they actually recall

💰 Cost-Effective

Much cheaper than MTA platforms or geo-lift studies

📈 Cross-Platform

Compare performance across all paid and organic channels

🔄 Calibration Tool

Create multipliers to normalize platform-reported metrics

⚠️ Important: Surveys have response bias, attrition issues, and memory limitations. When properly designed and analyzed with rigorous survey methodology, they provide invaluable triangulation data.

🔧 Calculation Methodology

The Core Question

Ask customers: "How did you hear about us?"

What matters is why someone purchased and what they remember. A customer's first touchpoint (platform-attributed) could be something they don't recall at all.

Step-by-Step Process

  1. Collect PPS responses with revenue data attached
  2. Sum revenue by channel (Facebook, TikTok, etc.)
  3. Calculate implied revenue: Divide by response rate
  4. Calculate implied ROAS: Divide implied revenue by ad spend
  5. Create multipliers: Compare to platform-reported revenue
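
A small sketch of steps 2-4 with hypothetical numbers (the spend, revenue, and response rate below are illustrative only):

# Hypothetical inputs: revenue attached to survey responses, overall survey response rate, and ad spend
survey_revenue = {"Facebook": 12_000, "TikTok": 6_000}
response_rate = 0.25          # 25% of buyers answered the post-purchase survey
ad_spend = {"Facebook": 20_000, "TikTok": 10_000}

for channel, rev in survey_revenue.items():
    implied_revenue = rev / response_rate          # scale survey revenue up to all buyers
    implied_roas = implied_revenue / ad_spend[channel]
    print(channel, implied_revenue, round(implied_roas, 2))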

Interactive Calculator

Adjust the inputs below to see how implied revenue and ROAS are calculated:


Results

Channel | Spend | Survey Revenue | Implied Revenue | Implied ROAS

🎯 Incrementality Calibration

Creating the Normalization Multiplier

The multiplier translates platform-reported ROAS to comparable survey-implied ROAS:

% Multiplier = Survey Implied Revenue ÷ Platform Reported Revenue
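
For example (hypothetical figures): if survey-implied revenue for a channel is $48,000 while the platform reports $80,000, the multiplier is 48,000 ÷ 80,000 = 0.6; multiplying that channel's platform-reported ROAS by 0.6 yields the calibrated ROAS used below.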

Interactive Calibration Tool

Enter platform-reported revenue to calculate normalization multipliers:

Platform revenue fields are automatically added for each channel with spend. Add channels in the Methodology tab.

Calibrated Results

Channel | Survey ROAS | Platform ROAS | % Multiplier | Calibrated ROAS

Use calibrated ROAS for budget decisions. This accounts for platform attribution inflation and provides a more accurate cross-channel comparison.

📊 Advanced Survey Analytics

Essential Analytics for PPS Surveys

1. Response Rate Analysis

Track completion rates, drop-off points, and timing patterns

2. Channel Attribution Patterns

Analyze which channels customers remember vs. actual touchpoints

3. Customer Journey Mapping

Understand multi-touch journeys from survey responses

4. Demographic Segmentation

Break down responses by age, gender, location, etc.

Response Bias Detection

A. Extreme Response Bias (ERS)

ERS Index = Σ (Extreme Response Indicator) ÷ Total Items

Create binary variables for extreme responses and sum across items to identify respondents with systematic extreme answering patterns.

B. Moderacy Bias

Compute proportion of middle-option selections per respondent and flag those >2 SD above mean.

C. Acquiescence Bias

Acquiescence Index = Σ (Agree/Yes Responses) ÷ Total Items

Identify respondents who systematically agree regardless of content.
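
A minimal pandas sketch of these three indices on a small Likert-style response matrix; the column names and the 1-5 scale are assumptions:

import pandas as pd

# Rows are respondents; q1..q4 are 1-5 Likert items (invented data).
df = pd.DataFrame({"q1": [5, 3, 1, 4], "q2": [5, 3, 2, 4], "q3": [1, 3, 5, 2], "q4": [5, 3, 1, 4]})

ers = df.isin([1, 5]).mean(axis=1)       # share of extreme (1 or 5) answers per respondent
moderacy = (df == 3).mean(axis=1)        # share of middle-option answers
acquiescence = (df >= 4).mean(axis=1)    # share of agree-side answers

flag_moderate = moderacy > moderacy.mean() + 2 * moderacy.std()   # flag respondents >2 SD above the mean
print(pd.DataFrame({"ERS": ers, "Moderacy": moderacy, "Acquiescence": acquiescence, "Flag": flag_moderate}))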

Factorial Experiment Analytics

For conjoint/vignette designs in PPS:

  • AMCE (Average Marginal Component Effect): Estimate causal effect of each attribute
  • Variance Decomposition: Use ANOVA/multilevel models
  • Interaction Effects: Test attribute combinations
  • D-Efficiency: Validate experimental design quality

🛡️ Bias Correction & Attrition Analytics

Four Major Analytics Buckets

1. Attrition Reporting

Total attrition rate, stage-specific attrition, correlation with demographics

2. Design Diagnostics

Page load times, drop-off probability by page, question-type friction

3. Non-Response Correction

Weighting methods, model-based approaches, imputation

4. Best Practice Reporting

Raw vs. corrected results, robustness checks, benchmark comparisons

Weighting Methods for Non-Response Bias

A. Cell Weighting

Weight = Population Share ÷ Sample Share

Define cells (age × gender × education) and adjust weights to match population totals.

B. Inverse Probability Weighting (IPW)

pᵢ = P(respondᵢ | Xᵢ)
Weight = 1 ÷ pᵢ

Estimate probability of responding given covariates, then weight by inverse probability.

C. Raking

Iteratively match marginal distributions when joint distributions are sparse.
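
A minimal sketch of inverse probability weighting with scikit-learn; the covariates and response indicator are invented:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example: covariates (age, prior orders) for everyone invited; responded = 1 if they answered.
X = np.array([[25, 1], [34, 3], [51, 2], [42, 5], [29, 0], [60, 4]])
responded = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, responded)
p_respond = model.predict_proba(X)[:, 1]       # estimated probability of responding given covariates
weights = 1.0 / p_respond[responded == 1]      # respondents re-weighted by inverse probability

print(np.round(weights, 2))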

Model-Based Approaches

  • Heckman Selection Models: Account for non-random attrition
  • Probit/Logit Attrition Models: Predict dropout probability
  • Instrument-Based Corrections: Use randomized incentives or reminders

Imputation Methods

Hot-Deck Imputation

Random or deterministic donor selection from observed data

Regression Imputation

Predict missing values using observed covariates

Stochastic Regression

Add random residuals to preserve variance

⚠️ Critical: All imputations must be conditional, multivariate, and stochastic to preserve data structure.
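
A minimal sketch of stochastic regression imputation on invented data; a real application would condition on more covariates and typically use multiple imputations:

import numpy as np

# Impute missing revenue from order count: regression prediction plus a random residual to preserve variance.
orders = np.array([1, 2, 3, 4, 5, 6])
revenue = np.array([50.0, 95.0, 160.0, np.nan, 250.0, np.nan])

observed = ~np.isnan(revenue)
slope, intercept = np.polyfit(orders[observed], revenue[observed], 1)
residual_sd = np.std(revenue[observed] - (slope * orders[observed] + intercept))

rng = np.random.default_rng(0)
predicted = slope * orders[~observed] + intercept
revenue[~observed] = predicted + rng.normal(0, residual_sd, size=predicted.shape)
print(revenue)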

⚙️ Implementation Guide

Step 1: Survey Design

Key Questions to Include:

  • "How did you hear about us?" (multi-select)
  • "What almost stopped you from purchasing?"
  • "How long have you known about our brand?"
  • Demographics (age, gender, location)
  • Attention check questions

Step 2: Technical Setup

# Example: Survey Integration with E-commerce Platform
# Trigger the post-purchase survey after each order
from urllib.parse import urlencode

def trigger_pps_survey(order_data):
    params = urlencode({
        "order_id": order_data["id"],
        "revenue": order_data["total"],
        "email": order_data["email"],
    })
    survey_url = f"https://survey.example.com?{params}"
    # send_email is assumed to be provided by your e-commerce platform or email service
    send_email(order_data["email"], "Quick Survey - Get 10% Off Next Order", survey_url)

# Track response rate (responses and total_orders_sent come from your own order tracking)
response_rate = responses / total_orders_sent

Step 3: Data Collection & Cleaning

Data Validation

Remove bot responses, filter failed attention checks, handle partial completes

Response Rate Targets

Typical range: 15-30%
Below 15%: Investigate design issues
Above 30%: Excellent engagement

Step 4: Analysis Workflow

  1. Download PPS data in Excel/CSV format
  2. Sum revenue by channel (Facebook, TikTok, etc.)
  3. Calculate implied revenue: Revenue ÷ Response Rate
  4. Calculate implied ROAS: Implied Revenue ÷ Ad Spend
  5. Create multipliers: Survey Revenue ÷ Platform Revenue
  6. Normalize platform ROAS: Platform ROAS × Multiplier
  7. Run bias corrections using weighting/imputation
  8. Report findings with confidence intervals

Step 5: Actionable Insights

Computed from your channel data in the Methodology and Calibration tabs. Adjust inputs there to update recommendations.

Channel | Budget | Survey ROAS | Platform ROAS | Recommendation

Tools & Resources

Survey Platforms

SurveyMonkey, Typeform, Google Forms, Qualtrics

Analysis Tools

Excel, R, Python (pandas, statsmodels), SPSS

Integration

Zapier, Shopify apps, custom API integrations

Contact Us

(813) 922-8725 (8139-CAUSAL)

Whether you're interested in discussing potential opportunities, sharing insights about analytics challenges, or simply want to connect over shared interests in causal inference and measurement, I'd love to hear from you.
