ROI at Faraday

Behind the numbers: The methodologies Faraday uses to turn ROI into value

Faraday’s strategic approach to ROI uses tailored analysis methods—like holdout testing, longitudinal studies, and relative comparisons—to deliver insights that actually reflect each client’s unique goals, constraints, and campaign impact.

Chris Waite & Ben Rose

This post is part of a series called ROI at Faraday that digs into Faraday's comprehensive approach to ROI reporting.

It’s tempting to think of ROI analysis as a simple math problem. Plug in the numbers, calculate the return, and call it a day.

But the real secret to great ROI isn’t in the arithmetic; it’s in the methodology and strategy that inform the analysis. It’s knowing where to look, what to measure, and how to choose the right approach for the situation.

At Faraday, we don’t start our ROI analyses by creating an equation. We start by picking the right tool from the toolbox. Different tools unlock different kinds of answers. In our last blog, we made the case that ROI isn’t a scorecard—it’s a strategy. This post digs into the next layer: how we measure ROI in a way that actually reflects the individual strategic value of your campaigns.

And that starts with choosing the right method for the job.

ROI isn’t just about math; it’s about methodology

The actual math behind ROI analysis is pretty simple. You’re mostly dealing with addition, subtraction, multiplication, and division. The complexity comes not from the operations themselves, but from how we choose to apply them.
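
To make that concrete, here’s the core formula in a few lines of Python; the campaign figures are invented purely for illustration:

```python
# Hypothetical campaign figures, purely for illustration
revenue_attributed = 120_000  # incremental revenue tied to the campaign
campaign_cost = 40_000        # total campaign spend

# The core ROI formula: net return divided by cost
roi = (revenue_attributed - campaign_cost) / campaign_cost
print(f"ROI: {roi:.0%}")  # -> ROI: 200%
```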

An ROI specialist knows which data points matter, how to structure the comparison, and what kind of baseline you’re using to judge performance. In other words, the real work isn’t in crunching the numbers; it’s in designing the analysis.

And importantly, different ROI methods exist for a reason. Each one comes with trade-offs, and the right choice depends on your goals, your timeline, and the level of precision you need.

Common "tools" in the ROI toolkit

There’s no single “best” way to measure impact. It depends on the shape of the campaign, the outcome you care about, and how precise your measurement needs to be.

Different methods are used depending on what you're measuring, what kind of attribution you need, and what trade-offs you’re willing to accept.

1. Holdout testing (A/B testing)

How it works: A holdout test randomly splits your audience into two groups: one receives the intervention and one doesn’t. You then compare the outcomes between the two.

Advantages:

  • Delivers highly reliable attribution. You can isolate the effect of the intervention with minimal noise.
  • Especially useful when you need clear proof of impact—for example, before rolling out a new campaign strategy.

Disadvantages:

  • You’re intentionally withholding the (potentially better) treatment from part of your audience, which can mean sacrificing short-term gains.
  • Longer-term metrics like LTV can require months or years of tracking, which slows down decision-making.
  • Can be difficult to set up for more integrated processes.

How this might look in practice: A home services company wants to evaluate the impact of a new predictive lead scoring model. To measure its effectiveness, they randomly split inbound leads into two groups. The test group receives scores from Faraday, and the sales team uses those scores to prioritize outreach. The control group is handled as usual, without scores. After 30 days, the test group shows a 15% higher conversion rate. Because the only variable that changed was the introduction of scoring, the team can confidently attribute the improvement to Faraday’s model—and deploy it more broadly.
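
As a rough sketch of how that comparison might be run, here’s a minimal Python version; the leads and conversion outcomes below are simulated stand-ins for real CRM data:

```python
import random

def holdout_split(leads, test_fraction=0.5, seed=42):
    """Randomly assign each lead to the test or control group."""
    rng = random.Random(seed)
    groups = {"test": [], "control": []}
    for lead in leads:
        groups["test" if rng.random() < test_fraction else "control"].append(lead)
    return groups

def conversion_rate(group, outcomes):
    """Share of a group that converted; `outcomes` maps lead -> bool."""
    return sum(outcomes[lead] for lead in group) / len(group)

# --- Hypothetical usage; a real analysis would pull leads and outcomes from a CRM ---
leads = [f"lead_{i}" for i in range(10_000)]
groups = holdout_split(leads)

# Simulated outcomes: pretend scored leads convert slightly more often
rng = random.Random(0)
test_set = set(groups["test"])
outcomes = {lead: rng.random() < (0.046 if lead in test_set else 0.040) for lead in leads}

test_rate = conversion_rate(groups["test"], outcomes)
control_rate = conversion_rate(groups["control"], outcomes)
print(f"test: {test_rate:.1%}  control: {control_rate:.1%}  lift: {test_rate / control_rate - 1:+.0%}")
```

Because assignment is random, a persistent gap between the two rates can be credited to the scores rather than to differences between the audiences.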

2. Longitudinal studies

How it works: These studies measure a key performance indicator (KPI) before and after an intervention, and attribute any change to the strategy.

Advantages:

  • Easier to implement when the change affects a system or workflow that can’t easily be split, like internal operations.
  • No need to create and manage control groups.
  • Can continue tracking results indefinitely, without withholding the intervention’s benefits from anyone.

Disadvantages:

  • Harder to separate the signal from the noise. External factors like seasonality, sales events, or even weather can cloud the results.

How this might look in practice: Picture a credit union using Faraday to launch a predictive direct mail campaign aimed at promoting HELOCs. Since they want to reach the entire audience, they don't create a holdout group. Instead, they measure HELOC applications for eight weeks following the campaign and compare that to the eight weeks prior. Applications increase by 25%, and with no other major marketing efforts running in that window, the credit union sees a strong directional signal that the campaign worked.
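
A minimal sketch of that before-and-after comparison, with made-up weekly application counts standing in for the credit union’s real KPI data:

```python
# Hypothetical weekly HELOC application counts (8 weeks each side of the campaign)
weeks_before = [118, 124, 120, 131, 125, 119, 127, 122]
weeks_after = [150, 148, 159, 154, 156, 149, 161, 156]

before_total = sum(weeks_before)  # 986
after_total = sum(weeks_after)    # 1233
change = after_total / before_total - 1
print(f"before: {before_total}  after: {after_total}  change: {change:+.0%}")  # -> +25%
```

Nothing in this comparison rules out seasonality or other outside factors, which is exactly the trade-off noted above.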

3. Relative performance improvement studies

How it works: This method builds on a longitudinal approach, but instead of simply comparing “before” and “after” for one group, you compare the change in your test group to the change in a baseline group that wasn’t affected by the intervention.

In other words: both groups experience whatever else is happening in the world (seasonality, ad spend changes, etc.), but only the test group is affected by the treatment. By comparing the test group’s performance to the baseline group, you subtract out the “background noise” and can isolate the true impact.

Advantages:

  • More resilient to outside variables. Since both groups are exposed to the same external forces, the comparison helps filter out unrelated effects.
  • Doesn’t require randomization—useful when you can’t run a traditional holdout test.

Disadvantages:

  • Can be harder to explain to stakeholders unfamiliar with the method.
  • Requires a valid control group that closely tracks with the test group’s normal performance. If your baseline group isn’t steady or comparable, the whole thing falls apart.

How this might look in practice: Imagine a regional furniture retailer running a new predictive direct mail campaign in their Midwest region, while using their Northeast region as a comparison. Both regions are similar in customer base and seasonality, but only the Midwest receives the new campaign. During the test window, Midwest sales rise by 12%, while Northeast sales only increase by 2%. The 10-point gap is attributed to the campaign, with the unaffected region helping control for external factors like seasonal shopping trends or macroeconomic changes.
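
In code, this method reduces to a difference of percentage changes. The regional sales figures below are hypothetical, echoing the example above:

```python
def pct_change(before, after):
    """Percentage change from the baseline period to the test period."""
    return after / before - 1

# Hypothetical regional sales (baseline window vs. test window)
midwest = pct_change(before=1_000_000, after=1_120_000)  # region that got the campaign
northeast = pct_change(before=800_000, after=816_000)    # comparison region

# Both regions share the same external conditions, so the gap is the estimated lift
campaign_lift = midwest - northeast
print(f"Midwest: {midwest:+.0%}  Northeast: {northeast:+.0%}  estimated lift: {campaign_lift:+.0%}")
```

The unaffected region absorbs the shared external noise, leaving the gap as the estimated campaign effect.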

Choosing the right tool for the job

Not every situation calls for the same approach; sometimes a “perfect” test isn’t practical or cost-effective. The best ROI analysts know how to match the method to the moment, balancing rigor with speed, and clarity with real-world constraints.

Here’s a quick cheat sheet to guide your choice:

  • Use holdout tests when you need the cleanest attribution possible and you have time to run a structured experiment—ideal for testing new acquisition channels or proving the value of a bold strategy shift.

  • Use longitudinal studies when A/B testing isn’t practical—like when rolling out changes to workflows, routing logic, or systems that touch your entire organization.

  • Use relative performance studies when you can’t randomize, but you can find a comparable group or region to act as a baseline—helping you filter out external noise and isolate the true impact.

Choosing the right approach isn’t about checking a box—it’s about working with what you’ve got and building an analysis that supports better decisions.

ROI is an art of trade-offs

Every method comes with trade-offs: between speed and precision, between statistical rigor and operational feasibility, between what you want to know and what you can realistically measure.

That’s why ROI analysis isn’t a checklist, but a craft. It requires judgment. True expertise means knowing when “good enough” is actually better than “perfect,” because it lets you act faster, test more often, and make decisions that still move the needle.

At Faraday, we don’t believe in one-size-fits-all measurement. Instead, we tailor ROI analysis to fit the strategy, timeline, and success metrics of each client. Whether that means running a formal holdout test, analyzing past performance, or finding a creative workaround, we help our clients make smart decisions grounded in evidence—not just instincts.

Conclusion

At the end of the day, calculating ROI isn’t just a mechanical process; it’s a strategic one. The numbers only matter if they’re telling you something useful. And that usefulness depends on choosing the right tool, the right comparison, and the right trade-offs for your business.

Faraday’s approach is built around that idea. We help our clients go beyond surface-level metrics to get a clearer, more actionable picture of what’s working, what’s not, and what to do next. Whether you’re trying to justify spending, test a new channel, or make smarter bets for the future, we’ll help you measure what matters—on your terms.

Want to see how predictive insights and tailored ROI reporting can work together? Let’s talk.
