In Outcomes, you'll describe the business goals you'd like to predict. Say a major business goal of yours is acquiring more high-quality leads (who doesn't want that?). You'll plug the outcome's cohorts in, and Faraday will create a likely-to-buy predictive model for that goal by applying dozens of strategies to build a range of candidate models, then selecting the one that most accurately predicts your outcome.

Getting started

Inside Outcomes, you'll find a list of your current outcomes if you have any, with columns for:

  • Performance: the performance score of the predictive outcome.
  • Eligibility: the cohort chosen for who you want to be able to achieve this outcome.
  • Attainment: the cohort selected as attainment represents the "finish line" for the prediction. What cohort do you want the eligibility cohort to look like?
  • Attrition (optional): the cohort people enter if they fail to achieve the outcome and reach the attainment cohort.
  • Status: whether the outcome is ready, queued, or errored.

Creating an outcome

  1. Select new outcome in the upper right of the Outcomes list view.

Screenshot of the Outcomes list view

  2. Next, select an eligibility cohort: the people you want to be able to achieve this outcome. Note, however, that your eligibility cohort should not be a subset of your attainment cohort: if your attainment cohort is set to Customers, your eligibility cohort should not be something like Customers with a basement.

Screenshot of a new outcome

  3. Next, select your attainment cohort. This cohort represents the "finish line" for your prediction. If you're predicting which leads are most likely to become customers, your attainment cohort would be your Customers cohort.

  4. Optionally, select an attrition cohort for users that fail to attain this outcome.

📘Example: creating a lead conversion outcome

  5. Optionally, select certain traits to block in this outcome. For example, you may want to ensure protected classes aren't used.

  6. Once your cohorts are selected, give your outcome a unique name.

  7. With your desired fields filled out, click save outcome. You'll see a popup telling you that your outcome is building, you'll receive an email when the outcome is ready for use, and its status in the Outcomes list view will display as Ready.
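If you manage resources programmatically, the same configuration can be expressed as an API payload. The endpoint shape, field names, and cohort IDs below are illustrative assumptions, not Faraday's documented schema; consult the API reference for the actual contract:

```python
import json

def build_outcome_payload(name, eligibility_cohort_id, attainment_cohort_id,
                          attrition_cohort_id=None, blocked_traits=None):
    """Assemble a hypothetical create-outcome request body.
    Field names are illustrative assumptions, not Faraday's documented schema."""
    payload = {
        "name": name,
        "eligibility_cohort_id": eligibility_cohort_id,  # who can achieve the outcome
        "attainment_cohort_id": attainment_cohort_id,    # the "finish line" cohort
    }
    if attrition_cohort_id:
        payload["attrition_cohort_id"] = attrition_cohort_id
    if blocked_traits:
        payload["blocked_traits"] = blocked_traits       # e.g. protected classes
    return json.dumps(payload)

print(build_outcome_payload("lead_conversion", "leads", "customers",
                            blocked_traits=["gender", "age"]))
```

The optional fields mirror the optional steps above: attrition and blocked traits are only included when you supply them.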

Analyzing an outcome

Once your outcome is complete and its status is Ready, various analysis features will populate in the outcome. These include the performance of the model (through the score, lift table, lift curve, and ROC curve), which indicates what kind of results you can expect when using this outcome, as well as the data features that were most important during the predictive model's build. Each section can include breakdowns based on how long the individuals in the outcome were in the cohort in use.
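To make the lift table concrete, here's a minimal sketch of how decile lift is typically computed from model scores and actual attainment labels. This illustrates the general technique, not Faraday's implementation:

```python
def decile_lift(scores, labels):
    """Lift per decile: the attainment rate within each score decile,
    divided by the overall attainment rate. Decile 0 is the top-scored 10%."""
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    overall_rate = sum(labels) / len(labels)
    n = len(ranked)
    lifts = []
    for d in range(10):
        chunk = ranked[d * n // 10:(d + 1) * n // 10]
        rate = sum(label for _, label in chunk) / len(chunk)
        lifts.append(rate / overall_rate)
    return lifts
```

A well-performing model shows lift well above 1.0 in the top deciles, meaning the highest-scored individuals attain the outcome far more often than the population average.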

📘Further reading: Faraday scoring

📘Further reading: outcome feature data

Screenshot of an outcome's performance

Your model will receive a score based on what results you can expect when using this outcome in your campaigns. The closer to the upper right corner, the better real-world results you can expect when using the outcome. The performance chart's Y axis is associated with Faraday’s ability to predict the outcome, and the X axis is associated with the business value expected due to the predictive lift.

The score falls into one of the following levels:

  • Misconfigured: This can happen when your cohorts have too few people in them to make meaningful predictions.

  • Weak: Expect your results to have minimal improvement when leveraging predictions from this outcome.

  • Moderate: Expect your results to improve modestly when leveraging predictions from this outcome.

  • Good: You can expect good results when employing this outcome in most supported use cases.

  • Excellent: Your outcome is strongly predictable. You can expect great results in many use cases.

  • Warning: Your outcome is predicted better than we would typically expect. You should check that only predictors known prior to the outcome are included. In other words, the model's performance is too good to be true. This can happen when the model calls on first-party data that's directly related to the outcome.

For a full report on everything involved in creating this outcome, click the full technical report button via the three dots in the upper right. In the report, you'll find very detailed information on everything that went into creating the predictive model, including bias and fairness reports.

Understanding bias reporting

Each outcome you create includes a section that breaks down any bias that Faraday detects in your predictions and the data that was used for them, including a summary tab for an at-a-glance overview.

Screenshot of an outcome's bias summary

Bias reporting is broken down into four categories: data, power, predictions, and fairness.

  • Data: The underlying data used to build an outcome can introduce bias by unevenly representing subpopulations. This bias is measured by comparing distributions of sensitive dimensions across labels. Categorical distributions (e.g. for gender) are compared using proportions. Numeric distributions (e.g. for age) are compared using a normalized Wasserstein distance on the space of empirical distributions.

  • Power: A subpopulation is a subset of the eligible population defined by a set of sensitive dimensions (e.g. age and gender) and values (e.g. adult and female).

    Outcome performance can be measured for a subpopulation and compared to the overall performance on the entire population.

  • Predictions: Outcome predictions can target subpopulations with bias. Measuring that targeting discrepancy falls under this heading.

  • Fairness: Fairness metrics aim to give an overall picture of how a subpopulation is treated. There are many metrics in the literature and the appropriate metrics depend on the specific situation.
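As an illustration of the data category's distance measure, the 1-D Wasserstein distance between two equal-sized empirical samples reduces to the mean absolute difference of their sorted values. The sketch below shows the idea on toy age data; the normalization by the pooled range is an assumption for illustration, as Faraday's exact normalization isn't specified here:

```python
def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein distance for equal-sized samples:
    mean absolute difference between the sorted values."""
    assert len(a) == len(b)
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def normalized_wasserstein(a, b):
    """Divide by the pooled range so 0 means identical distributions and
    values near 1 mean maximal separation. (Illustrative normalization.)"""
    lo, hi = min(a + b), max(a + b)
    return wasserstein_1d(a, b) / (hi - lo) if hi > lo else 0.0

# Toy data: ages of attainers vs. non-attainers
pos_ages = [55, 60, 62, 70]
neg_ages = [25, 30, 35, 40]
print(normalized_wasserstein(pos_ages, neg_ages))
```

A large normalized distance like this one signals that age is distributed very differently across labels, i.e. the training data itself carries age bias.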

By clicking details next to a category in the summary tab, or by clicking the category's tab itself, you can see the level of bias that was detected for that category.

Screenshot of an outcome's bias reporting on predictions

In the above example, Faraday found that this lead conversion outcome had a strong favorable bias (indicated by the green arrow) toward people in the sensitive dimensions breakdown of senior/male. Expand the table below to see how each sensitive dimension breaks down into its subgroups.

Sensitive dimensions breakdown

Sensitive dimension    Subgroup       Value
Age (years old)        Young Adult    21-30
                       Middle Age     41-60

Mitigating bias

When an outcome reports bias that you'd like to address, you can apply bias mitigation strategies to neutralize or reverse it.

Currently available bias mitigation strategies:

  • None: Ignore bias.
  • Equality: Neutralize bias.
    • As an example, if you have 48% men in the outcome's eligibility cohort and you mitigate gender using equality, then any output of a pipeline using the mitigated outcome will have 48% men and 52% women.
      • Equality preserves the distribution of the eligible population: the outcome won't create bias by ranking people of one subpopulation higher than another.
  • Equity: Invert bias.
    • For equity, in a perfectly fair world, each subpopulation in the outcome's eligibility cohort is the same size (e.g. 33% Senior Male, 33% Young Female, 33% Teens with Unknown gender). If one of these subpopulations, Senior Male for example, makes up 52% of the eligibility cohort, it's 19 percentage points too large. Faraday therefore shrinks it by those 19 points from the ideal 33%, down to roughly 14%. This process repeats for each subpopulation.
    • At the end of this process, Faraday over-promotes rarer classes so that the target population of your predictions ends up containing more people from the originally under-represented populations. From a business standpoint, this serves two purposes:
      • Prevents your marketing campaign from being trapped in a vicious circle (i.e. I market more to middle-aged men, therefore I sell more to middle-aged men, therefore I should market more to middle-aged men).
      • Allows you to identify which under-marketed population has the best potential to convert.
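The equity arithmetic above can be sketched in a few lines. This reflects each subpopulation's share around the ideal equal share (target = 2 × ideal − share), so over-represented groups shrink by exactly the amount they exceed the ideal. It's an illustration of the described logic, not Faraday's implementation:

```python
def equity_targets(shares):
    """Given each subpopulation's share of the eligibility cohort (summing to 1),
    reflect it around the ideal equal share: target = 2*ideal - share.
    Over-represented groups shrink; under-represented groups grow."""
    ideal = 1 / len(shares)
    targets = {k: max(2 * ideal - v, 0.0) for k, v in shares.items()}
    total = sum(targets.values())  # renormalize in case any target was clipped at 0
    return {k: v / total for k, v in targets.items()}

shares = {"senior_male": 0.52, "young_female": 0.30, "teen_unknown": 0.18}
print(equity_targets(shares))
```

With these toy shares, senior_male drops from 52% to roughly 14.7%, consistent with the "roughly 14%" figure in the example above, while the smaller groups are promoted.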

🚧️With great power...

In cases where you do want to actively mitigate (see warning above), using equality to neutralize discovered bias is the most common approach. If you have outcomes available, it's often helpful to create a new outcome when applying a bias mitigation strategy so you can easily compare the mitigated and unmitigated versions.

📘Bias mitigation: further reading

Deleting an outcome

To delete an outcome, click the options menu (three dots) on the far right of the outcome you'd like to delete, then click delete. If the outcome is in use by other objects in Faraday, such as a pipeline, the delete outcome popup will indicate that you need to modify those in order to delete the outcome. Once there are no other objects using this outcome, you can safely delete it.

📘Deletion dependencies

Screenshot of deleting an outcome