Outcomes
In Outcomes, you'll describe the business goals you'd like to predict. Let's say a major business goal of yours is to acquire more high-quality leads (who doesn't want that?). You'll plug the outcome's cohorts in, and Faraday will create a likely-to-buy predictive model for that goal by applying dozens of strategies to build a range of candidate models, then selecting the one that most accurately predicts your outcome.
Getting started
Inside Outcomes, you'll find a list of your current outcomes if you have any, with columns for:
- Performance: the performance score of the predictive outcome.
- Eligibility: the cohort chosen for who you want to be able to achieve this outcome.
- Attainment: the cohort selected as attainment represents the "finish line" for the prediction. What cohort do you want the eligibility cohort to look like?
- Attrition (optional): the cohort people enter if they fail to achieve the outcome, i.e., if they never enter the attainment cohort.
- Status: whether the outcome is ready, queued, or errored.
Creating an outcome
- Select new outcome in the upper right of the Outcomes list view.
- Next, select an eligibility cohort. The eligibility cohort is who you want to be able to achieve this outcome. Note, however, that your eligibility cohort should not be a subset of your attainment cohort: if your attainment cohort is set to Customers, your eligibility cohort should not be something like Customers with a basement.
- Next, select your attainment cohort. This cohort represents the "finish line" for your prediction. If you're predicting which leads are most likely to become customers, your attainment cohort would be Customers.
- Optionally, select an attrition cohort for users that fail to attain this outcome.
📘Example: creating a lead conversion outcome
For an overall example in using these cohorts, say you want to create an outcome that scores leads based on the likelihood that they'll convert and become customers. In your outcome, you'll select the attainment cohort Customers, as the goal of your outcome is that leads will enter the Customers cohort and become customers. You'll leave the attrition cohort empty, as you don't necessarily want to discard the leads who don't attain this outcome. Lastly, your eligibility cohort will be your Leads, as you're only interested in how likely it is that your leads will become customers. With these selections, Faraday will use your current customers as a baseline against which your leads will be scored on the likelihood that they'll become just like your customers.
- Optionally, select certain traits to block in this outcome. For example, you may want to ensure protected classes aren't used.
- Once your cohorts are selected, give your outcome a unique name.
- Once your desired fields are filled out, click save outcome. You'll receive a popup telling you that your outcome is building, and an email when the outcome is ready for use; its status in the Outcomes list view will then display as Ready.
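Outcomes can also be created programmatically. The sketch below only builds a request payload for the steps above; the endpoint path, field names, and placeholder IDs are illustrative assumptions, not the authoritative API schema, so consult the Faraday API reference before using them.

```python
# Hypothetical sketch of creating an outcome programmatically.
# Field names and the endpoint below are illustrative assumptions;
# see the Faraday API reference for the authoritative schema.
import json

payload = {
    "name": "lead_conversion",                        # unique outcome name
    "eligibility_cohort_id": "<leads-cohort-id>",     # who can achieve the outcome
    "attainment_cohort_id": "<customers-cohort-id>",  # the "finish line" cohort
    # an "attrition" cohort and blocked traits would be optional extras
}
body = json.dumps(payload)

# A client would then POST this body with an API key, e.g.:
# requests.post("https://api.faraday.ai/outcomes", data=body,
#               headers={"Authorization": "Bearer <api-key>"})
```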
Analyzing an outcome
Once your outcome is complete and its status is Ready, the outcome will populate with features for analysis. These include the model's performance (the score, lift table, lift curve, and ROC curve), which indicates the results you can expect when using this outcome, as well as the data features that were most important during the predictive model's build. Each section can include breakdowns based on how long the individuals in the outcome were in the cohort in use.
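As a minimal sketch of what a lift table captures, the snippet below ranks individuals by score, splits them into deciles, and compares each decile's conversion rate to the overall baseline. The scores and labels are synthetic toy data, not Faraday's actual computation.

```python
import numpy as np

def decile_lift(scores, converted):
    """Rank individuals by score, split into deciles, and compare
    each decile's conversion rate to the overall baseline rate."""
    order = np.argsort(scores)[::-1]           # highest scores first
    converted = np.asarray(converted)[order]
    baseline = converted.mean()                # overall conversion rate
    deciles = np.array_split(converted, 10)    # ten equal-sized buckets
    return [d.mean() / baseline for d in deciles]

# Toy data: conversion is loosely correlated with score.
rng = np.random.default_rng(0)
scores = rng.random(1000)
converted = rng.random(1000) < scores * 0.2
lift = decile_lift(scores, converted)
# A well-performing model shows lift above 1 in the top deciles
# and below 1 in the bottom deciles.
```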
📘Further reading: Faraday scoring
For further reading on Faraday scoring, see Propensity vs probability: Understanding the difference between raw scores and probabilities.
📘Further reading: outcome feature data
For further reading on an outcome's features of importance and score explainability, see Removing the black box around customer predictions with Faraday score explainability.
Your model will receive a score based on what results you can expect when using this outcome in your campaigns. The closer to the upper right corner, the better real-world results you can expect when using the outcome. The performance chart's Y axis is associated with Faraday’s ability to predict the outcome, and the X axis is associated with the business value expected due to the predictive lift.
The score can range from:
- Misconfigured: This can happen when your cohorts have too few people in them to make meaningful predictions.
- Weak: Expect your results to have minimal improvement when leveraging predictions from this outcome.
- Moderate: Expect your results to improve modestly when leveraging predictions from this outcome.
- Good: You can expect good results when employing this outcome in most supported use cases.
- Excellent: Your outcome is strongly predictable. You can expect great results in many use cases.
- Warning: Your outcome is predicted better than we would typically expect. You should check that only predictors known prior to the outcome are included. In other words, the model's performance is too good to be true. This can happen when the model calls on first-party data that's directly related to the outcome.
For a full report on everything involved in creating this outcome, click the full technical report button via the three dots in the upper right. In the report, you'll find very detailed information on everything that went into creating the predictive model, including bias and fairness reports.
Understanding bias reporting
Each outcome you create includes a section that breaks down any bias that Faraday detects in your predictions and the data that was used for them, including a summary tab for an at-a-glance overview.
Bias reporting is broken down into four categories: data, power, predictions, and fairness.
- Data: The underlying data used to build an outcome can introduce bias by unevenly representing subpopulations. This bias is measured by comparing distributions of sensitive dimensions across labels. Categorical distributions (e.g. gender) are compared using proportions. Numeric distributions (e.g. age) are compared using a normalized Wasserstein distance on the space of empirical distributions.
- Power: A subpopulation is a subset of the eligible population defined by a set of sensitive dimensions (e.g. age and gender) and values (e.g. adult and female). Outcome performance can be measured for a subpopulation and compared to the overall performance on the entire population.
- Predictions: Outcome predictions can target subpopulations with bias. Measuring that targeting discrepancy falls under this heading.
- Fairness: Fairness metrics aim to give an overall picture of how a subpopulation is treated. There are many metrics in the literature, and the appropriate metrics depend on the specific situation.
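To make the data category concrete, the sketch below compares two numeric age distributions with a Wasserstein distance, normalized by the observed range so the result falls between 0 (identical) and 1. The range-based normalization and the synthetic samples are illustrative assumptions; Faraday's exact normalization is not specified here.

```python
# Sketch: comparing numeric distributions (e.g. age) across labels.
# The range-based normalization is an illustrative assumption.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
ages_attained = rng.normal(55, 8, 500)       # ages of people who attained
ages_not_attained = rng.normal(40, 10, 500)  # ages of people who did not

raw = wasserstein_distance(ages_attained, ages_not_attained)
span = np.ptp(np.concatenate([ages_attained, ages_not_attained]))
normalized = raw / span  # closer to 0 = similar distributions, closer to 1 = very different
```

A large normalized distance here would flag age as a dimension where the attainment label is unevenly represented.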
By clicking details in the summary tab next to a category (or the tab itself), you can see the level of bias that was detected for that category.
In the above example, Faraday found that this lead conversion outcome had a strong favorable bias (indicated by the green arrow) toward people in the sensitive dimensions breakdown of senior/male. Expand the table below to see how each sensitive dimension breaks down into its subgroups.
Sensitive dimensions breakdown
| Sensitive dimension | Subgroup | Value |
|---|---|---|
| Age (years old) | Teen | 0-21 |
| | Young Adult | 21-30 |
| | Adult | 31-40 |
| | Middle Age | 41-60 |
| | Senior | 60+ |
| Gender | Male | |
| | Female | |
| | Unknown | |
Mitigating bias
When an outcome reports bias that you'd like to address, you can apply a bias mitigation strategy to neutralize or reverse it.
Currently available bias mitigation strategies:
- None: Ignore bias.
- Equality: Neutralize bias.
  - For equality, the basis is to preserve the distribution from the eligible population, which means that you don't want your outcome to create bias by ranking people of a certain subpopulation higher than another.
  - As an example, if you have 48% men in the outcome's eligibility cohort, and you mitigate gender using equality, then any output of a pipeline using the mitigated outcome will have 48% men and 52% women.
- Equity: Invert bias.
  - For equity, in a perfectly fair world, each subpopulation in the outcome's eligibility cohort is the same size (e.g. 33% Senior Male, 33% Young Female, 33% Teens with Unknown gender). If one of these subpopulations (Senior Male, for example) is 52% of the overall population in the eligibility cohort, it's 19 points too large. Faraday therefore shrinks its share by 19 points below the ideal 33%, to roughly 14%. This process repeats for each subpopulation.
  - At the end of this process, Faraday over-promotes rarer classes so that the target population of your predictions ends up being made of more people from the originally under-represented populations. From a business standpoint, this serves two purposes:
    - Prevents your marketing campaign from being trapped in a vicious circle (i.e. I market more to middle-aged men, therefore I sell more to middle-aged men, therefore I should market more to middle-aged men).
    - Allows you to identify which under-marketed population has the best potential to convert.
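The equity arithmetic described above can be sketched as a small function: each subpopulation's target share is the ideal (equal) share minus its over-representation. The clipping and renormalization steps are illustrative assumptions added so the shares stay valid, and the percentages are the figures from the example, not real data.

```python
# Sketch of the equity adjustment: shrink over-represented groups
# by the amount they exceed the ideal equal share, and grow
# under-represented ones by the same logic.
def equity_shares(observed):
    """observed: dict of subpopulation -> share of eligible cohort (sums to 1)."""
    ideal = 1 / len(observed)
    raw = {k: ideal - (v - ideal) for k, v in observed.items()}
    # Clip at zero and renormalize -- an assumption beyond the doc's
    # arithmetic, so shares remain non-negative and sum to 1.
    clipped = {k: max(v, 0.0) for k, v in raw.items()}
    total = sum(clipped.values())
    return {k: v / total for k, v in clipped.items()}

observed = {"Senior Male": 0.52, "Young Female": 0.30, "Teen Unknown": 0.18}
targets = equity_shares(observed)
# "Senior Male" is 19 points over the ideal 33%, so its target
# share drops to roughly 14%, as in the example above.
```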
🚧️With great power...
Mitigating bias isn't something you should set and forget: it's not recommended that you toggle both age and gender on for every single outcome you create. For example, if you're a women's swimwear brand, your data will skew heavily toward women, and mitigating gender would negatively impact your lift, so in that case you would not want to mitigate.
In general, when you do want to actively mitigate (see warning above), using equality to neutralize any bias discovered is the most common approach. If you're applying a bias mitigation strategy to an existing outcome, it's often helpful to create a new outcome with the strategy applied so you can easily compare the two.
📘Bias mitigation: further reading
For further reading on how Faraday mitigates bias, including bias mitigation's impact on projected lift, read our blog: How Faraday helps mitigate harmful bias in machine learning.
Deleting an outcome
To delete an outcome, click the options menu (three dots) on the far right of the outcome you'd like to delete, then click delete. If the outcome is in use by other objects in Faraday, such as a pipeline, the delete outcome popup will indicate that you need to modify those in order to delete the outcome. Once there are no other objects using this outcome, you can safely delete it.
📘Deletion dependencies
See the deletions documentation for the order of dependencies, or the order of deletion priority.