It’s not easy to make responsible predictions, and it’s just as hard to source data responsibly. But we’ve figured out how.
Preventing direct harmful bias
It all starts with data. That's where predictions come from, so that's the first place we go looking for bias. By automatically excluding race, ethnicity, and religion from our database, we prevent our models from making predictions based on protected classes.
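In practice, excluding protected classes at the data layer can be as simple as filtering those fields out before any record reaches a model. The sketch below is illustrative only; the field names and filtering logic are assumptions, not Faraday's actual schema or pipeline.

```python
# Hypothetical sketch: strip protected-class fields before model training.
# Field names are illustrative assumptions, not an actual schema.
PROTECTED_FIELDS = {"race", "ethnicity", "religion"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record with protected-class fields removed,
    so downstream models never see them."""
    return {k: v for k, v in record.items() if k.lower() not in PROTECTED_FIELDS}

record = {"age": 34, "zip": "05401", "race": "A"}
clean = strip_protected(record)
print(clean)  # {'age': 34, 'zip': '05401'}
```

Filtering at ingestion, rather than at prediction time, means no model in the stack can depend on these fields even by accident.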
Reporting on possible indirect bias
Every single model Faraday builds includes an automated report that uncovers potential harmful bias. And, now in private beta, you can selectively eliminate or even invert the bias you find.
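To make the idea of an indirect-bias report concrete, here is one common fairness check such a report might include: the "four-fifths" disparate-impact ratio, which compares selection rates between two groups. This is a generic, widely used metric, not a description of Faraday's actual report internals; the cohorts and threshold below are assumptions for illustration.

```python
# Hypothetical sketch of one indirect-bias check: the four-fifths
# disparate-impact ratio. Generic fairness metric, not Faraday's
# actual report logic.

def selection_rate(outcomes):
    """Fraction of positive predictions (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher.
    A value below 0.8 is a conventional red flag for adverse impact."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Illustrative predictions (1 = selected) for two cohorts
cohort_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7
cohort_b = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # selection rate 0.3
print(round(disparate_impact(cohort_a, cohort_b), 3))  # 0.429 -> flagged
```

A report built on checks like this can flag skew across proxies (geography, income, and so on) even when protected classes themselves were never in the training data.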
Saying no to cookies since 2012
We don't think consumers should be tracked across the Internet without their permission, so we've made sure our data comes from cookieless sources.
Opt-in data only
We believe the things you share on social media should stay there. All of our consumer data is fully permissioned, licensed from reputable vendors.
Faraday licenses data from the world's best providers, whose job it is to secure opt-in permission for everything they track. All of this data is tied to known individuals rather than cookies, so consumers' online anonymity stays protected.
Let’s get predictive
Get started embedding predictions in your stack, including a free-forever plan.

Sign up for free