This week, Faraday is thrilled to announce a series of product updates that mark a revolutionary moment in our own history, as well as in AI practice more broadly. These updates don’t just make our users’ lives easier, though that’s certainly a major part of it. They also bring us one step closer to the world that all of us at Faraday are invested in building: a world where brands can use AI in a manner that’s both widely accessible and intrinsically responsible.
What does accessible, responsible AI look like? How did we get here? And why does it matter? Read on to find answers—and join us in the revolution.
AI becomes accessible when anyone can use it to make a prediction. You don’t need a million-dollar marketing budget to get your foot in the door, you don’t need a data science degree to build a model, and you don’t need to write a line of code. Today, for the first time, Faraday users can cast all those barriers aside and make predictions about their customers’ behavior in just a few clicks.
This level of accessibility has been four years in the making. While it seems like a no-brainer today, our 2018 decision to invest in a sweeping, accessible AI product update was a gamble when we committed to the new roadmap, and maybe a somewhat audacious one at the time.
Back then, we were growing increasingly aware that brands wanted more control over their predictions. But pivoting from what was an intensely collaborative experience to a fully self-serve platform would require thousands of engineering hours, all for a technology that was still more of a buzzword than a common business solution. The software had to take a quantum leap forward.
Four years later, AI is everywhere and Faraday’s quantum leap is here. We’re thrilled to be able to offer AI to the entire brand landscape, from retail to finance and beyond, paving the way for more and more companies to connect with their customers.
There’s a reason they say that with great power comes great responsibility, and AI is no exception.
Machine learning algorithms train on historical data, using patterns from the past to make predictions about the future. It should come as no surprise that, left unchecked, these algorithms will surface injustices that have been baked into decades of consumer data, reflecting common biases around race, gender, class, religion, and more. It’s completely possible to mitigate these biases and produce responsible predictions that avoid or even correct for past injustices, but it takes expert know-how.
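To make the idea of bias detection concrete, here is a minimal sketch of one common fairness check, demographic parity, which compares how often a model makes positive predictions for different groups. This is an illustrative example only, with hypothetical data; it is not Faraday’s actual methodology, and real bias auditing involves many more metrics and careful choices about which groups and outcomes to measure.

```python
# Minimal sketch of a demographic parity check (illustrative only).
# A large gap in positive-prediction rates between groups can signal
# that a model is reproducing historical bias.

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions for each group."""
    rates = {}
    for g in set(groups):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = customer receives a targeted offer)
# for members of two groups, "A" and "B":
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
# Group A is selected 3/4 of the time, group B only 1/4,
# so the gap is 0.5 — a red flag worth investigating.
```

In practice, a check like this is just a starting point: once a gap is detected, mitigation techniques (reweighting training data, adjusting decision thresholds, and so on) can reduce or correct it.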
For the first six years of our company’s life, we were closely involved in every predictive model, acting as stewards of the ethics at stake. As the world moved toward AI for all, we had to make sure it was truly AI for all, and deeply embed responsible AI principles into everything we do.
As a result, Faraday has doubled down on responsible AI. We’ve always employed proven approaches for mitigating bias and stood by our ethical consumer data practices, but now we’re rolling out automatic bias detection and mitigation for our users as well. We’re serious about building a future where AI helps—not harms—our community, and we wouldn’t release a new wave of features without ensuring that our users are equipped to use them responsibly.
When Robbie, Seamus, and I started Faraday, AI was largely reserved for the Amazons and Walmarts of the world. The cost to develop, train, and supervise predictive models was exorbitant, and only the biggest corporations could afford that kind of investment. They’ve been reaping the benefits—and the profits—that come with predicting customer behavior ever since.
This summer, we’re ringing in an AI revolution at Faraday, one where any brand, not just the Fortune 100, can use practical, powerful AI without compromising on ethics. We’ll continue building on these updates for the rest of 2022 and beyond, and we hope you’ll join us for the ride.