AI to empower human intelligence, not just emulate it
What makes a machine intelligent, and how can we harness its capabilities to improve our lives? This essay explores various approaches to that question, from Descartes to Turing.


Note to reader: The following is the perspective of a lowly marketer who has followed these debates closely, but it’s worth noting that these discussions have passed the desks of far more knowledgeable people. So if you want to know what Spinoza and Descartes really said, read their work!
Imagine a world where a computer can write a poem that ‘rivals’ Shakespeare but stumbles over a simple joke. This is the paradoxical reality we often find ourselves in when working with large language models (LLMs) like OpenAI’s GPT-4.
We’re all hearing a lot about LLMs—AI systems that generate text, answer questions, and even compose blogs (although we promise this one comes from a certified real boy). These models are rapidly reshaping workflows across industries, automating tasks that once required human effort—from customer support to legal document review to coding assistance. But as they become more embedded in our work and lives, it’s worth asking: Are they truly intelligent? And, more importantly, what do we even mean by ‘intelligence’ when it comes to machines?
To unpack this question, let’s compare two popular approaches to AI: conversational and predictive. While both are powerful, they operate in very different ways. Let’s take a closer look at how Conversational AI like GPT-4 works and how it compares to Predictive AI like Faraday.
Conversational AI vs. Predictive AI
Conversational AI, like GPT-4, excels at generating human-like text. Trained on vast amounts of data, it learns patterns in language by analyzing how words and phrases relate to each other across different contexts. At its core, it predicts the most statistically likely next word in a sequence—similar to the predictive text on your phone’s keyboard, but on a much grander scale. This allows it to mimic conversation convincingly, but it doesn’t necessarily understand what it’s saying. It can generate fluent responses, yet it struggles with complex logic, math, and real-world reasoning, tasks that require understanding beyond mere pattern recognition.
Think of it like a high-speed version of Mad Libs. While it can generate coherent and even insightful responses, it doesn’t understand the meaning behind them—it’s simply identifying statistical patterns and filling in the blanks. On that basis, one could argue that its intelligence is primarily performative—more like an advanced pattern recognition game than genuine comprehension or thought.
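To make the “most statistically likely next word” idea concrete, here is a deliberately tiny sketch in Python: a toy bigram model that always picks the most frequent follower of the previous word. This is an illustration of the underlying objective, not how GPT-4 is actually implemented; real LLMs use neural networks over far longer contexts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat the cat saw the dog the dog sat on the rug".split()

# Count how often each word follows each other word (a bigram table).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Return the statistically most likely word to follow `prev`."""
    candidates = followers.get(prev)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short continuation, one "most likely next word" at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the cat"
```

Scaling that mechanism up by many orders of magnitude changes what the model can produce, but not, by itself, whether it understands what it produces.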
Of course, this raises a deeper question: What actually counts as thought? Philosophers, cognitive scientists, and AI researchers have debated this for decades, and we’re not here to settle it today. But for the sake of this discussion, let’s focus on the practical distinction between AI that emulates behavior and AI that augments decision-making.
Predictive AI (like Faraday), on the other hand, isn’t built to simulate conversation. Instead, it’s designed to analyze complex patterns in structured data and forecast outcomes, helping humans to make smarter, faster decisions. While Conversational AI generates responses based on linguistic patterns, Predictive AI identifies meaningful relationships between behaviors, demographics, and other variables—surfacing insights that would be difficult or impossible to detect manually. In that sense, Conversational AI is designed to sound human, whereas Predictive AI is designed to enhance human decision-making by providing precise, data-backed foresight.
Let’s unpack how predictive AI works with a parallel from science fiction.
A mathematical model for predicting the future
In Isaac Asimov’s 1940s Foundation series, mathematician Hari Seldon develops a science called psychohistory—a statistical method capable of predicting the future. Psychohistory doesn’t predict individual actions but instead foresees large-scale societal trends with remarkable accuracy, enabling leaders to anticipate and alter the course of history.
If you’re already using Faraday, you probably see a connection here. Like psychohistory, predictive AI helps businesses affect the future by anticipating customer behaviors. By analyzing vast amounts of data, predictive AI can forecast outcomes such as which leads are most likely to convert or what offer will resonate with a particular customer. This ability empowers businesses to make precise, data-driven decisions that can directly influence future outcomes.
But unlike psychohistory, which only understood the broad movements of society, predictive AI is powerful enough to make accurate, actionable predictions at the individual level. And in this sense our product actually beats a system designed to save the galaxy (sorry Hari)!
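To make that individual-level prediction concrete, here is a minimal, hypothetical lead-scoring sketch in Python using scikit-learn. The table, column names, and numbers are invented for illustration; this is not Faraday’s actual data or modeling pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical lead table: each row is a lead; "converted" is the outcome we want to predict.
leads = pd.DataFrame({
    "visits_last_30d": [1, 7, 3, 12, 0, 5, 9, 2],
    "emails_opened":   [0, 4, 1, 6, 0, 3, 5, 1],
    "avg_order_value": [0, 80, 20, 150, 0, 60, 110, 10],
    "converted":       [0, 1, 0, 1, 0, 1, 1, 0],
})

# Learn which behavioral patterns tend to precede conversion.
X, y = leads.drop(columns="converted"), leads["converted"]
model = LogisticRegression().fit(X, y)

# Score fresh leads: the conversion probability becomes a ranking signal for outreach.
new_leads = pd.DataFrame({
    "visits_last_30d": [4, 11],
    "emails_opened":   [2, 5],
    "avg_order_value": [30, 120],
})
print(model.predict_proba(new_leads)[:, 1])  # higher score = more likely to convert
```

The point is the shape of the workflow: structured behavioral data in, a ranked probability of a future outcome out.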
Turing, Descartes, and machines that learn
But we came here to talk about intelligence.
And when discussing intelligent machines, our minds often turn to the famous, controversial, and (critically) often misunderstood Turing Test, a benchmark proposed by the great Alan Turing to measure whether a machine can exhibit behavior indistinguishable from human intelligence.
“So it basically tested if the machines were sentient right?”
Well, not exactly. Turing himself viewed the test as more of an “imitation game” than a definitive way to assess whether a machine truly “thinks”. His focus was less on the idea of thought itself, and more on the appearance of it—the ability of a machine to convincingly mimic human behavior and language.
The goal of the Turing Test wasn’t to prove that a machine could reason or understand in the way humans do, but rather to test whether it could generate responses so human-like that an observer might be fooled into believing it was a person. Passing the test would mean a machine could emulate the behaviors and language associated with human reasoning, but not necessarily that it possesses any true understanding behind them.
This idea interacts interestingly with the philosophy of René Descartes, who famously said “I think, therefore I am”. Descartes believed true human intelligence was defined by the ability to reason and, importantly, to use language to express reason. For him, language wasn’t just a tool for communication—it was the ultimate marker of conscious thought. He argued that only beings capable of reason, expressed through language, could be considered truly conscious. In this context, Descartes viewed machines and (sadly) animals as automata: sophisticated mechanical systems that operated according to predetermined laws but were incapable of thought or reason.
Even though automata might perform tasks that appeared purposeful, Descartes believed their actions were purely mechanical, driven by physical processes rather than any understanding or intention. For him, the difference between humans and these objects was clear: humans could reason and engage with the world in a way that reflected true thought.
Fast forward to the world of modern AI, and we now have machines that can produce remarkably human-like language, making it seem as though they are reasoning or thinking. But are they really doing so?
While LLMs can string together words and phrases in ways that sound intelligent, they don’t truly “understand” the meaning in the way humans do. In this sense, LLMs seem to align with Descartes’ view of machines as sophisticated mimics—producing outputs that seem intelligent but, in reality, are merely a performance of an extremely complicated script.
But LLMs also challenge Descartes’ assertion that language is the definitive marker of reason: they demonstrate the ability to produce seemingly intelligent language without any underlying comprehension.
And to be fair, it’s worth noting that Descartes was writing in the 17th century, a time when machines were more likely to resemble windmills than the complex computers we know today. His idea of “machines” was far more limited, even if his philosophical view of them as unthinking, mechanical objects persists in many ways today.
But Turing didn’t just talk about machines that could communicate; he also discussed machines that could learn from experience. In a 1947 lecture, he stated, “What we want is a machine that can learn from experience,” and that the “possibility of letting the machine alter its own instructions provides the mechanism for this.” Instead of just following pre-written rules, such a system would improve over time through experience and trial and error. This is where the concept of heuristics in problem-solving comes into play. Heuristics are rules of thumb that allow people (or models) to quickly find solutions based on past experiences and probabilities of success. Rather than seeking the perfect solution, which could be time-consuming and resource-intensive, heuristics prioritize finding a good-enough solution more efficiently.
Predictive AI operates on this principle. When these models analyze new data, they adjust their internal calculations, improving their accuracy based on what has worked in the past. Unlike LLMs, which don’t fundamentally change after their initial training unless retrained on new datasets, predictive AI systems evolve dynamically. They continuously optimize their decision-making to deliver better, faster outcomes as they learn from new data and experiences.
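As a rough sketch of that “learn as new data arrives” idea, here is an incremental model updated with scikit-learn’s partial_fit. Again, the features and numbers are invented, and this stands in for the general pattern rather than any specific vendor’s implementation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A model that supports incremental updates (logistic loss gives probabilistic output).
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial fit on whatever history we have so far.
X_history = np.array([[1, 0], [7, 4], [3, 1], [12, 6]])   # e.g. visits, emails opened
y_history = np.array([0, 1, 0, 1])                        # did the lead convert?
model.partial_fit(X_history, y_history, classes=[0, 1])

# Later, as new outcomes arrive, update the model in place instead of retraining from scratch.
X_new = np.array([[5, 3], [2, 1]])
y_new = np.array([1, 0])
model.partial_fit(X_new, y_new)

# Predictions now reflect both the original history and the most recent results.
print(model.predict_proba(np.array([[8, 4]])))
```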
The future of AI: beyond emulation
In this author’s opinion, the future of AI should aim to augment, not just replicate, human behavior.
Predictive AI, which excels at analyzing data and uncovering hidden patterns, is a prime example of this. By turning complex data into actionable insights, these models empower businesses to make more informed decisions. In marketing, for instance, predictive models can identify high-conversion customers and suggest tailored strategies, tasks that would be impossible for humans to undertake or scale efficiently on their own. Continuous learning further refines their predictions, unlocking new levels of efficiency and impact over time.
But while this article has so far focused on how LLMs mimic human behavior, that’s not the only thing they can do. LLMs also help us interact with and quickly manage complicated systems, as is the case with AI agents.
Consider a scenario where a patient needs to schedule a complex series of medical appointments. An AI agent, integrated with the hospital's scheduling system, can understand the patient's needs, check doctor availability, coordinate with insurance requirements, and send reminders, all through a simple conversational interface. This not only streamlines the process for the patient but also frees up hospital staff to focus on critical patient care.
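For a sense of what sits behind an agent like that, the usual pattern is a loop: a language model interprets the request, chooses a “tool” to call, and the surrounding code executes it and feeds the result back. The sketch below is hypothetical; the tool names, the hard-coded decisions, and the scheduling data are all stand-ins, not a real hospital system or LLM API.

```python
# Hypothetical tools the agent can call; in a real system these would hit
# scheduling, insurance, and notification services.
def check_doctor_availability(specialty: str) -> list[str]:
    return ["2024-07-01 09:00", "2024-07-03 14:00"]

def verify_insurance(procedure: str) -> bool:
    return True

def book_appointment(slot: str) -> str:
    return f"Booked {slot}"

TOOLS = {
    "check_doctor_availability": check_doctor_availability,
    "verify_insurance": verify_insurance,
    "book_appointment": book_appointment,
}

def llm_decide(request: str, context: list) -> dict:
    """Stand-in for an LLM call that picks the next tool and its arguments.
    A real agent would send `request` and `context` to a language model."""
    if not context:
        return {"tool": "verify_insurance", "args": ["MRI"]}
    if len(context) == 1:
        return {"tool": "check_doctor_availability", "args": ["radiology"]}
    return {"tool": "book_appointment", "args": [context[-1][0]]}

def run_agent(request: str, max_steps: int = 3) -> list:
    """Minimal agent loop: decide -> act -> record, repeated."""
    context = []
    for _ in range(max_steps):
        decision = llm_decide(request, context)
        result = TOOLS[decision["tool"]](*decision["args"])
        context.append(result)
    return context

print(run_agent("I need to schedule an MRI and a follow-up consult."))
```

The design point is that the language model supplies the flexible conversational interface while ordinary, auditable code does the actual booking, which is what keeps such systems practical.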
These tools can automate processes and save time by performing repetitive tasks or helping us interact more efficiently. They don't replace us or try to copy us; they help us do more and do it better. And they free up human effort for higher-value work, enabling creativity and decision-making.
In this sense, both predictive AI and LLMs, despite their differing focuses, can amplify human abilities when used in the right contexts, showing that the key to successful AI implementation is using the right tool for the job.
And really, what even is intelligence?
At this point in the article you might be asking, “Wasn’t this whole thing supposed to be about whether these AIs were intelligent?”
The concept of intelligence, though, isn’t as simple as a binary choice between ‘smart’ and ‘dumb’. As the 17th-century Dutch philosopher Baruch Spinoza proposed with his philosophy of monism, consciousness and reason are not exclusive to humans: they arise from an entity’s interactions with its surroundings, and those interactions exist on a spectrum throughout nature, from the simplest (those of a rock) to the complex reasoning of humans.
And indeed, both predictive AI and LLMs, though different in their design and function, are part of this spectrum. Each possesses a distinct form of intelligence that, when used thoughtfully, can augment human decision-making, creativity, and problem-solving.
Ultimately, both tools—predictive AI and LLMs—are intelligent in their own right, but each excels in different areas. By approaching AI with the mindset of enhancing human abilities rather than simply emulating human behavior, we unlock its true potential. Predictive AI excels at providing insights and optimizing decision-making, while LLMs offer a powerful tool for generating human-like text, facilitating communication, and automating tasks (among a range of other applications). By understanding their strengths and applying them wisely, we can harness AI not only to complement human abilities but to push the boundaries of what is possible.
Conclusion
Both LLMs and predictive models are intelligent, but in different ways. It’s about using them thoughtfully—recognizing the unique strengths each brings to the table.
In the end, the true value of AI isn’t in replicating human thought, but in extending our own potential. Predictive AI can enhance decision-making and strategic thinking, while LLMs offer powerful ways to facilitate communication and automate tasks. When used in harmony, these technologies can open up new opportunities for innovation, efficiency, and growth, creating a future where humans and AI work together to solve complex problems and drive meaningful change.
At the end of the day, it’s not much different from working with any other tool; drills are great for drilling, but don’t ask one to write an essay (that draft was terrible).