This is post #1 in our AI for Business series, shedding light on AI from a non-technical business/product/design perspective.
From Ancient Dreams to Modern Realities
Artificial Intelligence is having its moment. It powers your phone’s assistant, recommends your next favorite show, and might even help diagnose your medical condition. But this seemingly modern marvel has roots stretching far deeper than you might expect. To understand where AI is headed, it’s worth looking at where it started and how it’s evolved along the way.
Let’s step back in time 🕰️
Ancient Gears and Big Ideas
AI’s story begins long before the first computer chip. Over 2,000 years ago, Greek engineers created the Antikythera mechanism, a complex system of gears that could predict astronomical events. It wasn’t “smart” by today’s standards, but it reflected humanity’s ambition: to build machines capable of tasks that, until then, only humans could perform.
Fast forward to the 1800s, and two visionaries, Charles Babbage and Ada Lovelace, were laying the groundwork for modern computing. Babbage sketched out designs for a programmable device called the “Analytical Engine”, while Lovelace speculated that such a machine might one day go beyond pure calculation to compose music and produce art. Two people who were truly ahead of their time!
The Birth of AI
The real push for Artificial Intelligence came after World War II, as computing technology advanced. Alan Turing, celebrated for breaking the Nazi Enigma code, posed a bold question: “Can machines think?” His 1950 paper introduced the Turing Test, a way to measure whether a machine could convincingly imitate a human in conversation. This wasn’t just a philosophical exercise; it was a challenge to the scientific community.
By the mid-1950s, researchers were diving in. Computers were programmed to play chess and checkers, and John McCarthy—who coined the term “Artificial Intelligence”—organized the 1956 Dartmouth workshop that officially established AI as a field of study.
The enthusiasm was infectious. Machines would soon rival human intelligence—or so they thought.
The AI Winter: Where Hype Meets Reality
As often happens with new technology, expectations were… ambitious. Early AI systems struggled with anything outside tightly controlled tasks. It wasn’t that AI lacked potential; it was that the hardware and data needed to realize that potential simply didn’t exist yet.
By the 1970s, the hype cooled. Funding dried up. Researchers focused on narrow applications, like “expert systems” that could diagnose diseases or recommend repairs. Practical? Yes. But it wasn’t the human-like intelligence sci-fi promised. This period, known as the “AI Winter,” was a stark reminder that big ideas need time to mature.
Turning a Corner
AI’s breakthrough moment came in the late 20th century. In 1997, IBM’s Deep Blue defeated chess champion Garry Kasparov, proving machines could outthink humans in structured domains. Ironically, while this victory represented the zenith of the expert systems approach to AI, it also represented a point of diminishing returns. With the world’s attention once again turning to AI, expert systems were about to be eclipsed by a new (or returning) kid on the block: Deep Learning.
Deep Learning
The concept of Deep Learning, a technique inspired by the human brain in which neural networks play a central role, had been around since Rosenblatt’s Perceptron in the late 1950s. We won’t get into the weeds of Deep Learning and neural networks here as they are quite technical and that is not the purpose of this blog series.
What we need to know is that, while breakthroughs did occur (such as the backpropagation algorithm, developed in the 1970s and later popularized by Nobel laureate Geoffrey Hinton and his colleagues), they were sparse due to a lack of storage, processing power and available training data. Progress on a Deep Learning approach to AI stalled.
However, by the early 2000s, the world was witnessing an explosion of digital data and constantly improving processing power and storage solutions. These adjacent breakthroughs created the conditions required for Deep Learning to thrive.
Suddenly, AI systems could learn from experience through Deep Learning rather than relying on preprogrammed rules, as expert systems do. Deep Learning allowed machines to identify patterns in massive datasets, and the results have been impressive: self-driving cars, virtual assistants like Siri, and AI that can beat the world’s best Go players, a feat once considered impossible.
Today’s AI: Everywhere and Everything
Today, AI isn’t just in labs; it’s in our lives. It writes essays, composes music and generates art. It powers streaming recommendations, spam filters, and medical diagnostics. Tools like Claude and ChatGPT are making natural language processing feel… natural.
The sky is the limit for AI! Right?
Well, maybe. AI’s history is a story of progress and pauses, breakthroughs and setbacks. Its evolution mirrors humanity’s own: driven by curiosity, tempered by limitations, and propelled by a constant desire to do more.
While AI is undoubtedly having a moment, does today’s excitement mirror that of the late 1950s? Will the neural network approach to AI, like expert systems in the late 1990s, reach a point of diminishing returns just when it appears to be at its zenith?
It took 60 years to get the extra storage, processing power and data that have allowed neural networks to flourish. Will a limited supply of chips or energy, or copyright and regulatory issues, hinder this wave of Deep Learning AI? Or could something like quantum computing unlock the next set of possibilities for this evolving technology?
It is hard to say. But if the past is any indication, the future of AI will be transformative, unpredictable and likely non-linear despite what the hype would suggest.
Enjoy the Journey
So, we suggest you enjoy the journey. Next time your phone predicts the perfect playlist or your car parks itself, take a moment to marvel. Behind these technologies is a story centuries in the making. And, with a bit of luck, the best chapters are still to come.
💡 If you’re ready to start using AI to transform your business, thoughtbot would love to work with you. Let’s talk about making your AI initiative a success!
This blog post is part of a series based on the course Artificial Intelligence (AI) for Business run by University College Dublin (UCD). I took this course and found it so helpful that I’m recapping my top insights. thoughtbot has no affiliation with UCD.