Episode summary
In Episode 22 of '100 Days of Data,' Jonas and Amy trace the rich history of artificial intelligence—from Alan Turing's foundational question to today’s powerful transformers. They explore key eras including the rise and fall of symbolic AI, the breakthrough of neural networks, and the emergence of deep learning fueled by big data and GPU computing. Major milestones like IBM’s Deep Blue and Watson are discussed, showing how AI evolved from logical rules to language-based insights. The episode highlights how different AI approaches have shaped industries like finance, healthcare, and retail, and underscores the importance of historical context in understanding the strengths and limitations of modern AI. By charting this journey, Jonas and Amy provide a grounded perspective for business leaders and tech professionals aiming to apply AI meaningfully today.
Episode video
Episode transcript
JONAS: Welcome to Episode 22 of 100 Days of Data. I'm Jonas, an AI professor here to explore the foundations of data in AI with you.
AMY: And I, Amy, an AI consultant, excited to bring these concepts to life with stories and practical insights. Glad you're joining us.
JONAS: From Turing to Transformers — the story of AI is as fascinating as it is complex. Let's take a journey together through the history of artificial intelligence.
AMY: It’s amazing how AI has evolved. We’re talking about a field that went from simple puzzles and theories to powering the recommendation engines in your favorite apps today.
JONAS: Absolutely. It all begins with Alan Turing in the 1950s. He proposed a question that still echoes: Can machines think? His famous Turing Test was designed to see if a computer’s behavior could be indistinguishable from a human’s.
AMY: Right, and that idea isn't just philosophy—it's what drove early AI research. Companies today might not run Turing Tests per se, but they obsess over whether their AI systems can interact naturally with humans—think chatbots, virtual assistants.
JONAS: Early AI research was dominated by what we call "symbolic AI." This approach tried to formalize reasoning by encoding knowledge as symbols and rules. Imagine a giant flowchart or decision tree where every logic step is programmed.
AMY: I’ve seen symbolic AI in action in industries like finance, where rule-based systems handle credit decisions and fraud detection. Back in the day, experts would define all these rules manually, based on what they knew.
JONAS: Exactly, but that’s both the strength and the weakness. Symbolic AI can be very precise when the rules are clear, but it struggles with uncertain or messy information — which is most of the real world.
AMY: That’s why in retail, for example, symbolic AI for customer service often falls flat. You need systems that learn over time, not just follow rigid rules.
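The rule-based fraud detection Amy describes can be sketched in a few lines. This is a minimal illustration of the symbolic approach, where experts hand-code every decision path; the rule names and thresholds below are invented for the example, not drawn from any real system.

```python
# A sketch of symbolic, rule-based fraud screening: every check is a
# hand-written rule, so the system is precise but cannot adapt to
# patterns the experts never anticipated.

def flag_transaction(amount, country, hour):
    """Return the names of all rules a card transaction triggers."""
    triggered = []
    if amount > 5000:                   # Rule 1: unusually large amount
        triggered.append("large_amount")
    if country not in {"US", "CA"}:     # Rule 2: outside the home region
        triggered.append("foreign_country")
    if hour < 6:                        # Rule 3: odd-hours activity
        triggered.append("odd_hours")
    return triggered

print(flag_transaction(8000, "BR", 3))   # several rules fire
print(flag_transaction(40, "US", 14))    # no rules fire
```

The brittleness Jonas mentions is visible here: a genuinely fraudulent $40 daytime purchase sails through, because no expert wrote a rule for it.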
JONAS: This challenge pushed researchers to explore new techniques. The 1980s and 90s saw a rise in connectionist models — neural networks — inspired by the human brain's architecture.
AMY: Neural networks are behind many breakthroughs today. In healthcare, deep learning—a form of neural networks—helps detect diseases by analyzing medical images far faster than traditional methods.
JONAS: Those early neural networks were limited by computing power and data availability. It wasn’t until the late 2000s and early 2010s that deep learning really took off, thanks to advances in GPUs and massive datasets.
AMY: And that’s been crucial in automotive too—think self-driving cars. They rely on deep learning to recognize objects and make split-second decisions in traffic.
JONAS: Let’s not forget other important milestones along the way. In 1997, IBM’s Deep Blue defeated chess champion Garry Kasparov, showcasing how AI can excel in structured games.
AMY: That event was huge for public awareness. It showed businesses and consumers that AI could do more than just calculations: it could out-plan a world champion in a complex game.
JONAS: Following that, in 2011, IBM’s Watson won on Jeopardy!, showing advances in natural language understanding. This set the stage for AI to engage with human language more deeply.
AMY: Which we see with virtual assistants like Siri or Alexa—tools that rely heavily on understanding and generating natural language, moving far beyond scripted responses.
JONAS: The culmination of these advancements led us to the latest frontier: Transformer architectures, introduced around 2017. Transformers excel at understanding sequences of data, especially language.
AMY: Transformers revolutionized AI applications in customer service, marketing, and content creation. I worked with a retail client that used transformer-based models to generate personalized shopping experiences that felt genuinely human.
JONAS: The beauty of transformers is that they handle context very well. Unlike earlier models that looked at words one by one, transformers consider the entire sentence or paragraph at once — which vastly improves understanding.
AMY: And they’ve made AI-generated text, translations, and even coding assistants more effective, changing how businesses automate knowledge work.
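Jonas’s point about whole-sequence context is the heart of the self-attention mechanism inside transformers. The sketch below is a stripped-down, pure-Python illustration under simplifying assumptions: it omits the learned query/key/value projections, scaling, and multiple heads that real transformers use, and just shows each position attending to every position at once.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """For each position, blend ALL positions, weighted by similarity.

    Unlike a word-by-word model, every output element sees the whole
    sequence in a single step.
    """
    out = []
    for q in vectors:                                  # each query position
        scores = [sum(a * b for a, b in zip(q, k))     # dot product with
                  for k in vectors]                    # every key position
        weights = softmax(scores)
        out.append([sum(w * v[d] for w, v in zip(weights, vectors))
                    for d in range(len(q))])           # weighted value sum
    return out

# Two toy 2-dimensional "word" vectors; each output row mixes both inputs.
print(self_attention([[1.0, 0.0], [0.0, 1.0]]))
```

Because every position computes its weights against the full sequence simultaneously, context from the end of a paragraph can shape the representation of its first word, which is exactly what earlier sequential models struggled with.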
JONAS: So, to recap, the history of AI moves from symbolic systems, focused on hard-coded logic, through neural networks inspired by the brain, to deep learning and now transformers that enable advanced understanding and generation.
AMY: I love seeing how these shifts influence the real world. Symbolic AI laid the groundwork for rule-based systems in banking. Neural networks unlocked breakthroughs in image recognition for medicine and automotive safety. Transformers are bringing natural language to life everywhere.
JONAS: It’s important to remember too that AI’s history is not a straight line. There were periods called “AI winters,” when enthusiasm and funding dried up because early systems couldn’t deliver on high expectations.
AMY: Yeah, I’ve been on projects where clients were skeptical about AI because of those past disappointments. But knowing the history helps build realistic expectations—AI isn’t magic but an evolving technology with real business value.
JONAS: Definitely. And the history highlights a key lesson: AI advances often depend on a combination of theory, available data, and computing power. Without one of these, progress slows.
AMY: That’s where companies need to focus—investing not just in flashy AI tools but in infrastructure and data quality to actually make AI work for them.
JONAS: Well said. Understanding AI’s roots helps managers and consultants make smarter decisions about where and how to apply AI.
AMY: And it empowers teams to communicate confidently about AI—knowing it’s a journey from Turing’s question to sophisticated transformers powering today’s innovations.
JONAS: So, what’s our key takeaway today? For me, it’s this: AI is built on decades of evolving ideas, from symbolic reasoning to deep learning and transformers, and each step unlocks new capabilities.
AMY: From my side, I’d say: knowing AI’s history helps you see its strengths and limitations—and that’s critical when you want to turn AI from buzzword into real business results.
JONAS: Next time on 100 Days of Data, we’ll dive into Machine Learning Basics — the heart of AI systems today.
AMY: We’ll explain what machine learning really means, how it works, and why it matters to your business.
JONAS: If you're enjoying this, please like or rate us five stars in your podcast app.
AMY: And don’t hesitate to leave comments or questions—your input might just show up in future episodes.
JONAS: Thanks for being part of this journey with us.
AMY: Until tomorrow — stay curious, stay data-driven.
Next up
Next episode, Jonas and Amy demystify machine learning—what it is, how it works, and why it powers today's smartest systems.