
Can this AI-generated podcast teach you how to apply "systems thinking" to your life?

Some new tech from Google can turn complex concepts into uncannily listenable podcast episodes. Do tools like this help us pierce our illusions, or abstract even further from wisdom?

In the never-ending AI news of the weird, The Verge detailed a few weeks back how Google is using AI to turn academic articles (or whatever you want, really) into eerily listenable AI podcasts. The output is generated from user-uploaded materials, scripted by large language models, and “hosted” by two text-to-speech AI voices that make bizarrely realistic small talk with one another.
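For the curious, here’s roughly how a pipeline like that could be wired together. This is a minimal sketch in Python based only on the description above; call_llm and synthesize_speech are hypothetical stand-ins (stubbed so the sketch runs end to end), not Google’s actual API.

from dataclasses import dataclass

@dataclass
class DialogueTurn:
    speaker: str  # "Host A" or "Host B"
    text: str     # what that host says, filler words and all

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, stubbed with canned output so the sketch runs.
    return "Host A: So, systems thinking.\nHost B: Right, where do we even start?"

def synthesize_speech(text: str, voice: str) -> bytes:
    # Hypothetical text-to-speech call; a real one would return rendered audio.
    return f"[{voice}] {text}\n".encode()

def write_script(source_text: str) -> list[DialogueTurn]:
    # Ask the LLM to turn the uploaded material into a two-host conversation.
    prompt = (
        "Rewrite the following material as a casual conversation between two "
        "podcast hosts. Keep the meaning intact; add banter and small talk.\n\n"
        + source_text
    )
    turns = []
    for line in call_llm(prompt).splitlines():
        speaker, _, text = line.partition(": ")
        if text:
            turns.append(DialogueTurn(speaker=speaker, text=text))
    return turns

def render_episode(source_text: str) -> bytes:
    # Voice each turn with a different TTS voice and concatenate the audio.
    voices = {"Host A": "voice-1", "Host B": "voice-2"}
    audio = b""
    for turn in write_script(source_text):
        audio += synthesize_speech(turn.text, voice=voices[turn.speaker])
    return audio

If anything like this is what’s under the hood, nothing here is exotic: the whole trick lives in the prompt and in how good the voices sound.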

The implications for the podcasting industry are, as with every industry being touched by AI, unclear.

The Verge has written about it, and it’s been discussed several times on The Vergecast, including with one of its developers, the Google-based Steven Johnson. Johnson, a tech writer before working at Google, talks about the tech behind the tool on his Substack @adjacentpossible.

You can experiment for yourself here.

To see how well it works, I fed it six or seven large texts about something I’ve been digging into for work: systems thinking and Systems Theory.

I found the result not just shockingly un-terrible, but genuinely a good primer for other people who might be interested in this topic. As someone who finds this topic important enough that I’m buying and reading multiple books on it and signing up for courses, it’s definitely a bit jarring to see an AI-created podcast sum it all up effortlessly.

Here’s a summary of the podcast (of course, also AI-generated) to get a sense of what it’s about:

In this episode, we dive deep into the world of systems thinking—a powerful way of looking at the world that helps make sense of complex problems. Whether it’s tackling big issues like climate change or navigating everyday challenges at work, systems thinking provides a mental framework to understand how all the moving parts connect. Using real-world examples like controlling mosquito populations in Borneo and handling workplace conflicts, we explore how seeing the relationships and feedback loops in a system can help avoid unintended consequences and create positive change.

We also clear up some common misconceptions: systems thinking isn't just about the big picture; it’s about understanding both the details and how they fit into the whole. And, no, it’s not tied to any one fancy tool or model. It’s a skill that anyone can learn and get better at over time. If you want to dig deeper, we highlight some must-read resources, including Donella Meadows’ work on leverage points.

So, whether you're trying to solve tough challenges or just make better decisions, this episode will give you new tools to see the world differently and make a bigger impact. Tune in and start thinking like a systems thinker!

“I’m Ira Black Mirror, and you’re listening to This Artificial Life”

The tool, called NotebookLM, is not really pitched as a podcast killer, out to dethrone Gimlet and Headgum podcast studios. The voices are uncannily human, but that still leaves them uncanny. As Johnson puts it, even with the drastic improvements in AI voices (which can add a plethora of filler words and faux pauses), “they still don’t know how to make us laugh.”

NotebookLM emerged from a lineage of tools that, as Johnson notes, were among the first public-facing AI tools to get really good: translation tools. Effective translation is, he asserts, built on a simple instruction: “Take this input and turn it into this new output, but keep the meaning intact.” That ability seems to have unfurled into something orders of magnitude more complex.

It wasn’t long ago that “who wrote this, Google Translate?” was shorthand for poorly phrased robot-speak. Now, when I use Google Translate to convert my WhatsApp messages to Spanish, I find the output has better grammar than what I put in.

Johnson explains how this ability of AI to translate from one language to another, without losing the meaning of the original, was expanded to give the models the ability to create metaphors. “We had a feature we called ‘Explanatory metaphor,’” Johnson notes, “that would take any text you gave it and generate a helpful metaphor to describe the core ideas in the passage.”
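If Johnson’s framing is right, translation, metaphor generation, and even the podcast feature all share one instruction shape. Here’s a toy sketch of that idea; the complete function is a hypothetical stand-in for whatever LLM completion API you have on hand, not anything Google has described.

def complete(prompt: str) -> str:
    # Hypothetical LLM call; wire this to your model of choice.
    raise NotImplementedError

def transform(text: str, target_form: str) -> str:
    # "Take this input and turn it into this new output, but keep the meaning intact."
    return complete(
        f"Rewrite the following as {target_form}, keeping the meaning intact:\n\n{text}"
    )

# The same shape covers each feature:
# transform(passage, "Spanish")                                -> translation
# transform(passage, "a helpful metaphor for its core ideas")  -> explanatory metaphor
# transform(passage, "a two-host podcast conversation")        -> a NotebookLM-style episode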

Thinking of these tools as translation machines on steroids is pretty interesting. Johnson’s thoughts align with something I’ve considered: there’s an accessibility component to tools like this. I am not dyslexic, and I have a perfectly fine reading comprehension level, but I have ADHD. If I can absorb information by any means other than reading it, I will. If I have a preferred learning style, it is quite literally people bantering about something while I passively absorb it, and I have tremendous recall for things I’ve learned this way.

There’s a whole slew of people my age like me. We could probably be identified by some sort of cognition test, but the easier way to find us is that most of what we know about culture came from VH1 talking-head shows of the 2000s (e.g. “The 100 Most Shocking Moments in Rock & Roll”, “I Love the ’80s”). That has since morphed into listening to hundreds (if not thousands) of hours of podcasts about movies and TV shows, most of which I haven’t seen, though I can tell you the cast deep into the call sheet and probably the opening box office.

Some people are visual learners. Some are experiential learners. I am, apparently, a talking head recap show learner.

Johnson writes on his Substack that this is partly the point of the “podcast” tool. The project is not (as yet) about creating a monetizable, parasocial relationship with a salary-free host who can drop infinite episodes a week. It’s more of an assistive learning tool.

… not everybody learns or remembers most effectively through reading. Many of us are auditory learners, or just prefer to take in new information while walking around or driving, when reading is impossible. And we know from the massive rise in podcast listening that one of the most powerful ways to understand a topic is to listen to two engaged, thoughtful people having a conversation about it.  

I agree. And dissent.

There is something depressingly, familiarly “easy button” about all this. On the one hand, there are innumerable barriers to higher learning, and much deep knowledge is hidden behind unnecessary jargon and inscrutable concepts. And perhaps more to the point, much ignorance is hidden behind things that “seem complicated,” which only those with the correct anointed knowledge can understand. This is how McKinsey makes money. Puncturing the illusion of false complexity, like the goddess Kali severing the heads of man’s illusions, is a divine endeavor.

From “Kali: She Transforms the Lives of Those Who Honor Her” by Tabby Biddle:

The severed heads that Kali holds or wears symbolize the cutting away of illusions. Each head represents a different aspect of human attachment—desires, ego, ignorance, and delusions—that prevent one from realizing their true nature and achieving spiritual liberation (moksha).

But shedding false complexity is meant to make room for experiencing the true complexity of the world, something that requires directed attention and self-realization. As the Greeks said: “χαλεπὰ τὰ καλά” (beautiful things are difficult to attain).

Systems Theory is not, as the documents I uploaded to Google NotebookLM (and the podcast it produced) correctly explained, a “science”. There are no first principles as with Newtonian physics or cellular biology. The “object” of study is conceptual: “systems” as wholes. “Systems Theory” is really just a loose canon of texts that hover around certain expressions of a cognitive capacity called “systems thinking”.

It’s about looking for patterns in the multitude to find insight into the specific thing you’re studying. This isn’t empirical knowledge; it’s something closer to wisdom.

And wisdom can’t be reliably contained in a text, and definitely can’t be poured into your ears via a ten-minute conversational AI podcast.

Socrates feared that books would make us soft. That we’d lose our taste for philosophy if we suffered the misconception that wisdom could be stored on paper. We’re obviously only becoming further abstracted from wisdom, a black mirror reflecting back a black mirror, our digitized books dissected by mindless neural networks.

If bots banter about Systems Theory on an RSS feed, and no one hears it, is anyone the wiser?
