Ian Leslie’s work focuses on human behavior. He has appeared on two earlier episodes of EconTalk (Ian Leslie on Curiosity and Ian Leslie on Conflicted). In this episode, host Russ Roberts and Leslie continue the conversation, taking up Leslie’s thesis that AI is already changing how we think. It isn’t just the machines that are imitating us; rather, we have begun to imitate the machine in profound ways that are changing what and how we create.

Roberts and Leslie spend some time discussing how students are actually taught writing using a very simple algorithm: the five-paragraph essay. As a former teacher, I’m glad I left the classroom before the emergence of ChatGPT and similar tools. Yet some of the cultural and technological forces Leslie pinpoints could already be seen in students’ writing and thinking long before generative AI became publicly available. Certainly we have been living for many years in a world shaped by algorithms: social media and search algorithms have shaped our information streams and social circles for years. The ubiquity of autocorrect and the digital organization of information affect the way adults and children learn. The simple isolation that personal devices enable also changes how we encounter information, process it, and share it.

If Ian Leslie’s argument is correct, algorithmic transmission of knowledge is even older than these technologies. As a product of late-20th- and early-21st-century schools, and a teacher, I’m tempted to agree. As Leslie points out:

…essentially we’ve taught them–we taught many of them–that good writing means following a series of rules and that an essay should have five-part structure. So, instead of helping them to understand the importance of structure and the many ways you can approach structure and the subtleties of that question, now, we tend to say, ‘Five points.’ That’s what you want to make in an essay. The student goes, ‘Okay, I can follow that rule.’ Instead of helping them to understand what it means to really nail or at least give your writing depth and originality and interest, we say, ‘Here are the five principles you need to follow. Here’s how long a paragraph should be. Here’s how your sentence should be. Here’s where the prepositions go or don’t go.’

And, we’re basically programming them. We’re giving them very simple programs, simple algorithms to follow.

And, the result is we often get very bland, quite shallow responses back. So, it isn’t actually any wonder that ChatGPT can then produce these essays because they’re basically kind of following a similar process. That ChatGPT has a huge amount of training data to go on, so it does much more quickly.

And so, we should be alarmed by it, but not because it’s on the verge of being a kind of super-intelligent consciousness, but because of the way that we’ve trained ourselves to write algorithmic essays.

Small wonder that the modern school set-up has relied on these “simple programs, simple algorithms.” Quality education at scale is not a simple proposition, and the simplistic five-paragraph formula does reliably produce a mediocre but acceptable product. This goes a long way toward explaining the mediocrity common to the average student essay before generative AI became widely accessible. Now the problem is more direct: essays actually compiled by generative AI. I do not envy teachers these days who are trying to teach around this, but the problem already existed before the latest, most powerful tool came to be. Now it has accelerated.

Of course, the applications of AI extend far beyond the classroom. Most of us use Google or other search engines as a quick way to look up information or images. Now that much of that content is influenced or created by AI, our perception of reality is filtered through the machine. One strange example: I was browsing my Reddit feed and saw several complaints that wedding floral images are being generated by AI and posted on sites like Pinterest, which many people use for design and planning inspiration. Why was the poster complaining? Because the bouquets of wildflowers in some very realistic-looking images were physically impossible: the species of flowers pictured do not have strong enough stems to be incorporated into a bridal bouquet.

  1. What are you noticing in your environment that is changing because of AI? Has it changed how you interact with others professionally or personally? Does this, on the whole, make your life better or worse?
  2. What counts as AI? Autocorrect and autofill are much simpler than ChatGPT but even more ubiquitous in our digital world, and they have the potential to change the words we use when communicating with each other. Do they make our communication better, or simply more algorithmic?
  3. Regardless of the level of technology, some part of learning is algorithmic, repetitive, and not particularly creative. Beginning piano students learn scales, math students memorize formulas, and writers and artists learn by imitation. Where does imitation stop and creativity begin? And when should that happen? 
  4. What makes creativity human? What is it about machine output, even very complex output, that is lacking, or inauthentic?


Additional links: 

Ian Leslie on why curiosity is like a muscle Quercus Books

Ian Leslie on Why We Must Continue to Learn and be Curious The Royal Society of Arts



Nancy Vander Veer has a BA in Classics from Samford University. She taught high school Latin in the US and held programs and fundraising roles at the Paideia Institute. Based in Rome, Italy, she is currently completing a master’s in European Social and Economic History at the Philipps-Universität Marburg.