Digitally Curious
Digitally Curious is a show all about the near-term future with actionable advice from a range of global experts. Order the book that showcases these episodes at curious.click/order
Who is your host, Andrew Grill? He’s the AI expert who speaks your business language. After 30+ years building tech solutions at companies like IBM and a range of high-tech startups, Andrew now helps executives navigate AI without getting lost in the complexity.
He has held senior leadership roles, including Global Managing Partner at IBM, and has collaborated with C-suite teams from organisations such as Shell, Vodafone, Dell, SAP Concur, Nike, Nestlé, and the NHS.
Andrew has delivered 700 keynotes in over 50 countries on topics such as generative AI, quantum computing, digital transformation, and the future of work.
Ranked among the world’s top 10 futurist speakers and a finalist for AI Expert of the Year, Andrew was recognised in 2025 on the AI 100 UK List as one of the country’s leading voices in responsible Artificial Intelligence.
He is the author of Digitally Curious (2024), a bestselling guide to navigating the future of AI and technology, and host of the Digitally Curious Podcast (since 2019), where he translates complex trends into actionable insights.
Andrew is a regular media commentator, featured on BBC Television & Radio, Sky News, LBC, and in publications such as the Financial Times, The Guardian, and The Economist.
Find out more about Andrew at actionablefuturist.com
S8E2 - When AI does the thinking, how do young people learn to be critical thinkers? The urgent warning for those under 25.
What happens to a generation growing up with AI always on hand to do the thinking for them?
That question sits at the heart of this episode, and few people are better placed to answer it than Tim Cook, an elementary school teacher in Amman, Jordan, who has spent over a decade in international classrooms across five countries.
Tim writes the Algorithmic Mind column for Psychology Today, and his research on cognitive offloading and child development has been making waves well beyond the education sector.
In Andrew's book Digitally Curious, he argues that curiosity and critical thinking are the most important skills in an AI-powered world.
Tim's work takes that further, asking a harder and more urgent question: what if the generation now entering school never develops those skills in the first place?
In this episode
- The classroom as laboratory. Tim has been noticing a shift in children's relationship with struggle for most of a decade, well before AI arrived.
- Cognitive atrophy versus cognitive foreclosure. An adult who offloads tasks to AI is atrophying a muscle they already built — it can be rebuilt. A child who offloads a task they have never learned is foreclosing a developmental pathway that may never form.
- The homogenisation problem. When a health teacher set a creative writing task designed to be AI-proof, 80% of students submitted the same Mission Impossible-style hero's journey narrative.
- The AI audit problem. To check AI output, you need domain expertise. But a child is still supposed to be building that expertise. You cannot audit what you do not yet understand — and so the substitution becomes foreclosure.
- AI as provocateur, not thinking partner. The goal is to use AI to surface your own expertise, not to let it generate the thesis.
- Cognitive Privacy. Tim introduces his Cognitive Privacy Project: AI is the first tool in human history to collect our cognitive behavioural data.
Resources
- Tim Cook's Psychology Today column — The Algorithmic Mind
- Adults Lose Skills to AI. Children Never Build Them — Tim Cook, Psychology Today, March 2026
Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your Host is Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
Andrew on YouTube
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
Why AI Changes Learning
SPEAKER_02What happens to a generation growing up with AI always on hand to do the thinking for them? That's one of the most important questions of our time, and few people are better placed to answer it than Tim Cook, an elementary school teacher in Amman, Jordan, who has taught in international classrooms for over a decade across five countries. His proximity to real children and real learning is the foundation of everything he argues. He writes the Algorithmic Mind column for Psychology Today. In my own book, Digitally Curious, I argue that the most important skill in an AI world is the curiosity and critical thinking needed to interrogate what the technology gives you, not just accept it. Tim's research takes that further and asks a harder question: what happens to the generation growing up now, for whom AI has always been there to do the thinking? The answer he arrives at should concern every parent, every educator, and every organization that depends on human judgment. Welcome, Tim.
SPEAKER_00Hey, nice to meet you.
The Boat Experiment Problem
SPEAKER_02This issue is something I get asked about all the time by clients, and the only way I learn how to answer it and form a perspective is by asking experts like you. So thank you for your time. Let's look at the classroom as the laboratory. You teach elementary school in Amman. You're not a theorist looking at classrooms from the outside; you are actually in one every single day. So what did you start noticing in your students that made you think something's fundamentally changed?
SPEAKER_00I've been in elementary school for a long time, and I have noticed a change over the past decade before AI even came on the scene. And I'll give you a story to illustrate it. So I started my teaching career in Palestine in 2014. And at that point I was teaching science. And I used to run this experiment, which was a really fun experiment. Um it had a lot of constraints and everything like that. And what I did was I gave the kids some tinfoil, some paper, some tape, and I said, we're gonna build a boat, however you want, and at the end of this hour, we're gonna see how much it can hold. You know, it's messy, it's loud, the kids are screaming, they're collaborating, the boats are sinking, they're trying to make new ones, there's a time constraint, and all of this is happening all at once. I tried the same experiment with my third grade class last year. The first thing that happened to about 50% of the students was I'm gonna look up a boat with my iPad. I'm gonna look it up on Google. Why would a kid struggle when they can just retrieve this information in 15 seconds? If somebody already designed a functional working boat, why would I try to just iterate one from scratch with nothing to go on? And I think that technology or technological scaffolding has changed the way kids engage with struggle or friction in the classroom. And I have seen this gradually play out over the course of the last decade.
SPEAKER_02So when you say they avoid the struggle, they're not thinking for themselves, not being curious about how to arrive at the answer to the problem?
Curiosity Needs Friction
SPEAKER_00The follow-up to that was once I removed that option and the kids still had to sit with that problem, they were able to do it just fine. I think that once you introduce technology, or in this case AI, you're allowing this frictionless technology to exist. And we as human beings are hardwired to try and offload as much effort as we can. You can't blame a kid for wanting to put in the least amount of effort to get the best result. And this is inevitably what happens when you introduce technology that can do the work for you. When it's not an option, it introduces the concept of struggle. You have to struggle, you have to figure it out, you have to work hard, you might fail. And then ultimately, when you succeed, you feel really good about it. You feel really good about yourself. This is what I really think the point of learning is: to try, try again, and eventually succeed. I'm not reinventing anything that's already been in existence; this isn't some brilliant new idea that's just come into the education space. It's existed for a long time: we need to struggle to learn something. And in your introduction, you mentioned two important things. You mentioned curiosity. And I would say that intrinsic curiosity is one of the most unautomatable skills. When I say unautomatable, I mean a machine can't replicate a human's intrinsic curiosity. It's personal; it comes from our own schemata, our own lived experience, our own cultural context. But the drive to learn something is what makes a tool like AI useful, whether it's useful in high school or useful later in life.
SPEAKER_02So it's been a long time since I was in an elementary classroom. Back then, I think I was curious because I asked the teachers questions. I probably annoyed them by asking, But why? Are you seeing any change in the curiosity of students between your classroom now and when you were a kid?
SPEAKER_00Elementary school students are very curious. Kids want to ask questions. That's how they explore the world. When kids ask questions, you should engage with those questions, even if they're irrelevant, even if they're a tangent. Because helping kids discover answers to those questions, or pushing them to discover the answers themselves, again, this is how you develop a love of learning. But what eventually happens as kids go through school? The younger you are, the more questions are allowed and the more tangents are allowed. As you age, you move towards this concept of credentialing: I have to get a certain grade, or I have to meet this standard or this rubric. And this creates a different scenario where kids are less intrinsically motivated to pursue their own curiosities. They're like, okay, well, I have to get this done, I have to get this A. How do I do that in the most efficient and easiest way possible? And now that we've introduced AI into the school workflow, this creates some problems, because the credentialing system still says: this is how you're going to get into a good university. You have to do well in these classes, and the way you're going to do well is to get a good score on these rubrics. And these rubrics are assessing your ability to write an argument. Well, if AI can write the argument for me, why can't I just use that to help me write the argument? What ends up happening is you never learn how to write the argument. So you have functionally lost the capability to do so. It's never developed.
Grades Push Kids Toward AI
SPEAKER_02So you've talked about elementary children; that's the day-to-day thing you look at. You've said that when you talk to high school teachers now, they describe a pattern: when they ask students to engage with a text, they get back nearly identical ideas, the same bullet points, the same structure. A few years ago, they would have got a real variety of thought. So can you describe what you're hearing from those teachers, why you think it matters, and how it connects to what you're seeing in your own elementary classroom?
SPEAKER_00This is actually really fascinating, because I've talked to high school teachers in multiple countries, and they've all told me very similar stories. I'm going to share one with you right now, and this one's very funny. I was talking to a health teacher, and the health teacher was thinking, okay, I know my high school kids use AI, and I'm going to try to figure out an inventive way to come up with an assessment where they're not going to be able to use it. So what he did was say, okay, you're going to write a fictional metaphor, a narrative fiction story, that's kind of a metaphor for childbirth. Now none of the kids in the class have experienced childbirth, so you would expect a ton of variance in a fiction, make-believe story. You'd expect some to be fun, some to be ridiculous; at least I know that if I was doing that, I would probably make it ridiculous. A few weeks later, he gets the assignments and he's reading through them, and he's seeing, okay, well, 80% of these stories are this Mission Impossible-style hero's journey. He's telling me this, and I'm like, there's no way that's true. He's like, yeah, they all go: everything was normal at the start of the day, and then something happened, symbolic of the contractions, and then it became this Mission Impossible story where the hero eventually prevails at the end. And I thought, that's really interesting, I'm gonna test this myself. So I opened up ChatGPT on my phone and put that prompt in. And what do you know? The first result was this hero's journey. He had suspected that kids still used AI, even though he tried to come up with a way to AI-proof it. I've heard other stories from history teachers and economics teachers. It concerns me that kids are thinking through the same model, and the model doesn't give a wide degree of variance in its responses.
And I think you can test this yourself if you want, by giving AI the same prompt in an incognito window 10 times; you'll see that there's very limited variance in the way it outputs information. If you have students all using the same tool, where there used to be a classroom of variance, where there might have been some really terrible things and some really great things produced, now everything has converged into this average statistical output. This is concerning from a school perspective, but also from a future workforce perspective: a business is going to have people that generally just think the same way through the same model, and it introduces a kind of single point of failure. You want variance, not convergence, in the workforce, especially at creative companies or companies that are driving innovation. You want people to be able to reason their own way, to have their own independent thoughts.
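Tim's suggested experiment, repeating the same prompt and eyeballing the variance, can be made a little more concrete. A minimal Python sketch (the step of actually collecting the 10 chatbot responses is assumed and not shown; the example strings below are hypothetical) that scores how similar a batch of responses are to one another:

```python
from difflib import SequenceMatcher
from itertools import combinations

def average_pairwise_similarity(responses):
    """Mean similarity (0..1) across all pairs of responses.
    Values near 1.0 suggest the model is converging on
    near-identical output for the same prompt."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical data: three near-identical "hero's journey" openings
# versus three genuinely different story openings.
converged = [
    "The day began like any other, until the alarm sounded.",
    "The day began like any other, until an alarm sounded.",
    "The day began like any other, until the alarms sounded.",
]
varied = [
    "A submarine drifted beneath the ice.",
    "Grandma's kitchen smelled of cinnamon.",
    "The stock market opened in chaos.",
]
assert average_pairwise_similarity(converged) > average_pairwise_similarity(varied)
```

If responses to the same prompt score close to 1.0, the model is converging on a single answer; a classroom of genuinely varied writing would score far lower.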
High School Work Starts Converging
SPEAKER_02Scalability aside, could you test for understanding not with written work, where there's a chance you can cheat, but with a stand-up viva, where it's books down, just you and me? My life at the moment is a lot of public speaking. So I'll do a prepared talk, then I'll go to Q&A with a room full of people, no notes in front of me, literally ask me anything, and I have to answer on the spot. And I actually love that, because it really tests my understanding. If I've learned everything I need to, could I answer any question in that room? That is a viva. I cannot cheat there. Everyone can see me; I have no aids, I'm not using ChatGPT. Now you might say, well, Andrew, we couldn't possibly scale that: with 30 kids in the class, you'd have to run 30 of those. But is that where we might have to get to: books down, and you prove that you know what you know by telling me in a viva? Is that realistic, or am I just simplifying things?
Books Down Proof Of Understanding
SPEAKER_00We're doing it right now with a podcast. I've actually introduced podcasting into my class this year as an alternative way for students to express what they know. Because I think that being able to verbalize your own reasoning, while often messier than when you write, does help you think deeper, and it also elicits a kind of metacognition about how you're approaching or thinking about things. And it does prove expertise. One of the things that's very different today in education, especially with the introduction of AI, is that when we went to school 20 or 30 years ago, or whenever it was, if you wrote a competent essay, it was assumed that the author was capable. There was a connection: that person was a critical thinker, they had done the analysis, they had done the synthesis, and they produced this piece of work. You could assume that connection. Now you can create a competent essay and no one knows if you know what you're talking about at all. You can produce something that's fantastic. Oh man, this guy really did his research. But if it came down to questioning, that person might not actually know what they wrote. There are actually a lot of studies that show this. There was one by Jakesch in 2023, and he showed that when you co-wrote with AI, you started to shift your own beliefs towards those of the AI's output. This is concerning on a lot of levels, but when AI begins to influence your own reasoning patterns, it becomes problematic. Verification of AI output is not the same as authorship. And I think that we need to move towards assessing something different in schools than maybe just an essay or a test or writing.
And I do think that there's a place for live reasoning, or at least for teachers who know how to provoke students to think a little more deeply, to make them justify themselves, to introduce the concept of friction more into the classroom. Obviously, there needs to be some thought about learning design; the traditional way school has been run over the past 40 or 50 years may not make it through with this technology.
SPEAKER_02Yeah, that's what worries me. And I don't have the answer, but again, I'm thinking deeply about this, and I get asked all the time. That's why it's good to have you provoke my thoughts. You work a lot with elementary age children, and a lot of the debate around AI and education focuses on older university students or professionals. So why does it matter that you're seeing these effects at such an early age? What do you believe is the developmental significance of this?
Development Windows And Early Risk
SPEAKER_00Dr. Horvath recently did a Senate hearing in the United States; it was in January. I listened to probably about eight minutes of it. And he said that Generation Z was the first generation ever that was less cognitively capable than their parents in almost every metric: reasoning, IQ, literacy, mathematical computation. Now he attributed it to technology entering the classroom. He said there's a correlation between when tech, like iPads and one-to-one devices, enters the classroom and the decline in all of these scores. Now it is correlative, and there's not hard evidence of this. But the truth is that a lot of technology, and I've said this earlier in the podcast, has made learning too easy. There's always an out. If I have a question, the first thing I can do is Google it. I never have to sit with it. I don't have to sit with it for three minutes and be like, huh, let's really think about this question. Why is the sun hot? That's a question I had as a kid. I couldn't Google it. There were a couple of ways I could find the answer. I could ask my teacher, I could go try to find a book or ask the librarian. Ultimately, I had to see if I could even understand why the sun was hot from that material or book. So I might need some scaffolding to get there in the first place. It was a process to learn something. Now it's very easy: why is the sun hot? Okay, I look it up, and now I think I know it. AI compounds this problem, because even the situation I'm describing still involves work: I still have to go to Google search, I still have to choose the correct source, I still have to read it, and I still have to understand it and synthesize it into the knowledge I already had in my brain. With AI, it does all of that for you. So if I ask ChatGPT why the sun is hot, it synthesizes all of that information for me and produces a clean output in a confident way.
And I start thinking, okay, well, now I know. But I've done zero work to get there. And the fact is that you won't know and you won't remember.
SPEAKER_02I grew up as an Encyclopedia Britannica kid. My dad shelled out a lot of money back then. They still have it; I went back home to Australia and it still sits there, unused now. But I remember opening it up and flicking through. What I do remember, as an aside, is that the person who sold Dad the Encyclopedia Britannica did a really good job. We were at one of these Easter shows as a family and walked past the Encyclopedia Britannica stand, and the guy jumped out of his stall. What's your name? Ron? Oh Ron, do you want your child to be successful? Which parent doesn't? And he did such a great sales job that there and then my father shelled out about $2,000 Australian dollars on this Encyclopedia Britannica set, which, we know, was out of date the day it was printed. But that's how I learnt. It wouldn't give me the perfect answer; I would have had to flick through and do some more research. But back in the 1970s and 1980s, when you didn't have these information retrieval systems on tap, you had to work harder. Maybe that's made you and me a bit smarter, because we had to work harder. We had to exercise our brain more to get the answer.
SPEAKER_00I'm not sure that it made us smarter. And I would be hesitant to say that kids don't have the capacity to develop the same neurological connections. But that struggle is what builds the neurological connections. As children, we grow through developmental windows. From zero to five, it's language. We move into the next window in early elementary school; that's really when curiosity takes off and we start learning about reasoning for the first time. Then, when we get into middle school with hormones, there's a big recycling and reformation of all of our neurological connections, and it has to do with cognition but also with how we build relationships; there's the social aspect as well. Then we move into higher order thinking and reasoning skills. And I'm not going to pin an exact age on that; it's a big bracket, and it can happen at varying ages with a high degree of variance from kid to kid. But the truth is that the part of our brain responsible for deep abstract reasoning, judgment, and decision making, the prefrontal cortex, isn't fully developed until 25. Anytime you introduce a technology into the workflow of those developmental windows, you risk changing the way those neurons end up forming. While Dr. Horvath did say that Generation Z overall had a lower IQ than their parents' generation, is it because they were less smart, or because something got in the way of allowing those connections to form as deeply as their parents'? I would suspect it's the latter.
SPEAKER_02So what you're saying is that if you're under 25, AI could be a health hazard to your learning.
Atrophy Versus Foreclosure
SPEAKER_00I think that under 25, we have to be more cautious about when it's introduced and how it's introduced. One of my favorite pieces of research is by Michael Gerlich. He did a study showing that heavy AI use across all age levels had a negative correlation with critical thinking. Interestingly enough, the worst effects were in the 17 to 25 year old group, and the group that sustained the highest critical thinking scores, regardless of how much they used AI, was the 46 and up group. You might think, well, what about the middle group? It was linear: the older you were, the more critical thinking skill you maintained, even through high AI usage. And that's because when you're offloading at 46, you're offloading things that you already know how to do. Your choice there, while it does affect reasoning in the short term, is: I'm going to be a little bit more efficient in the present, I'm going to offload some of this, but ultimately, if you took AI from me tomorrow, I would be able to develop it again. I've written an email a thousand times; I know how to write an email. You know what, it takes me 15 minutes to reply to this person; if I use AI to help me, it's just going to take me five. A kid hasn't written an email a thousand times. They don't know how to respond to someone, and by not going through that process, they never learn the skill of doing so. If a kid says, I need you to write this email, they don't have the foundational skills to do so. So they're offloading something that they never learned how to do in the first place. And this becomes a completely different problem. It's what I describe as the difference between atrophy for adults, where you're atrophying a muscle you already built, versus foreclosure for children, where they never develop the skill in the first place.
SPEAKER_02Well, let's get into that, because there's a distinction that's really stayed with me since I read your work: the difference between cognitive atrophy and cognitive foreclosure. Maybe you can explain it in plain terms and tell us why you think it matters so much.
SPEAKER_00We need to understand what cognitive offloading is and why AI is fundamentally different from the way human beings have offloaded in the past. Cognitive offloading isn't a fundamentally bad thing; it's actually quite useful. If I use a calculator after I've learned how to divide, and I'm using it to do a long division problem, I'm still inputting the function. I still have to understand the function in order to use it; it's just calculating the operation. So I'm offloading this so I can think about higher level reasoning in mathematics, and this technology has allowed the progression of mathematics to go much quicker. We've offloaded fact retrieval to things like Google search. We've offloaded phone numbers and addresses into our phones; we don't remember those types of things anymore. Even pictures or photographs we've offloaded into our phones as memory. These are all nouns. We've always offloaded nouns; we've offloaded things. AI changes this. We're not offloading things anymore, we're not offloading fact retrieval. We're offloading the synthesis, we're offloading the judgment, we're offloading our reasoning, and letting AI make those decisions. This is a fundamentally different thing; never in our history have we offloaded these verbs before. I'll try to describe it in plain English. I can walk just fine. But if I decided I'm not going to walk for the next year, I'm just going to sit in this chair, what would happen is my muscles would atrophy. But if I stood up again, with physical therapy I would be able to learn how to walk again. It's cemented in my neurological system; I know how to walk. If I never learned how to walk from birth, that skill doesn't exist. The effort it would take to learn how to walk when you've never done it before would be much harder than the effort it would take for me to relearn that skill.
And this is why introducing AI in any important developmental window, without very clear constraints, will affect the way that students or children develop their actual reasoning. They'll begin to think like an AI. They'll begin to think like a large language model. They'll begin to be homogenized. They'll begin to converge on the same reasoning patterns. This is a broad problem, not just for education, but also for civil society, for national defense, for a functioning democracy.
SPEAKER_02Let's just test that. You argue that cognitive foreclosure may be permanent: the neural pathways that never form may not be recoverable later. How confident are you in that claim? Is this settled neuroscience, or a well-reasoned hypothesis that the evidence is pointing towards?
Why Waiting For Proof Fails
SPEAKER_00Well, the evidence isn't settled, because, A, it's unethical to run experiments on children. It's impossible to collect actual data on this: a control group of 14-year-olds using AI and a control group not. So the evidence isn't settled, and it won't be settled, but the precautionary case is very much overwhelming. And you can see this by looking at past harms; child harm is always a signal of what's to come. For example, when kids start getting sick in a city and we eventually test the water, we find out that lead is in it. The children are always the ones that get sick first, because they're smaller, they're younger, their immune systems aren't as developed; they get affected by lead quicker than adults. Similarly, as we're learning now in Europe, in the US, in Australia, all of a sudden, ten years later, we realize social media was very harmful to child mental health. There were signals along the way: the rise in anxiety and depression amongst kids, suicides, the inability to form healthy relationships, and other negative consequences of social media. These were signals that there was going to be a problem. And I think if we wait too long for evidence that AI is going to have a negative effect on learning, it will be too late. The signal is already there. The research on adults is already there. Waiting for proof is an excuse. We don't want to wait for a generation of students or adults that don't know how to make independent judgments or decisions because they've offloaded the development of their reasoning to large language models.
Personalization Marketing Versus Real Teaching
SPEAKER_02We kind of know how the story ends with social media. I was a social media pioneer, playing with it very early on, and we didn't think about the mental health harms that are there now. But let's look at what needs to change. You've been critical of the institutional response to AI in education, and this is a discussion I have with educators around the world: the combination of banning AI for students while professors automate their grading, and the investment in training teachers to use AI rather than preserving the human capacities it threatens. What would a genuinely serious institutional response look like?
AI As A Thinking Catalyst
SPEAKER_00I am very critical of the way institutions are deploying artificial intelligence. We cannot disassociate what's going on from economic incentive. These companies are offering something called personalization, and schools are buying it: this is the next thing, we have personalized learning, these AI tools can make sure each of our kids is working at their level, and everyone's going to learn better because of it. The marketing is itself flawed, because these companies aren't offering personalized learning; they're offering personalized content delivery, and there's a huge distinction there. In order to personalize learning, you need to know the human being sitting in front of you. You need their context, where they came from, their background, how they interact with you in class, what type of behavior they express. Personalizing learning is a very human thing. So while AI might be able to personalize your mathematical progression based on the level you're showing, it can't actually personalize your learning the same way a teacher can. So it's problematic for AI companies to say we need to train teachers and students how to use AI, because otherwise they won't know how, and this is going to change the way education works. I always say this: if anyone's ever used a large language model before, it's really not rocket science. The effectiveness of the tool comes from the effectiveness of the user, and people that use AI well are people that have very high domain expertise. If I use AI to try and explore genetics, I don't know anything about genetics. So in that interaction, what I'll get is competent-sounding information that I'll probably just agree with, because I don't have the domain expertise to really audit it.
But if you were to talk to an expert geneticist who used AI in their workflow to really push or augment their thinking, you would get a completely different experience, because they're using different vernacular, keying in on specific niche areas or terms that the general populace doesn't know. So having kids explore AI or prompting, or teachers using it to write lesson plans or give feedback, isn't really a high-level move. Kids have to develop expertise to use AI well; that is the foundational point of school, and, as you mentioned earlier, so is the curiosity to learn. AI is magic if you have the curiosity to learn, it really is. It's the greatest technology ever if you want to learn something. So I think that what they're selling to schools isn't really based on good pedagogy. It's based on marketing something that's fundamentally not what they say it is.
SPEAKER_02You advocate for what you call dialogic AI engagement: treating AI as a thinking partner to interrogate rather than as an answer machine to accept. How do you actually teach that in an elementary classroom? What does it look like in practice?
Parents Building Judgment At Home
SPEAKER_00I don't like using the term thinking partner, because it gives the impression that this is a collaborative experience where me and the large language model are equal partners. I prefer the term thinking catalyst. What you're shooting for is not collaboration, it's provocation: you want to use AI as a provocateur to surface your own expertise. The things you want AI to surface are your perspective, your personal or cultural context, your domain expertise, your lived experience, your past struggles, and how all of this connects to your current brain schemata, which is unique from everyone else's. This is what AI can do when you ask it to, quote unquote, provoke you. My third graders earlier this year were really struggling with mathematical fluency, especially with multiplication and division. I had experimented with having kids practice with these apps online; they're just games, engagement apps, gamification systems, and it wasn't really having a positive impact. One day these three boys came up to me and said, Mr Cook, can we make a math game? They knew I had been coding a few applications; they had seen me do it. So they said, can we make a math game, a multiplication one, to practice? And I said, yeah, that's actually a great idea, let's try it. But here are the constraints: before you even touch the artificial intelligence, you three are going to go into a room, I'm going to give you a microphone, and you're going to record what you envision your game to look like, why you're doing it, what the constraints of the game are, what the rewards are, and what it's supposed to be teaching you.
They went in the room and recorded this long conversation. They were laughing, having fun, thinking: no, no, let's do this; no, I think it should do this. I transcribed it, put it into a large language model, and said, can you ask these students five follow-up questions to clarify the points they were maybe unclear about? Then the students had to answer those five questions. After that, the artificial intelligence model had enough context to actually create a prompt that was representative of what the kids wanted to design in the first place. So when I threw that prompt into Claude to code the game, it was entirely different than if a kid had gone to Claude in the first place and said, hey, I want you to build a math game for me that teaches me multiplication and division. What ends up happening in that scenario is Claude will give you three different options: oh, that's a great idea, here are three different ways we can do it. This happens with adults too, and research has shown it as well: we tend to choose one of those continuations rather than say, no, those are not what my idea was at all. So when AI gives you choices, we tend to choose one of those choices. If you use AI like that, you're not thinking at all; you're not bringing your thesis to it, you're allowing it to drive the direction in which you go. At this point I think I've made five math games with students, so about 13 or 14 of the kids have grouped together. And if you saw the games, they're entirely representative of what you'd expect from a third grader. There's one where you're supposed to answer multiplication facts quickly while you're in a parachute going back and forth, and if you don't answer quickly, lightning comes down and hits the parachute.
There's another one that's modeled after a Pokemon game, where you move around a room and have to find a monster, and then you have to answer four questions and that attacks it. Never allow AI to develop the thesis for you. Always bring what you want to it. It's the same as if I say, I'm going to go to the store, I need some gray pants; that's what I want. Then you go to the store and across the hall you see some shoes, and you're saying, no, actually, I think I want those shoes too, I do need shoes. And then you see a sale and you're like, actually, that suit jacket is 40% off, it's a good time to buy that. You can't be influenced by the continuations of the model. If you came in to get the gray pants, get the gray pants and leave.
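Tim's context-first workflow (record the design conversation, have the model ask clarifying questions, then assemble one context-rich prompt) can be sketched in code. This is a minimal illustration, not his actual tooling; the function names and sample strings below are hypothetical.

```python
# Sketch of the "context before prompting" workflow described above.
# The students' own design talk and their answers to the model's
# clarifying questions are combined into a single build prompt,
# instead of letting the model propose its own continuations.

def clarifying_questions_request(transcript: str, n: int = 5) -> str:
    """Build the message asking the model for follow-up questions."""
    return (
        f"Here is a transcript of students planning a math game:\n{transcript}\n"
        f"Ask these students {n} follow-up questions to clarify unclear points."
    )

def build_game_prompt(transcript: str, answers: list[str]) -> str:
    """Combine the original transcript and the students' answers into one
    context-rich prompt for a coding model such as Claude."""
    answered = "\n".join(f"- {a}" for a in answers)
    return (
        "Build a math practice game matching this student design.\n"
        f"Design conversation:\n{transcript}\n"
        f"Clarifications from the students:\n{answered}\n"
        "Implement exactly this design; do not propose alternatives."
    )

# Hypothetical sample data standing in for the recorded conversation.
transcript = "We want a parachute game where you answer times tables fast."
answers = [
    "Lightning hits the parachute if you are too slow.",
    "It should cover the 2 to 9 times tables.",
]
prompt = build_game_prompt(transcript, answers)
print(prompt)
```

The key design point is the final instruction line: the model is told to implement the students' thesis, not to offer three continuations of its own.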
SPEAKER_02In my own book Digitally Curious, and I'll have to get you a copy of it, I write about my father Ron's habit of constantly asking me questions. For those that haven't read it: I started playing with electronics kits, and we would wire up these lights in parallel and they would glow brightly; if we then wired them in series, the lights would glow dimly. And I had this little logbook, I wish I could still find it, an A4 notebook, where I'd write down the outputs of the experiments we were doing. And one day Dad asked me, Andrew, so why do you think they glow brighter when they're in parallel versus series? And I said, I don't know. He said, why do you think? I want you to think about it. He made me think about the answer, and then he explained why it was, and I was only six years old, so I think I've been curious from a very young age. I then studied engineering, which forced me to think from first principles, and I've got some great examples there as well. But that thing of always asking Dad questions and him firing it back, saying, well, why do you think that's the way it is, has probably had a profound influence on my curiosity now, my digital curiosity. You write that parents need to play a role here, and this really hit home. What should parents be doing, and when should they start? Because I'm not seeing institutions teaching critical thinking, so maybe it's got to start at home.
SPEAKER_00That's a really interesting example and you have a fantastic father to ask questions like that. I'm curious what do you think those questions did to change the way you think? When your father was asking you questions, how did that change the way you thought about the world that you were navigating?
SPEAKER_02Well, I knew that he wasn't going to give me the answer straight away without a fight, that I had to think for it. And even if I ran out of options he would then explain what the answer was, but he made me stop and think: where might you go and find that out, let's do another bit of the experiment, let's wire that around again, why do you think that is, how would current flow through this differently? Because it's got to go through the lamp, and there's some impedance in there that resists it, pushing harder, and that's why the lights are dimmer. But it made me change gear and go, okay, he's not going to give me the answer. I've got to earn it and I've got to think harder, and probably if you'd had a skull cap on me you would have seen my brain thinking a lot harder, because I wanted to answer that. I wanted to show my father I'd done the thinking; I wanted him to be proud that I'd actually exercised some judgment there, and I think that's continued. I constantly ask questions. I'm really curious. So in answer to your question, what did it do? It made me think more, and it made me understand that I'm not always going to get the answer the easy way. It may be hard to get the answer, but in the end it's worth it. Let me give you another analogy; again, I mentioned I did an electronics degree. I remember a teacher called Gunnar Krieg. He was really tough. Before we'd go into a laboratory and do some electronics experiments, he would say, I want you to do the pre-work. I want you to do the theory and explain, when you're in the lab, what answer you're going to get: is it going to be five volts or seven volts, and explain how you got that. We had a limited amount of time in the lab, so if the answer came out at three volts, you would know it was wrong.
Now, at the time we thought he was such a taskmaster. I was asked 20 years later to go back to the institution and talk to the students, and Gunnar was standing at the back of the room. I stopped in my tracks and said, everyone, turn around and look at Gunnar Krieg. 20 years ago I thought he was a real taskmaster, but I've realized why he was doing that. He was instilling that critical thinking; he was making us work out what the answer should be before we went into the lab, so that once we were in the lab we knew what to expect. That was so important, and 40 years later I'm still talking about it. I'm not sure if Gunnar's still with us. But what my father did and what Gunnar did was instill that notion of: you need to know what to expect, rather than just have the answer put in front of you.
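Andrew's boyhood observation checks out with a quick calculation. A minimal sketch, assuming each lamp behaves as a fixed resistor (a simplification, since real filament resistance changes with temperature) and using illustrative values for the battery voltage and lamp resistance:

```python
# Why two identical lamps glow brighter in parallel than in series:
# model each lamp as a fixed resistance R across a battery of voltage V.
# V and R are illustrative values, not from the episode.

V = 9.0   # battery voltage in volts
R = 10.0  # resistance of one lamp in ohms

# Series: one current flows through both lamps; total resistance is 2R.
I_series = V / (2 * R)
power_per_lamp_series = I_series ** 2 * R      # P = I^2 * R per lamp

# Parallel: each lamp sees the full battery voltage.
power_per_lamp_parallel = V ** 2 / R           # P = V^2 / R per lamp

ratio = power_per_lamp_parallel / power_per_lamp_series
print(ratio)  # -> 4.0: each parallel lamp dissipates four times the power
```

Under this idealised model, each lamp in parallel dissipates four times the power it would in series, which is why the series lamps glow dimly.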
AI Free Zones And Cognitive Privacy
SPEAKER_00What you just described is like an intellectual inheritance, and it's very unique to you: your father's interactions with you, knowing you and knowing how to ask you questions, that's unique to you, and it's shaped who you are. As I've explored this AI space more, and especially given my deep concern with homogenization, I've started to realize that each of our individual inheritances, those learning experiences that you remember, and everyone has two or three they really remember, shape who you are and how you think more than any kind of content or tool that you use. It's those experiences; it's what makes us unique. So you answered the question: that's what parents and teachers can do that AI can't. They can know you well enough to push you towards those questions without giving you the answer, to create that friction, to drive you to want to learn it yourself, and to be there and support you and not say, that's a dumb question. Why is the grass green? Oh yeah, it's because of photosynthesis; how do you think you could find that out? How do you lead kids to solve a problem by themselves? My daughter was riding around the room on one of those little bicycle things and she got stuck behind a table, still on it, and she said, Daddy, I'm stuck, help. And I said, just figure it out. Three seconds later she figured it out: she lifted up the bike, turned it around, and went out the other way. But the thing is, kids want you as parents to solve their problems for them. And there are some problems that, if you're a supportive parent, you help solve. But you also want kids to have the capacity to solve their own problems and to make their own judgments, and sometimes that means they're going to make mistakes, and sometimes they might be bad mistakes, but those mistakes are what allow us to learn what we did wrong and what not to do again.
I do think the simplest thing you can do as a parent is be supportive and ask your kids questions. Make sure that they're curious, and never give them the easiest path to success.
SPEAKER_02And that's my point: it has to happen at an early age, when they understand your learning style. I think it's too late once we say, well, universities and secondary schools should be teaching critical thinking; you should have those skills already. But of course not every parent is going to know how to teach their kids to be critical thinkers. That's the challenge, and what we don't want them to do is default to AI. You've argued that AI-free zones, exercises with no AI tools, the viva I talked about, are essential, not as punishment but as diagnostics. How do you respond to educators and students who say that this is just preparing people for a world that no longer exists? We need to understand that AI isn't going away. There's that balance between allowing them to use the tools and not having them dependent on the tools.
SPEAKER_00I think the answer is pretty obvious, because using a large language model is not rocket science. There are a lot of variations of AI, but the one we think about in popular culture is the large language model, and if you have any kind of thinking skills or domain expertise, it's pretty easy to use. We don't really need to teach kids how to use it. And the walls of a school don't apply outside the school; make no mistake, kids are experiencing and experimenting with large language models outside of school. What we need to do is teach them the skills that will protect them from the harms of the model itself. And that's things like: how do you audit truthfulness or facts? How do you know when something is wrong? How are we able to recognize when a model is sycophantic and creating false confidence that doesn't exist? There are lots of experiments you can do to teach kids how ridiculous this tool really is, so that they can begin to see, okay, this is what the tool actually is. It is a fantastic tool that supports high-quality thinking and high-quality content knowledge, if I have it. It's a great tool to provoke me to think deeper about myself and elicit some things that maybe I haven't really explored, but it's also problematic. There's been a lot of research on how kids are using chatbots outside of school, and they're forming relationships and attachments with them. And this happens to adults too. AI is collecting our cognitive behavioral data. There's never been a tool that's done that before. When we use that in schools, you're giving the way in which I process my thinking over to a company whose economic incentive is to use that data to either profile you, or broadly profile a population, to create revenue for themselves.
While the data of nouns, like your social security number or where you live, is protected by law, the inferences that a behavioral algorithm makes about you based on your prompts are not yours; they're theirs. They can do what they want with that information. So we as institutions, whether it's schools or parents or governments, really need to consider this as a child protection problem. AI is not going away. Anyone can sign up for ChatGPT, even if you're 10 years old and you just click that you're 13. If any parent is ever thinking about allowing their children to experiment with AI, I would always recommend you really only be looking at Anthropic and Claude at this point. It's the company that takes ethics and cognition as seriously as they can, though they also have economic incentives to make money from all of the investment they're putting in.
SPEAKER_02So, final question before we go to the quick-fire round. You're a practicing elementary school teacher in a real classroom with real children. After a year or so of doing this research and writing, when you walk back into the classroom on Monday morning, what does all of this mean for how you actually teach? How has it changed your teaching style?
Quick Fire Round And Next Steps
SPEAKER_00Every day I walk into the classroom now, I think about constraints. What constraints can I put on the kids today? How can I introduce positive anxiety? How can I introduce friction? How can I make sure that they build resilience? Even just a simple task: build a bridge with two pieces of paper, that's all you get. A massive amount of constraints creates the conditions for collaborative problem solving and critical thinking, forces you to evaluate limited resources, and makes you creative, having to think outside the box. That's what I do every day now: what constraints can I put on my students today?
SPEAKER_02I want to delve into you, the human, in the quick-fire round, where we learn more about our guests. So I'm going to fire some questions at you. Window or aisle?
SPEAKER_00It depends on my wife and kids, so mostly the middle.
SPEAKER_02Your biggest hope for this year and next.
SPEAKER_00Stop being enthralled by AI marketing for schools.
SPEAKER_02I wish that AI could organize all of my photos. The app you use most on your phone?
SPEAKER_00Al Jazeera.
SPEAKER_02The best advice you've ever received.
SPEAKER_00Be genuinely helpful and good things will come of it.
SPEAKER_02What are you reading at the moment?
SPEAKER_00Currently I'm 20 pages into 1984, and it feels too real. I just finished Norwegian Wood by Murakami.
SPEAKER_02Who should I invite next onto the podcast?
SPEAKER_00Myra Cheng. She's from Stanford, and she's the one that does a lot of work on AI sycophancy.
SPEAKER_02How do you stay digitally curious?
SPEAKER_00I explore how algorithms are affecting our reasoning and what to do about it.
SPEAKER_02How do you want to be remembered?
SPEAKER_00I want to be remembered as the canary in the coal mine: someone who warned education about what could happen, and who said that we need to collectively, as teachers, schools, parents, and governments, take action to allow our kids to develop their higher-order thinking skills without intervention from artificial intelligence.
SPEAKER_02So, as this is the Digitally Curious Podcast, what three things should our audience do today to stay curious about the issues we've discussed?
SPEAKER_00I recommend everyone try the variance collapse experiment. It doesn't matter what model you pick. Choose any PDF, open it in an incognito window, and ask the same prompt ten times: summarize this. See the kind of variance you get, and you'll see that AI is a statistical predictor. It doesn't have as much variance as you think.
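One way to put a number on Tim's experiment is to collect the repeated answers and measure how similar they are to each other. A minimal sketch, with placeholder strings standing in for real model outputs; in an actual run you would substitute the ten responses from your incognito-window test.

```python
# Quantifying "variance collapse": if a model were a creative free agent,
# ten answers to the identical prompt should differ a lot; a statistical
# predictor tends to repeat itself. We score overlap with word-level
# Jaccard similarity (1.0 = identical word sets, 0.0 = no overlap).

from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average similarity over every pair of responses."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Placeholder outputs standing in for repeated runs of "summarize this":
responses = [
    "The paper argues AI personalizes content delivery not learning",
    "The paper argues AI personalizes content delivery rather than learning",
    "This paper claims AI personalizes content delivery not learning",
]

score = mean_pairwise_similarity(responses)
print(round(score, 2))  # a score near 1.0 means the answers barely vary
```

Jaccard on word sets is a deliberately crude metric; embedding-based similarity would catch paraphrases better, but even this simple version makes the near-duplication visible.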
SPEAKER_02Tim, this has been a fantastic discussion, one I've been looking forward to, and I've learned a lot from what we've talked about today. How can we find out more about you and your work?
SPEAKER_00I write the Algorithmic Mind column for Psychology Today. That's where you can find my thoughts on the intersection of AI, cognition, and child development. I also run the Cognitive Privacy Project, a new initiative exploring how the architecture of algorithms is creating a space where all of our, and our children's, thoughts are now captured by companies. This has never happened before, and I think there need to be new laws in place to protect everyone's right to think.
SPEAKER_02You'd love a book by my fellow podcast guest Susie Alegre, Freedom to Think.
SPEAKER_00I literally just listened to that podcast. She was talking about social media and how those algorithms have influenced the behaviors of children. So I listened to it, and my first thought was: apply the same thing to AI. It'll influence the behavior of children.
SPEAKER_02Thank you for your time, and thank you for writing about what you do; it made me think. And if you haven't grabbed a copy of the book yet, just search for it online.