Digitally Curious

S7 Episode 1: Augmenting Your Career in the Age of AI: An Interview with David Shrier

David Shrier with Actionable Futurist® Andrew Grill Season 7 Episode 1

Welcome to Season 7 of the Digitally Curious Podcast. This show is a perfect compendium if you’ve bought the book of the same name, and if you haven’t grabbed yourself a copy, may I suggest that’s something you should consider investing in - just click here or ask for it by name wherever great books are sold.

In today’s episode, we’re going back in time to an interview I conducted with David Shrier in late 2021 - well before ChatGPT hit the headlines. We discuss his book Augmenting Your Career: How to Win at Work in the Age of AI, and the book and our discussion are as relevant as ever, three years on.

What I found interesting while editing the episode on New Year’s Day 2025 is that it’s an excellent discussion about the fundamentals of AI without the hype of ChatGPT and Generative AI.

David Shrier, a trailblazer in technology and educational innovation, joins us in dissecting the future of work in the age of artificial intelligence. His perspective on AI as "augmented intelligence" challenges us to rethink the relationship between humans and machines.

With examples from around the globe, David advocates for AI literacy as a cornerstone of modern education, highlighting how nations like Denmark, China, and Singapore are setting benchmarks.

This episode promises to enhance your understanding of how AI can be integrated into educational ecosystems and professional life, guided by insights from David's book, "Augmenting Your Career."

Navigating the ethical landscape of AI is more important than ever, and we bring focus to the critical role of government regulators in this space. We dive into real-world examples, such as Google's image recognition failures, to underscore the importance of diverse data and ethical diligence. The conversation praises initiatives like those of the UK government and challenges the pitfalls of overregulation, drawing lessons from New York's BitLicense situation.

Our discussion is a must-listen for anyone interested in how regulatory frameworks can either propel or stifle technological innovation.

As AI reshapes the job market, we explore the shifting roles and opportunities emerging across various industries. From philosophy majors finding their niche in AI development to the resilience of healthcare and creative arts, there's a world of possibility awaiting those prepared to adapt. We also touch on corporate responsibility, using Accenture's workforce reskilling as a model for sustaining company culture amidst AI-driven changes.

Discover practical strategies for staying ahead in this rapidly evolving landscape, emphasizing cognitive flexibility and continuous learning as keys to thriving in the digital age.

More on David
David on LinkedIn
David's books

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious

Speaker 1:

Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond. Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.

Speaker 2:

Welcome to Season 7 of the Digitally Curious podcast. This show is a perfect compendium if you've bought the book of the same name, and if you haven't grabbed yourself a copy yet, may I suggest that's something you should consider investing in. Just search for Digitally Curious online, or ask for it by name wherever great books are sold. In today's episode, we're going back in time to an interview I conducted with David Shrier in late 2021, well before ChatGPT hit the headlines. We discuss his book Augmenting Your Career: How to Win at Work in the Age of AI, and the book and our discussion are as relevant as ever three years on. What I found interesting while editing the episode on New Year's Day 2025 is that it's an excellent discussion about the fundamentals of AI, without the hype of ChatGPT and generative AI muddying the waters. I really hope you enjoy this episode.

Speaker 2:

My guest today is David Shrier, who is a globally recognized thought leader in technology and educational innovation, serial entrepreneur and author. The portfolio of digital classes he created for MIT and the University of Oxford has engaged more than 15,000 innovators in over 140 countries and revolutionized the model for university short course offerings. He is a professor of practice in AI and innovation at Imperial College Business School, where he directs the Translational AI Lab. We're here to talk about Augmenting Your Career: How to Win at Work in the Age of AI. Welcome, David.

Speaker 3:

Andrew, thank you for having me. I can't wait to dive in.

Speaker 2:

It's a very relevant topic and a book for our time right now, because I was actually on stage yesterday with a bunch of accountants who are very worried that AI may take over their jobs.

Speaker 3:

Well, I will say that the accountants are right to be fearful. There's a massive effort underway in the services industries, like consulting, accounting and law, to replace people with machines.

Speaker 2:

So let's dive into some of the topics covered in the book. When we hear AI, everyone thinks it's artificial intelligence, but after reading your book, perhaps we should rename it to augmented intelligence. You talk about humans and machines. Is that a fair way of using the A and the I?

Speaker 3:

Well, I would argue no. Think of AI as the superset; that's everything. One particular kind of AI, and what I think is the most exciting and profoundly impactful, is where we put people and machines together, and that is augmented intelligence. But that's just one kind of AI.

Speaker 2:

Great book. You've written, as I said, six over the last six years. What was the inspiration behind this book?

Speaker 3:

Well, I've been messing about with AI since 1991. And I'm also a big fan of science fiction, so I read a lot of fabulists who speculate about what AI could look like 50 years or 500 years or 5,000 years from now. But what I've been seeing, particularly over the last five to 10 years, is that AI is now really getting useful and having more and more profound effects on society, and I think people are woefully underprepared for this change, and I think there is a massive lack of AI literacy. So I wanted to write a book that was really accessible, that's easy for people to read, easy to get into, but still substantive. That still gives you the essential knowledge that you need in order to understand this technology what it means for your job, for your future, for children's futures, and what it means for society at large.

Speaker 2:

AI literacy is something I want to cover because I think it's a huge issue. Should it be taught earlier in the education system? Primary school children are taught about how money works, because you need to know that. Your book basically talks about the fact that people are afraid because they don't know what they don't know. If they understand the opportunities of AI - and we go back to our accountancy example - if they know what the issue is and why they might be replaced, they can start reskilling now. So should we start earlier with AI and digital literacy?

Speaker 3:

Absolutely. I think that we need to be AI literate the same way that we're numerate, and so I think it's incredibly important to have not just the direct understanding of AI, but also some of its indirect but high-impact offshoots. So, for example, Denmark teaches 10-year-olds how to understand whether they're looking at fake news versus real news by providing critical thinking skills. When you think of AI literacy, you really need to think big, not small. You need to think of adjacencies, not just the facts of what's an expert system.

Speaker 2:

So you mentioned that Denmark is doing a good job there, but who else is getting AI literacy right? And for our listeners, where could they go today, apart from, of course, buying and reading your book? Where could they go for a sort of mini course on AI to get ready?

Speaker 3:

China is putting a lot into it and everyone needs to pay attention to that.

Speaker 3:

Singapore has an incredible array of educational programs, from very young ages all the way up through working adults, and you see interesting programs in select other countries like Canada, the United States and a few other places, in terms of improving AI literacy. As for the book, the nice thing about a book is it's 20 quid, and there's an audiobook version done by the wonderful actor Roger Davies, so if you don't want to read it, you can listen to it on the treadmill or while you're walking. If you want to go deeper, I've created a program at Imperial College together with two other professors, Chris Tucci and Yves-Alexandre de Montjoye. It's a joint program between the business school and the engineering school that lets you understand not just what AI is but what to do about it and how to use it. It's a six-week, entirely online program, and you walk out of it with a plan - a business plan or a strategy or something that you could apply at work tomorrow.

Speaker 2:

I just want to touch back on China, because every guest I've had on who talks about AI talks about the threat from China, and part of it, as you talk about in the book, is that the government has a national program around it. So what's the real threat here, and what are the other superpowers doing to respond?

Speaker 3:

There are several threats.

Speaker 3:

One of them is that China is very good at doing a coordinated response. They've aligned government funding, and a very large amount of government funding, with private sector support and enablement and a supranational strategy, meaning they've been actively using what's called the Belt and Road Initiative to push Chinese technology beyond the borders of China into numerous other nations around the world.

Speaker 3:

They've also been active investors in obtaining university innovation, so they fund research all over the world at the world's leading universities and then take those inventions and innovations back to China and build businesses around them. So they've got a very, very smart strategy. Something I will note is that the UK generates a tremendous amount of AI research, but then does not do anywhere near as good a job of commercializing it as China does. According to Tortoise Research, for example - this is something I cite in the book - the UK has two and a half times as many articles published by top-rated AI experts as China does, three and a half times as many as France, and almost 30% more than Germany. So the UK has global leadership in its research around AI, in creating inventions, but where it lags is in innovation, in translating those inventions into commercial practice and scale. That's an opportunity that I think the new national strategy the government has announced is seeking to address.

Speaker 2:

Now in the book you allude to something you're working on with the AI Institute. Can you tell us more about that?

Speaker 3:

Well, this is part of my small contribution to the effort, right? So I've banded together with some other folks, including a notable tech entrepreneur, Ben Yablon, and the opportunity is to build a better commercialization pathway for AI and, at the same time, to improve the governance of AI to make sure that it's trusted, that it's ethical, that it does what we want it to do and doesn't violate laws or go off and do whatever the AI itself decides to do. So that's an exciting new project which we hope to get off the ground.

Speaker 2:

Ethics and AI is a huge topic and I want to focus on that now because, again, talking to our accountants yesterday, I said one thing you need to be aware of is: who's programmed this? Where is the unconscious bias? A lot of people don't realize this; they think that AI just happens by itself. You've got to educate and train these systems, and that's done by humans, and humans have unconscious biases, as we all know. So ethics is a huge issue. In your book you talk about how Elon Musk is trying to scare us that the machines will take over and rule the world. I would hope that you're a little less science fiction about that. But where do you stand on ethics, and what should we be doing more of to ensure that the right ethics frameworks are built into every AI system?

Speaker 3:

First of all, I actually think that, if anything, Elon Musk is understating the problem. He does have a horse in the race, so to speak; he's a major investor in a big AI company and uses AI in Tesla and his other ventures. But nonetheless, AI does have the potential for significant harm to society and to humanity at large. I do not think it is predetermined. I do think that we can control our own destiny, and so that's where I may have a somewhat different framing of the problem, which is: let's all get smarter about this and then let's do something about it. Let's all work together to make AI work for the benefit of society. So, to answer your question about how we implement it: you're absolutely correct that one important thing is that we need to educate the people who are designing the AI technologies, the technologists. We need to do a better job of having them understand AI ethics. It can't just be a module or a class that shows up once in their training; it needs to be embedded into everyday practice and everyday thought. We also need to train the business people, the consumers, the government officials, everyone around the technologists, to better understand what the unconscious biases are and what the implications are. One of the things that's important to know is that AI doesn't just spring forward from the brow of a computer programmer. It is trained by data. The people design the training, but the people choose the data, and then the data trains the AI, and so if you pick the wrong data, you can end up training your AI in the wrong direction.
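To make that last point concrete, here is a minimal sketch, using entirely made-up data rather than anything from the book or from Google, of how a skewed training set produces a model that fails on the group it rarely saw:

```python
# Minimal sketch: a classifier trained on data dominated by one group
# performs well on that group and poorly on the under-represented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy data: two features whose distribution differs by group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # toy label rule
    return X, y

# The "wrong data" choice: 95% group A, only 5% group B.
Xa, ya = make_group(950, shift=0.0)   # over-represented group A
Xb, yb = make_group(50, shift=3.0)    # under-represented group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out sets: group B fares far worse.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 2))
```

The model isn't malicious; it simply never saw enough of group B to learn that group's boundary, which is exactly the data-selection failure David describes.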

Speaker 3:

One of the famous early examples was, I believe, Google's image recognition and image analysis system. So it's a bunch of young, white, male programmers, aged 28 to 32, of Western European descent, in Silicon Valley primarily, who built an image recognition platform and then trained it on a data set, and the data set they used was a bunch of images of people who looked like them. They didn't see anything wrong with that, because they didn't even notice; they're like, oh yeah, that looks like a person. To their mind, person equals 30-year-old English male, or whatever. And so what happened was, when the AI was deployed, it was horrible at recognizing anyone whose skin tone was not milk white: women, someone of an Asian ethnicity, anyone who was not a 30-year-old white male.

Speaker 3:

And one of the worst examples, one of the ones that got the biggest headlines: it would routinely confuse people of African descent with gorillas. This was a big black eye for Google. It was a terrible instance of unconscious bias. They have fixed the problem, but it's an example of what happens when you don't train the people who are designing the AIs, and those people then pick the wrong data set to, in turn, train the AI.

Speaker 2:

You mentioned in passing that government officials should be across this, but I think they're probably one of the key parts of this, because I think regulators are a little bit behind; they're so busy keeping the bad guys out that they can't keep up with the latest things. A good example in another industry is the open banking model, PSD2, that the European and UK governments put in place a few years ago. The Financial Conduct Authority, the FCA, here was one that had a sandbox. They basically said: we don't know what you're going to do with this new regulation or this new platform. We want you to try and break it. We want you to try everything you want, and you can do it in a sandbox where you won't get sued and you won't go to jail, because we want to see, as regulators, what the art of the possible is. So, back to AI literacy: maybe we should be running our regulators through training so they understand this.

Speaker 2:

Another example: the Queen's message here in the UK every year is on the BBC, and the alternative is on Channel 4, a commercial broadcaster, and last year they published a deepfake; they had an actress playing the Queen. I use this example because it means that a public platform like a commercial TV broadcaster raised the issue that everything you see on the screen may not be real. So where do the regulators come in, and who trains and educates the regulators so they can actually make the right decisions on how to police this?

Speaker 3:

I actually think the FCA and the Bank of England are among the most sophisticated government regulators about technology that I've encountered, and I've worked across 150 governments, so I think I have a representative sample to say they're on the more progressive end of the spectrum in terms of their knowledge. They may be cautious to take action, and there are reasons to be cautious, but sandboxes are a good thing. That is one of the better interventions, I think, that a government can take in order to understand the effects of a technology. That said, I'm going to be briefing some government officials later today who reached out to me. The UK government, I have found, is very proactive about reaching out to experts, getting input and building capacity.

Speaker 3:

That said, my biggest fears - and I'm going to speak globally, not specific to the UK - are twofold. One is a lack of sufficient action. The US is a great example here: there isn't regulation that both protects and enables, and it's important that it does both, because it's too complex a regulatory environment and there's no political will to act. The other great fear I have is too much regulation, such that innovation is stifled. An example of this in a parallel industry was New York. New York was one of the world's top three financial capitals, had a lot of technology, and had the potential to become the fintech capital of the world. The regulators stepped in with the blockchain and Bitcoin revolution and created something called BitLicense, which had the effect of quashing most blockchain-related innovation in New York, which enabled London to take the lead, and Singapore and a few other places.

Speaker 3:

So I'm afraid of both too much and too little government action, because if you think about the principles of good regulation, you want it, on the one hand, to provide consumer protection, to protect society from harms. You want it to manage risk, particularly systemic risk, so you avoid things like flash crashes on the stock market or what have you. You want it to promote stability; you don't want society to be in constant upheaval. And, this is an important one, you want it to promote innovation.

Speaker 3:

Good regulation is able to do all four of those things, and in order for a regulator, government official or policymaker to know how to do that, they need better AI literacy. In a lot of the online classes I do - I've now taught 20,000 people in 150 countries - about 10% of the folks are regulators and government officials. Beyond that ad hoc interest, they're now reaching out on a more structured basis, and I commend them for this. The Commonwealth Secretariat funded 100 government regulators from different countries, particularly developing countries: 50 to take fintech classes at Oxford and 50 to take fintech classes at Cambridge. I think that is a commendable approach. We need to see more of that: more efforts to provide skills, tools and knowledge to government officials.

Speaker 2:

I'm buoyed by that, because it's good to hear that my government is on the front foot and they're reaching out to experts like you because they want to know more, because they realize this is the future.

Speaker 3:

One of the many reasons I moved to the UK from the US (you may detect my accent is not exactly from Whitehall) is because this is a hotbed of innovation for two things I spend a lot of time on: fintech and blockchain, on the one hand, and AI, on the other. And so I think that the UK has the potential to really be an AI superpower.

Speaker 2:

Back to the book Augmenting your Career. What new jobs will we see thanks to AI and what jobs do you think will go?

Speaker 3:

Certainly a lot of AI designers and AI ethicists. We may see AI psychiatrists; it might not be called that, but effectively, once you have a machine that thinks like a human being, it may develop a personality that needs to be nurtured. AI could get depressed. What would it look like if the transport AI got depressed? Probably not very good. So we've got to think about that. Interestingly enough, a good friend of mine, Tommy Meredith, who was the original CFO at Dell and helped them become a multi-billion dollar company, now invests in a lot of strong AI companies, and he said to me that the best AI programmers they can hire out of UT, the University of Texas at Austin, are philosophy majors. These are people who understand formal logic and can hold multiple ideas in their head at the same time while they're waiting for a thing to resolve or a situation to become obvious, which kind of helps you understand Bayesian math. And, in turn, a lot of machine learning and deep learning is built around this probabilistic math - or maths, I guess, as I should say. So I do think that there are going to be some new jobs emerging. As Erik Brynjolfsson, now of Stanford, said, for every robot there will be a robot repair person, and that is, I think, a notable comment.
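For anyone curious about the "probabilistic maths" mentioned here, a minimal worked sketch of Bayesian updating, with made-up numbers, shows the idea of holding a belief (a prior) open and revising it as evidence arrives:

```python
# Minimal sketch of Bayes' rule: posterior = prior * likelihood / evidence.
# Toy spam-filter numbers, purely illustrative.
prior_spam = 0.3            # P(spam) before seeing any evidence
p_word_given_spam = 0.8     # P(the word "free" appears | spam)
p_word_given_ham = 0.1      # P(the word "free" appears | not spam)

evidence = (p_word_given_spam * prior_spam
            + p_word_given_ham * (1 - prior_spam))
posterior_spam = p_word_given_spam * prior_spam / evidence

print(f"P(spam | saw 'free') = {posterior_spam:.2f}")  # ~0.77
```

This is the shape of reasoning underneath many machine learning systems: hold several hypotheses at once and let the data shift the probabilities.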

Speaker 3:

Jobs that will go away: almost everything, right? AIs are getting increasingly good at doing almost anything a human is doing. The jobs that will be most resistant are service jobs that interface between people: restaurant servers, healthcare workers who change bedpans and give sponge baths, possibly even doctors, although some kinds of doctors are getting replaced. For example, specialists like radiologists, who do image analysis, are getting replaced with AI systems, but primary care physicians, I think, are going to be a lot more resistant to being replaced by AI. Still, almost every job category is eventually at risk. It used to be that you would say low-skill, repetitive tasks are the things that AIs are going to replace. Now we're seeing slightly more complex things like customer service agents getting replaced and, increasingly, management consultants, financial modelers at investment banks, accountants. These other categories are starting to get replaced by AI because the AIs are getting better at what they do.

Speaker 2:

Now, in the book you coin a phrase, "adaptable industry". So jobs might go away, but they may morph into something else. Which industries are most able to adapt to the AI future, do you think?

Speaker 3:

Certainly the health services industries are going to be less impacted by AI - probably about half the job disruption of financial services or transport. I think that the creative industries are going to be pretty resilient. People still like to see a singer perform live or an actor perform on stage.

Speaker 2:

Or a keynote speaker on stage. We don't want to get replaced just yet.

Speaker 3:

Well, you know, we could deepfake a video. I mean, in the COVID era, how do you know that you're talking to me and not DaveBot? There are certain live interaction settings where people like having a live person. But there are a lot of jobs that are threatened, and so I think people will want to try and create those new jobs that will be resilient to AI disruption, and one of them, I think, will be some kind of hybrid between a person and a machine.

Speaker 2:

The book, I think, is also a great primer because it explains some of the concepts: machine learning, deep learning and those sorts of things, and what qualifies as AI. You actually say that chatbots barely qualify as AI. So what isn't AI?

Speaker 3:

Maybe I'm being unfair. Chatbots are a certain kind of AI; they definitely are AI. They're just a very primitive kind of AI. But AI is getting more and more sophisticated. Deep learning systems are AIs that think like people, structurally: you have layer upon layer of computation in a network that looks kind of like a human neural network, and those are among the more powerful AIs that we interact with. We're still not yet at general artificial intelligence, or artificial general intelligence, depending on which construct you like, but people are working assiduously towards that.
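As a toy illustration of that "layer upon layer" structure (a sketch only, nothing like a production model), here is a forward pass through a small stack of layers:

```python
# Minimal sketch of a layered network: each layer transforms the
# previous layer's output, which is what "deep" refers to.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A simple non-linearity applied between layers."""
    return np.maximum(0.0, x)

layers = [rng.normal(size=(4, 8)),   # input (4 features) -> hidden (8)
          rng.normal(size=(8, 8)),   # hidden -> hidden
          rng.normal(size=(8, 2))]   # hidden -> output (2 scores)

x = rng.normal(size=(1, 4))          # one input example
for W in layers:
    x = relu(x @ W)                  # layer upon layer of computation

print("network output:", x)
```

Real deep learning systems have millions or billions of such weights and learn them from data; the weights here are random, so the output is meaningless, but the structure is the point.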

Speaker 2:

So I wanted to ask about that, because everyone's saying that's the next frontier, when AI will think and look and act much like a human. The experts I've challenged on this have said AI will never be able to love or feel empathy. But how close will it get to a human, and how far away are we from that, do you think?

Speaker 3:

Well, first of all, I would dispute that statement somewhat, insofar as if it appears to be loving and appears to be empathetic, can you tell the difference? How can you tell the difference between me being actually empathetic and me pretending to be empathetic? You can't. So from that perspective, I don't know. They might not love or experience love in the same way that we do, and they might actually never really love. This is a philosophical question: would an AI really think, as opposed to appear to think? But it's not unlike the human being, the psychopath, who emulates emotion convincingly: if you can't tell the difference, they can still pass through society and people won't know.

Speaker 3:

So I do think we are a few years away from AGI, or GAI. There's kind of a joke: when you talk to people and ask how far out this general intelligence is, everyone says five to seven years, and it's been the same answer for 20 years. Oh, we're five to seven years away. So we're not there yet, and there is more work to be done. I do wonder whether, if we get quantum, which is also five to seven years away from commercial applicability, it might create an AI powerful enough and fast enough to be capable of this general intelligence problem.

Speaker 2:

Well, that leads me to another problem: the whole net zero debate. You mentioned cryptocurrency and blockchain and Bitcoin before, and one of the criticisms is that it just uses so much energy because it's relatively inefficient. What about renewable AI? Do we also need to think about that? If you're going to have really smart, really fast AI, it's going to draw a lot of power. How does that contribute, or not, to the net zero debate?

Speaker 3:

First of all, it's important to remember that Bitcoin was designed deliberately to be inefficient; that's part of what drives its scarcity value. Second, some other cryptocurrencies, like Ethereum, are moving to different computation protocols that are a lot more energy efficient. And finally, there's a huge movement to create sustainable Bitcoin, meaning Bitcoin that's mined from renewable or carbon-free energy sources like hydro or solar. So I'm careful about being overly critical of something that I know for a fact is changing to reflect the current focus on the environment. With AI, you have to think about the heat: not just the energy consumption and fossil fuel burning that's used to power AI, but the waste heat that's generated from data centers that are running AI. Eventually, it could all be in space, right? The most powerful AIs could just be harmlessly radiating that heat in space, and the cost of lift, of bringing things up into orbit, is getting cheaper and cheaper, thanks to Elon Musk - there he is again.

Speaker 3:

So I do think that, 20 years from now, we might be looking at a planet that's got a bunch of AIs orbiting it, where the really profound computation occurs.

Speaker 2:

Interesting thought, AI in space. You relayed a comment that Julie Sweet, the Accenture CEO, gave you: that she, or her company, is reinvesting 60% of what they're saving due to AI into reskilling their workforce. Sounds like a really good exchange. Should more companies be doing this, and have you seen examples of that beyond Accenture?

Speaker 3:

Absolutely, more companies should be doing this, because if you think about what makes a company go, it's not just skills but culture, and this is what Accenture noted. They said: look, we spend all this time finding people who are Accenture people, who have a certain personality, approach, problem-solving mindset and intellectual curiosity, whatever goes into defining an Accenture person, and they're all bright. So we get all these bright people; why would we then throw them away when we figure out how to automate a job? Why don't we reskill them into some other job? And so that's what they've invested in doing, because they recruit tens of thousands of people a year and this saves them significantly on that recruiting expense, because the cost of a mishire, even at a junior level, is 15 times base salary. So it's a risk mitigation as much as it is a cost saving.

Speaker 3:

I do think more corporations should do this. I don't think enough corporations do it, by far. I think there's too much short-termism, which is driven by quarterly earnings pressures, and so corporations say: hey, I can look smart by cutting all these jobs because I automated through AI, and it's someone else's problem to figure out what to do about it. The tenure of a CEO of a Fortune 500 company has been getting shorter and shorter over the last 30 years, and so if your average tenure is three and a half to four years, you just need to notch up a few wins on the quarterly results to get a huge bonus in stock, and then you move on because someone else comes in and becomes CEO.

Speaker 3:

This is why Michael Dell took Dell private: because he said, I want to do some massive digital transformation and I can't do it with quarterly earnings pressure. And you see how Elon Musk is running a private space company that's worth over $100 billion, and he hasn't taken it public because he's like: why should I? This is a long-term play, I own a huge chunk of it, and I don't want to have the quarterly conversations with investor analysts that are going to keep us from succeeding in a deep tech mission.

Speaker 2:

I want to talk about one example I've looked at in terms of AI replacing a job. A few years ago, Google came out with Google Duplex. It was a great demonstration of a woman basically talking to a Google assistant, wanting to book a restaurant. The AI said: here are a couple of options. The AI then rang the restaurant, said a couple of mm-hmms, negotiated with the person at the restaurant and booked the time slot.

Speaker 2:

And my argument is: this phone, this piece of plastic that I own, knows everything about me. So are we not far away from an AI assistant, a virtual assistant, a digital agent? And going one step further, they then do digital negotiation, digital deals with other companies. So, for example, my health provider, my telco provider: my digital agent talks to their digital agent. One example I gave is that my health insurance is due next month, and my digital agent knows about that. It goes out and does digital deals. One such deal is: if I give a one-time hash of my fitness information, I get a deeper discount because now I'm healthy.
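The "one-time hash" idea can be sketched in a few lines. This is a hypothetical reading of the scenario, not a real insurer's protocol: a fresh salt makes each disclosure unlinkable to the last.

```python
# Minimal sketch: a salted, one-time hash of a fitness record, so the
# same data shared twice looks unrelated across negotiations.
import hashlib
import secrets

fitness_record = b"steps=12000;resting_hr=58;week=1"  # made-up data

salt = secrets.token_bytes(16)      # fresh salt => a one-time hash
digest = hashlib.sha256(salt + fitness_record).hexdigest()

# The agent would share (salt, digest) for this deal only; hashing the
# same record tomorrow with a new salt produces an unrelated digest.
print("one-time hash:", digest)
```

A real scheme would also need a way for the insurer to verify the underlying data, but the sketch shows why a one-time hash limits what can be linked or stored.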

Speaker 2:

I then put up a slide that says we'll be having to write ads for robots. Now, when I do this on stage in front of a bunch of marketers, they start throwing the stress balls at me, saying this will never happen and our jobs aren't at risk. But is that a real example? The data is already there. The AI is probably there as well; Google Duplex points the way on that last mile. Is that a role that could actually be replaced: a digital agent to run my life, to run your life?

Speaker 3:

I think that we're heading in that direction. Inevitably, people aren't quite ready to just hand everything over, but we're getting closer. I know a lot of people, increasingly, who have digital scheduling assistants that act like an EA, and so you can actually email and have a conversation with this bot about getting onto someone's diary. That's already happening. But I think where we are is in a transition phase. So, for example, Google and Microsoft have started with these type-ahead recommendations. That's an AI: while you're composing, it finishes the sentence for you, but in gray, and if you like what it suggests, you hit tab and it speeds up your composition. It's based on billions of sentences that have been loaded into their AI to train it, and I think it's getting pretty good; it's actually not bad. Certainly we see digital transcribers.
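As a toy illustration of the type-ahead idea (nothing like the production systems, which use large neural models trained on billions of sentences), next-word prediction can be sketched as counting which word tends to follow which:

```python
# Minimal sketch: predict the next word from bigram counts over a
# tiny made-up corpus.
from collections import Counter, defaultdict

corpus = ("thank you for your email . thank you for your email . "
          "thank you for your time .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1        # count each observed follower

def suggest(word):
    """Return the most frequently observed next word, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("thank"))   # -> "you"
print(suggest("your"))    # -> "email" (seen twice vs "time" once)
```

The gray suggestion you accept with the tab key is this same idea scaled up enormously.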

Speaker 3:

I composed about a third of my book by talking into my phone and then cleaning it up later, and the AI was pretty good. I mean, it wasn't perfect, but it's getting better and better, and certainly a lot better than when we first had the speech-to-text systems come out in the eighties and nineties. So I think that we are going to have a world... I mean, have you ever wished there could be two of you, because there's just so much to do? We could eventually have a digital twin that would act the way that we want it to, that sort of represents us in the world and frees us up from drudgery. I see that as not an inevitability but a likelihood within the next 10 years, maybe even five.

Speaker 2:

Some people would say it'd be scary to have two Andrew Grills, but that's for a whole other discussion. Now, before we go, because we're running out of time: there's a chapter, chapter five, on reskilling and developing cognitive flexibility, and there are some great ideas in there. You talk about basically remaking your brain, and there are five things you can do to achieve that. Can you explain what that means and how you go about keeping yourself up to date and learning new things?

Speaker 3:

It goes to the fundamentals of how we prepare for the AI future, right? In order for us to be ready to reskill, to upskill, to stay ahead of technology change and disruption - and frankly, it's not just disruption from AI, it's disruption from everything - we need to train our brains. Your brain is plastic; it is malleable. You can re-skill your brain to absorb knowledge, and so there are certain techniques that you can employ that will let you acquire new knowledge faster and use it more effectively. We embed some of these kinds of practices, for example at Esme Learning, which is a cognitive AI learning platform, in how we work with Oxford and Cambridge and MIT and Imperial to create online experiences. So, things like practice: you're much more effective at learning something if you attempt to apply the lesson immediately, if you try things out. This is one of my biggest problems with Masterclass as a learning platform. I think Masterclass is fantastic for intellectual curiosity, it's a great entertainment platform, but it's not learning. You will forget 50% of that Masterclass video you watched within one hour. This is called the Ebbinghaus forgetting curve; it will just be gone out of your head. But if you actually tried using some of that stuff right away, it would cement in your memory better.
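The forgetting curve David cites is often modelled as exponential decay of retention over time. A minimal sketch, with the decay constant chosen purely to match the "half gone within an hour" figure:

```python
# Minimal sketch of the Ebbinghaus forgetting curve: R = exp(-t / S),
# where S (memory stability) is set here so retention halves hourly.
import math

def retention(hours, stability=1.44):
    """Fraction of material retained after `hours`."""
    return math.exp(-hours / stability)

for t in [0.0, 0.5, 1.0, 24.0]:
    print(f"after {t:>4} hours: {retention(t):.0%} retained")
```

Spaced practice works against this curve: each review resets it and, in most models, increases the stability term, which is the argument for interval training over cramming.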

Speaker 3:

Reflection is another one. It's not enough to take knowledge in; you have to actually cogitate on it. It's metacognition: you have to think about thought. What is this thing that I've learned, how does it fit in with my mental model of the universe, and what does it mean? That active reflection again helps you remember things better. Gradual change is another one. This is not something you can just cram for. People like doing intensive short courses over a weekend because they think it's efficient: oh, I'm busy, I just want to do it in a 15-hour sprint over two days and then I'll have learned something. That's the worst way to learn. You need interval training; it's like lifting weights at the gym. You can't build muscle with one eight-hour intensive session, but if you do one hour a week over eight weeks, or one hour a day over eight days, you'll make more progress than if you just try and do it all at once.

Speaker 3:

Peer learning is another one. We're social animals. We learn better from each other than if we just have a sage on a stage blathering at us, like I'm doing now; it's much better if we talk about things and discuss.

Speaker 3:

The Oxbridge tutorial is actually one of the most effective ways to learn, and so one of the problems that Esme Learning solves is: how do we put that online? And finally, creative exploration. The human brain gets joy from exploring and creating. We get little bursts of dopamine and serotonin when we experience something new and discover something new. We do that a lot as children; children are among the most effective, creative people on the planet.

Speaker 3:

And then the education system proceeds to spend a decade and a half training that creativity out of us: sit in rows, don't speak up, raise your hand before you talk, whatever. It does a lot of things that regiment our minds to fit in with society, and that is actually at the expense of creativity. There's a famous experiment, the great marshmallow experiment: you give people a couple of marshmallows, a few sticks of uncooked spaghetti, and I think there's some string involved - basically it's a standardized little kit - and you have 18 minutes to build the tallest tower that you can. The least effective people at the marshmallow experiment are MBA students, because they spend almost all the time negotiating status with each other and not doing anything. Among the most effective are five-year-olds, because they just jump in and start playing, and they grab things from each other and start putting things together. That creative exploration produces among the highest towers in that experiment and, more broadly, that creativity is how human ingenuity can survive AI disruption.

Speaker 2:

Yeah, I've been in a number of corporates where we've done those exercises, even ones at IBM, and you just sit back and watch the behavior; it's actually more interesting than the end result. We're almost out of time, so I run all of my guests through a quickfire round where we learn a lot more about you in a couple of minutes. So let's do that now. iPhone or Android? iPhone. PC or Mac? Mac. The app you use most on your phone?

Speaker 3:

Let's say Signal.

Speaker 2:

What are you reading at the moment?

Speaker 3:

I'm trying to understand the human brain, so I am reading a book on kind of how we think about thought.

Speaker 2:

And the final quickfire question: how do you want to be remembered?

Speaker 3:

He made the world a better place for billions of people.

Speaker 2:

What three actionable things should an audience do today when it comes to augmenting their careers?

Speaker 3:

Get smarter about AI. Explore, play and create: try and experiment with AI; there are no-code systems, so you don't have to know how to program. And enlist a friend in the journey.

Speaker 2:

Great advice. So how can people find out more about you and your work?

Speaker 3:

Well, if you go to davidshrier.com, that has a lot of information on my books and thought leadership. Also, Imperial College's Centre for Digital Transformation; we're going to be doing a lot of cool things out of that, and obviously it has a website off of imperial.ac.uk. And finally, we have an amazing set of classes from some of the world's greatest thought leaders at esmelearning.com.

Speaker 2:

I'm going to check them out as the very next thing to do. David, a great discussion today. Thank you so much for your time.

Speaker 3:

Thanks, Andrew. This has been fun.

Speaker 1:

Thank you for listening to Digitally Curious. You can find all of our previous shows at digitallycurious.ai. Andrew's new book, Digitally Curious, is available at digitallycurious.ai. You can find out more about Andrew and how he helps corporates become more digitally curious with keynote speeches and C-suite workshops at digitallycurious.ai. Until next time, we invite you to stay digitally curious.
