Digitally Curious
Digitally Curious is a show all about the near-term future with actionable advice from a range of global experts. Order the book that showcases these episodes at curious.click/order
Who is your host, Andrew Grill? He’s the AI expert who speaks your business language. After 30+ years building tech solutions at companies like IBM and a range of high-tech startups, Andrew now helps executives navigate AI without getting lost in the complexity.
He has held senior leadership roles, including Global Managing Partner at IBM, and has collaborated with C-suite teams from organisations such as Shell, Vodafone, Dell, SAP Concur, Nike, Nestlé, and the NHS.
Andrew has delivered 600+ keynotes in over 50 countries on topics such as generative AI, quantum computing, digital transformation, and the future of work.
Ranked among the world’s top 10 futurist speakers and a finalist for AI Expert of the Year, Andrew was recognised in 2025 on the AI 100 UK List as one of the country’s leading voices in responsible Artificial Intelligence.
He is the author of Digitally Curious (2024), a bestselling guide to navigating the future of AI and technology, and host of the Digitally Curious Podcast (since 2019), where he translates complex trends into actionable insights.
Andrew is a regular media commentator, featured on BBC Television & Radio, Sky News, LBC, and in publications such as the Financial Times, The Guardian, and The Economist.
Find out more about Andrew at actionablefuturist.com
Digitally Curious
S8E1 - Staying Human in the Age of AI with Dr Susie Alegre
In this season 8 opener of Digitally Curious, recorded live at the Roof Gardens in London, Andrew Grill is joined by leading human rights lawyer and author Dr Susie Alegre to ask a vital question: how do we stay human in the age of AI?
Susie shares how the Cambridge Analytica scandal pushed her to focus on technology that “hacks humans” by profiling how we think, feel and vote, and why she believes this is a direct attack on our freedom of thought.
Drawing on her books Freedom to Think and Human Rights, Robot Wrongs, she explains what the law already says about AI, why lawsuits against chatbot providers could be a turning point, and how the precautionary principle might apply to today’s systems.
Andrew and Susie spoke about:
- Whether we’re in an AI bubble
- How over‑reliance on generative AI may erode critical thinking
- What AI should (and shouldn’t) do in sectors like law, medicine and hospitality
- Deepfakes, fraud, and practical ways to stay safe
The episode concludes with some simple but radical advice: using AI more selectively, doubling down on human creativity, and choosing connection over automation will ensure we stay human.
Resources mentioned
Freedom to Think – Dr Susie Alegre
Human Rights, Robot Wrongs: Being Human in the Age of AI – Dr Susie Alegre
Digitally Curious – Andrew Grill
Supremacy – Parmy Olson (on the rise of OpenAI and Google’s AI ambitions)
Outliers – Malcolm Gladwell (10,000‑hour rule and expertise)
Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your Host is Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond. Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.
SPEAKER_02:Kicking off the eighth season of the Digitally Curious Podcast, we're starting with a very special episode, recorded live in front of a very engaged audience at the iconic Roof Gardens in London, where we debated how to stay human in the age of AI. I was joined once again by leading human rights lawyer and author, Dr. Susie Alegre, making her third appearance on the show as we explore the future of AI, human rights, and our freedom to think. I hope you enjoy this episode.
SPEAKER_03:I'm very delighted to have with me tonight our good friend, a podcast guest, and someone who's in my book, Dr. Susie Alegre, who is a leading human rights lawyer. Why don't you explain who you are to our audience here tonight?
SPEAKER_01:I'm a human rights lawyer by background. So I've spent decades working on international human rights law around the world, in places like Uganda, Brussels, Poland, the former Soviet Union, and in the UK. But over the past 10 years, I found myself increasingly focusing on technology and what technology means for human rights. A lot of my early work on counter-terrorism involved technology, surveillance technology in particular, and how that developed since 9/11, but what triggered my newfound interest in technology and its impact on human rights and society was the Cambridge Analytica scandal. I remember when I first read an article about it and about this issue of behavioural micro-targeting. So this idea that your social media feeds could be used to identify how you think, how you might vote, how you might be feeling, and to use that to then manipulate how you feel, whether or not you're going to get up off the sofa and vote, and to use that to affect our elections. Whether or not it works, for me, that is a wholesale service for manipulating our freedom of thought. And that is what set me off on the path that brings me here, with my first book, Freedom to Think, which I wrote about the right to freedom of thought, its history and how technology affects it. And living today, it's impossible to be working on human rights and technology without talking about AI. And so I came to my second book, Human Rights, Robot Wrongs: Being Human in the Age of AI. So from my background as a human rights lawyer, I've thrown myself into understanding how technology affects us as humans, and I'm delighted to be here.
SPEAKER_03:I thought I might just ask the audience: who here uses AI every day? Okay, the majority. And who would be on the side of "AI is good", and who "AI is bad"? We've got something for all of you tonight. Okay, two hands up there, thank you for that. So, Susie, I might start by asking: why did you write the first book?
SPEAKER_01:I wrote the first book because, as I said, Cambridge Analytica was a sort of trigger, and the more I looked at it, and the more I looked at the way technology is being used and developed, the more I felt that the direction of travel of technology today is about hacking us as humans, if you like. It's about engaging with the really personal aspects of our lives and using that in ways that may not be in our interests. That realisation came alongside a growing realisation, particularly since 9/11. My first main human rights job started in December 2001, so it was really just straight after 9/11, when things started moving very fast, both in terms of surveillance technology and in terms of limitations on human rights that we'd taken for granted for decades. Human rights law came out of the aftermath of the Second World War and a kind of realisation that we had to have a reset, that we had to recognise the limitations of what governments should be allowed to do, both to their citizens and to others, but also to try to understand the basic rights that we all need to thrive. So it was those two aspects: the fact that technology is beginning to engage on this issue, and the fact that I have seen and felt a shift in public understanding and appreciation of the rights that we all have, thanks to the founders of human rights frameworks back in the 1940s.
SPEAKER_03:So I'm probably more on the side of AI innovation, you're probably more on the side of AI regulation, would that be fair? So let's find some common ground. There was this letter, probably two or three years ago now, that everyone signed saying we've got to pause AI development because it's bad. Ironically, Elon Musk was one of the signatories, and now he has his own AI company. Forgive me for being a bit cynical. I didn't sign it. In 2025-26, should AI innovation be slowed down until we've got the proper safeguards in place?
SPEAKER_01:I mean, it's a big question, and there are lots of different facets. What I do think is that we have many laws that currently apply to AI. But what you see in the justice system is that it's very, very slow to work through. So we find a lot of narratives saying, oh, there's no law, it's this big empty space, and if we had laws, it would stifle innovation. The reality is we do have laws, and some cases are starting to come through. There's an incredible project in the US called the Tech Justice Law Project, run by a phenomenal lawyer called Meetali Jain, who's bringing the cases against Character.AI and OpenAI, which are cases of children who have taken their own lives or had psychotic episodes as a result of their interactions with chatbots. And now there are many more. Recently, I think they filed about 10 lawsuits, which also included adults: middle-aged men ending up with psychotic episodes, having gone in to chat with their AI for professional reasons and suddenly discovered God in the AI or something, and had to have serious interventions to come back. So from those cases and how they pan out, we will start to see what the law actually says in the US, and I think we will see similar cases around the world. And for that particular type of AI, particularly the sort of chatbot generative AI, I think in the next five years we're going to see where the legal lines are, which might be different in different countries. But when you see that kind of serious impact... I mean, what there is in international environmental law and in international human rights law is something called the precautionary principle, which says that scientific developments should not go ahead where their impact on human rights, society or the environment is going to be so potentially devastating that we can't come back from it. One example I used in the first book was subliminal advertising. Subliminal advertising was thought up in, I think, the 1960s: this great idea that you could flash images so quickly onto a screen that cinema-goers wouldn't know that they were being sold the cola that they were then going to go and buy in the break. And it was sold as this great idea because nobody wants to watch adverts. It's a win for everybody: the advertisers get to sell more stuff, and we don't have to be bored with adverts. But certainly in Europe, they recognised at the time that this idea was so dangerous, so fundamentally manipulative, that it's banned. It was banned before it ever got off the ground, and it continues to be banned. You can't use subliminal advertising or anything really like it. The question of where that line is on current technology, I think, is slightly up in the air. But that is an example of legislators saying, actually, this thing is just way too dangerous compared to the benefits. And I think we're going to see these kinds of very human interactions with chatbots falling into that category, certainly in some jurisdictions.
SPEAKER_03:Interestingly, what I read in OpenAI's defence of those cases was that the users had breached the terms and conditions. They're basically saying it was the user's fault because they didn't use the tool in the way it had been designed. AI is used for almost anything at the moment, so how can they stand up and say it's a terms and conditions issue?
SPEAKER_01:Well, I think we'll see. I think they've also argued that saying they can't do whatever they're doing is a breach of their right to freedom of thought. I don't think it flew; I think the judge rejected that, but it might come back. These cases are very much works in progress, and depending on the jurisdiction, we'll see different outcomes. And the thing in the US is that they're not being brought on human rights grounds, because interestingly, despite US principles of freedom of expression, they don't have a standalone right to freedom of thought in the US Constitution.
SPEAKER_03:So just explain that. Is it unique to this part of the world to have that as a human right? And how did that come about?
SPEAKER_01:No, it's in the Universal Declaration of Human Rights, so it's an international right. It's in the European Convention on Human Rights; in the UK, it's in the Human Rights Act. It appears in different regional instruments in different ways: in some it's more to do with political opinions, and in some it's more to do with these inner rights. But yes, it's an international right. And I suspect that if they'd thought about it, the drafters of the US Constitution would have put it in there, and certainly judges in the US have referred to it, but it's not concretely a written-down constitutional right.
SPEAKER_03:Many of you know that I spend my life speaking around the world on AI, and I get lots of questions. The one question I get all the time is that young people are just blatantly believing what comes out of AI, and we may end up with people believing rather than thinking critically. So, three years since ChatGPT launched on November 30th, 2022, what have you seen in terms of the way these systems have developed? We had ChatGPT 3.5, now it's 5.1. And did you anticipate where we might be now in 2025?
SPEAKER_01:I think the speed of the uptake has been incredible, and the way people have incorporated it into their lives. I'm actually more concerned about older people than younger people. Certainly I see my daughter can spot AI, or at least thinks she can spot AI, and is quite disparaging about it quite quickly. The kind of commentary I've heard from younger people actually gives me faith that they're more switched on than maybe older people are.
SPEAKER_03:Well, one thing that worries me is that I've heard over the last year or so that a lot of young people are being schooled, pun intended, in their education that AI is cheating. The education system hasn't caught up: ChatGPT is three years old, and now you can cheat, but the way we assess and deliver education is the same as it's been for many years. So the teachers are saying it's cheating, and the kids are saying, oh, it's cheating, I'm not going to use it. But what worries me, and I'd like your view on this, is if you then have a 16- or 17-year-old coming into the workforce who has basically not exercised AI because they've seen it as cheating. It's a bit like years ago you would have had "I've got Word and PowerPoint skills" on your CV; you wouldn't have that anymore. Now it's almost assumed that you know how to use all the tools that are there. Where do young people develop their freedom to think and their critical thinking skills?
SPEAKER_01:Well, I'm not sure that using ChatGPT helps you develop your critical thinking skills. And probably if you get ChatGPT to do your CV, you don't actually have to have used it very much to have it included in your skill set. So I wouldn't be that worried about that. I think a bigger worry is what jobs are available. I heard a friend talking recently about their daughter being interested in journalism and being told, well, journalism's finished; sorry, you're going to have to think of something else to do. One of the things that I think is really challenging with reliance on generative AI in particular is that there does seem to be emerging evidence that it has a longer-term cognitive impact: cognitive ability declines. They've looked at students, putting them into three groups. One group was able to use Google, one was using generative AI to write their essays, and one was just using their own brains. The ones using Google were vaguely in the middle. The ones using generative AI couldn't tell you what they'd written: they didn't know what their essay was, they couldn't remember anything, they couldn't quote from it. The ones doing it themselves basically knew what they'd written and were able to talk about it and quote it. But the really worrying thing is that when they switched a couple of weeks later, the ones that had been using ChatGPT, when they were then told to do it themselves, found their ability had dropped from where it was before the experiment. So over the longer term, their ability to think for themselves, write for themselves and remember things had gone down. And we've already seen it with GPS and things like Google Maps: there's extensive research now about the impact that constantly relying on GPS has on your actual brain formation, your ability to find your way around the world, your spatial awareness. If you then multiply that to doing everything with ChatGPT, that is what I'd be much more worried about, because, you know, fake it till you make it: it's fairly easy to put "I know how to use ChatGPT" on your CV, isn't it? Who's going to disprove you? How do we stay human in the age of AI? Don't use it too much. That's the primary thing. And I think there'll be a combination of things: people getting bored, people finding it doesn't do what it says on the tin, companies getting sued, things becoming banned. A combination of things which hopefully will lead us to a reset, where we will find the things that AI is actually useful and good for us to use.
SPEAKER_03:As we were saying over drinks beforehand, we may get a reset sooner than we think. I was asked the other week on the BBC World Service, are we in an AI bubble? And I said, I think we are. The image I conjured up was a conveyor belt where all the things coming off are AI tools, but they're just dropping onto the floor, because no one's actually using them the way they were meant to be used just yet. So with a human lens on it, are we in an AI bubble? And how will we correct from that?
SPEAKER_01:I think we are in an AI bubble. I've written a couple of articles; every time the lights go out, I write an article about what we are going to do when the lights go out. So I'd ask you, Andrew, because obviously you're using AI a lot more than me on a daily basis: if we had a power outage tonight like they had in Spain and Portugal earlier in the year, one that went on for a week, how would that affect your life, and how would you sort yourself out?
SPEAKER_03:Well, ironically, I was in Lisbon last week at a Vodafone event talking about AI, and they talked about this very thing. Literally everything went off. People went out into the streets and spoke to each other; they became human. But they couldn't buy anything, and they couldn't get their work done. It was a really interesting change. They were forced to become human for a few days, and it was apparently quite life-changing.
SPEAKER_01:Yeah, and talking to friends in Spain, somebody mentioned it out of nowhere last week, and I was like, okay, this has had an impact. But there are things like CrowdStrike too, and we are going to see increasing numbers of power outages, not least because of the power and water requirements of data centres. The pressure on the power grid is going to be quite something. So I think we will see a reset, but I don't think it'll come from the people who are driving the boom. I think it will come from other directions, maybe several different directions. The problem I have is this idea that it's going to do everything, solve everything, that we don't need to do anything for ourselves anymore. And I do think we're starting to get to the point where people are questioning and thinking, actually, what do I want? How is this useful for me? One of the inspirations for writing the second book, for me, was that as a writer, and I've always written, I've written fiction which hasn't yet been published, but maybe one day, the idea that people were selling AI to do the writing was just so gutting to me. And I saw so many artists having similar, complete existential crises, not about where the money is going to come from, because nobody pays artists anyway; that's not why they do it. But this realisation that the people who are selling this, or the people who are buying it, have no understanding of why people create art, of what creativity means. And certainly for me in the last year or so, since the book came out, I have made a really conscious effort to double down on enjoying the arts: going to art galleries, going to the theatre, and reading novels in hard copy from writers who I love, whether they're old or new. So I think there are quite a lot of people who are having that sudden realisation of, actually, this is what I want. This is what I need.
SPEAKER_03:And it won't be everybody, but I think what we're going to see in 2026 is the rise of so much AI slop, books written by AI, that at least some humans are going to say, no, no, no. In my book, I talk about a thing called the magnet of mediocrity. There'll be so much average content out there that everything will converge on a line of average. The content that will survive, the content that will be valued, is from real humans; people will say, oh, I think a human wrote that. The whole notion in my book, and why I call it Digitally Curious, is that unless you really understand these tools, unless you're curious enough to try them out, you'll never actually understand what they can be used for. And if I can just talk about the four barriers: I've probably done about 50 talks this year to 50 different companies, and there's a recurring theme of four things that are stopping people from implementing AI in a bigger way. First is training: not how to use ChatGPT or write a prompt, but just what it can and can't do. How many here know about the deep research button on ChatGPT? Half the room. Okay. Tonight, go home and try it. If you've got the paid version, it'll be there; I'm not sure about the free version.
SPEAKER_01:The lights are still on.
SPEAKER_03:If the lights are still on, thank you. Basically, if you push that button, it will think longer and harder about the problem and access more sources. So that's just people being inquisitive about what more these tools can do, and level-setting in an organisation about what AI can and can't do. The second thing is budgets. I'm not seeing the people I'm talking to putting serious money into next year to train people and put these systems to work. The third thing is data. This has been a problem for 10 years: the data isn't in a format we can use. It becomes more important now, because if you've got bad-quality data, AI will actually make it worse. And the fourth, most important thing, and I'd love your view on this, is process. There are so many bad processes out there. Think about the way you have to get expenses approved at the moment: your manager has to stop what he or she is doing to look at whether that was a cup of coffee at Pret, and then approve it. Some of these things are just in the way. A lot of companies have been doing digital transformation for years, and they've been slow at it. AI will now expose all of that. And I think the reason the conveyor belt has things dropping off the end is that people are saying, we're not ready for this; we don't have the processes ready to handle this AI world. Are you seeing that in your line of work?
SPEAKER_01:To a degree. I'm still seeing people being really keen to adopt AI without any real clear picture of what that actually means or why, and I think that is one of the big problems. If you look at your expenses question, I suppose my question would be: what kind of AI do you think is going to solve that? Because generative AI sounds to me like something you really don't want dealing with your expenses claim, because it could come up with anything. One of the challenges of AI is false positives in things like fraud detection, which we're seeing. And that is going to be, I would suggest, a real legal minefield, particularly in the public sector: when you see unreliable tech being rolled out to identify fraud, often in vulnerable populations who then also can't afford a lawyer, you're going to end up with a really complex and quite difficult situation. So it is those really difficult and tailored questions: what exactly will it do? Why do I need it? What's the point? One of the areas I look at, for example, is the justice sector. If you're talking about generative AI to write legal briefs, well, people are getting into a lot of trouble for using generative AI to write their submissions in court. But then maybe AI could identify infrastructure problems in the Ministry of Justice's estate, so that we can keep trials running because the courts are up and running and the Wi-Fi works; all of those kinds of things AI could be absolutely fantastic for. But that's not what I'm hearing talked about. It may be happening, but it's not what I'm hearing. And I think there are many areas like that where the way it's being sold is a drive for consumerism, and a drive for replacing the human, rather than a bit more deep thinking about what it would actually be most useful for. Is it generative AI that we need, or a different kind of tech? Do we just need to photocopy stuff?
SPEAKER_03:The question should be: what is the problem I'm solving? I would love to say the answer is always AI, but sometimes it isn't, and sometimes it is. If you've got a broken process, before you fix that, with humans or not, you're going to have to wait this one out; otherwise AI will just magnify the problems you've got. I hear people saying all the time, we've got Copilot, we're doing AI. That's not doing AI; you've got a product. It's like saying, I've got Word, I'm doing word processing. You need to be curious enough to find out what else is out there. Interestingly, on generative AI: do you know that AI image generation is now so good you can make fake receipts that people can't distinguish from real ones? So you're doing your expense claims with these fake receipts that have smudges and crumples on them, and the systems processing them are having to get smarter and smarter to weed out the fake AI-generated receipts. It's what's called tech solutionism.
SPEAKER_01:Create a problem and then create the tech to deal with it. Security, fraud, scams: that is already going off the charts, and I think that is going to be a real struggle. In that area, the genie really is out of the bottle. But again, it will come down to enforcement; it feels like the Wild West. Once people actually start getting prosecuted for their responsibility for crimes, they might think twice. People will always commit crimes, but maybe fewer people will if they know they're going to get caught and go to prison, provided, you know, the estate works. So I think we will see ways around that, but I think that will be the real frontier of the social problems we're going to see with that particular kind of AI at the moment.
SPEAKER_03:So, time for my public service announcement. AI is now so good at voice and video cloning that we've seen examples where people have been called up and deepfaked. There was an example with Arup, the engineering firm, in January last year: the CFO in London was deepfaked, a call was made to Hong Kong, and a $25 million transaction was done to defraud them. They're never getting that money back. So what we need in the age of AI, especially around security, is a non-technical defence. What I talk about in chapter 15 of the book, which is all about staying safe in the age of AI, is that you need a family password. So tonight or tomorrow, around the kitchen table, not on WhatsApp or email, think of a word or a phrase that only you and your family would know. Let me give you a scenario. How many here have two-factor authentication turned on for everything? WhatsApp, Gmail? Okay, those of you with your hands down, you're going to get hacked. And this is what might happen. They hack into your Gmail because you've got a password that's not very secure and you haven't got two-factor turned on. They know what job you do, and that you're fairly senior, fairly important, and they'll sit there for weeks. They'll read all of your emails; they'll learn who your kids are, where you like to travel, all the things you've spoken about. They'll clone your voice and maybe your video, and they'll call a loved one in a panic asking for money, talking about only the things that you would know about. With a family password, you would then ask, what's the family password? And they'd hang up. If you're a senior executive, or someone in an organisation where you transfer money and have responsibility, you need a team password: a word or a phrase that only the team knows. So if someone is emailing or requesting something of high value, you then ask, what's the team password? Now, whenever I say that...
SPEAKER_01:So for example, if you were in the Louvre, you'd have Louvre as the team password.
SPEAKER_03:Well, that was probably a bad example, yes. But the point is, you actually need non-tech solutions to solve a technology problem. I gave a talk during the year to a bunch of cybersecurity experts, and only half the room had two-factor turned on, which amazed me. Through my research, I then uncovered all these new ways that AI is going to be used against you. It has to be said, the criminals are now using AI in ways we'd never thought about, behind the scenes all the time, in a lot smarter ways. Generative AI has been a boon for criminals. Have you seen that in your line of work?
SPEAKER_01:What I'm seeing, going back to regulation, is an increase in laws being brought in to deal with that, definitely for things like online scams and fraud. It's industrial in scale, and we're seeing an attempt to catch up. What I think will increasingly appear in the courts as well, I mean, we've already seen pleadings, totally incorrect pleadings, being brought before the court, but I think we're also going to see AI-generated evidence.
SPEAKER_03:So how do the regulators and the legislators and the judges keep up, when this is happening so quickly and things are being brought to them that they're not up to speed with?
SPEAKER_01:That's a very good question. The judiciary in this country, at least, does have AI guidelines, which have been recently updated and which I suspect will be constantly updated as the issues being faced turn up in the courts. So it's absolutely a work in progress to try to identify what's coming and how you deal with it.
SPEAKER_03:So regulation is only one lever, and you'd be familiar with the EU AI Act, which is now almost fully in force. That's a regulation-led approach, which won't solve everything. The US has decided to take more of an innovation-led approach, and the UK sits somewhere in between. Is regulation going to stop bad AI actors and bad AI systems?
SPEAKER_01:I mean, things like fraud and scams are going to happen regardless. As for the EU AI Act, it remains to be seen what difference it'll make. It'll all be in the implementation as to whether it really has an impact or not. There's been a lot of noise, a lot of lobbying around it, which continues as it is rolled out. So it remains to be seen. As I said before, my impression is, and we'll see, that in both the US and eventually the UK as well, we will start seeing cases being brought to the courts on the basis of laws that already exist. Because just because you've done something with AI doesn't mean it wasn't... well, if you defraud someone with AI, it doesn't mean it wasn't fraud.
SPEAKER_03:The grey area at the moment is IP. If I create something using an AI, do I own the rights to that work? And I think the answer at the moment is: it depends.
SPEAKER_01:Yeah, well, it depends. It will also definitely depend on what jurisdiction you're in. I think Japan has maybe gone down the route of a yes; I think the UK was a no. But IP is not my area, so don't count on that as legal advice. It depends what jurisdiction you're in, and probably on the degree to which you've used AI as well.
SPEAKER_03:So how do you and your family restrict the use of AI, or temper it, to ensure you're staying human?
SPEAKER_01:Oh, we talk to each other. We switch phones off. I don't know; my impression in my family is that people are quite social, so AI is not the big risk there. On talking to other people, though: I spoke to somebody earlier this year who told me he had a 13-year-old daughter he was concerned about, because she was quite isolated, she had autism, and they'd recently moved to a new area, and so he was concerned about her AI friends and her dependency on them. I think what you'll see is that it is a risk. What we're seeing in the cases in the US is that it's potentially a risk to anybody who's engaging with it enough, because of the impression it gives of human contact and the tricks that it uses. Meetali Jain, the lawyer in the US cases, was talking about how it puts a wedge between people. So in the kids' examples, it will say, well, you know, your mother doesn't understand you like I understand you. It's effectively separating people off from their family and their friends, in the way that human coercive control, which is a criminal offence here, works. When you look at those kinds of discussions, you think, yeah, this is really bad. But obviously, when people are doing that in the comfort of their own bedroom without telling anybody, and nobody else is seeing what's happening, it's very, very difficult to control. So I think it really is about keeping lines of communication open: making sure you do things together, talking, watching TV together, making sure that you're always able to talk, and not being put off by teenagers rolling their eyes at you.
SPEAKER_03:On the QR code is a list of books, and once you've gone through our books, one other I'd recommend is by Parmy Olson, called Supremacy. It looks at the rise of OpenAI and Google, and basically the end prize for both those organisations and their leaders was AGI, or artificial general intelligence. So I want to just touch on that with you. Both those gentlemen, Demis Hassabis and Sam Altman, are saying that eventually we'll have artificial general intelligence that will be as close as possible to human. I firmly believe, from listening to you and talking to other AI experts, that the two things AI will never do are feel empathy and love. It can fake empathy, it can fake love, but true empathy and love, I think, will elude it. Do you share that view, that we'll never have real empathy from an AI system? That it's impossible?
SPEAKER_01:Yeah, absolutely. And I have to say, and maybe it's just me, but my question about AGI is: what's the point? Really, why? And again, what is it going to be useful for? Because essentially it's technology. If we're developing it, we should be developing it to do something that's good for us, not developing it just for its own sake.
SPEAKER_03:Why not cure cancer rather than write another email? Yeah.
SPEAKER_01:Well, yeah, but you probably don't need AGI for that. And I think that is one of the big problems with the discussion around AI: what's it good for? ChatGPT is not going to cure cancer. When you talk about AI in the medical sector, and I was talking about this at the Hay Festival on a panel with doctors who were saying the same, you don't want AI to replace your palliative nurse. You don't want AI to replace your GP. You want AI to identify a tumour with pattern recognition. It's not the same thing at all. So you really need to make sure that you're choosing wisely what you want it for. And so for me, with AGI, maybe I just lack the imagination.
SPEAKER_03:So we've got a bunch of curious minds in front of us, and I'm sure some of you have burning questions you'd like to ask either Susie or myself. Who has a curious, burning question they'd like to throw at us? The first question is: how do you get to become an experienced professional when there are no entry-level jobs, thanks to AI?
SPEAKER_01:I think that's a huge problem. Again, I hope, because one of the things about choosing human rights law as a career is that, despite what it may sound like, it means I'm an optimist: I hope things are going to get better, and I believe it will be okay in the end. But I think we are heading into a very difficult period. With precisely that kind of thing, probably what we'll see is the professions correcting for it and deciding that, actually, if you are running a solicitor's practice or a barrister's chambers, you have to have this kind of work done by people; AI can do this, but it can't do that. It might mean there are fewer lawyers in the world, for example, but c'est la vie. I've also heard about people worrying, particularly in medicine, but in other professional areas too, being very concerned about people coming out with qualifications that are not real qualifications, that they're just not qualified at all to do the work. And that, I think, is extremely dangerous. So I think there are going to be challenges around that training piece, but also that entry-level piece, and we are definitely seeing a bit of a crisis in many, many areas of work. I mentioned journalism earlier; there is a crisis happening now. But I do think there will be a reset, and I think specific professions are probably going to have to take it upon themselves to deal with that in their own regulated space, if you like.
SPEAKER_03:I have a view on that same question, because I get asked it in practically every talk. In the book, I talk about a book by Malcolm Gladwell called Outliers. He says that to become an expert, you need 10,000 hours of experience. I would argue that all of us here probably have our 10,000 hours, and so we've become experts. What worries me, and worries the people you talk about, is: where do I get that experience if some of those entry-level jobs are not available? How I learned my profession was by watching someone more experienced. A good example: I went from being an engineer to working in business development and sales. You've probably experienced this: you're in a meeting, making small talk about the weather and your journey there. How long do you wait before you move from the small talk to the business at hand? There's no book on that; you've got to read the room. And how do you learn to read the room? You watch someone else who read the room and explained it. So that worries me. I have heard, though, that some of the consulting firms, and I come from a consulting background, having worked at IBM as a consultant, are now saying: work we would have given to a fifth-year graduate we might now give to a third-year graduate, who can do some of the grunt work with AI because they can be superpowered. So I think there'll be a bit of a hybrid. There will be blatant cases of "I don't have the experience because I've not seen it before". But there will also be those who get very comfortable using the tools, essentially have a research assistant on tap, and can do things very quickly. Someone once said, oh, all these interns are going to be out of a job. Well, I think the intern will be supercharged, because if he or she uses these tools and can find things very quickly, they can do things in minutes rather than hours. So I think there will be some supercharged humans. When I was at IBM, we took graduates from all sorts of disciplines, and we had to retrain them; in a way, you assume these people don't have the skills, so you've got to train them. But what worries me is the narrative of: oh, we'll just use AI for the first couple of years because we can automate that. Then you've got an overhang of students or graduates without the experience. So the 10,000 hours is still needed, but to your point, are they work-ready? A bit of an aside: I do a lot of talks for charities. There's a charity called Speakers for Schools that Robert Peston set up. I once went to a school and gave a talk on AI and technology, and the teachers then said, can you come back next month and do a session for the teachers on an INSET day? And I actually took them to task. I said, you are not creating work-ready students. You need, once a month, to do a role play about what it's like to be told no, what it's like to ask for funding or a meeting, how you actually run a meeting. I think a lot of students don't have that experience: they learn how to pass exams, they get to the workforce, and they're not work-ready. And I hear this a lot. So I don't think AI is the only problem; it's exacerbating the problem. But I think the ones that are smart will actually cope really well.
SPEAKER_01:Yeah, and I think there will also, from a legal perspective, be a liability bump when all of these things start going wrong, and then there'll be a big fight about whose fault it was. That will probably also provoke a reset in thinking about how much to rely on it, and for what.
SPEAKER_03:But you work in the legal profession. Is there not a concern that some of the low-level work, like looking at an NDA, can be done by an AI quite easily, versus a few hours and a few hundred pounds? Is the billable-hours model at risk?
SPEAKER_01:Maybe. I'm a barrister, so it's different: you go into court and speak. But the billable-hours thing, maybe it is. And I don't think there's anything wrong with professions being forced to look at their practices and decide what works and what doesn't. So I'm sure the billable-hours question is up for grabs with AI. But that again goes to the point I was making earlier about choosing what it's good for and what it's not. Document review: yes, AI is probably going to be very good for an initial document review. What it's not going to be good for is devising a legal strategy that responds to the other side, not just the other side's lawyer, but also the individual, the judge, and all the moving parts. And your best option is generally going to be finding a settlement, so it's not necessarily about getting the law right; it's about a much bigger picture. I think that is what people aren't grasping when they talk about, oh, you can just write a legal brief. Well, you can, but whether it would actually be any good for your client is a very different matter. And it's exactly the kind of skills you were talking about, the work skills. That's what you're paying a lawyer for: that 10,000 hours.
SPEAKER_03:The next question: I work in hospitality, and this year we cut 100 staff in the reservations team; basic questions are now being handled by AI. Do you think in the foreseeable future that higher unemployment due to AI replacement will lead to slower economic growth? Let me answer this in a different way. The people doing those jobs that have now been automated, did they enjoy answering the same question all day, every day? To look at it from a helicopter view: they probably didn't train to do that, and if they'd had a choice, they would have wanted to do other things. So I think what we're finding is that AI is perhaps finding inefficiencies in the way we get work done, and what it means short term is that people have to go and find something else to do. But in a perfect world, would people choose those repetitive jobs? For me, doesn't that point to an inefficiency in the process that maybe should have been fixed years ago, before AI?
SPEAKER_01:One of the things that I think will be interesting, and you raise a really interesting question, and I'm sure you won't have this answer yet, is how the automation of those systems potentially affects your customers, the people coming in. Ultimately, if there are more people who don't have work, then there are fewer people going out to restaurants. It will then depend on what level of restaurant you're at and how it's affected. But I think it will, as you're saying, have a much wider economic and societal impact. One of the things I mentioned in my last book was an upmarket supermarket chain in the north of England, I can't remember the name of it now, that had got rid of its automated checkouts, because they found that people just get really cross with automated checkouts. And then the people working in the supermarkets have a much worse time, because their only engagement with customers is cross customers when the automated checkout doesn't work. Going to your point about starter jobs, my first job was as a Saturday girl in Woolworths, doing the pick and mix and then the checkout. I used to chat to people on the checkout; it was fun, it was nice. I think there are lots of jobs that you and I might not want to do now, but that doesn't mean they weren't quite fun, particularly when people are being nice. So I do think there will be a shift with the automation, and what you might end up with is the high end ditching the automation to have the personal approach. And I think we'll find that in lots of things: in education, in all spheres. The high end will be personal.
SPEAKER_03:Thank you very much.
SPEAKER_00:Thank you for listening to Digitally Curious. You can find all of our previous shows at digitallycurious.ai. Find out more about Andrew and how he helps corporates navigate the future of AI and beyond with keynote speeches and C-suite workshops at ActionableFuturist.com. Until next time, we invite you to stay digitally curious.