Digitally Curious

S7 Episode 6: From Particle Physics to Parliament: Making Governments More Human Through Artificial Intelligence with Dr Laura Gilbert CBE

with Actionable Futurist® Andrew Grill Season 7 Episode 6

How does a particle physicist end up shaping the UK Government’s approach to artificial intelligence? In this thought‑provoking episode, Andrew Grill sits down with Dr Laura Gilbert CBE, former Director of Data Science at 10 Downing Street and now the Senior Director of AI at the Tony Blair Institute.

Laura’s unique career path, from academic research in physics to the heart of policymaking, gives her a rare perspective on how governments can use emerging technologies not just efficiently, but humanely. 

She shares candid insights into how policy teams think about digital transformation, why the public sector faces very different challenges to private industry, and how to avoid technology that dehumanises decision‑making.

Drawing on examples from her work in Whitehall, Laura discusses the realities of forecasting in AI, the danger of “buzzword chasing”, and why the next breakthrough in Artificial General Intelligence might well come from an unexpected player, possibly from within government itself.

This is a conversation for anyone curious about the intersection of science, policy, ethics, and technology, and how they can combine to make government more responsive, transparent, and human-centred.


What You’ll Learn in This Episode

  • How Laura Gilbert moved from particle physics research into government AI leadership
  • The strategic role of AI in shaping modern policy and public services
  • Why forecasting in AI is harder than it looks—and how this impacts decision‑makers
  • The balance between technical capability and human‑centred governance
  • Why governments must look beyond the tech giants for innovative solutions
  • Lessons from the Evidence House and AI for Public Good programmes

Resources

Tony Blair Institute for Global Change Website
UK Government AI Incubator
Laura on LinkedIn
Raindrop.io bookmarking app

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious

Voiceover:

Welcome to Digitally Curious, the podcast that will help you navigate the future of AI and all things tech with your host, actionable futurist, Andrew Grill.

Andrew Grill:

Today on the show we have Dr Laura Gilbert CBE, Senior Director of AI at the Tony Blair Institute for Global Change. Laura embodies a spirit of digital curiosity and visionary leadership, harnessing artificial intelligence to help governments deliver resilient public services and better outcomes for citizens globally. With a doctorate in particle physics from Oxford and degrees from Cambridge, Laura has a diverse background spanning defence intelligence, quantitative finance and medical tech entrepreneurship. Awarded a CBE for Services to Technology Analysis in 2023, she's also a visiting professor at LSE and a seven-time British Savate kickboxing competitor. Welcome, Laura.

Laura Gilbert:

Hello. Thank you very much for having me.

Andrew Grill:

So, to start, could you share what your role as Senior Director of AI at the Tony Blair Institute for Global Change entails and how it advances AI's impact on government and public services globally?

Laura Gilbert:

The Tony Blair Institute is taking on a slightly changed role, I think, in the world of political advice and leadership. The Institute focuses very much on advising world leaders to try and generate better outcomes for their citizens and drive better decision-making in government, but the work now is taking on a more practical tone. We are building up a tech incubator, bringing in expert AI, data and security people to actually deliver products and solutions that are specifically tailored to the needs of governments and, again, to try and drive that better decision-making process.

Andrew Grill:

So we met recently at the Dell Executive Networking Forum in London, where you piqued my interest in getting you onto the show by discussing the Tetlock study. It was fascinating, and it had the audience enthralled for, I think, five or six minutes. Could you explain what it is and how it applies to experts like us trying to predict what might happen with AI over the next five years?

Laura Gilbert:

This is something I quote very frequently, so I find it fascinating. For context, I first became interested in the Tetlock study when I joined Downing Street in September 2020 and was trying to figure out why the use of evidence wasn't as widespread as I thought it should be. The study was kicked off in the mid-1980s by Philip Tetlock, and he wanted to understand something very similar. He found about 284 policy professionals, people working in journalism or government or that sort of thing, and he asked them a series of 100 questions. The questions were relatively simple, along the lines of: this thing that's happening in the world now, in 20 years' time, will there be more of it, less of it or roughly the same? They had to give their predictions, and he very patiently waited 20 years. Then he wrote up his research, and I think Daniel Kahneman read the paper and commented that the policy professionals had done about as well as monkeys throwing darts at a dartboard in terms of predicting the future. So they did slightly better than random guesswork, but less well than what was rather fancifully termed the minimally sophisticated statistical model, which is pretty much: chart what happened before, draw a straight line through it with a ruler, and assume that's what happens next. Interestingly, they did worse in their own area of expertise than in areas where they weren't expert. It wasn't statistically significant, but it was slightly worse. So they were actually less good at predicting the future where they really knew their stuff and were highly, highly confident they would be proven right, and that was the main outcome of the study.
And even when they were presented with the results of this study, they tended to come up with a lot of justifications.
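Laura's "minimally sophisticated statistical model" is easy to make concrete. Here is a minimal sketch of that baseline (the function name and data are illustrative, not from the study): fit a straight line through past observations and extend it forward.

```python
def linear_baseline(history, steps_ahead):
    """Fit a straight line through past values and extrapolate.

    history: list of floats, one observation per period.
    steps_ahead: how many periods past the last observation to predict.
    """
    n = len(history)
    xs = range(n)
    # Ordinary least-squares slope and intercept, computed by hand.
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# A perfectly linear series: the baseline simply continues the trend.
print(linear_baseline([10, 12, 14, 16], steps_ahead=2))  # 20.0
```

Crude as it is, this "ruler through the data" forecast is the bar the expert forecasters in the study failed to clear.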

Laura Gilbert:

You know: well, I was nearly right. Well, if only this other thing had happened, I would have been right. It's a very interesting psychological study, and it tells us a lot about our ability to predict the future and, for me, about our ability to make good decisions, which is what I'm particularly interested in. We're overconfident in our ability to predict what happens next, which means that when we say to ourselves, if I do this, then this will be the outcome, we believe we're good at it, and across the board, pretty universally, we're not. So future prediction, for me, is something I try to think about in terms of possible outcomes and the kinds of things you need to evaluate and monitor to check the direction. And I really press back on my own thought processes, because I know I'm very vulnerable to this, to make sure I'm not making the assumptions and assertions that lead me in the wrong direction.

Andrew Grill:

I find that fascinating, because there are so many people out there who think they can predict what's going to happen next in AI, and the answer I give is: you can't. When I'm asked for long-term predictions, I take a long intake of breath and say, well, I don't think this is going to be right. There have been a couple of things recently that have come true, but they took six or seven years. I actually interviewed someone about six years ago about AGI, and he said the company that's going to nail it will be one we haven't heard of yet, and probably the OpenAIs of this world fit that category. But I'm wondering now, as we sit here in 2025, who might be that other silent company that's going to really blow the doors off?

Laura Gilbert:

There are a few long bets that I might put a tiny bit of money on, but I think you're absolutely right. Not only did we not know ChatGPT was coming, but OpenAI didn't know ChatGPT was coming. DeepSeek was a real surprise. So anyone who tells you they know what's happening next is probably, at best, fooling themselves, and in some ways that's a very useful thing to bear in mind when you're planning for the future: don't over-commit to a particular paradigm. There was a study around the same time on tech predictions, and it showed expert technologists being right roughly 20% of the time on those longer-term predictions. And quite a lot of the time, and I used to see this a lot in quantitative finance, when people make a good bet and win big on it, they believe that reflects their genius rather than the fact that a certain statistical percentage of bets will win. So we need to be careful about that as well, I think.

Andrew Grill:

So a question for you, and I get asked this all the time: how do you stay up to date? How do you stay on top of what's happening? How do you gaze far enough ahead to make a prediction, or understand what's happening next to advise your clients, versus just doing the Tetlock?

Laura Gilbert:

And putting the darts in the dartboard? Great question. I definitely don't stay up to date.

Laura Gilbert:

I think it's getting harder and harder in the world to be up to date with almost anything; the flow of information just keeps increasing, and there are many things I'm interested in, everything from the news about the various wars going on at the moment through to tech innovations or the latest band. It's harder and harder. So what I try and do is be very, very well informed about anything that's going to directly affect what we're doing here, and then I try and have a network of very interesting people, yourself included.

Laura Gilbert:

So I really like LinkedIn for this, where I'm scanning once a day to see what people are talking about, and I carve out two hours in my diary a week to go and try and learn something, to look things up. The things I've bookmarked that I haven't had time to read during the week, I try and go through. But it would definitely be a lie to say I'm very well informed. I'm quite often surprised by things, and I think that's healthy, because life is very busy. Advances in technology have not made our working lives easier; they've made them more complex, if you ask me. So getting that balance, where you feel confident enough in your own work and sphere and interested enough in everything else, is the best I can do.

Andrew Grill:

Could I put a word in your mouth: curious?

Laura Gilbert:

Yes, you certainly can. I think curiosity is the reason that we're all here and enjoying our work, and it matters to me a great deal to enjoy my work. So I really related to your phrase, the future belongs to the curious. I was reading a book recently by Tim Harford, How to Make the World Add Up, and he cites curiosity as the main way to protect yourself against misinformation and make sure you have the best knowledge and information. And I thought of you as I was reading it, actually.

Andrew Grill:

Well, it's interesting, the way you stay up to date. I have a similar thing. I also use LinkedIn as my newsfeed, because I follow and connect with people who are interesting; some people I don't directly connect with, I just follow. But what I do is use an app called Raindrop.io. It's a great bookmarking app, so if I see something on LinkedIn, or anywhere, I'll grab it, and, like you, I put some time in the diary to go through it. Because it captures the moment in time, it actually takes a snapshot, so if someone were to delete that post I would still have it, I can go back to it, I can search through it. In fact, the book has all the links that are referenced as a Raindrop page. That, I think, is very healthy, because I don't have time to stop and read everything, but I want to be able to go back and find it, and sometimes finding that thing you saw three weeks ago is really quite difficult.

Laura Gilbert:

One of the things I am finding, though, is that my worldview on LinkedIn is getting narrower. To this point about expertise becoming narrower: the things the algorithms are showing me today are almost entirely about agentic workflows, and I need to figure out how to widen it out.

Andrew Grill:

So entering different prompts, connecting more widely and picking up some of those wider interests would probably be very useful at this point. Well, a bit of a segue. If you look at your LinkedIn feed, or my LinkedIn feed, you would think that everyone's doing agentic AI, that it's working brilliantly and people are saving all this money. What I see, and I've got the advantage that every week I'm in a different organisation, is the view from the trenches.

Andrew Grill:

The last two years I've been speaking to companies of all sizes, take the example of when we met at Dell and the customers who were there. What surprises me is that no one's doing agentic AI; most haven't even heard of it. When I mentioned it at the Dell event, probably only a few people in the audience had heard of it. Once I had a woman who said, I've got my own agentic AI agent and I'm playing with it. But here's what I see.

Andrew Grill:

There are four things that I think are holding companies back, and I'm wondering if you're seeing the same thing. The first is training. Not that you have to go and learn how to use ChatGPT, but people don't even understand what it can do, so just awareness is the first thing. The second is budget. No one in the companies I'm talking to is saying, we're going to put a large amount of money aside next year for training and AI tools. They're just not doing that. The third thing is data, which is a perennial problem; the data isn't there. And the fourth thing is processes. The processes they run today won't work any better under AI; in fact, they'll be worse off. So are you seeing those sorts of blockers stopping people from being agentic all day, every day?

Laura Gilbert:

It's a very interesting one. On your first point about people: I think a lot of people have picked up ChatGPT at this point. It doesn't teach you how to use it, and it looks almost deceptively simple. So what I've been really thinking about here, building tools particularly for political leaders, is that the way we wrap those large language models is to try and force them to ask the user what they really want. If you go in, and I did this the other day with a senior politician, and you say to, say, Claude or ChatGPT, can you tell me about North Sea oil, it will come back and give you maybe a summary. The second time I tried it, it told me about the commercial interests around it. The third time it gave me the history, and none of those prompts were different; I hadn't given it any indication of what I wanted. And if I was writing a paper and it had done that, I might go, oh well, that's all there is, and end up writing about the history. So it guides you in a way that's uncontrolled. We're trying to build tools that go the other way and say: you want to write a speech? Could you tell us about the audience? Could you tell us about the tone you're trying to hit, any points you want to include, before it runs off and uses all of that energy? We're really seeing quite a lot of that naive use case infiltrating. Government is another great one, and this is one of the first tools we built in government, when I built the AI incubator at the end of 2023.
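The wrapper pattern Laura describes can be sketched in a few lines. This is a minimal illustration, not TBI's actual tooling; the field names (audience, tone, key_points) are hypothetical. Before any model call is made, the wrapper checks which briefing details are missing and pushes clarifying questions back to the user first.

```python
# Hypothetical pre-flight check: details a speechwriting tool might require
# before spending tokens on a large language model call.
REQUIRED_FIELDS = {
    "audience": "Who is the speech for?",
    "tone": "What tone are you trying to hit?",
    "key_points": "Any points you definitely want included?",
}

def missing_questions(request: dict) -> list:
    """Return follow-up questions for any detail the user left out."""
    return [q for field, q in REQUIRED_FIELDS.items() if not request.get(field)]

def wrap_prompt(request: dict) -> str:
    """Either ask clarifying questions or build a fully specified prompt."""
    questions = missing_questions(request)
    if questions:
        # Don't call the model yet; ask the user what they really want.
        return "Before I draft anything: " + " ".join(questions)
    return (
        f"Write a speech for {request['audience']} in a {request['tone']} tone, "
        f"covering: {', '.join(request['key_points'])}."
    )

print(wrap_prompt({"audience": "party conference"}))
```

The point of the design is exactly Laura's: an unconstrained prompt lets the model guess your intent differently each time, whereas the wrapper forces the intent to be stated before generation happens.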

Laura Gilbert:

All this work kicked off, and it was to do with government consultations; there was a need there. When government does a consultation, and they do 700 or 800 public consultations a year, the average consultation returns, and here's a new metric for you, about as much text as 400 Brexit withdrawal agreements. Traditionally you have a team of maybe 25 analysts working three to six months to go through that and come up with a report. So of course we built a tool that could run through and write that report in an hour, with direct references back to all of the comments, so people aren't excluded; it's all in one place and you can find it. And it's a great example of answering the wrong problem with AI, because actually consultations are not a good way to get public opinions.

Laura Gilbert:

They're full of lobby groups, they're full of niche kinds of people who maybe have the time to do this sort of thing. You're not getting a cross-section of people's genuine, unfiltered views at all. And, to be clear, it's still a good piece of work; it saves a lot of money while you're still doing it that way. But a lot of the time we're seeing people building AI to replace workflows when what they should do is not do that workflow at all. You are perpetuating a system, a workflow, sometimes an outcome that is the wrong one, by naively putting technology on it.

Andrew Grill:

I always say to people: if you're looking at a process, ask why you've always done it this way, because AI won't fix that. It's interesting you talk about prompting, because I'll often give an example. Three weeks ago I was in a room of small to medium-sized businesses in the north of England, and they really hadn't had any exposure to these tools at all. One thing I gave them that became a light-bulb moment is: ask it a question, but then ask it to justify its answer. What it will then do is show you its working, in a way, and how it was thinking. You can then go, that's the way I would think. And the light bulbs in the room went off; they thought, I didn't know we could do that.

Andrew Grill:

So there's just this basic misunderstanding of using it like a search engine, rather than saying, justify what you're doing and give me some answers that will challenge my thinking, and, to your point, having it almost prompt me: what else do you need? What else do you need to better understand how to answer this question? If I was sitting next to an intern, and they were smart enough and had that problem to solve, they'd be asking: what's the audience, where is it, who are the key decision makers who are going to be there? When we see agentic AI, it will do some of that work, but right now it gives you an answer, sits back and says, well, I've done my job.

Laura Gilbert:

Yes, and it's really interesting to watch people go through that journey when they do that, because the way it works is just so unintuitive to people. And when you look at the reasoning as well, you get a different response if you tell it in advance that you're going to ask for its reasoning. If you get it to generate first and then say, please give me your reasoning, it's really faking the reasoning. Now, that doesn't make it not useful, because if you view something like ChatGPT, Claude, Mistral et cetera as brainstorming assistants, fantastic. It doesn't actually matter if the reasoning is real, because, as you say, you're challenging whether or not.

Laura Gilbert:

I would have thought of it that way. But when people take it at face value, they can be heavily misled and really make mistakes. So we need to change the thinking. Actually, this is one of the things we're looking at in policy generation here: can you put in an option which just goes no holds barred, go nuts, come up with wild policies that would be impossible to implement? That's very, very useful as a brainstorm for government officials thinking, actually, we really need to get this done. What can we break and what can't we? Get out of the mindset that nothing can be broken and you have to work within current constraints. So it's very useful for brainstorming, but I think that's not how people are mostly using it.

Andrew Grill:

Well, I crossed my fingers and did it live at this event I was at. It's a family-owned business that's been going for 150 years. I basically tasked it to look at the next five or six years and where they should go. I didn't know whether what it was going to show would be valuable, so I put it on the screen and said, no holds barred, does this look sensible? And they looked at it and went, well, we hadn't thought about that, and we hadn't thought about that.

Andrew Grill:

Again they didn't know they could use it for brainstorming, because, no holds barred, you're not wasting any time really.

Andrew Grill:

But back to my first point, the training: just showing people, level-setting what these tools can and can't do.

Laura Gilbert:

The kind of people you really want to use this well are often also the kind of people who are very busy. And if you're in a leadership position, you're finding a lot of people who are telling their companies: we care about AI, we're going to use AI.

Laura Gilbert:

You know, come on, everyone, skill up on AI, and they themselves are still wandering around with a notepad, and there's a culture signal there. But it also means they really don't have a mental model of what they're asking people to do, and what's possible and what's not. We find this across the board in technology, and particularly in government. It's fundamentally so easy for people to pull the wool over a leader's eyes: pretend that something takes longer than it should, or that it's more expensive than it is, or tell them a solution is easy when actually it's not, and they're going to pass that work on to somebody else who's going to have a really tough time. If you are not using it yourself, even to write you an agenda for the day or solve a minor problem, you really can't expect your company, I think, to implement it well.

Andrew Grill:

Well, that's my whole notion of being digitally curious, and you have to have that mindset at a very senior level. So back to your idea of a politician. If he or she actually did some pre-work to know what was possible, they could then brief their adviser to say: I've done a first pass, i.e. I know what I'm doing; you do the next bit and expand on that, and set it off running. I think that would be a really nice way, because they're not going to be experts at it, but they can at least start it off at 10%.

Laura Gilbert:

We're working with a world leader at the moment whose use case is very similar to that. They want really high-quality briefing outputs. They know what they want, and they want a degree of control over it, so that they're working with their staff rather than waiting for the information to be filtered to them. And that can be, having worked with a lot of government ministers, very disempowering, because for the most part, historically, a government minister gets highly filtered information: the evidence that goes into policymaking, through to information about how the department's functioning, who's doing what, whether or not there are any blockers, how severe those blockers are and at what point they're going to learn about them. All of those things are very, very heavily gatekept. You give tools like this to very senior people and it allows them to challenge the people who work for them, and in the public sector I think that's a good thing.

Andrew Grill:

I'm just thinking now about a reboot of Yes, Minister. Sir Humphrey Appleby would be having apoplexy, because Jim Hacker could actually do his own AI research and cut him out totally. That would be a lovely way to reboot that series in the age of AI. What do you think? Would those roles still exist?

Laura Gilbert:

I only watched Yes, Minister for the first time, I think, about 18 months ago, because when I went into Downing Street I didn't have an interest in politics. It was, you know, a bit of a sideways career move. And it was terrifying how much it hasn't changed at all. There was even an argument about, effectively, open data that we're still having now, decades later: what can we put out to the public, whether or not there's the infrastructure to do that, and so on and so forth.

Laura Gilbert:

I do think that, if we get it right, the adoption of AI, combined with better digital services, better data infrastructure and so on, could really meaningfully change the way that governments operate, and I think you are seeing that in some of the world's governments today. Look at the way Estonia operates: a real standout, forward-looking digital government, and I believe it's really changed their processes and their decision-making. I'd like to see that roll out much more widely. If you have more empowered decision makers who are able to cross-check and research in a way that's achievable and accessible to them, then you have a system with more inbuilt challenge and much more accountability. And accountability, honestly, is very low in the civil service, certainly in the UK. We could have something where you actually get that value for money we'd all love to see across the board. So fingers crossed, but that's very much what we're driving towards and trying to help happen.

Andrew Grill:

So you touched on it there: politics wasn't a career option for you. I'd love you to tell listeners how you got to where you are today. Your story from university to where you are now is fascinating. How do you summarise that in a few minutes?

Laura Gilbert:

Well, it's been a series of accidents, really, is probably the best way to put it. I went off to do physics at university because it's just what I was interested in. Then I wasn't quite sure what to do next, so I left and got a job. The advert didn't really say who it was for; it just said, physicists needed. I was slightly adrift, and I'd done some very boring work experience at some places I won't name, so I was slightly despondent about my choices at that point. It turned out to be in defence intelligence. So I spent a year doing that. Fascinating. I learned a great deal about techniques, and it's the first place I really did any coding as well. But it wasn't for me, for various reasons, not least because when there was an active bomb no one in the building seemed worried about it, and I thought that might not be the world I wanted to live in. So it was a very interesting space, but I then had a look around and decided to go back to university and be a particle physicist, and I really enjoyed it. I got a teaching job in Oxford very early in my career, much earlier than I was supposed to be allowed to, and really enjoyed teaching the students. It was fascinating, and I loved being a physicist.

Laura Gilbert:

But it got to a point about six years later where the government cut £80 million of funding, very surprisingly, and changed the funding council's name from Particle Physics and Astronomy to Science and Technology. It was a real change in direction, and everyone lost their jobs; people without tenure were clearing their desks out overnight. My supervisor very kindly said, well, don't worry, Laura, we've found a job for you, you're one of the lucky ones: you can go to Fermilab, which is in Chicago, and it's the middle of winter and it's like minus 50 degrees or something, so it's already not a great sell, and we'll pay you $800 a month, which was worth about £400 at that point, and you get a free room in a student dormitory.

Laura Gilbert:

I sort of went, I am nearly 30. Absolutely not. Right, so I thought I'd better do something else, and I went to the careers service, classic, and said: what job can I do where you need to have done something like a particle physics PhD to be eligible, so I haven't wasted my time? And they said quantitative finance, end of story. So I applied to some hedge funds, got in, and had a deeply interesting time.

Laura Gilbert:

It's a very different kind of science. With particle physics, you're looking for absolute proof of something: it is there or it isn't there, and you've got to prove it to five standard deviations of certainty, an incredibly high confidence. With finance, you've got a needle and you've got a very big haystack, and the haystack's on the back of a rickety camel and half of it's on fire. So it's a very different way of using data. You need information about market sentiment, how people think. You need to think about the interactions between government announcements, earthquakes and these kinds of financial instruments. And I learned very different techniques; it's the first time I really used AI.

Laura Gilbert:

Actually, the third company I went to was a high-frequency trading company, and it was interesting because it was run by people who had previously sold their last firm for 200 million and had gone off and got venture capital to come and do it again. The venture capitalists were confident because, you know, they'd done so well. And they said: what we're going to do is just hire really high-IQ people, that's mainly the criterion, and put them in groups. It was five hours of IQ tests with a New York psychiatrist to get the job, and then you were put in a small group of three, and these teams of three went off and did things. We'd got this project trying to use genetic algorithms, the idea being that you can't really predict what high-frequency markets do, so you build these algorithms, run them on past data, and the ones that succeed you breed: you literally swap code over and try the next generation, and the rest you kill. It doesn't work at all. It's absolutely awful. In fact, mostly these experiments weren't working, and the company was losing money. So I worked for three companies in finance; two of them collapsed, and the third one tanked pretty dramatically while I was there.
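The breed-and-kill loop Laura describes is a classic genetic algorithm. Here is a toy sketch of the idea (nothing here reflects the firm's actual code; the fitness function is a stand-in that matches a fixed target rather than measuring trading profit): score a population, keep the winners, swap code over between them, and repeat.

```python
import random

random.seed(0)

TARGET = [1, 1, 1, 1, 1, 1, 1, 1]  # stand-in for "a profitable strategy"

def fitness(genome):
    """Stand-in score: how closely the genome matches the target."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def crossover(a, b):
    """Swap code over at a random point, as Laura describes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=20, generations=30):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half ("the rest you kill").
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Breed survivors back up to full size, with a little mutation.
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.1:  # occasional random bit-flip
                i = random.randrange(len(child))
                child[i] = 1 - child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually reaches the maximum of 8 on this toy problem
```

On a fixed target like this the method converges quickly; Laura's point is that markets are nothing like a fixed target, which is why the same loop failed on real trading data.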

Laura Gilbert:

It was a curious statistical anomaly and nothing to do with me. And I learned a couple of things. I learned a lot about different methods for research and data, and I learned that I didn't really like finance, and a lot of that was because I wasn't very proud of it. I'd meet people for dinner as a particle physicist and they'd say, that sounds interesting, and I'd go, oh, it is. In finance, I'd have to tell people I was in finance, and I didn't feel good about it.

Laura Gilbert:

So my friend was doing this medtech startup. He had £10,000 and a killer app in his mind, and he went to an app development company and said, can you build this app? I've got £10,000. They said, absolutely, yes we can. They gave him back the app and it didn't turn on; it just crashed. So he said, well, it doesn't turn on. And they said, well, give us another 10 grand and we'll make it turn on. So I said, well, I'm pretty sure I can build this app, and he didn't have any more money. So whilst I was in finance, on evenings, weekends, on the train, et cetera, I was building these apps. I didn't think it was a very good idea to start with, but they were very simple and we measurably improved people's lives. It was targeting people who were homeless, people with multiple and complex needs, people who were unable to communicate, with very severe disabilities, and, you know, I won't go into details, but you could literally see the impact on people's lives, and it was wonderful.

Laura Gilbert:

So after this third company, there's a long story, I had an operation and was unwell for a while, and by the time I was offered another job in finance, I realised I just didn't want to do that. I joined this startup and did that for 10 years. We built it up, we took it through to a small-to-medium enterprise, and it was sold. We exited on March the 2nd, 2020, with the idea that I would go and do a bit of consulting, and of course that was just as COVID hit.

Laura Gilbert:

So a few months later, having handled the idea of the children being at home all the time and, you know, got through summer, I saw this job advertised in Downing Street. I'd had some glasses of wine and thought it'd be a good idea to throw a CV at it, and I was absolutely astonished to get it, to be honest. The role was Director of Data Science, and I hadn't realised I was a data scientist until that point. And the rest was very interesting. I'd gone from coding in a basement, because I was CTO of the medtech company, very hands-on and, you know, without a big staff, through to, well...

Laura Gilbert:

Well, dressing like a grown-up, for one thing, and walking into the Prime Minister's office every morning was quite a culture shock. My job became more about learning how to persuade and influence people, and building an amazing team, and I mean, they are phenomenal, to do that modelling, but then actually trying to get people to listen to it, which is where the TELOC study comes in. How do you get people who are very entrenched to be able to change their mind? That's one of the biggest problems. So yes, I did that until earlier this year, and then I joined the Tony Blair Institute to do something very similar but with worldwide impact, and I'm thrilled to be here.

Andrew Grill:

So, what you built for Number 10 in terms of data science: when you started there, generative AI probably wasn't really a thing, and while you were there it became a thing. How did that impact what you were doing there, and what you're doing now?

Laura Gilbert:

We were already using the early versions of large language models in a few projects, actually. There was one thing in particular that I have a bee in my bonnet about, which is maternal death statistics. As is right and good, you cannot, in Number 10, look up people's health records, so you can't just go and do a research project. What you can do is pull out the publicly available incident reports, and I think about 70% of all the incident reports where people come to harm, or nearly come to harm, are actually in maternity. We're still in a position where, for nearly one in 10,000 babies that are born, the mother dies. So if you know 100 women, and they each know 100 women... And you're four and a half times more likely to die if you're a Black woman. So I felt there was quite a lot to do there. We were doing those sorts of LLM experiments when ChatGPT hit, and it was suddenly something really different, so we were very well positioned and already knew how to work in this space. What changed?

Laura Gilbert:

Well, the first thing that happened was, I think, six people immediately declared themselves the new government head of AI, and there was quite a scramble because, you know, suddenly it was interesting. Suddenly the data scientists might get invited to the parties. It was a point at which what we were doing was suddenly of interest to people, and there was this massive scramble for people to try and position themselves, get money, really, and come up with ways to capitalise. What happened to me was two things in a row. The first was that I met Geoffrey Hinton very early on, who terrified me, so that kicked off this piece of work to try and get everybody quite worried about the safety aspects, and obviously there was then the summit and the safety taskforce, et cetera. And following that, a real acknowledgement that actually we needed to do something much more practical, because we were worrying about the risks of other people doing things in AI, and we were not worrying about the risks of us not doing it.

Laura Gilbert:

And it became very apparent. You know, if you are running a bank and you don't want to be hacked, you find yourself the best hackers and you hire them. We needed to do that with AI. We needed to find the best people and get them in the building so that they could help with the risks. But also, with the funding that's available, the NHS in particular and other services are really under threat. You can't afford to keep running them indefinitely, not the way we do now. So if we don't pick up our game and start deploying these kinds of technologies to deliver preventative healthcare that's less expensive, to get people through the system more quickly, to save money in administration, et cetera, if we can't do that, then we are just going to decay.

Laura Gilbert:

So I built this AI team up and, if you're interested, look at ai.gov.uk; they're doing many interesting projects. We adopted, and I feel very strongly about this, a mantra of radical transparency. You know, if you're building a product, the code goes out in the open. There are transparency reports. The team writes blogs on what they're doing and why they're doing it, and shows excerpts and videos, so the public really know it is designed to be in their best interest and not, you know, to restrict benefits or whatever. And it really is.

Laura Gilbert:

And the other thing we really wanted to do, and again this is still very important to me now, was to make government more human for people. It sounds really counterintuitive, but I would go around and say we're going to make government more human with AI. It's really, really important to think about how you're using this technology and what you're giving people. A really good example of this: if you write a handwritten letter into the Department for Work and Pensions, it will take 50 weeks, five-zero, for somebody to read it. If you are handwriting in, that's a certain cross-section of people. They might be very digitally disenfranchised, they probably don't have access to a lot of services, and some of them are highly vulnerable. What can happen is that after 50 weeks, when somebody gets back to them, they're not there anymore. They haven't survived. And that gives me chills every time I think about it. So there's a lovely piece of AI that reads those letters and looks for vulnerability, and anybody who appears genuinely vulnerable, somebody will call them right away.
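In outline, the triage pattern Laura describes, read every letter, score it for signs of vulnerability, and route anyone above a threshold to a human for an immediate call, might look something like this. To be clear, this is a hypothetical sketch: the keyword list stands in for what would really be a trained model or an LLM, and every name, field, and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Letter:
    sender_id: str
    text: str

# Hypothetical signals a triage model might look for. A real system would use
# a trained classifier or an LLM, not a hand-written keyword list.
VULNERABILITY_SIGNALS = ("evicted", "can't afford", "carer", "hospital", "alone")

def vulnerability_score(letter: Letter) -> int:
    """Count crude vulnerability signals present in the letter text."""
    text = letter.text.lower()
    return sum(signal in text for signal in VULNERABILITY_SIGNALS)

def triage(letters: list[Letter], threshold: int = 2) -> list[str]:
    """Return sender IDs that should get a phone call right away
    instead of waiting in the 50-week reading queue."""
    return [l.sender_id for l in letters if vulnerability_score(l) >= threshold]

inbox = [
    Letter("A1", "I have been evicted and I can't afford food; I am alone."),
    Letter("B2", "Please update my address from next month."),
]
print(triage(inbox))  # A1 is flagged for an immediate call
```

The design point is the one Laura makes: the AI does not decide anything about the person's claim, it only reorders the queue so a human reaches the most vulnerable people first.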

Laura Gilbert:

And you've made a government service human and I want to do that sort of worldwide, I think, using AI to enable people who are doing caregiving to give the care, to enable teachers to really focus on children's social development, to be able to diagnose children much earlier when they have the sorts of conditions that benefit from early intervention, and to keep them safe from harm, to give people jobs that they actually enjoy doing.

Laura Gilbert:

One of the things about the health and social care sector is that, I think, roughly one in 20 people in the UK work in it, and that's not one in 20 workers, it's one in 20 people, including children. So if you can come up with improvements in the working lives of those people so that they are happier and healthier and in a good mental health space, because it's very stressful work, then when they go home there's a knock-on effect on their families, and a knock-on effect on their communities. The kind of impact you can have by making those jobs more rewarding and less stressful and draining can reach the whole country, really, in one go. So I feel very strongly about this, and it's one of the things we've really focused on: not just automating things, but trying to give people a more human, faster, kinder service.

Andrew Grill:

Looking ahead, what do you see as the next big thing in AI? Not necessarily technology, but where we're going to use it. That we may not be thinking about now.

Laura Gilbert:

I hate these ones because it's pure guesswork; I don't know. But I'll tell you what I really think. I think there's almost a bifurcated future ahead. In one of those futures, we have a world where some people are really enabled, really empowered and, you know, really supported by AI and technology, and do very well out of it, while other people are left behind and inequality widens. More people don't have jobs; more people move into the billionaire space.

Laura Gilbert:

Or there's a world in which we can take this kind of technology and narrow inequalities, and give everybody a basic standard of care, and probably a basic standard of income comes into that: earlier interventions when they're unwell, earlier interventions when they need mental health support, all those sorts of things, and make their lives easier and safer. And it's not a prediction; I think it's a choice, and I care very deeply about that. So what I really want to see from people, from tech companies through to laypeople who are putting pressure on service providers, is for people to send a signal that they care about the second world coming true. So I couldn't do a good job of predicting the future, but I can tell you the future that I want and feel accountable for.

Andrew Grill:

I think what you're saying is the future we need is a world of ethical AI.

Laura Gilbert:

I think that's exactly right. You've summarised that much better than I did.

Andrew Grill:

We're almost out of time, so we're up to my favourite part of the show, the quickfire round, where we learn even more about our guest. Window or aisle? Window, always window. Your biggest hope for this year and next?

Laura Gilbert:

I am building a team here and I built two teams in government recently who I passionately love and respect and adored working with, so my biggest hope is that we succeed in building a very similar team here. It's going well so far, but if you walk into work every day with a smile on your face because of the people you're working with, that's a great day.

Andrew Grill:

I wish that AI could do all of my... Laundry. The app you use most on your phone? WhatsApp. The best advice you've ever received?

Laura Gilbert:

Two pieces of advice, if that's all right. Emily Lawson, who used to run vaccine delivery, is incredible. She told me that if you are in a job where you feel angry more often than you feel optimistic, you should leave it, and I think that's great. And the other one is not quite advice, but Sir Alex Chisholm, who was the Permanent Secretary in the Cabinet Office for a while while I was there, told me that to succeed in the civil service you have to be relentlessly optimistic. I think that's true of life, and I engraved it on a flask: relentlessly optimistic.

Laura Gilbert:

What are you reading at the moment? My favourite book in the world is The First Fifteen Lives of Harry August, and I'm just rereading that again quickly. Who should I invite next onto the podcast? Ed Dominguez at ServiceNow is a very interesting man. He used to work in government as a special advisor, and he's now working in public policy. How do you want to be remembered? I'll tell you what: my father answered this question shortly before he died, and he simply said, I've had fun. I would like to think that of myself, and as for how people remember me, I would like them to think that I absolutely always tried my very best.

Andrew Grill:

So what three actionable things should our audience do today to understand how we can use AI for good?

Laura Gilbert:

Practise it yourself. If you can't understand AI, you can't understand how to use it for good, so you need to get your hands dirty and try and break it; that's my top tip. Go and give ChatGPT logic puzzles; that was fun. Second, understand what's gone wrong before. Very often we have people who think the answer is about regulation and publishing ethical guidelines. It's not. When we've got this wrong before, it's been a lack of professionalism: people who didn't think through checking whether or not it responded in the same way to white people and Black people, for example. And thirdly, it's probably about your intent. It won't happen by itself. Adopting AI for the greater good is not a side effect of developing this technology; it has to have people who care about it, as we discussed.

Andrew Grill:

So care about it, demand it, get involved. Laura, a fascinating discussion. We could have talked for hours. How can we find out more about you and your work?

Laura Gilbert:

So I definitely recommend looking at my previous team at ai.gov.uk, the Incubator for Artificial Intelligence, because they are well advanced and doing amazing things. Follow me on LinkedIn; we will have some announcements coming up. We're building our first team and our first products now, and it should be fairly public very soon.

Andrew Grill:

Laura, thank you so much. I hope we speak again on many, many things.

Laura Gilbert:

Absolutely. Thank you so much for inviting me.

Voiceover:

Thank you for listening to Digitally Curious. You can find out more about Andrew, his keynote speeches and brand partnerships at actionablefuturist.com. You can order the compendium book to this podcast at curious.click/order. Until next time, stay curious.
