The Actionable Futurist® Podcast

S5 Episode 15: Stephanie Antonian on AI's love letter to humanity

May 19, 2023 · Chief Futurist - The Actionable Futurist® Andrew Grill · Season 5, Episode 15

We all know that ethics are important in AI, but beyond doing the right thing, are we actually focusing on the things that matter with the current AI tools?

While ChatGPT can write you an analysis of a Shakespeare play to help you pass the semester, should AI be used for more pressing world problems - and are we building AI on the wrong paradigm?

AI Ethicist Stephanie Antonian thinks so. I first met Stephanie at a recent Cadastra event around e-commerce, and over networking drinks, we debated these points about AI.

Having worked for Accenture, Google, DeepMind and GoogleX, she has been thinking about the role of AI in humanity for some time.

She has written a series of essays, the latest one titled: “On Generative AI: Denying the Necessary Limits of Knowledge” and asks the question: What if uncertainty was the secret to advancing knowledge?

Her thinking sparked my curiosity, so a few weeks ago, I packed my portable podcast recorder and we went for a 90-minute walk around London’s Regent’s Park to discuss these issues and more.

We covered a lot of ground (literally around 4 kilometres) and one phrase that captivated me was "AI is a love letter to humanity".

We explore this and much more in this fascinating episode including:

  • How Stephanie got started in AI Ethics
  • Stephanie's experience with AI
  • The biggest issue in AI Ethics at the moment
  • Dealing with algorithmic bias
  • The issue with AI regulation
  • Highlights of working for Google, DeepMind and X
  • Advice for graduates working in tech
  • How can AI be used for good?
  • Dealing with the hype around Generative AI and ChatGPT
  • Humanity’s problem of fact vs fiction
  • The problem with ChatGPT
  • Open-sourcing the truth to train AI
  • Should AI development be halted?
  • Stephanie’s essays
  • Love and AI
  • The role of empathy in AI
  • The link between AI and self-worth
  • The hysteria in the AI industry
  • Are we building AI on the wrong paradigm?
  • The opportunity for AI
  • The need for ethics and integrity in AI
  • Where will the next phase of positive innovation come from?
  • AI’s love letter to humanity
  • Will AI take our jobs?
  • How does AI compare to previous innovations?
  • Are you worried about AI?
  • Three actionable tips to better understand AI opportunities & threats

More on Stephanie
Stephanie on LinkedIn
Aestora website
Stephanie’s Essays


Your Host: Actionable Futurist® & Chief Futurist Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Andrew's upcoming book

Speaker 1:

Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question "What's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.

Speaker 2:

For this episode of the Actionable Futurist podcast, we're outdoors in Regent's Park in London with my guest, Stephanie Antonian. We're recording this at the top of spring in 2023, so I thought it would be a good idea for a pod walk, or a walk cast, and to discover these beautiful gardens while we talk about something that's been in the news almost constantly since November 2022, and that's generative AI and, in particular, ChatGPT. Stephanie, welcome and thanks for agreeing to come on this pod walk with me. Thank you so much for having me. We chatted after the event about your views on AI, which were quite challenging, and I thought, well, let's get some of these on tape. Your website says we are a private creativity house designing a world not yet imagined, and you have a fascinating background. Perhaps you could tell our listeners how you got started in this space in the first place.

Speaker 3:

I started my career, like most people, trying to make sense of the world and what was happening in it and why there was so much bad stuff happening and how to not be involved in the bad and to be on the side of good. And it sounds naive saying it out loud, but that was what was going through my mind, and so I started thinking that I would go into the church and that the church would know the answers. And the more I studied it, the more I realized it wasn't so clear cut. And then I did the traditional move into management consulting.

Speaker 2:

It's a novel pivot from religious studies to Accenture.

Speaker 3:

Yeah, I mean, the Bible is the best-selling book of all time, so there's a lot to learn about corporate strategy from it. Then I went into financial services strategy consulting and again they didn't have the answers. I mean, it's incredible how little banks know. And then I went into tech thinking they would have the answers, and it was just a series of exploring different industries, looking for somebody to follow, like a person or an organisation, but somebody who could tell me how to lead a good life. And sadly, I caught a lot of the shiny Pokemon cards and had to accept that nobody had the answers and that I might have to make some decisions for myself. And so then there's a big career shift in moving from looking at external things to looking internally, to work out what I was actually meant to do, what I actually thought myself and how I actually wanted to interact with the world, and so much of the grey that is in it.

Speaker 2:

So do you have a tech background, or was technology just an avenue? Or, because you were at a management consulting company that does a lot of tech, was tech sort of the focus, or did you just sort of fall into that?

Speaker 3:

I fell into it, in that when I was at Accenture, I realised data privacy was going to be a big thing when I was drawing some data flows and working on a few projects there. And then I went to the European Parliament on this youth initiative and realised, wow, no, this is really going to be a big thing, because we were having totally different conversations. And at the time there were very few people working on data privacy. What sort of year was this? How long ago was this? This was maybe 2011, 2012.

Speaker 2:

So this was before Cambridge Analytica.

Speaker 3:

Oh yeah.

Speaker 2:

Really? That blew the whole thing up and everyone went, Facebook did what with my data?

Speaker 3:

Exactly. This was well before it, when it was really niche and incredibly nerdy and not cool at all, and nobody really believed you when you were saying that it was going to be a really big problem. Then I just started focusing on that and learning more about it, and because it was such a new problem, by the time it started to hit the public agenda you actually were an expert.

Speaker 2:

So what was your view when the whole Cambridge Analytica thing happened? I think it was 2018 when the Guardian exposed it. You probably saw this coming, but what was your reaction when you read those sorts of headlines?

Speaker 3:

My reaction was, bless them, what a tough job to do, to be the one exposing it and facing all that backlash. But then, whenever the stories really break, they're just sad. They're just incredibly sad, because there's always so much more detail in the stories and you realise that it was even worse than you thought, and you see it in more detail, with lots of people bringing it together, and it's just sad.

Speaker 2:

They're sad stories really. Just as an aside, it's not about AI, but about data privacy, as we're talking about that. Do you think consumers, even after the whole Cambridge Analytica and Facebook exposé, really care about data privacy? Or was that just a knee-jerk reaction? I thought that would change a lot of things, but we seem to have sort of gone back to the way things were.

Speaker 3:

I just think there's only so much the human brain can process, and the level of complexity in the data privacy issue now is so high that you just can't put that on someone, and we shouldn't. When I go to the doctor's and I'm prescribed a medicine, I don't look at the medicine to see if the details of it are going to kill me, because I can trust that we've got institutions that will do that for me. When I go into a building, I don't look to see if it's going to crumble on my head. No, there are laws, there are regulations that mean I can just live my life, and we need that with data privacy, because even the data privacy expert cannot understand the full detail of what is happening. So why would a parent of three children have time to get a higher-than-PhD-level qualification just to understand how to search for something on the internet? The idea of putting the burden on individuals is just giving up, really.

Speaker 2:

So I want to talk a bit about your career, because you've had a — well, just to prove we're outside, we've got a crying baby across there as we're walking along.

Speaker 3:

She's upset because of what's happening with her data privacy.

Speaker 2:

So your CV reads like a who's who of tech giants: Accenture, Google, DeepMind, Google X and now your own company, Aestora. Perhaps you could tell us a bit more about your experience with AI at these leading companies and what you've learned along the way.

Speaker 3:

I think my key takeaway from working in these places is that it's never really about the tech and it's always about the people. You can have companies that do very similar things but hire totally different people, and the output and the impact is really dramatically different. Before, in my younger years, I used to be very focused on the tech, on what it means, on the implications, on the capabilities, but now I'm just a lot more relaxed. That's not how you make successful innovation, just by focusing on the capability.

Speaker 2:

And what I find also — I do a lot of work talking to sixth formers about careers, and someone says, what about if I'm not really interested in tech? And I say, well, in every industry there are so many different requirements. So data privacy or AI, cyber security, thinking like a criminal, thinking like AI thinks — you need those sorts of deep thinkers and diversity of thought. So what would be your message? Because you're not a tech expert, or you haven't studied technology as far as I know. What would your advice be to people who are saying, I want to get into tech, but I'm not a techie?

Speaker 3:

I mean, I think the key question is, what is tech? What are you defining it as? Right now, it impacts everything and it touches on everything. I mean, the advice that I give to anybody younger in this day and age, where the system is crumbling and things are changing constantly, is to move from thinking externally — I'll do this job because then I'll get that opportunity — and to start going internally. So what do you actually want to do?

Speaker 3:

What's actually interesting to you? And it might seem nonsensical, it might seem very random — maybe you want to do puppeteering, I don't know — but it's just not impossible that that's the path that gets you the breakthrough in this day and age, because maybe that's the missing link for the innovation or the tech company, in that they need to turn it into a puppet, and that's your path. So right now, because the whole industry is so shaky, it's a really cool time to be starting a career, because it's almost like we've taken all the guardrails away from you, where you can't lean on external things and you actually have to do the work to find out what's true to you and what's interesting, and there are no guarantees about what's going to happen with that in the future.

Speaker 2:

And so what are the sorts of things you focused on at those companies I mentioned? Your expertise and your interests are in ethics and integrity and privacy and all those sorts of things. Give us an example of the things you worked on at those different companies that you can talk about.

Speaker 3:

That's a really interesting question, because the type of work I've done has been very different in each of the companies. So, for example, at Accenture I was a strategy consultant, mostly working on financial services M&A deals. Then at Google I was an analytical consultant, so I was mostly looking at business strategy there too and analysing data. At DeepMind I was on the ethics team, and at X I was in a whole bunch of roles. But it's always been different, and I wouldn't describe myself as somebody who's interested in ethics or in that field. I guess the traditional topics now, if I can say that even though it's an industry that's barely been around, are the issues of algorithmic bias and fairness and economic justice and privacy — pretty much all of the biggest issues in humanity repackaged into some new form to make it look like it's new. Yeah, there's a lot of work to do, because it's everything that's ever impacted humanity.

Speaker 2:

Susie Alegre, who's been on the podcast before — she's a human rights lawyer. She was actually going to be at the Cadastra event and appeared on video. I think you two would really enjoy meeting each other. She's grappling with these problems too, and she knows the law. And yeah, with AI now coming in — I'm going to talk about some of these deeper AI things in a minute — it really does start to push the boundaries of what's reasonable, what data is out there and what you can do with it. What's the biggest issue in ethics and AI at the moment, would you say?

Speaker 3:

I think the biggest issue now is the necessity of the tech we're creating, and that can be an umbrella term for so many other issues. But the question we're not asking is, do we as a human race need this tech, yes or no? And why? That's the biggest issue, but we're not incentivised to tackle it. So instead, the industry is incentivised to talk about problems that have always existed. Racism, sexism, inequality have always existed, and society has tried to find ways to move forward on them. But that's not a good enough answer to be involved in that debate. So it's like, let's package it as algorithmic bias and let's make it an issue that only tech can deal with, where we'll ignore all the laws that have already been created to try to minimise it. We'll blow past them with loads of opaqueness so no one can really see what's happening, and then we'll make it really overwhelming and panicky and get everybody to focus on that instead of just following the laws of the land.

Speaker 2:

Well, the laws of the land — and this is something that Susie talks about — they're not able to keep up, because AI and all this new tech seem to find new ways to bend these laws.

Speaker 3:

I don't 100% agree, in that if you take algorithmic bias — and we're in the UK now — there's really great anti-discrimination law that already exists, because it's not a new issue. What's happening is that loads of the tech companies are totally breaching UK laws, but because they've repackaged it as something new, no one's going after them for it. But it's always been there. You can't discriminate on life-impacting decisions like jobs based on race or anything. It's basically just that the industry is very good at bamboozling people into thinking we've got this new issue, it's totally new, we've all got to run to it, and this is the issue of our time. And it's not. It's always been there. It's always been a problem. And there are — you know, they're not good solutions, but they're better solutions, and you have to first abide by the laws and then do something extra. You're saying the laws are there? Yeah, the anti-discrimination law in the UK is really good, and it took like 10 years to get into place.

Speaker 2:

So how are the regulators not seeing that some of the AI is breaching laws that already exist? Are they just not able to marry up that this is actually a discrimination issue?

Speaker 3:

I think it's that the industry and some very big tech enthusiasts have just done a really good job of trying to make it seem like a different problem.

Speaker 2:

They're doing a great job of that. Who unbundles that? Is it people like you and podcasts like this that basically call it out and say, hey, you should be looking at this and not getting AI-washed?

Speaker 3:

Probably. I mean, I'm not actually too worried about AI anymore. The more people get involved in this debate, the more they realise there's nothing that new about it.

Speaker 2:

You've worked for some amazing companies: Accenture, Google, DeepMind, Google X. Tell me some of the fond memories you have of working at those amazing companies.

Speaker 3:

Yeah, so I've definitely caught a lot of the shiny Pokemon cards. Working at Accenture was really fun, especially on the grad scheme, and it was where I learned to suppress everything about me and build the right skill set and learn how to make slides and build models in a way that really is robust. And then, when I went to Google, it was my first time letting go of some of that suppressed identity. When I joined Google UK, the lead was Eileen Naughton, who was just magical. She was one of the most incredible leaders I've ever met in my life, and just from who she was, it was the first time that I'd been somewhere where it was like, it's OK to be you. And that was really transformative.

Speaker 3:

And then I went to grad school, and then we spent some time at DeepMind. And then, when I ended up at X after taking a career break, it was just working with the most incredible, loving, kind people who really took it upon themselves to help build my confidence, and were the ones who encouraged me to build Aestora. My heart fills with gratitude about that time at X. They were some really cool experiences and I'm really grateful to have had them.

Speaker 2:

So for someone who's listening to this now, who's a grad working in tech, what would be the one piece of advice you'd want them to have?

Speaker 3:

You learn everything at the right time for you. So sometimes when you're on a grad scheme you're a bit beaten down, because it's quite hard and you're always learning, and you're not really — you know, maybe you're not writing essays about the meaning of life, but there's so much value in learning that skill set. So I think one of the things that I'm learning a lot more in my life is the power of timing and knowing when to sit and when to leave. And it's OK to go through times in your life where you're just building up the skills, because you actually know nothing.

Speaker 3:

And that's what a grad scheme is, and that's what's so good about the Accenture grad scheme, because you're expected to know nothing but you will know by the end, and that was really valuable. And so now, even in setting up Aestora, it's the combination of all of those skills and all of those different experiences that's helped me to be able to come up with these wacky ideas, build the client base, execute deliverables that are of a really high standard and bring people along. So I guess I'd just say, chill out, it's all going to be cool.

Speaker 2:

So let's now move on to some of the AI platforms that have been in the news lately — ChatGPT and OpenAI and all these are the flavour of the month. What do you think we're getting right now, and how can AI be used for good?

Speaker 3:

When it comes to these platforms, I guess the way that I look at it is outside of right and wrong, because a lot of the work is well-intentioned, and it's hard to know in the moment what's right and what's wrong, because we don't know how it will play out, to give such big value judgments. But what I think is predictable is the rate of self-implosion of these platforms, and I think it's totally predictable when you look at human nature and human behaviour and systemic trends of perverse incentives in the economy and things like that. And so it's entirely predictable what's happening now.

Speaker 2:

So you mentioned that they're going to implode. What's going to happen and what's going to be the trigger?

Speaker 3:

I think there is an existential risk with generative AI, but I think the existential risk is to the AI industry itself. I think that the hype of this has gone so far, and the utility of it really isn't there to justify it, so it will self-implode, with people losing faith, and we'll go into an AI winter on it. But the reason why I don't mind so much is that from that, we'll readjust to using AI for applications that are actually really important. So maybe we'll be using components from these platforms. How exactly that will look, I'm not sure.

Speaker 2:

I think at the moment it's really good for journalists to ask all these questions and write all this copy about what AI can and can't do. There was something on John Oliver, who has this TV show in America called Last Week Tonight, and he did a montage of all the newscasts going, oh, that sentence was written by ChatGPT. Now, the way I've worked out how to explain how ChatGPT works in a sentence to my mum: it basically predicts the next word in a sentence. So it's not as smart as people maybe think it is; it's just able to do things at scale. So guess what — when I typed "who is Andrew Grill" into ChatGPT last night, it told me I'd written books that I haven't and won awards that I haven't either. So there's misinformation in there. So I think you're right, we're going to have a bit of a pendulum swing where people go, oh, this is horrible. But also, why don't we cure cancer? Why don't we cure climate change through using AI?

Speaker 3:

And also, why don't we improve our filters for this information? So my biggest issue with ChatGPT is that humanity's biggest problem is the inability to differentiate between fact and fiction. That's what has led us to war, that's what's caused famine, genocide, the worst parts of human nature and human history, and so when you build a tool that makes it harder to differentiate, you're obviously on track for destruction. I mean, that doesn't take a genius to work out. And if you look at where we are now, with the internet and all the digital trends that have happened, we have so much information, but we don't really know how to curate it. So one thing that is valuable from ChatGPT and all these discussions is that it's really bringing to light that we're actually overwhelmed with information and we don't have good filters at all. Is ChatGPT the solution? No, it's not. But is it showing us how important getting this right is?

Speaker 2:

Back to your point about data privacy, where you can't expect someone with three screaming kids to understand GDPR law — how can we possibly teach the average person on the street what to look for? People are getting phishing and spam emails, and now banks are saying, hey look, this is the way you can tell if it's a scam. How do we educate people about AI? Because it seems so perfect. Well, who says it seems perfect?

Speaker 3:

I mean, my response to that was going to be, why do we need to train people and not train the AI? The problem with ChatGPT and generative AI now is that it doesn't want to tell you the truth of when it doesn't know. We've seen this message repeated throughout history, whether it's the story of Adam and Eve, whether it's research in psychology or quantum mechanics: you have to embrace uncertainty. I was telling you this story at the event, but in the story of Adam and Eve, the serpent comes to Eve and says, if you eat this, you'll know everything, and her thinking that she should know everything is what pushes humanity outside of the Garden of Eden. All we've done with ChatGPT is innovated the serpent. We're lying to ourselves and saying we're going to invent technology that's going to tell us everything, but that can't be done. So we can fix generative AI, and we can fix the platforms, if we create a backbone to the systems where we say, this is what we know, this is what we don't yet know and this is what we can't know, and we create systems that can say, hey, I don't know. Right now, the way the industry is set up, because it's selling the serpent, you cannot throw your hands up and say it doesn't know. So that's where we've got the challenge. And so I often get asked, who's going to win, Google or Microsoft?

Speaker 3:

What's going to happen with search? And it's like, I don't know why you can't see that they're both going to lose, because the problems in search right now are data privacy, phishing, exploitation, fake news. They're real problems that are ruining people's experiences. That means they can't trust what they're finding, and that's taken the technology back. And these generative AI tools don't solve that at all. So no, all that's going to happen is they're going to argue amongst themselves. Meanwhile, with the advertising model and the weird content from interacting with people, the answers are going to get worse and worse and worse, people are going to trust them less and less and less, and, lo and behold, a third party who's going to actually solve the problems of the day using AI is going to rise, and everyone's going to go to that.

Speaker 2:

So this was the crux of why I wanted to have you on the podcast, because we had this sort of very intellectual discussion at the Cadastra event, which was like, wow, someone who challenges my thinking. So just back to the issue where the AI doesn't know that it's wrong, or it's hard to tag that something is wrong, or we know it's wrong. Who decides that it's wrong? And if it's a human, how do we trust that human? Are we in a bit of a vicious circle?

Speaker 3:

Well, that's where I think we've got to collaborate a little bit more, because it shouldn't be the tech companies — they don't have to do everything. And so what I suggest in the essay on generative AI that I wrote is that we should have two open-source lists, one that's scientific truth and one that's law and social truth, and they are owned by different groups in society and they're open source for people to see, because the thing about truth is that it evolves. So, you know, we once thought the earth was flat. We don't anymore, but we don't know that it couldn't be a new shape. I mean, who knows? That's the truth about science: it's always evolving.

Speaker 2:

I guarantee that someone listening to the podcast will vehemently disagree with you and say the earth is flat. I suppose, being open source, it's the will of the crowd: if everyone says no, it's round, then probably it's round.

Speaker 3:

It's not the will of the crowd, in that it's the will of the established institutions. So we do have established scientific institutions, and there are things that we accept as fact, and the only way we progress in society is to say, this is what we accept as fact, this is what we don't yet know, this is what we can't know. Let's focus on what we don't yet know and try to move that forward. It might go back and change the fact, and we'll do that, but there has to be some type of path, some type of stepping stone to progress. The problem is that now we're debating whether the earth is flat or round. That's the debate of our time. And so what type of scientific progress does ChatGPT enable? Because it looks like it's actually taking us backwards, because we're here debating things that we actually know to be true. Yeah.

Speaker 3:

It's remarkable to me that right now, investing in progress in technology means denying science and going backwards in science, and so we should probably take a look at that chasm that we're creating and ask ourselves if we're really on the right track. And that's the fundamental point — people now are trying to trick ChatGPT into saying things that are wrong.

Speaker 2:

But I haven't really heard Sam Altman and others really say, well, this is how we're going to fix it. I think what they've done is sort of opened it up, opened the Pandora's box, and sort of said, well, how are people going to use it, and where do we need to put the guardrails in? Since they launched in November, I think they've done a lot of work to remove the ability to, for example, explain how to make a bomb. I think if you asked, can you make a bomb, back in November, it told you, and now it says making a bomb is probably not a good idea.

Speaker 3:

That is good and it's a start, but you won't solve the problems until you ask other people for help. So you ask scientific institutions, you ask policymakers, lawmakers, and you accept your role as a company operating in countries that have their own social contracts. It's actually not for them to decide what the truth is. There's not that much that we would say we know as fact. We're still talking about the low-hanging fruit. We're not talking about the actual issues of the day where we don't know — that's the truth.

Speaker 3:

Let's hold our hands up and say we actually don't know how to move forward in the way that is best for everybody in society, and we're trying to work it out. If you had an algorithm that could show you some range of the debate and help you understand that this is all to play for and changing, and is the defining issue of our time — so stop focusing on the earth being flat and let's focus on how we improve people's rights — then we'd actually be able to evolve. But that's not for the tech companies to come up with. That is for civil society to do, and the problem is so many of the tech companies are scared to really embrace allowing other people to make decisions, and they don't see that that is the saving grace for their platform.

Speaker 2:

There was a letter a few weeks ago, written by a number of prominent people, saying they want to pause the development of ChatGPT or other generative AI platforms beyond GPT-4. So what's your view on that?

Speaker 3:

I mean, I love the letter, because I think the value of the letter is to just put in writing the "I told you so", and for that I think, yeah, there is good value in it. You want your name down to say, hey, I just want this on record, I told you. But other than that, I'm not sure about it, because the people who are signing it have spent years talking about how this is going to be terrible. So I don't know what six more months is going to do for somebody to listen to them, but they have been saying this consistently now for a while.

Speaker 2:

But how do you just turn it off? I mean, who could regulate that? Because the servers are in America or in Europe, and could someone say, you need to turn this off because it's going the wrong way?

Speaker 3:

I think that we consistently underestimate what people want and what people's personal power is, and so when I say turn it off, it's just that people don't need to use it. So I don't need Microsoft to turn it off, I just need to not use it.

Speaker 2:

It's basically a boycott. You heard it first on this podcast: Stephanie Antonian is calling for a boycott of ChatGPT.

Speaker 3:

I wouldn't say I'm boycotting it. I'd say it doesn't help me in my life, so I don't see a use for it. But I'm not angry about it enough to be like, boycott, boycott, boycott. What I'm saying to you is, as just a normal person, given the needs in my actual life, this product doesn't help me at all.

Speaker 2:

Do you think there's a time it would ever help you?

Speaker 3:

Maybe, if they solve some of the bigger issues. If they could tell me what is true, at least by this person's standard, what isn't true, what isn't known — maybe.

Speaker 2:

And what has to be done to get there? Do they have to pause themselves and go, okay, let's not develop GPT-5, let's actually work on the guardrails, let's work on a platform to have these open-source lists come in there? Why aren't they thinking about it? Or are they, and they're just not telling people, because they're doing the hard work so that they become, or keep being, good corporate citizens?

Speaker 3:

Firstly, it takes time to do things. There's a reason why there's the red tape of bureaucracy, because that's how you keep democracy safe. It takes a lot of time, it's very complicated. But the other thing, which I mentioned in another essay, is about paying attention to what the economic incentives are. So right now, there is a lot of money made on ads that say things like the earth is flat. They do still make money from it, whether it's intentional or unintentional — which, to be fair, it is largely unintentional; they are trying to get that off. But fake news and misinformation and bad actors do make up quite a big amount of revenue, and so there just isn't the economic incentive to fix this.

Speaker 2:

It's the argument, I suppose, that government rakes in so much money from alcohol and cigarettes and other things that may not be great for you, that they say, well, to stop it would just be financial suicide because we get a lot of money in tax from it.

Speaker 3:

A similar argument — what's it got to give? I would say it's similar to the Twitter argument. So when Musk was saying there was such a high percentage of fake profiles — which is great, because he's made it private, so he actually can do something about it. But if that's happening on other platforms, there is a shareholder responsibility to not solve that problem, and that's where things get really dangerous. I mean, how do we change it? The markets will correct themselves, won't they? I mean, again, I'm not too worried about it, because there's only so much you can exploit people before they rise up against it.

Speaker 2:

Now, you're one voice on this, and having you on the podcast and the essays we'll talk about in a minute — is one person enough to do that? Are there multiple Stephanies around the world calling for change, and will you be able to tip the scales and have people listen to you?

Speaker 3:

There are lots of people. It's interesting, because sometimes you have to take the risk alone, and then, once you do that, you find that there are loads of people on the other side. But I don't know that it's my intention to tip the scales. It's just my intention to be a bit more truthful to myself about what I'm actually seeing and what I actually believe, and to understand, you know, personally where I want to design AI products, or what area of the industry I would want to build my own career on in a time that's so unstable.

Speaker 2:

One of the reasons we're talking today is the essays you mentioned before. You've written four essays on the website, but you were reticent about even publishing the first one. So talk me through how they came about, what's been the reaction, and what's coming next.

Speaker 3:

Basically, it was just me taking time to make sense of what I was seeing in the present. It's interesting, because a lot of people say you can't predict the future, but actually prophecy is always written in the present moment. So it's about how well you understand what's happening today, and the better you understand what's happening today, the better your implications and so your predictions are. So George Orwell is talking about what's happening today and then giving the implications from it. And what I realised was that I was actually so disconnected from even accepting what I was processing as happening today, because, even though, when I was working in AI ethics, I felt like there was something wrong,

Speaker 3:

I actually didn't have the space or time to process what that was. Like, why am I not happy here? Or why do I think these issues are bad? And it's very difficult in this industry, because there's such an over-intellectualisation of it, and everything becomes so academic and so detailed that the confusion is just so sophisticated that it's really hard to know how to come down. My biggest issue with the AI ethics industry was that it was just a bit mean. It didn't come from a place of love — or at least in my experience with it, because obviously it's a very big industry now and it's very different. But it was an egotistical thing about being the ones who set the rules and debating all day long about what people's rights should be, without focusing on the actual actions and how you impact people's lives, and it just didn't seem very loving.

Speaker 2:

Is that across the industry, or are people drawn to that? Maybe it's like why people want to go into law enforcement — they want to have a level of authority.

Speaker 3:

I mean, no, it's not true for everyone. There are also really amazing people working on it. It's just like any industry, there's a whole mix, but it happened to be, you know, where I was placed and just what felt a bit weird. But it was also that, you know, love is not a topic right now in AI, which I really do think it should be. But it's not an intellectual topic to say, well, if you want to build a company that wants to have a positive impact in the world, how positive is the impact you're having on your employees? Are they happy, first?

Speaker 2:

All the AI experts I've spoken to have said that the one thing that AI will never do is feel empathy or love, and that's probably a good thing, because we need the humans to be doing some things and checking the AI that's in there. But where does empathy fit with AI and ethics? You're saying we need more of it.

Speaker 3:

Well, I think it's really good that AI will never be able to love or feel empathy, because it's a machine, it's not a human. But I also think that that is one of the best bits about being human — our capacity to love and support others — and so we should be building AI that helps humans along the way to feel more love and to feel more empathy. But we're not going to get to that stage until we have teams of people that value their own capacity to love.

Speaker 2:

So in one of the essays you talked about the link between AI and humanity and people's own self-worth. And that's again one of the reasons we're talking, because I haven't had people talk about this in a very non-technical way. And self-worth and self-importance and self-awareness are also very important. Where's the link there?

Speaker 3:

The link is that the more you're open to love, to loving yourself and to loving others, the higher your self-esteem. And it's not ego or vanity, it's where you see yourself genuinely as part of the collective. So you've been able to let go of needing to think that you have to be exceptional to be worthy, and to do all these big displays of dominance and power for people to like you, and accept that, hey, actually I am worthy just like everybody else, and we're all connected and we're all together in this. And what you see — there are a lot of studies done now — is that the higher somebody's self-esteem, the more open they are to loving themselves and others in life, and the more connected they are to everybody else.

Speaker 3:

And what I found in my own career and in my own journey — and I'm really not speaking for other people on this — is that what was driving me to have such an exceptional CV was insecurity. Really, what was driving me behind always being exceptionally overachieving and always wanting to do more was a lack of self-worth. Things in my life imploded and I stopped working between DeepMind and X, and went into palliative care because my father was sick. It was the first time of just being able to make a decision about what I actually wanted to do in my life, not for my CV. At the time I thought I was just blowing my whole CV up, but there was now something more important, and that was a big shift in moving into who I just wanted to be and raising my level of self-esteem.

Speaker 3:

The more I work on that, the way that I view the AI industry is totally different, because I also used to think, oh, I have to work really long hours and I have to do this because the future of humanity hangs on me. And then I read this book and it was like, that's an inverted ego — who do you think you are, that you're going to save humanity? And I was like, wow, oh my gosh, that's just a very overachieving, exceptionalist ego that's pretending to be so humble.

Speaker 3:

Once I let go of a lot of those things, then I saw that there was this huge hysteria in the industry that's really nonsensical and that totally bets against humanity and everything that we're grateful for. And then I sort of went on my own personal journey of just letting go of a lot of limiting beliefs and feeling like I had to be tied to certain things. What's interesting is that in doing that, I then met lots of other people who'd been doing that and were on that path, and that's how I also ended up at X, which was the most amazing time ever, with really incredible people who have such a strong capacity for love, and so it's probably no surprise that they've created so many big innovations that have really shaped society. Thinking through all of that, the first essay that I wrote was really making sense of my own journey and what I had been through in this experience in the AI world, and what I wanted to focus on and work on.

Speaker 2:

You alluded to this before, but you were a little bit hesitant about even publishing that. You thought there might be some backlash.

Speaker 3:

Talking about that — oh, I was petrified. Because I think when people tell you about their stories of personal growth and stuff, you only get the high level, where it just sounds like people skipping through fields.

Speaker 3:

But actually it's sometimes absolutely horrific, and when I was writing it, it was so painful to write the essay. But then I had huge amounts of fear about putting it out, because I used to always write essays internally, to be like, oh, this is going to happen, or there's this issue, and I never had the courage to say something externally, because so much of my identity was in these big brands. So I was like, I'm a representative for you. And you know, like Google — I love Google, I love them so much. It's taken me a lot of time to reconcile that the way I can honour the Google founders is to take their key lessons and then apply them to something else. But there was a real fear in going against the industry. And also because, when you start talking about things like love, people do sometimes look at you like you're really silly. You know, in the midst of an over-intellectualised debate, when you start saying that it's just a bit mean, people look at you like you just don't know what you're talking about.

Speaker 2:

So where does AI fall down? If it's perpetuating this anti-love or low self-esteem, how is it doing that? Give me some examples that we can latch on to.

Speaker 3:

Okay. So I would say that right now, we're building AI on the wrong paradigm, and that's why we have huge rises in productivity but also huge rises in depression, amazing investment in health tech but also huge rises in demand for euthanasia, lots of rises in connectivity and then also lots of rises in loneliness. So what's happening doesn't really look like it makes sense in terms of progress when you start looking at all the numbers and the trends. And if we start looking at why, then my theory is that we build on the level of action. All of our innovation is looking at action, but underneath actions are thoughts, and underneath thoughts are emotions. And so the most influential scientists, like Einstein, Tesla, Lovelace — they are all talking about consciousness and something much bigger happening, and that's where you look for the big spark. We've just taken that away and made it really, really basic, and so we don't understand why, when we intervene on action, it doesn't work. It's because we have the option to intervene on emotions. When you look at some of our most popular apps, like Instagram and things like that, what the algorithms are doing unintentionally is realising that if they impact your emotions, they will hit their optimisation faster. So if you think, okay, we've got a scale from fear to love: if I make you feel guilt and shame, you're going to click more and you're going to spend more time on it. And so, because we're not paying attention to it, what we've built are all these systems that push us to negative emotions, and the problem is, once one of those negative emotions becomes our base emotion, without intervention, that's where we are.

Speaker 3:

What it means is that there's a really big opportunity for the future of AI, in that, if we recognise this is all happening, then we can just ask a paradigm-shifting question, which is: how do we build AI applications that move humans from fear to love? What do applications look like that move people the other way? And how do we build systems that help people realise their own capacity for empathy, their own capacity for love, where they can then slowly show up as more creative, more innovative and more socially focused in a way that's actually authentic and real? So I think the current AI paradigm is breaking itself, because it's not true to human nature, it's not true to humanity. But that's good — let it break, while other people are working on this new wave, which will be something much more real and valuable. And now, because of the letter, we've got the list of people to go to fast.

Speaker 2:

So the problem you mentioned before is that at the moment, misinformation pays just like crime pays, because people will click on misinformation. How do you make love pay with AI?

Speaker 3:

Firstly, I think there's nothing more powerful than love, and actually people are incredibly willing to pay to make the pain stop. When you're creating new industries, you're not going to find the existing models. It's funny, because I often get a lot of pushback like, well, you know what they say, you should invest in the seven deadly sins because that's how you generate the highest return. And I'm like, okay, cool, that's the devil. I'm not saying you're wrong, I'm just saying, is it just a given that we all agree with that? And you can have the devil, but you also need the angel, or whatever language you want to put around it. There's a duality to it that's important, but we've just blasted through the opposite and been like, double down on those sins. And it's like, oh, do we really agree? Do we want to talk about this? Maybe there's an alternative? And I think, why wouldn't we just give people an alternative and see what they pick?

Speaker 2:

So I want to tie this back to some of the other guests we've had on the podcast. My good friend Dr Lynn Gribble, in Australia — when ChatGPT came out, we did a very quick podcast about plagiarism, and I thought she'd be all over it, saying this is horrible. She said no, this has been around for years. People like cheating and getting away with things. What we need to do is teach our students and our employees about ethics and integrity. Is that a discussion that we need to have more of?

Speaker 3:

Yes. It's like, humans are amazing, and we actually want to be good people and we want to do good things, and reshifting it to what the best bits of humans are, and trusting humans, is going to be where we make the biggest shifts. For example, somebody has invented an anti-bullying tool — she's a really young, bright spark, a Rhodes Scholar who was at Harvard College. What it does is, when you send a message, if it could come off as bullying, it says, hey, this might be bullying, are you sure you want to do that? And it's those shifts, going from scolding humans to actually just helping nudge humans to be the best they can be, that are really where we're going to see huge leaps in innovation and progress.

Speaker 2:

And is it small companies, like this lady who developed the app, that are having a eureka moment? I mean, most of the big tech companies we think about all started in a garage somewhere with a crazy idea. Where's the next phase of good AI innovation going to come from?

Speaker 3:

I think it's going to come from really small, unknown groups that are outside of the incentive structures.

Speaker 3:

The other thing that I think is really cool is that in the first century, if you look at the economics of the first century, what helps Christianity thrive is that there are huge amounts of patronage money entering the system, and people are disappointed and disillusioned with the system.

Speaker 3:

They're angry, they've waited a long time, they're fed up, and patronage money comes into the system and then there's this viable person to catalyse everybody. And it's not too dissimilar to what's happening now, where people are not happy with the system, there are high levels of despair, high levels of disappointment, and also huge amounts of patronage money coming in, and they're looking for the smaller players, and they're looking for the socially focused players, and they're looking for people who can catalyse this angst and anger into something more positive and create a lot more change and focus it on love. And so I think it's the combination of everything that's really interesting now, but especially the patronage money, because money is now flooding the market and it's going to much smaller companies and smaller people that are much more socially focused than ever before.

Speaker 2:

You say that what AI is actually doing is writing a love letter to humanity.

Speaker 3:

Yes, because every time we build these systems, what it's saying is, your capacity to love is the answer, and it's the most amazing bit about you. And so we're constantly getting the same answer back. In fact, we're always getting the same answer back, whether it's any of the Abrahamic religions or any major religion really, whether it's sociology and Durkheim, or economics and Adam Smith: we always get the answer back that the answer is love, and that what makes humans special, and what is the secret to solving all the world's problems, is humans' capacity to love. And now AI is saying it too, which is why we're like, quantum, quantum, quantum — okay, forget AI, quantum. We're like, oh no, ChatGPT is telling us that love is the answer, change it, change it, change it. And now we're getting that same message from AI, which is that the secret to everything is humans' ability to love, so let's just accept that.

Speaker 2:

Is it AI's job to fix the problems of the world, and if it's programmed by humans, is this even possible?

Speaker 3:

It's not AI's job to fix the problems in the world. It's our job to fix the problems, and AI is a tool that we can use to do that. But if we look at the problems in the world — what is really the problem with hunger? We have enough food in the world to feed everybody in the world. With money, we have enough money to end poverty. With climate change, we have enough tree seeds and enough land to fix the climate. But we don't, because we hate ourselves. The core, central issue is coming from low self-esteem. So even if we wanted to build AI that could solve the world's problems, all it's going to do is work out that the way you solve the problems is to get humans to love themselves, and then they'll fix everything, because everything will flow much, much more easily in the way that it's meant to.

Speaker 2:

I always try and make the podcast actionable so people can go away and do their own research. What are some things that people should be doing now to get their head around the problems that you've brought out today?

Speaker 3:

That's a really good question. I think that anybody who's listening, who's overwhelmed by what's happening in AI, should go for a walk, like we're doing, and just look at a tree and remember that, even with AI as advanced as it is now, we honestly have no idea how this tree works. We couldn't even tell you what's going on with this tree, let alone even dream about re-creating it. That is how minuscule AI's capacity is in terms of the actual wisdom that's all around us. So you can just take a deep breath in and remember it's really not that spectacular.

Speaker 2:

You say you don't use AI, but have you been playing with the tools and has anything that you've seen surprised you?

Speaker 3:

I mean, I use AI in that I, like, use Instagram. But is that using AI? Well, I mean, it's in there.

Speaker 2:

It's in there, but sometimes we don't realise that. You mentioned before about us indirectly being programmed to make us feel bad about ourselves. I just wonder whether that was the intention, because I keep reading stories about people at Google and Facebook and other companies who don't let their kids use it because they know how bad it is. All the dopamine and oxytocin comes out.

Speaker 3:

Yeah, I mean, I still wouldn't say it's consciously intentional. I think what's happening is it's working out, on the layers underneath, things we're not setting it to, because we've also created an era where we don't want to talk about emotions, we don't want to talk about the softer skills, and we want to convince ourselves that we can just boil it all down into ones and zeros. But I don't think it's happening intentionally.

Speaker 2:

Have you seen any really interesting uses that maybe our listeners haven't heard about yet?

Speaker 3:

I need to get a headshot. Mine's like eight years old, and it's just such high pressure in the morning to have to look good enough for a headshot that could also last a decade. So I did look into doing those AI ones, but they were just a bit not quite right. But I think there will be some really cool use cases for reducing research time and a lot of the menial, boring tasks. I think that capability really is there, and it will grow as people work it out.

Speaker 2:

I've read a lot of people repeating the whole adage that AI will steal your job. I think people are now saying that AI won't take your job — it's someone who knows how to use it better than you do who will take your job. What's your view on that?

Speaker 3:

Have you ever read Bullshit Jobs?

Speaker 2:

No, it sounds like a great book.

Speaker 3:

It's really great, and it starts with an article, and it's basically saying that we should have been working a three-day week by now with the tech advances that we have, but instead we created all these bullshit industries like corporate finance and corporate law and management consulting. And it's really interesting, because it basically says, deep down, we know that they're not really needed, and so we make everybody work really long hours and pay them loads, and then we pay nurses and teachers and doctors less because they have a job that benefits society — they at least have the morality of that, so they should be punished. But it's basically that I'm not sure what will happen to jobs, because whatever happens will be deeply emotional, so we could create totally new hybrid industries or jobs that are super unnecessary, or we could go back to jobs like being a hairdresser, which is a very robust job now, more than being an AI engineer. It doesn't pay as well, though.

Speaker 3:

Well, it depends how long we think people are going to get those AI engineering salaries, and also it's just about what enough really is. So I'm not 100% sure what will happen in the job market. I mean, one thing that I'm really interested in now — and this is actually a bit of a tangent — is the Dark Ages. One of the questions that you had was about where we have seen tech like this before that's been really successful. And that question really struck me, because I was like, this isn't really new tech. The way it's being presented to us as consumers is not really new tech, because it's going into search, which we've always had. So it's just up-levelling something that we have always had. It's not like it's a new service or tool that could fundamentally change things.

Speaker 3:

And then, with what I was saying, what is the necessity of it? Where does it actually solve problems that we were struggling with? It's like, oh, I don't really know. It might add some small incremental things, but given the wealth we're talking about and the money that's consolidated into this, is that right? And one thing that's always struck me: I went to the London Museum, and there's this mock-up of London under Roman rule and then the Dark Ages, and it goes from sewers and stuff to mud huts, and I'm just like, how did that happen? It's so weird. It's like we forget that humans can just be like, no. And what happens with the Dark Ages is that people get fed up with the technical progress, because it still depends on slave labour, and so they go for less immediate tech, but tech that doesn't exploit people in the same way. And the tech that comes out of the Dark Ages is really good — windmills and things like that, horse saddles, all these things that actually become really pivotal to growth — but they're doing it on a totally different ideology, which is that slavery is bad. You can't just have manual labour being something that everybody else does and then you exploit it.

Speaker 3:

And there are some parallels to what's happening now with generative AI, where the consolidation of wealth, and the amount that people actually have to work underneath to push that wealth up, is massive, for gains that aren't that big. So why would we not be expecting people to start looking for something totally different? Are you worried about AI? No, I'm not worried about AI, because I think human nature is consistent and the way we'll react to it is consistent, and if it falls down, it's because it wasn't valuable to us, and if there are lessons that we need to learn the hard way, then we'll learn them. I think that a lot of our fear about AI does come from our fear of death and not letting things go. So, I loved working for Google.

Speaker 3:

I think it's an amazing company, I think it's incredible, but also it is a fallible thing that follows a life cycle like everything else, and that's it. It's the same with all these big businesses: sometimes we're just not willing to let things go, to see what they should evolve into. Even ChatGPT — when I say I think it's going to implode or it's not going to be successful, it doesn't mean that that wasn't what it needed to do in the course of true innovation. It doesn't mean that it doesn't still have value. So I'm relaxed about it, because there's a flow and a process to life where humans always learn and move closer towards being loving, and that's just playing out.

Speaker 2:

So, my favourite part of the show, the Quick Fire Round, where we learn a bit more about our guest. iPhone or Android? iPhone. Window or aisle? Window. In the room or in the metaverse? In the room. Native content or AI-generated? Native. Your biggest hope for this year and next?

Speaker 3:

That more people start talking about love.

Speaker 2:

I wish that AI could do all of my…

Speaker 3:

I wish that AI could reduce my self-doubt.

Speaker 2:

The app you use most on your phone.

Speaker 3:

Notes and Instagram.

Speaker 2:

The best piece of advice you've ever received.

Speaker 3:

Only accept no from the decision maker.

Speaker 2:

What are you reading at the moment?

Speaker 3:

I'm reading books about the science of time travel, for the next essay that's coming out.

Speaker 2:

Who should I invite next onto the podcast?

Speaker 3:

Sarah Hunter.

Speaker 2:

And how do you want to be remembered?

Speaker 3:

As someone who loved well.

Speaker 2:

So, as this is the actionable future as podcast, what three actionable things should our audience do today when it comes to better understanding the opportunities and threats from AI systems?

Speaker 3:

I think the key question to ask yourself is, what do you already know that you need to do in your life to make it better, but that you don't want to do, and why? And then start looking at what is out there to help you on that journey, and what AI applications there might be that could help you on that journey. Number two is, take a deep breath in and know that you're already amazing. And then, number three, go back to human nature and read the books that are not about tech, but are just about humans and how great humans are, and you can use your own experience to apply them. In life, we're never really creating anything new, because the truths are the truths; we just apply the wisdom to a new context. So even all the essays that I've written, they're not really saying anything new. They're using traditional wisdom and applying it to new topics.

Speaker 2:

How can people find out more about you and your work and also the essays?

Speaker 3:

You can find my essays on the Aestora website, and if they're interesting to you, reach out. There's a contact on the website and I would be thrilled to talk to anybody interested in them.

Speaker 2:

Stephanie, a fresh way of thinking about AI and ethics. Thank you so much for your time today. Thank you.

Speaker 1:

Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favourite podcast app so you never miss an episode. You can find out more about Andrew and how he helps corporates navigate a disrupted digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at ActionableFuturist.com. Until next time, this has been the Actionable Futurist podcast.
