Digitally Curious

S5 Episode 19: Unlocking Generative AI's Potential: Ethics, Creativity, and Impact - recorded LIVE at London Tech Week

Chief Futurist - The Actionable Futurist® Andrew Grill Season 5 Episode 19

What if you could unlock the full potential of Generative AI and its impact on your life and company? Get ready for a fascinating fireside chat recorded live in front of an audience at the offices of leading international law firm RPC during London Tech Week.

The Actionable Futurist Andrew Grill was interviewed on stage by Helen Armstrong, a Partner in RPC’s IP and technology disputes team.

The discussion examined the risks, issues, and ethics surrounding this powerful technology and the roles played by giants like OpenAI, Google, and Facebook in this rapidly evolving space. 

This episode also covers the current applications and trends of generative AI in the retail and consumer sectors and how it's already making a mark on our daily lives. 

As we navigate the complex world of AI regulation, Andrew shared his insights on explainability, transparency, trust within AI systems, and the implications of the UK Government's white paper on AI. 

The episode also touched on the challenges of IP rights, GDPR, ongoing AI model training, and the importance of auditing systems to prevent bias.

Don't miss this thought-provoking conversation as we uncover the incredible potential of generative AI, its ability to unleash creativity, and the crucial need for ethical use of this game-changing technology.

We covered a lot of ground in this episode, including:

  • Generative AI and Its Impact
  • ChatGPT’s definition of a futurist
  • What is Generative AI?
  • Why AI is so popular now
  • The risks of using Generative AI
  • Why ChatGPT so confidently provides incorrect answers
  • How ChatGPT actually works
  • ChatGPT data sources
  • Is ChatGPT that useful?
  • The “magnet of mediocrity”
  • Where is Generative AI being used?
  • The “enthusiastic always-on intern”
  • The need for critical thinkers
  • The responsible use of AI
  • Challenges and Considerations for Generative AI
  • The AI black box problem
  • The challenges for regulation around AI
  • Can we trust AI?
  • Regulation areas for AI
  • Government response to AI regulation
  • Are you involving your risk department around AI?
  • Recruitment considerations for AI teams
  • The future of Generative AI
  • Enterprise AI Implementation
  • EnterpriseGPT challenges
  • Will AI provide us with more free time to be creative?
  • Actionable items for tomorrow
  • Your two tribes and the opportunity for a hackathon
  • Why AI comes at a cost
  • Is your data “AI ready”?
  • Will AI replace human creativity?
  • Adobe’s AI products
  • Accenture’s use of AI generated imagery in a report
  • Generative AI will drive more creativity

Audience questions included:

  • Who is responsible for ensuring AI training data is valid?
  • Will AI disrupt or strengthen the economy?
  • The environmental impacts of Generative AI
  • The difference between human emotional intelligence and AI

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious

Speaker 1:

Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question "What's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.

Speaker 3:

My name's Helen Armstrong. I'm a partner here in the IP and Tech team. We couldn't let London Tech Week pass by without a session on generative AI. Being a litigator by background, I thought what better way to do that than to cross-examine (I mean, gently chat with) an expert in the field. So I'm very pleased to be joined by Andrew Grill. He is a former IBM Managing Partner, a trusted technology board member, and I'm pleased to say that he is also the host of a top-rated technology podcast, which we might even feature on this chat.

Speaker 4:

We're actually on the podcast right now.

Speaker 3:

So, as an aside, I wondered what a futurist was, because you are the Actionable Futurist on your podcast. So I asked ChatGPT, and the answer that it gave me came with the following warning: "It is important to note that futurists do not possess magical powers to predict the future with absolute certainty." So there we go. ChatGPT has already given a disclaimer. But, joking aside, Andrew has a huge amount of knowledge and experience in this field, and we're really grateful to him for giving up his time this afternoon to talk with me. So I think we should probably get started, as I think we're going to have a lot of questions at the end. So we'll get cracking. We'll start with the basics. What is your definition of generative AI? What is it, and what is it not?

Speaker 4:

Well, generative AI is not new. It's actually been around since around the 1960s, in very early versions of chatbots. Around 2014, the technology became better and it could start to resemble humans in image and in video form. But in 2017, Google actually developed a thing called a transformer. So when you hear ChatGPT: hands up, who knows what the GPT stands for? It stands for Generative Pre-trained Transformer. That's very techy speak, but basically it generates things, it's been pre-trained with lots of data, and the transformer bit is where it actually works that out. What Google managed to do was work out how to massively scale this, so you can actually look at lots of data in real time and in parallel. And of course, we know that then became the work that OpenAI did with GPT versions 1, 2, 3, 3.5 and now 4.

Speaker 4:

But generative AI isn't all of AI. There are many other systems that use AI. If you unlocked your phone today, you're using AI. If you've had a Spotify recommendation, you're using AI. If you're looking at a Netflix recommendation, not only is it using AI, it's starting to use generative AI to show you a picture that you will like, and it'll be different to Helen's picture. So you're actually seeing this in everyday life. So it is transformative, but it's not new.

Speaker 3:

Now, AI has been around for a long time, as you say, but there's been a real explosion of interest. It is everywhere, you know. Journalists are talking about it, musicians are talking about it, politicians are talking about it, activists are talking about it. What do you see as the trigger for this recent increase in activity and interest in AI, and generative AI in particular?

Speaker 4:

I know we've got Slido, but let's do some market research here. Hands up if you have used ChatGPT. If you haven't, I'm not sure why you're here. Keep your hands up. Keep your hands up if you use it on a daily basis. So, a few people. That's interesting. So I think it's fair to say that while AI has been around for a while, ChatGPT has been the firestarter. Who has heard of a comedian called John Oliver? He's an English comedian in the US. He recently did a montage of newscasters basically going "that sentence was written by ChatGPT".

Speaker 4:

So it's now in the news. My parents live in Adelaide, Australia, and I spoke to them at the weekend and they were talking about ChatGPT. I said hang on, hang on, where did you hear about this? It was on the news. So finally it's in the news. Everyone's talking about it. But why are they talking about it? Because finally non-technical people have access to AI systems. We all know how to send a WhatsApp message or to use a chatbot.

Speaker 4:

The friction has been removed. Two or three years ago, if you wanted to play with an AI model, you'd have to be a developer, you'd have to write a Python script, all this stuff. But what OpenAI has done is made it incredibly easy and removed the friction, and now everyone can try it. There are these charts that show how long it took products to get to 100 million users, and ChatGPT got there in two months. So 100 million people plus have played with it, including most of the people in this room, and it has opened up the possibilities because you can play with the AI models. But importantly, and we'll get onto this, it's also exposed the risks, the issues, the ethics that are now, thankfully, front and centre. But I'm here to assure you, even though futurists get it wrong, we will not all be killed by AI robots in two years. That is not fact.

Speaker 3:

As a lawyer, I like the fact you brought up the word risk, and we certainly will move on to that later. I think it's really interesting actually watching how many people put their hands up. I would be really interested to know how many people have used ChatGPT in their personal life versus how many people have used it in their professional life or are using it within their business. I don't know if we can have a show of hands for the latter. So, if you're actually using it within your business at the moment? Interesting. So we've got a few people, but certainly not as many as are using it every day in their personal life.

Speaker 4:

But are you aware that if you use it in your business environment and you put confidential information in there, one, it's not GDPR compliant, and secondly, it stays there? There is a slider you can actually turn on and off to say don't capture the chat, but they still keep it for about a week or 30 days for training and monitoring purposes. Samsung got in trouble because some of their people started to put code and board minutes into ChatGPT wanting to summarise them, and guess what? That is now in the training model. So be very, very careful what you put into these models. GDPR still applies.

Speaker 3:

Yeah, and I guess, if we're talking about the risks of using something like ChatGPT, I'm sure a lot of people read, I think it was on BBC News, about the lawyers in the US who actually used AI to prepare some court documents, including referencing cases that were, in fact, entirely fictitious, and when the judge asked them where they'd found them, they had no answer but to say, oh, AI made them up. I think it's called hallucination, essentially. I know we're aware of it, but can you just explain why AI hallucinates, and why it so confidently states things which are actually completely false?

Speaker 4:

Let's just go back to how ChatGPT works. I'm going to explain this in one sentence. All ChatGPT does is predict the next word in a sentence. So if I ask you who was the first person to land on the moon, you would probably confidently say the next two words are going to be Neil and then Armstrong. So all that ChatGPT and these systems do is read billions and billions of words and then look at patterns. And so actually, when ChatGPT is giving you an answer, it has absolutely no idea what it's typing. It has no idea of the meaning, or whether it's right or wrong, and it says it with confidence as well. The hallucination is when it is actually trying to match things up and it gets into a bit of a loop. There was another story in the New York Times where a journalist was chatting overnight with one of the models and it started to say, you should leave your wife, and I love you. And it kept saying I love you, and it just wouldn't get out of this loop. The only other answer it could find that matched the pattern was "I still love you". So that's when it starts to hallucinate, and the challenge is the data that goes in. So, show of hands: who knows what data was put into ChatGPT to train it?
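[Editor's note: the "predict the next word" idea Andrew describes can be sketched in miniature. This is a hypothetical toy bigram model, nothing like the transformer behind ChatGPT, but it shows how prediction from observed word patterns works, and why such a model "has no idea" whether its answer is true. The corpus and words here are invented for illustration.]

```python
from collections import Counter, defaultdict

# Tiny corpus of observed text; real models read billions of words.
corpus = (
    "the first person to land on the moon was neil armstrong "
    "neil armstrong walked on the moon"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, with no notion of truth."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("neil"))  # "armstrong", purely from pattern frequency
```

The model answers "armstrong" only because that pairing is frequent in its training text, not because it knows anything about the moon landing.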

Speaker 4:

We think there are five sources. OpenAI have not been open about that. We think the first one is the open internet, via Common Crawl (commoncrawl.org). You can go there and download all of the internet for free, so there's a lot of rubbish in there. The second source is Reddit links: there's a website called Reddit, and links with more than three upvotes went into ChatGPT. The next two sources are collections of books, including unpublished books. And the fifth one was Wikipedia. So when you look at those five sources, they're not all completely qualified. Now, in OpenAI's defence, what they did do is hire 40 people from Upwork to go through and do a series of tests; that's called reinforcement learning. They would ask it questions, and the 40 people would then go and say, is this a good answer or a bad answer? Now, importantly, those 40 people were sourced from diverse backgrounds. You don't want people just like me answering questions about ethical or political bias, because I will have my own unconscious bias built into that. We'll get onto this, but part of the challenge is, if the data is questionable and you haven't checked it with a diverse range of humans, you're going to have this hallucination problem.

Speaker 3:

So let's cut to the chase. Is it really that useful if we can't rely on it?

Speaker 4:

OpenAI launched something to the market that was a good test; it wasn't quite ready. Now Microsoft (although they're investors in OpenAI) and certainly Google and Facebook have had to scramble and go, we can do that too. But Google and Facebook have more to lose. In fact, when Google launched Google Bard, they had a video that explained what it could do, and one of the questions was about the first satellite to view moons outside of the Earth. It got it wrong, and Google's share price dropped $100 billion in a day. So the risks to Google of getting it wrong are quite high.

Speaker 4:

So is it useful? I think it's an incredible watershed moment. What OpenAI has done allows people in the room to play with it and test it. It means that we're going to see the edge cases: what is it that works and what doesn't work? Initially, I think you could actually say, how do you create a bomb? And it would tell you. Now it quite confidently says making a bomb probably isn't a good idea, and here's a number to call to get some help. So, is it useful?

Speaker 4:

I think there are lots of better things we could use AI tools on, like curing cancer and climate change, those sorts of things, but at the moment it's a bit of fun, and I'm sure you've all played with those image generators to make different images of yourself and your friends, and those sorts of things.

Speaker 4:

What worries me, though, is that we're going to see what one writer called the magnet of mediocrity: everyone's going to be using ChatGPT, and all the answers are going to be fairly similar. So how do you then rise above the noise? In fact, I was talking before to one of my colleagues about an example of a law firm that was looking for some interns, and 80 out of 100 people who wrote in had used ChatGPT, and all the answers were the same. Apparently, they've been told never to apply there ever again. So what worries me is you're going to have all these people using ChatGPT to write awful content, flood the world with awful content, and we're going to go, why have we done this? So I'd like you to play with it. I'd like you to then think of things we could do with it to actually improve humanity.

Speaker 3:

So it's about understanding the limitations of the technology and really ensuring that we increase creativity, that we don't allow it to stifle it, because obviously it's trained on data that's previously been produced. It's not itself producing entirely new creations, as it were.

Speaker 4:

That's the whole point: it's generative, so it has to have had something to work on. To know that Neil Armstrong is the right answer, it had to read that somewhere. Famously, Getty Images has been suing Stable Diffusion, one of the image generation companies, because Getty Images' publicly available stock imagery has a big Getty watermark. And guess what? When they generated their own version of Abraham Lincoln using that, there was a stretched Getty watermark. And so Getty said, that's not cool, because you're using our proprietary IP rights in generating something else. So that's a problem as well.

Speaker 3:

So I guess, following on from that, where are you seeing this technology being used already, and what are the trends that you're seeing out there?

Speaker 4:

Some really cool examples. If you have to create lots and lots of training videos, you'd probably go and hire some of the guys at the back with the cameras. You'll set up a green screen. You'll have a presenter read things off the autocue. They'll get it wrong. It'll take half a day.

Speaker 4:

You can now use synthesized video presenters. In fact, in my talks I start off being introduced by a synthesized lady who actually introduces me onto the stage, and it's almost there. There's a company called Synthesia, a UK company. You can upload a photograph and it will animate your eyes and your lips, and it'll look like you're talking. It's almost there. But there are some really interesting uses like that. Take the Netflix example, where you've got to mass-produce different variations of imagery and creative; that can be useful as well. One of my clients sells DIY products. Rather than going and photographing every single model they have from every single angle, they could use generative AI to replicate that and have synthetic photography. The challenge there is: do I believe that what I'm seeing is a real photograph, or has it been synthesised by AI? And that is going to be an ethical issue as well. Do we disclose that what you're seeing has been created by an AI model?

Speaker 3:

Interesting. And so what are the opportunities, particularly in the retail and consumer space?

Speaker 4:

Well, part of it is inventory. Again, if you've got to do lots of photoshoots, you can start using it for that imagery. We talked before about the metaverse: you can combine it with virtual try-ons. If you're in the retail space and want people to try out what they might look like in certain clothing, it can be used to generate that as well. SEO: if you have to run search engine optimisation, it can be used to write great copy that's actually going to resonate well, and you can test it multiple times.

Speaker 4:

Writing reports: Microsoft are now starting to launch a thing called Copilot. I saw it demonstrated at London Tech Week on Monday, on stage. You can start from a blank page and say, okay, I've got to write this report; why don't we feed it the meeting minutes from last week and a product brochure, and it will then in real time create the first draft of that report. Now, I liken ChatGPT to an enthusiastic, always-on intern. He or she does a great job; they're so enthusiastic, they're so confident, but you wouldn't give that work raw to a client or a judge, or publish it on the internet. So I think it's a great start.

Speaker 4:

So a lot of this generative AI can be used for a great first draft, as I say, for getting the ideas from a meeting onto the page and then bringing them to life. So part of it is actually playing with it: playing with imagery, with video, with text, with music. You mentioned music before. I went to an event a few weeks ago where the music had been generated, the lyrics, the notes and the melody, by AI. It was awful. It just had no soul. So that's the thing, the mediocrity about this. You actually have to say, it's a great first draft, it's good for research, but I'm going to apply some critical thinking on top of that to make it something I would want to give to a client.

Speaker 3:

So we're not all being done out of the job just yet.

Speaker 4:

No, but what this will mean is that everything is going to be the same. In fact, here's a good test: if you want to see what average looks like on the web, type something into ChatGPT. What it spits out will be average. So don't do that, and certainly if you're applying for a job, don't do that, because it'll be the same as everyone else. I think what's going to be really important for students coming through is the ability to be critical thinkers, to evaluate things. Your years of training as a lawyer, the presence you've gained being on your feet in front of a magistrate or a judge, knowing how to react: that can't be replicated with AI. Where you have empathy, where you have feeling, where you have to have critical thinking, I don't think that will ever be replaced by AI. It might get close to it, but what we're doing now I really think can't be replicated by technology.

Speaker 3:

So it's the emotional intelligence behind it.

Speaker 4:

Yes, and that's one thing that AI will never do. All the experts I've spoken to on my podcast say that AI will never love and it will never feel empathy. So if your work relies on those two things, you're okay.

Speaker 3:

But it will repeat "I love you" over and over again, if you program it to.

Speaker 4:

It'll do anything you want.

Speaker 3:

So we can't talk about generative AI without looking at the responsible uses for AI. Can you just elaborate on that a bit more?

Speaker 4:

Yeah. So we have a real point in time here where we have to think about the impacts. If you look at what banks do, they have model risk groups that basically develop models, test them and then put them into production, because they know that the FSA is going to sue the pants off them if they get it wrong. What we need to be worried about, though, are bad decisions made by generative AI models. Right now, in my home country of Australia, there's a Royal Commission into robodebt. The government, back in 2015, decided they would run an AI over welfare recipients: had they been overpaid, and if they had, by how much, and we're going to send the debt collectors after them. But what they did was use average income rather than the income people actually received on a very spiky basis, and so the Royal Commission is saying, you actually had some very bad decisions because you used AI. Now, if I have a bad Spotify recommendation, that's okay, but if I'm denied credit, chased for a debt, or have to leave the country, someone's got to be able to prove, and I'm sure across the other side of the aisle you're going to say, can you prove what was the input to that model and what was the output?

Speaker 4:

The problem with AI systems is they are a black box. We don't know exactly how they work. So you're going to start to see legislation around explainability: can you explain how it works? If someone's income is this, why was that decision made? You have to have a thing called observability: when the model is running, can you watch to see if it starts to hallucinate? Is it actually going off the rails? And the other thing is transparency: can you actually explain and publish how this is going to happen? So the challenge for the regulators (and I know the next session is talking about regulation) is: do we regulate the tech, or do we regulate the use of the tech? I think there's a fine balance between the two, because if you can say, yes, this is why the decision was made, there are existing laws that cover it, around discrimination and all those sorts of things. But because AI is this black box, we go, well, forget about GDPR, we'll think about human rights law and we'll just see what AI does next.

Speaker 3:

It's really interesting because you mentioned explainability and transparency there. I think it all comes back down to trust. We have to be able to trust the AI. If consumers don't trust the AI, they won't use it, so they won't engage, and I think that's what's so important, and that's why it's obviously been brought up in the government's white paper as one of the five principles that are set out there. So, moving on, I think obviously we're going to talk about regulation in some detail, but could you highlight some of the issues that we face?

Speaker 4:

We've already covered a few of them. The Getty Images one was interesting: the issue of IP and rights. Because the open internet has been used to train ChatGPT, there will be information that is public but is owned by someone else, and so how do we actually manage that? And even on GDPR, the right to be forgotten: if you put into ChatGPT "who was Andrew Grill", it gets some things wrong. It gets them wrong in a nice way. It says I've won some awards and written a book, which are both not true yet, but maybe that's being a futurist. But basically, how do I then go to OpenAI and say, that information is wrong, retrain your model? So that's going to be difficult as well. Then there's the issue around copyright and IP. At the moment, if you're in the US (and you'll know more than I do), only a natural person can be given IP rights. So if you're generating this with AI, where's the line between "I typed the prompt, so I own the rights to the thing that's developed" and "it's been developed by tech, so you don't own the rights"? And that's going to come into play as we are mass-producing, mass-generating these images and videos and music. Who owns that? No one's got that right yet. What's interesting (and maybe the last panel will cover this) is that the UK, back in March, brought out a pro-innovation AI white paper. In the last few weeks, Rishi has said, maybe we need some guardrails. And what do you think, Sam? So it is evolving.

Speaker 4:

I'm writing a book at the moment and I don't know where to start and stop, because every minute something changes. I started writing before ChatGPT existed; when I publish, it will be ChatGPT 6 or 7. The regulations will change, so how do I write a physical book with that in there? But these issues will not go away. The challenge with every technology that's been released, and I've seen it whether it be the metaverse, Web3 or IoT, is that the regulators can't keep up. They don't know all the intricacies. They need industry to help bring them up to speed. Sam Altman, who is the CEO of OpenAI, has been doing a world trip. What I'm reading is that he's telling everyone that we might be killed by robots in a few years, so regulate against that and forget about all the other stuff that's happening near term. But issues around rights management and those sorts of things are paramount, and everything you're doing today you should be running through your risk department. Is your risk department even up to speed with the fact that this tool exists in your organisation? Because someone's going to get sued.

Speaker 3:

Yeah, and I can certainly say that we are seeing it already. Clients are coming to us asking about the IP rights, in particular who owns the output from a generative AI model. There are also confidentiality issues, obviously, with what goes in. And then, down the line, we're thinking about what the disputes will be in relation to the ongoing training of the model, because my background is in disputes arising out of large IT project failures. Often the testing in those projects is a one-off test at the end of implementing the software: does it work, or does it not?

Speaker 3:

Actually, the problem with AI is that it keeps changing as it trains, as more data is input. So it's not a question of just deciding at the point of implementation, is it working, but actually, in a year is it working? In two years is it working? How are we going to deal with that, both in the contracts but also just in practice? How are we going to audit the system continually to make sure that those anomalies aren't coming up? And not just the false output, but the bias that might be there, the underlying bias, and how are we going to stop that? So I think those are important things that we, as lawyers, are all grappling with, at the same time as everyone in the business. So I think you're spot on.

Speaker 4:

Well, if you work in HR and you're recruiting for roles in AI, AI technicians, model trainers, are you recruiting people with a diverse enough background? These roles become really important because these people have the keys to the kingdom, and when they've trained the model, as you say, and set it off to work, have they trained it with bias built in? Famously, years ago, Google built some image recognition systems that were not able to recognise people of colour, because the developers, who were white males, never thought to train them on people of colour. So there was an unconscious bias built in. They went, oh, forgot about that.

Speaker 4:

I was having a discussion last night at a networking event. You need the grey hairs and the iPhone babies to basically be talking together and actually understanding both aspects, because with any use of the technology there may be some unintended consequences we haven't thought about, and different generations will actually pick those up. So the model training, and that's probably something not many of you have thought about, is incredibly important, and lawyers need to be across it, HR need to be across it, the board need to be across it, because this could go horribly wrong, especially if you've got lots of people or money at risk where decisions are being made using AI. And there's the data issue as well.

Speaker 3:

So you've got your data scraping and all of the issues that come with that. The government in the white paper has mentioned that you still have to comply with the GDPR. You can't just say, because the AI model did it, I'm not responsible for any breaches that result. So I think everyone needs to be very conscious of the fact that, while the legislation wasn't made with AI in mind, they still need to comply with it at the end of the day. So, you may not have magical powers, but you are the futurist here. What is the future of generative AI?

Speaker 4:

So I think it has a bright future. At the moment we've got the training wheels on; we're playing with it, we're finding silly things to do with it. But imagine your boss comes to you and says, Andrew or Helen, what was the best performing product from this time last year, and what should we release for next season? You go away and get the data, you might ask your intern to do that, and probably hours or days later you come back with an answer. Imagine if you had ethically and legally trained a generative AI model with all of your company's data. You could ask it the same question and it comes back in seconds. So I think the power of generative AI is going to be enterprise GPT, and it's happening already.

Speaker 4:

I saw a demonstration the other night where you can actually load data into it and it starts making sense of it, and that is a different type of training. But it means it's firewalled, it's your data, it's secure. Imagine having everything in there, not just product brochures but matters you'd worked on and everyone in the company's information. Who is the best person to talk to about AI here at RPC? You could probably ask lots of people, but imagine a chatbot coming back to say, here's the person, here are the sources for why we think this is the person, and here are their contact details. So that is the future. The challenge, though, is that it's expensive. It's about ten times more expensive to do a generative AI query than a search query, because of the computational power needed. So right now, Microsoft is having to ration their AI servers, saying you can't have as many as you need because we don't have enough to go around. So enterprise GPT is the future. I think we're seeing it in pockets. It's expensive, it's hard to train.
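[Editor's note: the "best person to ask about AI" example Andrew gives typically starts with a retrieval step over a company's own firewalled data. The snippet below is a hypothetical, massively simplified sketch using keyword overlap; the names and documents are invented, and real enterprise systems use vector embeddings plus a language model to compose the answer and cite sources.]

```python
# Invented, firewalled "company knowledge": a name mapped to what they do.
documents = {
    "alice": "Alice advises clients on AI regulation and generative AI risk.",
    "bob": "Bob handles commercial property disputes.",
    "carol": "Carol writes internal training on AI model governance.",
}

def relevance(question, text):
    """Score a document by how many of the question's words it shares."""
    q_words = set(question.lower().split())
    return len(q_words & set(text.lower().split()))

def best_source(question):
    """Return the person whose document best matches the question."""
    return max(documents, key=lambda name: relevance(question, documents[name]))

print(best_source("who knows about AI regulation"))  # "alice"
```

The point of the sketch is that the answer comes with a traceable source (which document matched), which is exactly the "here are the sources for why we think this is the person" property Andrew describes.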

Speaker 3:

When we get that right, I think everyone in this room will be typing queries in and delighting customers in seconds, not weeks. And actually, if we use AI to do some of the jobs that are more mundane, shall we say, it frees us up to be much more creative and to spend that time really thinking about things and understanding what else we could be doing, rather than just doing the same thing every day, which is very exciting.

Speaker 4:

You say that, and I have two views on it. Yes, I totally agree with you, and I want AI to fix the monotony of my life when I have to renew things and buy things. But when we got these phones, we were told they would give us some freedom, and yet everyone has one with them and has probably checked it multiple times already. So that's the challenge, and one of the guests on my podcast talked about needing the freedom to think. Steve Jobs famously said many years ago that if Silicon Valley had had all of these social media networks 30 years ago, nothing would have been developed; we would all have been playing with them. So yes, this will free up some creative time, but can we use the AI to say: no, stop looking at that, don't do that, that's not an important task?

Speaker 4:

What Microsoft's Copilot will do in your email is basically say: let's summarise what you need to do, you need to respond to this, and here's something I prepared earlier. So I think you're right, it has the promise to free us up, but because human beings can be a little bit inquisitive and curious, we may get distracted. Even this morning I got distracted by different things before I got here. So it has the promise; we just need to avoid the attention deficit and keep the freedom to think.

Speaker 3:

And then, finally, your podcast is called the Actionable Futurist. So what actionable things should the audience, and all of us, be doing, say tomorrow, to make the most of this new technology?

Speaker 4:

So keep playing with it. If you've just had a dabble with it, keep playing with it. I often ask my clients to use it for two purposes: type something work-related, and type something to do with your personal life. I did a session a few weeks ago and gave my clients some homework. I said, before we meet next week, I want you to sign up for an account, because not all of them had one, or it would be blocked by the firewall at work. Just sign up, and type something in to do with your personal life. And one of my delegates said: I asked it to write a poem about my dog, and it was actually really good; it captured the nuances, how fluffy the dog was, and so on. And I asked how long that would have taken her to do herself: probably most of a day. And it did it just like that. So why did I do that?

Speaker 4:

Because you then lean forward and go: ah, that's how it could be used in my personal life. You become what I call more digitally curious, and you then say: well, let's actually look at what imagery can do. Could we do something with music? Could we do something with our product department? By playing with it and becoming more digitally curious, you then want to explore and understand more of what it can do, and understand how roles will change. I had one podcast guest say that the role of the developer will probably morph into more of a product manager, because the code will be written for them; they are then managing their creation and becoming more of a feature person, rather than worrying about how the code works.

Speaker 4:

And the other thing is, in every organization you actually have the answers already. You have two tribes. You have the going-digital people like us, who have been there for a while, and you have the born-digital.

Speaker 4:

Their first toy was an iPhone. They live and breathe this stuff. And what I often ask people to do is hold a hackathon. You all know what a hackathon is: you get in a room like this and you look at three or five key business problems. And what happens is the born-digital and the going-digital both have points of view which are very, very different, and you can probably solve these things in an afternoon. So embrace the people you have in your organization, because the dirty little secret is that AI is not intelligent on its own. It relies on humans to train it and set it to work, and that creative thinking is so important. So I want you to embrace and use all the people you have in your organization to come up with other ideas, because the answer isn't always "we'll use AI", just as the answer hasn't always been blockchain or the metaverse. It might just be good old getting around the table and solving the problem.

Speaker 3:

So it's a bit like Jason was saying in the first session. It's really thinking about where we can use AI and where it would be helpful. It's not just a case of going: oh, we want to use AI because everyone's using AI. And I've heard "AI washing" being thrown around out there. So it's really thinking about where we can usefully use AI, and then where we do not want to use AI.

Speaker 4:

Because it comes at a cost: the cost of training it, setting it to work, buying it, the risk issues, all those sorts of things. It is not free. You have to pay for ChatGPT to get more features, and eventually there won't be a free model; you'll have to pay for more of it. So look at the cost benefit: is there a value exchange there? But also look at what you can digitize already. Are there processes that you've been dying to put into a digital format? Can you do that?

Speaker 4:

And I know we're out of time, but think about the data you have. The challenge with these AI models is that you have to train them with data that makes sense. So think about the data you have, the data you need and the data you'd like, and where you might get that from, because you may not have data that's actually AI-ready. It could be a moot point that we want to do AI, but our data is all in spreadsheets, and there's no way we could train an AI system on spreadsheets. So it comes at a cost, but it has huge benefits.

Speaker 3:

Well, I'm sure we've got questions out there in the audience.

Speaker 5:

Thank you very much. Very exciting topic, very exciting conversation. I would like to come back to the creativity topic, if I got Andrew right. If I got your point right, you think, and not only you, many people think, that AI will not replace human creativity. And at the same time we were saying that we do not know what exactly happens inside the box.

Speaker 5:

Yes, it's a black box. And when we speak about human creativity, the main challenge, as far as I know at least, is that it's also a kind of black box; scientists are still working to figure it out. So could you maybe explore that a bit more: why are we so sure that AI is not creative, that it's not replacing humans to a certain extent, and that it cannot be creative itself?

Speaker 4:

Lawyers love precedent, so let's go back to some precedents. When the telephone came out, we thought it would make our lives easier; we're now doing more and more things. The same when the lift came out. There are lots of precedents where we thought these jobs would be wiped out, but they haven't been.

Speaker 4:

Let's look at creativity and doing imagery with generative AI. Now, I'm not a graphic artist; I use things like Photoshop. Adobe have now started to put into their Photoshop product a feature called Generative Fill. You can literally circle an area.

Speaker 4:

There was an example they had of a picture of a woman on a bike in the desert. What they did is they lassoed an area and said: we would like yellow road lines put in there. And it went off, generated the road lines and inserted them into the image straight away. You still need some creativity to come up with the idea for that, and if you use some of these tools, like Midjourney or Stable Diffusion, you have to type in a prompt. There's a really interesting use of this from a company called Accenture; everyone knows Accenture. They did a report on digital trends, and littered through the report they used imagery that corresponded to parts of it. What they did, though, is they said: here's the text that generated that image, and here are the additional prompts we needed. Their prompts included terms I'd never heard of, things only a graphic artist would know about. So I think part of it is we have to have the idea in the first place. Generative AI generates from something that's already there, so the creative spark has to be there. It can make it a lot easier, though. Take the podcast we're on right now.

Speaker 4:

In the old days, I would get the audio tape and literally splice it together and do all those sorts of things. I don't have to do that now; I use Adobe Audition. All the breath noises, all the mistakes that we made, will be gone using AI and tools like that. I still have to be creative and think about how I want my audience to experience it. So I don't think creativity is going to be stifled; I think it'll be enhanced.

Speaker 4:

Which graphic designer these days draws with pen and paper for a campaign? They use a tool. And on your other point, about creative thinking, I think it will actually open things up. Some of the examples I've seen have been mind-blowing in visualizing how things might look that we've not even thought about. So I think generative AI will actually spawn more creativity. Everything I've seen, and my personal view, is that it's not going to stifle creativity; it's another tool to make humans even smarter. Where we get to worries me, because at some point, with the neurons in here, we can't be any smarter than we are, so at some point we will saturate with all these tools around us.

Speaker 2:

In your opinion, Andrew, if AI has the potential to be more efficient or, quote-unquote, "better" than humans, who is responsible, in your opinion, for ensuring the data that we key in to train AI is, quote-unquote, "correct"? For example, a fork is to be used to eat food with, and not to stab someone in the eye.

Speaker 4:

This is a really interesting question. It's almost: what is the truth? One of my podcast guests, a lady called Stephanie Antonian, who worked for DeepMind and a bunch of other AI startups as an ethicist, posed this question. I put it on LinkedIn and got shot down, but let me try it here: what about if we had an open-source version of the truth?

Speaker 4:

So the fork analogy is: it's law that the fork cannot be used to stab someone, because then it's a weapon, but it is an agreed truth that the fork is used as an implement. So is that held in an open and fully accessible database that can be checked? When someone does a query, it's washed against that database to say: is this correct? I put this on LinkedIn and I got shot down, because who then maintains the database? What goes on it and what comes off? What is truth and what is not?

Speaker 4:

So I think there are ways to look at that. Technically we could do it, but then you have people going: but that's not right. If I asked who won the last presidential election in America, there would probably be a real difference of opinion in this room. So what is fact, what is agreed, and what do we wash that against? It's something that could be solved with technology, but dear old humans get in the way, and we have points of view and we have conscious biases. So that's one answer, I think; whether it will work is yet to be tested.

Speaker 6:

What are the big policy concerns in tech? There have obviously been concerns around big tech, and I was wondering how you saw AI. On the one hand, you could see it as a source of disruption, but on the other hand, you mentioned how some of the technology has been developed by Google, by Microsoft as an investor, and so on. So is it too soon to say, or do you have a sense of whether it's going to disrupt the incumbents or see them all strengthened?

Speaker 4:

The problem is, and we've alluded to this before, that there's a cost to doing AI. There's a generation cost, and it's 10 times more expensive to do a generative AI query than a normal search query, which is why Google haven't rushed into doing exactly what OpenAI have done: they've got a search business to protect. Now you'll start to see generative queries enhance search results, but someone's got to pay for that. Sam Altman actually said: can you please stop asking it queries, because we're blowing out our budget with Microsoft. Originally Microsoft were paying; they were giving Azure credits for free. Now they've invested $13 billion.

Speaker 4:

It costs a truckload of money to run these services, so that is going to be the domain of those companies that can afford it. Very early models like GPT-1 and GPT-2 could be run on a desktop, just as you could mine Bitcoin on a desktop years ago. Now you need lots of computational power, so it will, I think, be the big players that have that. And then the question is: if we need access to these tools, is it equitable? The same question came up back in the Bitcoin days.

Speaker 4:

We don't talk about Bitcoin anymore; no one's talking about it today at all. Bitcoin is incredibly energy-inefficient because of the way that it mines and checks the chain. Generative AI is also incredibly inefficient, so at some point someone's going to go: we're killing the planet again, and even faster, by asking questions of generative AI. So I think it will initially be the domain of the companies that can afford the computational power. As the smart boffins work out how to make it more and more efficient, that price will come down. But I think for the foreseeable future it'll be big tech that runs it, and that's a danger, because big tech is then the gatekeeper as to how these models get trained. So the faster we can get enterprise GPT out there and start using it to good effect inside organizations, the sooner we can leave the silly queries to people on OpenAI's ChatGPT.

Speaker 7:

You've said multiple times that AI is unable to be creative and to have emotional intelligence and things like that. But what is the true difference between having emotional intelligence and creativity and replicating emotional intelligence and creativity? Because we already know that ChatGPT has been able to find creative solutions to problems, especially earlier this year when ChatGPT was able to hire someone to pass a CAPTCHA test for it, so clearly it's able to replicate some degree of creative thinking. So is there a difference between replicated creativity and actual creativity? Because clearly it can replicate it, and probably will be able to replicate it far better as the technology advances.

Speaker 4:

That's a great question, but if I go back to the example you gave, I don't think beating CAPTCHAs is creative; I think it's sneaky. It just learned how to do that, how to do the traffic lights. I hate those, by the way, I really, really hate them. If you go to my website, there is a CAPTCHA, but it's hidden, and it does different challenges rather than making you do traffic lights.

Speaker 4:

I think you have a valid point, though, but I still think, and I hope, because I'm a human being, that we have a streak of creativity, and we will always be ahead of the AI because we have to tell it what to do. A good example is Nick Cave, the singer. Someone asked ChatGPT to write a song in the style of Nick Cave and sent it in to Cave, and he was appalled. He said: there's no emotion, there's no feeling in this song, it's not the way I would have written it. And he got really angry, because he said creative people have this streak that an AI is not going to acquire. So I'm happy that humans will still be the overlords, I think. It'll get close, though.

Speaker 4:

When you first used ChatGPT and it looked like a human typing something back to you, you thought: that's pretty smart. So it can fool you most of the time. And even with these deepfakes we're seeing, the tech is so good that we have to ask: can we really tell them apart? The question I would ask is: how can we tell fake and real apart? If we have a thing that is professing creativity and empathy and love and emotional intelligence, is it really me, or is it an AI version of me? I think that's the scary thing to look out for.

Speaker 8:

When I first heard about GPT-4, it was so exciting. I tried to use it to do some stuff, and it was really good at the basic things. I think there was this overhype that it was going to replace humans very quickly, but it became evident that's not the case.

Speaker 8:

So I think one of the cases you've made is that humans will always be the overlords, but that judges it based on the current iteration of the tech. As things get more advanced, isn't it possible that the AI does become creative, does become able to formulate the problem and the solution? Is that something that you've thought about?

Speaker 4:

Yeah, it's going to get close to real creativity. If you had a 12-year-old on stage, their experience of life is going to be very, very different to ours. They can be creative, but you would go: well, that's a child's version of creativity, and I think that's where it is at the moment. I agree it'll get better and better, and it will start using what's called multimodal input. With GPT-4 you can now actually show it a picture. There's a great example: a picture of a VGA connector plugged into a phone, made to look like a Lightning connector. They asked it: why is this funny? And it worked it out, because it was ironic. So it's getting closer. I think it's going to get quite close.

Speaker 4:

And then the challenge, as I said to the other question, is how can you tell the two apart? So you're right, at the moment it's a great first draft, and GPT-4 is a great fourth version. It will get better and better, because as humans we're going to go: oh, I wish I could do that, and we'll train it that way. But I think it'll get close without being as good as the best person in their field, someone who is an amazing public speaker, a major surgeon or an amazing orator. They will be the standout person, because they just blow the room away versus someone who's merely OK. So I think we'll still have people who can be that level above. But you're right, GPT-6 or 7 will get closer and closer to perfection. I hope it doesn't get there, though, because I like being in a room full of people.

Speaker 3:

I think we have to end on that note because we have run out of time. Thanks very much, everyone.

Speaker 1:

Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode. You can find out more about Andrew and how he helps corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.