Digitally Curious

S5 Episode 13: Heather Dawe from UST on Generative AI

Chief Futurist - The Actionable Futurist® Andrew Grill Season 5 Episode 13

We’ve heard so much over the last few months about Generative AI and in particular, ChatGPT, so what does it all mean for companies, and what’s coming next?

To answer these questions and more, I spoke with Heather Dawe, UK Head of Data at UST. She is a well-known Data Leader with over 20 years of experience working across the industry.

Heather has worked at a senior level as a Statistician and Data Scientist within government, the wider public sector and industry.

With expertise in the health, retail, telecommunications, insurance and finance domains, Heather seeks to problem-solve and innovate with real-world challenges, within large organisations, academia, start-ups and incubators.

She also pioneered the development of multi-disciplinary data science teams within the UK public sector.

Heather is passionate about helping others to develop their skills and expertise. She is an advocate for democratising AI as well as achieving greater diversity in those who develop it.

We covered a range of important topics in the field of Generative AI including:

  • Defining AI and Machine Learning
  • Where Generative AI fits in the AI family
  • What the “GPT” in ChatGPT means
  • How ChatGPT actually works
  • ChatGPT-3.5 vs ChatGPT-4
  • Is ChatGPT a step-change moment for AI?
  • Are we expecting too much from AI?
  • Do we need a pause on AI developments?
  • How can we regulate new AI platforms?
  • How Heather has been using ChatGPT
  • Does ChatGPT have a political bias?
  • The power of Generative AI for company data
  • How Cybercriminals could be using Generative AI
  • Heather’s work with clients at UST
  • Generative AI predictions in 1, 3 and 5 years
  • Three actionable things to better understand AI systems

More on Heather
Heather on LinkedIn
UST Website

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Pre-order Andrew's upcoming book - Digitally Curious

Speaker 1: Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question "what's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.

Speaker 2: We've heard so much over the last few months about Generative AI and in particular, ChatGPT. So what does it all mean for companies, and what's coming next? To answer these questions and more, my guest today is Heather Dawe. She's a well-known data leader with over 20 years' experience working across industry. Heather has worked at senior levels as a statistician and data scientist within government, the wider public sector and industry. With expertise in the health, retail, telecommunications, insurance and finance domains, Heather seeks to problem-solve and innovate within large organisations, academia, start-ups and incubators. She also pioneered the development of multidisciplinary data science teams within the UK public sector. Heather is passionate about helping others to develop their skills and expertise. She's an advocate for democratising AI as well as achieving greater diversity in those who develop it. Welcome, Heather. Thank you very much, Andrew. A very interesting topic. AI is not new, but Generative AI has been in the news a lot lately. So let's start with some definitions. How would you define AI?

Speaker 3: AI is the capability of a computer system to mimic human cognitive functions such as problem-solving and making decisions.

Speaker 2: So that's AI. How would you define machine learning?

Speaker 3: Machine learning underpins much of AI. It's the general term for the mathematical models that AI uses to predict the outcomes of events and act on those predictions. So, simply, AI is generally predicting what's going to happen next and acting on it, and it's the machine learning models that make those predictions.
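Heather's definition, a mathematical model that predicts an outcome so a system can act on it, can be sketched in a few lines. The example below fits a straight line to invented study-hours-versus-score data by ordinary least squares and then predicts an unseen value; the data and variable names are made up purely for illustration:

```python
# Fit y ≈ a*x + b by ordinary least squares, pure Python stdlib.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n                      # mean of x
    my = sum(ys) / n                      # mean of y
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)  # slope
    b = my - a * mx                       # intercept
    return a, b

# Toy data: hours of study vs test score (invented for this sketch).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

a, b = fit_line(hours, scores)
predict = lambda x: a * x + b

# The "act on it" step: predict the score for an unseen input.
print(round(predict(6)))  # prints 74
```

The model here is trivial, but the shape is the same as in any ML system: learn parameters from past data, then use them to predict what happens next.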

Speaker 2: For many people listening today, this term Generative AI is fairly new. They might have heard about it last year around the whole ChatGPT launch. How do AI and machine learning differ from Generative AI? Or have I got them all mingled up? Where do they all fit in the Russian dolls?

Speaker 3: In the Russian dolls, Generative AI is kind of a subset of AI. Generative AI is the term given to groups of machine learning models that generate output, and this output can be written content, computer code, digital art, like we're seeing from ChatGPT and similar models.

Speaker 2: Now, the GPT in ChatGPT stands for Generative Pre-trained Transformer, which sounds like a very technical term. So how would you explain what GPT means to my 80-year-old mum in Adelaide, Australia?

Speaker 3: Yeah, that's really quite a complex term. A GPT basically uses complex maths to predict the most likely words and pictures it should use to respond to a question or prompt posed to it. So if I asked ChatGPT to explain the function of a human kidney, for example, it would come up with a paragraph or so of written content that tells me what the human kidney does. That's what it should do; we'll talk a bit more about what it might do later.

Speaker 2: I read an explanation of the GPT terminology that it basically helps to predict the next word in a sentence. Is that too simple, or is that a good way to start the definition?

Speaker 3: Well, that's just it. It doesn't know what it's writing, so it's continually predicting the next word in the stream of words, or computer code, or whatever it is outputting. That's exactly what it does. It uses those machine learning models I discussed previously to do that, so it's continually making predictions and essentially going with its best guess for what to write next.
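The next-word prediction Heather describes can be illustrated with a deliberately tiny sketch: a bigram model that counts which word most often follows each word in a toy corpus, then always emits its best guess. Real GPT models use transformer networks trained on billions of tokens rather than word counts, but the loop of "predict the most likely next item" is the same in spirit; the corpus here is invented:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny invented corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model's best guess after "the" is "cat".
print(predict_next("the"))  # prints cat
```

Crucially, just like the full-scale models, this sketch has no idea what a cat is: it only knows what tends to come next.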

Speaker 2: And the guess is pretty good, because from what I've been playing with, and what I've heard and read of other people playing with it, it seems to be getting things fairly right. There are some spectacular fails as well. You made an interesting comment there that it doesn't know what it's writing. So artificial intelligence is, in a way, a little bit dumb. Is that fair?

Speaker 3: It's fair to say it can do some amazing things. One of the most important things we need to recognise is that it's not human. I'm not an expert on the human kidney, but if I was to write an essay about it, I'd go and research it and find things out, and I'd know more about it as I wrote the words. ChatGPT, if it writes an essay or a paragraph about the human kidney, really doesn't know anything about the human kidney. It's just predicting the stream of words it strings together to form the sentences that describe what the human kidney does.

Speaker 2: The way I look at it is that it's a great first draft. For example, I don't know much about the human kidney either, and if someone said, "write an essay," I'd go and start Googling and researching it, and if I gave that task to a graduate, he or she might do the same thing. So it is a great first draft, I think. You still have to look at it and ask, with what I know about the subject, is that actually the best way to explain it, or is it going to look like it's been generated by AI? So I think we should get away from the idea that it will solve every problem. It's a good first draft. It gets us to the point where we might understand better what we're talking about, and then the human takes over. Is that fair?

Speaker 3: That's fair to say. It's also really important to recognise, and this is happening less now as the models mature, but it will still happen, that sometimes ChatGPT can just come out with completely wrong statements. The risk is that it appears to be a completely sensible thing to say. Misinformation can sneak in really quite easily, and it's actually quite hard to pick up if you're not an expert on the thing you're reading.

Speaker 2: We'll come back to that in a little while, because ethics and integrity are important when it comes to AI. In November 2022, we all learned about ChatGPT, which uses GPT-3.5. Now, out of the box, we have GPT-4. What are the differences between the two?

Speaker 3: GPT-4, as you might imagine, is an extension and advance on GPT-3.5. GPT-4 is multimodal, which means it can interpret images; GPT-3 only interprets text. So GPT-4 can interpret an image and translate it into words: it can say it looks like a monkey swinging in a tree, for example, and describe that in words. That multimodality is an important extension over GPT-3. GPT-4 is reportedly harder to trick and is less likely to come out with those wrong statements because it's undergone more reinforcement learning. Once the models underpinning GPT-4 have been trained, they undergo a significant series of reinforcement learning to help them essentially know what's right and wrong, and GPT-4 has undergone more of that than GPT-3. And it's faster: GPT-4 processes up to 25,000 words at once, which is about eight times more than GPT-3. That speed comes at extra expense to the end user, so that's a trade-off, and it's worth having a think about whether you should use GPT-3 or GPT-4, depending on how you want to use it.

Speaker 2: Launching this into the wilderness is what OpenAI did. They needed people to play with it; they needed to see the edge cases. As we'd expect, people have done crazy things with it and, as you say, it learns. And if, as humans, we can remove the conscious bias, we can say that use of it is a bad one and that use of it is a good one.

Speaker 2: When GPT-4 came out, a couple of the early use cases I thought were just amazing. Someone drew a plan for an app on a napkin, fed it into GPT-4, and it wrote the code. The other one I found incredibly funny, for a number of reasons, was a Lightning connector that looked like a VGA connector plugged into an iPhone, and someone asked GPT-4, "why is this funny?" Now, I think even a graduate would struggle with that, because there's a whole nuance as to why it's funny. And so with the advancement from GPT-3 to 4, it's becoming more human-like, because when it can actually understand why something is funny, why there's humour or irony in there, I think that's incredibly powerful. So we've gone from 3.5 to 4. What do you predict we'll see in GPT-5, 6 and 7?

Speaker 3: I think that nuance you just alluded to is really important. GPT-4 is obviously far better at that than GPT-3, and you would imagine GPT-5 will be better again. They didn't suddenly create this last November. These models have been around for really quite a long time, and it's not only OpenAI that's been developing them: Google, Facebook and the other big players have too.

Speaker 3: What OpenAI did was put it out to the market, as you say, and explore the edge cases in a way that was acceptable to the wider audience, because they said, "actually, this isn't completely ready; we want to explore, and we want you to use it so we can learn faster." And that's what's happening now, and it's happening at a pace. Obviously, the way industry competition is playing out in that space means it's moving at a massive pace now compared to what it was. You asked me the question about GPT-5. Yes, it will be more advanced again, likely to understand more nuance. I still don't think, though, that we're approaching that human level of understanding of nuance, and I don't actually think we'll get there with large language models. These things only get better and better at what they do, but they will have limitations.

Speaker 2: What I really enjoyed about OpenAI launching this into the wilderness late last year was that it's made my job of explaining AI, and probably yours, a lot easier, because business people can actually play with it. I encourage my clients to be digitally curious: to log on, open an account and just type something in there. Type "who are my competitors?", "who am I?". Even though you and I know it's not perfect, the fact that it's generating something from scratch that is way beyond what a search engine would show, I think, is remarkable. It shows people, and I've always said this regardless of the technology, that once you start playing with it yourself, you have the aha moment and you go, "oh, wow, okay, now I get where this is going to enhance my business, or impact my business." Is this the step change the AI industry is looking for? Is this that watershed moment we've been waiting for, to have a frictionless experience for end users to really embrace AI?

Speaker 3: I think it is definitely a step change. Whether it's the iPhone moment is yet to be seen. Some people have suggested it is, and maybe it could be. It's not like the technology has suddenly got that much better in the last few months; it was advancing at a pace before that. What it's done, as you say, is enable people to play with it, to explore it, to break it, to seek to understand it and, importantly, to imagine with it: if it can do this, what does that mean for my business, and how could it help whatever I do? AI has been there for a long time, in many guises, but this has changed the game.

Speaker 2: You mentioned the iPhone moment. It's actually a good analogy, because iPhone 1 was actually pretty boring. It had EDGE when other phones already had 3G. It had a lot of limitations; I don't think it even had an App Store when it launched. However, the form factor, the ease of use, the frictionless nature of it, and the way Apple was able to integrate and innovate very quickly mean that now most people I know have one. Maybe it's a good analogy that it wasn't perfect at launch, but it had the makings of something quite incredible. There's also the issue of people trying this and just seeing what it can and can't do. However, I've seen lots of people with these "pay me £100 and I'll show you a cheat sheet on how to completely revolutionise your business" offers. I think people are still missing out on what it can and can't do. Do you think there's a risk we're expecting too much from AI, in the near term and further out?

Speaker 3: I think we're definitely riding a hype wave at the moment. In my role at UST, we work with large enterprises to help them with their data science and AI usage. Talking to them, they can see the potential for AI now in ways they didn't necessarily see before the hype, all the news, and the way it has gone mainstream, really. So things are happening, changes are happening, people are using it more, and it is in the process of becoming assimilated increasingly into the software we use every day. But that's just it: I think in some ways we won't even be aware of how much we use it because, like the iPhone, it becomes something you need for everything you do, certainly in the digital space. It creeps up on you and suddenly it's all around you. In terms of expectations, we're a long way from artificial general intelligence, and I don't think we're going to get there in a hurry, if I'm honest. So expectations are high, and I think we'll see some great changes from what's coming.

Speaker 2: As a futurist, I spend many hours on stage trying to predict the future, and one of my set pieces for a while has been the notion of a digital agent: that very soon we'll have an agent looking after the minutiae of our lives, so it knows that our health insurance is due and it goes off and does a digital deal. And, to your point, we don't even notice some of these things happening. I think part of the joy of AI for me is that it will remove some of those friction tasks I hate doing; they can go away, and we have to trust that. Actually, this morning, in preparing for this interview, I was reminded of a podcast I did last year with Susie Alegre, who's a human rights lawyer. We spoke in early December 2022. ChatGPT had been launched, but no one was really talking about it. She was talking about issues around ethics and integrity in AI and the freedom to think. In fact, I did a post on LinkedIn this morning reminding people of that podcast.

Speaker 2: But I think we should touch on ethics and AI. The whole notion of AI ethics has been around for a while. Now that people can play with it very easily, they can see how it can be tricked, as you say, and how it can promote misinformation. Recently we saw several leaders, including Elon Musk, call for a six-month pause on the development of systems "more powerful than GPT-4". What do you think these leaders are worried about? How can we allay their fears, and is this even a realistic prospect?

Speaker 3: The fact that these AI leaders, and they're leaders in their fields, came together and made that statement, wrote that letter: they're worried, so we should be really. Actually halting progress in this space would be extremely hard; some would perhaps stop and others wouldn't. Another very important point they made was the need for tighter regulation and a tighter understanding of what these models are doing and how they're doing it. I believe it comes down to tighter regulation. The problem with that is that government regulation never really keeps up, and this isn't a criticism; this is how it works. It never keeps up with innovation, and certainly not in the AI space. We've needed a clear set of regulations for AI for some time now, and we haven't got it yet. If we did have that, then there'd be clarity about how to ensure AI was safe, fair and ethical. It would be clear how we should use it in industry and how, indeed, we shouldn't use it in industry and wider society.

Speaker 2: That's not something I think we'll ever solve. I mean the issue around misinformation on social media networks, and the networks not being open about how their algorithms work and how things are voted up and down. Although, having said that, recently Twitter actually released their algorithm, and you can see what makes a post go viral. But again, that's some human being or some group of people saying if you like a post on a Wednesday, then you get three more points, or whatever. So that's really interesting. Just as an aside, years ago there were all these influencer platforms. There were Kred and Klout, and I was the CEO of Kred, and where we were different to our competitor across the road was that you never quite knew how their influence was measured, whereas we actually had a table showing that if you tweeted this, you got this many points, so we were quite transparent. So the issue around transparency is key, and you touched on it before.

Speaker 2: These are black-box models. I actually asked last night on LinkedIn what the data sources are that power ChatGPT, and we think there are five sources. One is a very large data set of open data; I'm just reading here off something I looked at: the Common Crawl data set. WebText2 is a set of web pages from outbound Reddit links. There are also a couple of book corpora, and Wikipedia. It's still very broad. So how do we regulate these things when we're talking about black boxes and we don't know how they work? Do the regulators have to go to AI school?

Speaker 3: How do we regulate that? It's a really good question. If I'm honest, having worked in government, and also when I was working in the NHS 10 to 12 years ago, I was working on mortality indicators in the NHS, which are very political and contentious. At the time we were using mortality indicators to reduce the number of people who died in hospital, so there was an important reason for why they were being used. Unless the method itself was transparent, which we ensured it was, the argument would be about the methodology, the machine learning model itself that was used to predict the outcome. So that transparency is really important.

Speaker 3: It's hard with black-box models to understand them, but there are ways of making them more transparent than they are today, using explainability methods and other techniques that make it clearer how they predict as they do. These models are extremely complex and very hard to decipher and understand. What the UK government is planning to do is give the regulator a role in regulating the use of AI, but it will be on the business, or whoever's developing the AI, to make sure it's safe and fair, and that in itself brings its own challenges. It's important to recognise that AI and its uses in industry go beyond generative AI, and explainability in AI is an important thing.
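One common family of the explainability methods Heather mentions is permutation importance: shuffle a single input feature and measure how much the model's error grows, which reveals how heavily the model leans on that feature. Here is a minimal sketch in pure Python, using an invented toy "black box" rather than any real deployed model:

```python
import random

# Invented "black box": prediction leans heavily on feature 0, barely on feature 1.
def black_box(x):
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    """Mean squared error of the model on a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Shuffle one feature's column; return how much the error grows.
    A large increase means the model relies on that feature."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - baseline

# Synthetic inputs; targets generated by the model itself, so baseline error is 0.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [black_box(x) for x in X]

imp0 = permutation_importance(black_box, X, y, feature=0)
imp1 = permutation_importance(black_box, X, y, feature=1)
print(imp0 > imp1)  # prints True: feature 0 matters far more
```

Techniques like this treat the model purely as an input-output box, which is exactly why they are attractive for auditing systems whose internals are too complex to read directly.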

Speaker 2: So, just touching on explainability, how should we talk to our children and our leaders about AI?

Speaker 3: My kids talk to me about AI. They're 11 and 7, and AI is obviously something they're growing up with now; it's always going to be a part of their lives. So how do we talk to our children about it? One of the most important things with kids today is helping them to understand that their own creativity is going to be fundamentally important going forwards. And the same goes for our bosses and leaders.

Speaker 3: I was at an evening event in Leeds a few weeks ago, where the chief operating officer of a large UK company was on a panel. He was quite sure AI was going to have a major impact on his sector. He wasn't going to sit back and watch; he was going to keep an eye on it and stay alert, but he wasn't going to move on it straight away, and he was also working in a sector where he could afford not to move on it straight away. It's important to recognise that different parts of industry are mature in different ways with data and AI. There was a similar executive sat next to him who described it all as largely snake oil. It's an eye-opener and it's refreshing, because actually both those points of view are valid. It reminds me again that change in large businesses can take a long time.

Speaker 3: AI has the potential to be a major disruptor in many sectors, so it was refreshing to get their points of view. One thing I've learned over the years, having worked in multiple different senior roles in data science and AI, is that there's no point in saying to an executive or your boss, "if we don't do this, we're going to get disrupted and we're going to die." There's no point, because they just say okay. They don't sit back, but they could always buy the disruptor if they're large enough.

Speaker 2: Yeah, I'm fascinated as well, and I learn more from these panels than they'll ever learn from me, because you learn where people's thinking is at. If I'd been on the panel, and I'm sure you've thought about this, I'd have asked them: what have you actually asked ChatGPT lately? They'd probably say, "well, I haven't used it." So then how can you experience it? Just like years ago, when I was the first of my friends to get a mobile phone, everyone said, "why have you spent a thousand dollars of your money as a student on this piece of plastic, Andrew?" Then eventually everyone understands the utility and we all adopt the technology. It's right to be digitally curious, as I said. Do you remember the first thing you asked ChatGPT? And, as a follow-up question, what's the most interesting response you've received from it?

Speaker 3: I started to use ChatGPT in December last year. I wrote a blog about it, and I deliberately asked a few varied questions. The first question I asked it was: write me a synopsis of a story about fluffy bunny rabbits that don't like carrots. ChatGPT came up with a series of different synopses of this story, but it was fascinating, because all of them were like, "hey, you know, it's okay not to like carrots, you can eat lettuce instead." They all had a liberal viewpoint in terms of "it's okay to be different, don't worry." That fascinated me, because it showed me the reinforcement learning of ChatGPT was quite liberal in its views, and that's a really good thing. It's likely to lead to less prejudice because there's more diversity in the viewpoint.

Speaker 3: But at the same time, in the society we live in, we have a variety of different political views and, as long as they're not too extreme, all of them are valid. You can vote Conservative, you can vote Labour, etc. What that got me thinking was: wow, I wonder if ChatGPT is left-wing. And then a few months later I saw a headline in the Daily Mail inferring ChatGPT was left-wing. That has implications for a lot of different things. It can be used to synthesise content that will influence views in the run-up to elections. There's a lot of stuff in there; we're coming back to the ethics again. So that was the first one. The second time I used it, I asked it to code me an app to visualise some data, and I had it up and running in less than 15 minutes on my machine, which I was incredibly impressed with.

Speaker 2: You've opened a can of worms there about conscious bias and how it's trained. It's trained by humans. I've always said, even before ChatGPT, that any AI system is trained by humans and there's an inherent conscious bias in there. You talked about being able to train it or tune it in different ways to have a left-wing or a right-wing bias. That's on public data, and again it's still a bit of a black box: we don't know what it's trained on, and if it's trained on a lot of bad stuff, then it's going to lean one way or the other. What uses could we see if company- or industry-specific data, alongside trusted real-time public data, was used to train the model? You mentioned your work in the NHS. Imagine if we were able to ethically put a whole lot of information in there that wasn't in the public domain, that could be used sensitively to solve the major challenges of the health crisis in this country. What will we see with unique proprietary data that we are not seeing with public data at the moment, do you think?

Speaker 3: The models that are available for us to use and play with now have been trained more generally. When these models are trained to be more specific, they can eke out insights and things that really aren't apparent to us now. If that did happen, we could expect some significant breakthroughs in knowledge, in healthcare and suchlike, I would imagine. The lack of certainty we currently have in the ways it works and the outputs we receive from it is enough at the moment to limit its use in that space.

Speaker 2: What's the most interesting or innovative use of generative AI that you're looking forward to? If you had a magic wand and could create any application on this platform, what would it do?

Speaker 3: I'll be happy when it takes out some of the more mundane aspects of my life. My accountant's great; she advises me on much more than just my tax return. I'm not really sure what I'm most looking forward to with generative AI. It's going to transform things; you won't be surprised that I've been seeing this coming for years in the roles I've been working in. I really value human creativity and the ways we work together to solve problems. The thing I'm most looking forward to, what I'm hoping for, is that humans will recognise their importance in this space. Yes, we will have generative AI and other AI come in and do a lot of the mundane stuff for us, but I really hope that we value our creativity and our ability to problem-solve in the ways AI currently just cannot. I hope that we value that more and do it more.

Speaker 2: So let's look at the potential bad uses of AI. Can you talk to me about how cybercriminals are using AI technology to carry out cybercrime and generate disinformation, and what can be done, by humans or AI, to combat this?

Speaker 3: Because ChatGPT can write programming code, you don't actually need programming skills now. If you're a cybercriminal, you don't need coding skills. People without those coding skills are hacking now in ways they couldn't before. In terms of criminality, it's also increased the ability of criminals whose first language isn't English, for example, to write more convincing content in English.

Speaker 2: So I'm almost out of time, but I want to look at what you're doing at UST. What's the most exciting thing you're working on at the moment that you're able to talk about?

Speaker 3: My role is to help our UK clients, which are large enterprises, to leverage their data, and that's a really fulfilling role, particularly when we get to the point where our clients really are succeeding with data science and achieving the benefits that can be felt with data science and AI in their business. In the generative AI space, we're working with some of our clients at the moment, typically to develop advanced chatbots that enable them to do sophisticated search and suchlike. As we're doing that, we're discussing and exploring other use cases. It is an exciting time to be working in AI, and I have a fascination for data and AI that I really relish in my role, so I think I'm very lucky to be doing the job I'm doing.

Speaker 2: Well, this wouldn't be a futurist podcast unless I asked you for some predictions. So what can we expect to see from AI in one, three and five years?

Speaker 3: I think we are seeing a step change now, so within one year we'll see increasing adoption and use of AI across industry. Within three years, that will be more advanced again, and within five years there's potential for things to have massively accelerated. Five years after the iPhone launched, it was truly transformational, and generative AI has that potential, I think. As it becomes increasingly mainstream, as it starts to get embedded into Office and suchlike, and Word starts writing documents for us and our emails get written for us, we'll look back and think, "oh, that didn't used to happen, and now it does," and I can imagine how that will work.

Speaker 2: So, almost out of time, and we're up to my favourite part of the show, the quick-fire round, where we learn more about our guest. iPhone or Android? iPhone. Window or aisle? Window. In the room or in the metaverse? In the room. Your biggest hope for this year and next?

Speaker 3: A new renaissance.

Speaker 2: I wish that AI could do all of my cleaning and DIY. The app you use most on your phone?

Speaker 3: Well, at the moment it's WhatsApp, and that's mainly receiving hamster and cat memes from my daughter.

Speaker 2: The best advice you've ever received?

Speaker 3: Seek to understand where other people are coming from.

Speaker 2: What are you reading at the moment?

Speaker 3: The Last Quarter of the Moon by Chi Zijian.

Speaker 2: Who should I invite next onto the podcast?

Speaker 3: My statistical hero David Spiegelhalter.

Speaker 2: Final quick-fire question: how do you want to be remembered?

Speaker 3: As someone who loved to innovate and make things and who always wanted to explore her own creativity.

Speaker 2: So, as this is the Actionable Futurist podcast, what three actionable things should our audience do today when it comes to better understanding the opportunities and threats from AI systems?

Speaker 3: The most important thing is to learn about it, as we've been discussing. Really use it, try it out for yourself and read about it; try to get through the hype, because there's a lot of hype written about it at the moment. Really seek to imagine what it can do for you, and watch what the hyperscalers are up to, because they're innovating incredibly fast with it. And talk to others, both within your business sector and in other sectors, to see what they're doing with AI and what they're planning to do with it.

Speaker 2: Heather, how can people find out more about you and your work?

Speaker 3: LinkedIn is a good spot to find me.

Speaker 2: Heather, a fascinating discussion. I'm really intrigued by what you're doing, and this discussion is very timely. Thank you so much for being on the podcast.

Speaker 3: Thank you very much, Andrew.

Speaker 1: Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favourite podcast app so you never miss an episode. You can find out more about Andrew and how he helps corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.
