The Actionable Futurist® Podcast

S5 Episode 14: Gaurav Rao from AtScale on Ethical AI

May 07, 2023 · Chief Futurist - The Actionable Futurist® Andrew Grill · Season 5, Episode 14

AI Ethics isn't just something we should be aware of - it is a movement, according to Gaurav Rao from AtScale. Gaurav says: "To me, AI Ethics is a movement, and the reason why I define it as a movement is I think it has two distinct parts. There are technical frameworks that are continuing to evolve, and there are societal best practices, and both of these are put in conjunction to drive the responsible use of AI."

Gaurav is EVP & GM of Machine Learning and AI at AtScale and has been working in the AI space for a long time, and he really knows this topic. 

This episode is a really interesting one, as it peels away the layers of AI and ethics and challenges you to think about how your internal processes are structured to support the ethical use of AI.

Bad decisions made by AI may have a limited impact when it comes to song choices, but when reviewing who might get a home loan or health insurance, it could have far-reaching societal implications.

Gaurav argues that the issue of AI Ethics goes beyond the risk and legal departments and is something that your marketing and sales teams should be looking at now.

As always, the Actionable Futurist Podcast provides actionable advice you can put into place today and next week.

We discussed a range of issues related to AI and ethics including:

  • Ethical considerations when developing AI systems and models
  • The implications of bad decisions when AI is involved
  • The role of regulation and AI ethics
  • The ethical challenges Governments face with AI
  • How can we trust AI systems
  • Involving the Chief Risk Officer in AI discussions
  • Where to go to learn more about AI ethics issues

Many of my clients have set up AI working groups to share best practices across departments because AI is now no longer the domain of the tech teams, it is permeating every area of every company.

With AI in the news on a daily basis, you need to consider the ethical use of AI in your business, so set aside 35 minutes this week and listen to this episode.

More about Gaurav
Gaurav on LinkedIn
AtScale website


Your Host: Actionable Futurist® & Chief Futurist Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Andrew's upcoming book

Speaker 1:

Welcome to the Actionable Futurist podcast, a show all about the near-term future, with practical and actionable advice from a range of global experts to help you stay ahead of the curve. Every episode answers the question "What's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.

Speaker 2:

Today's guest is Gaurav Rao, the Executive Vice President and General Manager of Machine Learning and AI at AtScale. He's responsible for defining and leading the business that extends the company's semantic layer platform to address the rapidly expanding set of enterprise AI and machine learning applications. Previously, he served in a number of executive roles at IBM, spanning product, engineering and sales, focused on taking cutting-edge data science, machine learning and AI products and solutions to market, specializing in model training, serving and trusted AI in the context of driving business outcomes for enterprise applications.

Speaker 3:

Welcome, Gaurav. Thank you for having me, Andrew.

Speaker 2:

We probably ran into each other in the corridors at IBM - we were at IBM at similar times. You were doing things with Watson, I was doing things in consulting. It's nice to have you on the program and have another IBM alumnus to talk to.

Speaker 3:

Likewise thanks for taking the time to talk and I look forward to this topic. It's a passionate one of mine.

Speaker 2:

It's mine too, because it's going to impact all of us, and I'm reaching out to experts like you to give me a better perspective, because when I'm on stage talking to my clients, they always ask, "So what should we worry about with AI?" So the timely and important topic for today is ethics in AI. But before we delve into that, tell me more about what AtScale does, for those of us who may not have heard of you.

Speaker 3:

AtScale is the enterprise semantic layer, and what we've defined that to mean is we provide business metrics for both BI and AI as the integration layer between tools and your data resources or data repositories, and within that integration layer we offer governance, control and performance around the queries we generate. So for us, it's: how do we enable better business decisions with the business data?

Speaker 2:

So what drew you to the company after 12 years at IBM and a number of other AI startups?

Speaker 3:

I have been in AI for many years, and prior to that in data management, and I see a shift in the industry: we're transitioning from exploration - what is AI, and what can I do with AI? - to now deploying machine learning models in production. I think one of the biggest challenges is still the business context. I've been using the term applied AI, and I feel AtScale is the perfect place to deliver on the promise of applied AI, because we can serve data scientists, machine learning engineers and other AI developers the business context that they desperately need in their ML pipelines, with the power of a semantic layer.

Speaker 2:

All the companies I'm aware of that are actively deploying AI solutions have now got whole new groups of people and skill sets, because they need to apply it. They need to collide this new, fantastic technology with all the old, broken processes out there. So you're right, it has to be applied and it has to fit into existing organizations. Is that one of the friction points - that you've got old, clunky systems that are screaming to somehow connect to this new world?

Speaker 3:

Absolutely - it's systems, it's location. We're starting to see the emergence of cloud and cloud processing and distributed networks. We see hybrid becoming more and more prevalent because data is being generated everywhere, and the question is: how can I actually capture value without having to do a ton of data movement? Can I process data more centrally? And processing also involves intelligence. So how do I bring the intelligence closer to where my data is and, as a result, create faster time to value? And all of this is, again, in the context of some sort of business application.

Speaker 2:

So the issue here is making the right sort of decisions, because if you're making decisions quickly, humans sometimes have a problem getting it right. It's fair to say the world has grown accustomed to the presence of AI in our lives, whether we know it or not. AI is used to recommend your next purchase, but also by businesses looking to speed up analysis and make better decisions. So what are the implications if AI is delivering bad decisions unknowingly - what should we look out for?

Speaker 3:

Honestly, poor outcomes. AI is typically being used in a predictive sense, where you're generating predictions or inferences. Those inferences were based off of data, features, attributes and hypotheses that have been made, and the question is: if the AI model is producing poor outcomes, do you have the processes in place to detect that? And then, two, if the model is making poor decisions, what's the actual impact? You gave some examples. Maybe my real-time recommendation of what I'm going to listen to next on Spotify might annoy me if I'm listening to some type of music and it recommends the wrong type of music. But if I'm in loans or underwriting in banking and finance, the difference of a poor outcome could mean potentially not getting a home. As AI, I think, becomes more prevalent in everyday life, the decisions and the outcomes of these AI models become much more important. It's not just about surface-level impact; it's down to what we're doing day to day.

Speaker 2:

So these decisions are being made in real time, so there's not someone that can sort of look at something and say, well, that was a great recommendation, that was a great loan. I'm just thinking about it now. Is there a need for quality control, maybe run by another independent AI system, to look at the result of these decisions, to see whether there's some tolerance that's been exceeded?

Speaker 3:

Absolutely, and I think this is where trusted AI products start to come into market, where it's not just about understanding what's happening at training time, but, as I deploy models, I need to understand, as my models are making decisions: are those decisions correct? And if not, how do I remediate it? And I think this is where you start to see more around observability and these new areas, techniques and products around machine learning operations, or MLOps for short, which is really a way to better understand and visualize what's happening at runtime when machine learning models are making predictions.
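
To make the observability idea above concrete, here is a minimal sketch (in Python, with hypothetical data - not AtScale's or any vendor's product) of one common MLOps-style check: comparing the distribution of a model input in production against the distribution it had at training time, using the Population Stability Index, and flagging when drift exceeds a rule-of-thumb threshold.

```python
import numpy as np

def population_stability_index(train_vals, live_vals, bins=10):
    """Higher PSI means the live data has drifted further from the training data."""
    edges = np.histogram_bin_edges(train_vals, bins=bins)
    # keep live values inside the training range so both histograms use the same bins
    live_vals = np.clip(live_vals, edges[0], edges[-1])
    train_pct = np.histogram(train_vals, bins=edges)[0] / len(train_vals)
    live_pct = np.histogram(live_vals, bins=edges)[0] / len(live_vals)
    train_pct = np.clip(train_pct, 1e-6, None)  # avoid log of / division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))

# Hypothetical data: applicant incomes seen at training time vs. incomes arriving in production.
rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 12_000, 10_000)
live_income = rng.normal(62_000, 15_000, 2_000)  # the live population has shifted upwards

psi = population_stability_index(train_income, live_income)
# A common rule of thumb: PSI above ~0.25 suggests the model should be reviewed or retrained.
print(f"PSI = {psi:.2f}", "-> review model" if psi > 0.25 else "-> looks stable")
```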

Speaker 2:

So, as I said in the intro, you've built your whole career around AI. Can you maybe share with us some of the good uses you've seen? Then we might move on to areas that listeners should be aware of.

Speaker 3:

I'll start with my favorite, which is real-time recommendations, right? Within retail, within entertainment, within CPG, we've become accustomed, based on the types of interactions we have in a digital world, to that real-time impact being necessary - what I need to see, when I need to see it. But it goes much further. We see, for example, in computer vision, which is a subset of deep learning, advances in brain connectomics and brain genomics research based on image segmentation and deep learning networks. We see advances in predictive maintenance and quality in asset-intensive industries like automotive and aerospace. In healthcare, we see patient care insights being driven significantly forward by the advancement of machine learning and AI around unstructured data.

Speaker 3:

And then one of my favorites has been around language. Everyone probably has some form of a Siri, a Google or an Alexa. Even day-to-day ticket handling and ticket routing: when you go and complain about an item or try to give feedback, typically you're interacting with a conversational AI agent or a chatbot. All of these things are simplifying some of the core processes that have been a little bit more manual, to simplify the experience that we as consumers have.

Speaker 2:

So we've seen all the really good things that AI can bring, and I think consumers, as you say, are becoming more and more aware. The example I love giving about an AI system is deepfakes. Here in the UK, one of the commercial TV channels, Channel 4 - just to set this up for our international listeners - every year the Queen would do her Queen's message, and Channel 4 does an alternative message. What they did a couple of years ago was set up a deepfake where they had an actress playing the Queen. The reason they did it, they said, was "we want to prove to you that not everything you see on TV is real", and I think it was great because it then raised the issue around AI and ethics to a very broad audience. I know some of my friends who didn't know about AI were talking about having seen it. So let's just look at the whole ethical question around AI. What is AI ethics, and why is it essential for business and consumers to understand?

Speaker 3:

To me, AI ethics is a movement, and the reason why I define it as a movement is I think it has two distinct parts. There are technical frameworks that are continuing to evolve and there are societal best practices, and both of these are put in conjunction to drive the responsible use of AI. So to me, there's a cultural and there's a technological implication of AI ethics, and I think, at the end of the day, it's all about responsible use. And why is it important now? What I would argue is it's maybe not so much ethics - we have had longstanding domain practices around privacy, governance and security around data - and what's evolving, as we start to apply this to machine learning and AI, is that AI is still very difficult and AI is starting to become more ubiquitous. So, as a consequence, based on some of the examples we were talking about earlier, the gravity of issues arising within the AI or ML pipelines could mean life or death, a loan, a home - or, in the more casual sense, recommendations.

Speaker 3:

Maybe I don't get the best recommendation. For business leaders, what we need to understand is: what are some of the things I need to be more aware of as a result of AI becoming more pervasive, either as a provider? And then, from the societal standpoint, what do we need to be doing to address some of the concerns?

Speaker 2:

Who should be aware of AI and ethics? Is this now something that should go into risk? Should the legal department be aware of it? Should marketing? I want to know what new areas need to be aware of it. And the second thing I want to talk about is where do they get educated about this? They should be digitally curious, as I say, about these things that are happening. Where do they go to better understand it from a risk point of view? If I'm a risk manager in a large firm, we could be seriously pulled up by the regulator unless we do this properly. So first of all, which new areas need to be aware of it, and secondly, how do they come up to speed?

Speaker 3:

It's a great question, and I think the way to answer it is by understanding what's changed around AI to make ethics so important, and then you'll see why this is now a multidisciplinary problem. ML is still very hard. You need algorithms, you need skills, you need compute, you need time. All of these things have been a factor in whether my machine learning and AI is going to generate high-accuracy, good outcomes. What we're seeing now is more tools, methodologies and platforms to simplify that - AutoML, low-code and no-code ML, MLOps - and we're starting to see the rate and pace of state-of-the-art techniques like transfer learning and one-shot learning. All of these things are coming into play to simplify how I generate machine learning and AI outcomes. Now, the challenge with that is you introduce this concept of a black box as you continue to simplify.

Speaker 3:

So, to come back to your question on who needs to worry about this: I've got data scientists who are building machine learning models. I have data engineers who are providing, curating and ETLing data. I have machine learning engineers who are now deploying machine learning models. I have business SMEs who have to validate, in many senses, whether the machine learning model is correct. So think of financial services: many big banks, for example, have model risk management teams. The sole purpose of that MRM group is to do a case study on the machine learning model before it goes into production, to make sure that it's auditable and transparent and that we understand what the outcomes are going to be as the machine learning model is making them.

Speaker 3:

And that's just the technology side. But, as I mentioned, now that you have different personas interacting in this ML lifecycle, you have to start thinking, from an HR standpoint within many of these companies: am I hiring diverse skills? Are my data scientists being educated on some of the challenges that my business application - the one that needs and requires AI - presents, so that they think more holistically? And this is where I'm now starting to see more, as you mentioned, AI ethicists and AI ethics boards internally within large companies. And now, even at a national level, we're starting to see conglomerates and larger movements around AI ethics that are cross-company, because people are very quickly realizing that there's no single owner responsible for making sure we continue to build fair and trustworthy AI. It's a very pervasive and multidisciplinary problem with a very multidimensional challenge associated with it.

Speaker 2:

So huge issues there. And just on to where they get the information: I'm sure Wiley has released an AI Ethics for Dummies, but beyond that, where do they go? Because these are really serious issues. I'll talk about the regulators in a moment. Before we get to being regulated and fined, where do those people that have done their normal marketing or HR or risk job - who then go, "oh, now I've got to learn about AI; there's an ethics issue around AI" - where do they go, other than listening to this excellent podcast, when they're hungry for this information to keep them out of jail?

Speaker 3:

Where they start, to be honest, is internally, understanding what their governance strategies are, before they even tackle what AI ethics is. Start with the base framework of "what am I doing to protect my data?", because this just becomes an extension of that. Once they better understand their internal data strategies, what I would be looking towards is some of the bigger companies. They're doing a nice job of leading efforts, because they're also the same companies that are building some of the next-generation machine learning models. They're open-sourcing them, they're making them readily available. So they now also have the onus to make sure that, as they generate and make these widely available, there is the right protection around them and that they're doing so with the right data sets, the right labels, etc. So I think there's some aspect of learning from what's out there in the wild.

Speaker 3:

And then, if you're going down the technical path, there are now a number of papers around this, so arXiv is a great resource if you really want to get your teeth into some of the challenges of AI ethics and what's happening in trusted AI. You can very quickly learn how we're doing things like bias detection, bias remediation, anomaly detection, drift analysis and explainable AI. There are papers now that explain the complexity of the algorithms behind it. You want to start with, maybe, the business challenge of what you're trying to uncover, and then, depending on the level of expertise and depth you need to get into, there are the technical papers associated with some of the common techniques in this space.

Speaker 2:

I love all the terminology just thrown at us. I was talking to our friends at Canon and I put up this slide called the scary slide - I had all this jargon on there. It wasn't AI related per se, but I'm going to add some of those terms to the slide, because I want to freak them out if they don't know what these things are. I want to move now into the regulation aspect, because they may think they're doing everything right, but regulators are grappling with all sorts of technology. I spend a lot of my time talking about what's happening in cryptocurrency, digital coins and those sorts of things, and regulators and central banks are coming up to speed with that. Give me a sense of what the regulators need to be doing to run as quickly as the AI disruptors, so that they can ensure, one, that the regulation is fit for purpose, and two, that it catches where AI ethics comes into play.

Speaker 3:

It's a great question. I think what the regulators need to be doing is bringing the technologists to the table, because part of the challenge that they face is that the rate and pace of innovation on the machine learning and AI side - especially algorithms and state-of-the-art techniques - is outpacing their ability to keep up on the actual implementation of a guideline. By the time we pass a regulation, chances are two quarters later one of the large cloud providers has already created a new way to monitor bias and remediate bias without having to intervene as an ML developer.

Speaker 3:

First and foremost, there needs to be a consistent communication vehicle between the regulators and the technologists. This is where some of these boards are starting to become more and more important at a national level. The challenge here, as you're bringing this up, is that this is going to vary by country. Thinking back to how we've managed data and data privacy: how the EU manages it versus how Japan or Australia or the US manages it is very different. There are very local ordinances, which is again why I think there has to be local technology and technologist representation for these regulations and guidelines to be effective, because they're going to have to change to keep up.

Speaker 2:

GDPR, which was launched a few years ago here in the EU, was a good test run, because even if you're in the US or Australia or Asia, you need to be aware of GDPR: if you are selling something that could land in European countries, you need to abide by it. I suppose we can look at how we had to deal with new regulations in California and in Europe and then look at how that happens. I want to look at conscious bias. The notion of conscious bias creeps into every decision we make, and I'd argue that in AI, this type of bias can skew things exponentially. So, first of all, how would you define conscious bias, and what can we do to avoid it creeping into the systems that we're designing?

Speaker 3:

Bias is a misrepresented term. If we think back to what machine learning and AI is, it's all about measuring bias; it's just determining what the right threshold should be. At the end of the day, you're looking at statistical variations between data, and that means you're measuring biases. So, to me, conscious bias is some of the things that you're bringing to the table at a very simple level: what you've been presented with, what you've grown up to know as correct. Those are some of the things you're bringing to the table as you evaluate and perform a wide array of tasks - and that carries over into machine learning and AI.

Speaker 3:

It's a challenge in its own regard because of the flows that these models have to go through. Let's take a very simple example: underwriting and loan approval. In order to generate a machine learning model to determine the loan outcome, I need to verify: did I sample the right data sets? Did I oversample in an area? Perhaps I oversampled on certain age groups, so my machine learning model may inconsistently be approving certain age groups but denying other age groups, all because of the data sample that I was using. Then I need to understand: did I have the right represented classes or attributes?

Speaker 3:

As I was training that model, was a validation data set used? Was my data set labeled correctly? As my model hits the runtime in production, do I have the right person to relabel and validate that model? And that lifecycle is all done by different personas. So that conscious bias you're talking about is creeping in at every single step. The question is: do I have the right tooling at training and at runtime to better visualize and understand the machine learning lifecycle, the steps that my models are taking, and whether the thresholds I'm reaching are acceptable or I need to start remediating?
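
As an illustration of the kind of threshold check described here, below is a minimal sketch with hypothetical model outputs. It compares a loan model's approval rate across age bands and flags any group whose rate falls below 80% of the best-treated group's rate - an illustrative cut-off loosely modelled on the "four-fifths" rule sometimes used in fairness reviews, not a universal standard.

```python
from collections import defaultdict

# (age_band, model_decision) pairs - hypothetical model outputs, 1 = approved
predictions = [
    ("18-30", 1), ("18-30", 0), ("18-30", 0), ("18-30", 0),
    ("31-50", 1), ("31-50", 1), ("31-50", 1), ("31-50", 0),
    ("51+",   1), ("51+",   1), ("51+",   0), ("51+",   0),
]

totals, approved = defaultdict(int), defaultdict(int)
for band, decision in predictions:
    totals[band] += 1
    approved[band] += decision

rates = {band: approved[band] / totals[band] for band in totals}
best = max(rates.values())
for band, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # illustrative "four-fifths"-style threshold
    print(f"{band}: approval rate {rate:.0%}, {ratio:.2f}x the best-treated group -> {flag}")
```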

Speaker 2:

Explain remediation to me - what you need to do and what that looks like in practice.

Speaker 3:

Remediation typically means retraining a model. Say I'm in my underwriting application and I quickly realize that I am disproportionately approving loans for a certain age group, and now I need to take that model out of production. I need to retrain it with the right sample data, with the right classes, with the right validation data set. And now this is time, money - and I have unhappy customers.
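
One simple form the retraining step can take is rebalancing the training sample before fitting the model again. The sketch below uses hypothetical data; real remediation would also revisit features, labels and the validation set. It oversamples under-represented age bands so each band contributes equally to the retraining data.

```python
import random

random.seed(0)
# Hypothetical, imbalanced training sample: far more 31-50 applicants than others.
training_rows = (
    [{"age_band": "18-30", "income": random.gauss(35_000, 8_000)} for _ in range(200)]
    + [{"age_band": "31-50", "income": random.gauss(55_000, 10_000)} for _ in range(900)]
    + [{"age_band": "51+", "income": random.gauss(60_000, 12_000)} for _ in range(400)]
)

by_band = {}
for row in training_rows:
    by_band.setdefault(row["age_band"], []).append(row)

target = max(len(rows) for rows in by_band.values())
balanced = []
for band, rows in by_band.items():
    # oversample smaller bands (with replacement) up to the largest band's size
    balanced.extend(rows + random.choices(rows, k=target - len(rows)))

print({band: len(rows) for band, rows in by_band.items()}, "-> before")
print({band: sum(r["age_band"] == band for r in balanced) for band in by_band}, "-> after")
# 'balanced' would then feed the retraining run, followed by re-validation on a held-out set.
```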

Speaker 2:

We've talked about the sort of business uses of AI. Most of it's been good, but there are bad uses of AI. In bad times, technology is used all over the place. In the wake of Russia's invasion of Ukraine, the US Department of Defense's Chief Digital and Artificial Intelligence Office apparently is on a mission to integrate data analytics and AI into every facet of its operations, from logistics to missile technology development a very controversial area. So, as governments worldwide expand their use of AI, what challenges will they face with regard to AI ethics?

Speaker 3:

This is where regulations, compliance and privacy laws become increasingly important, and all of those vary by country. You astutely pointed out that as governments start to use AI, the use cases are ones that, from a morality standpoint, are typically not high on many people's radar. You're looking at things like computer vision applied to video feeds, video analysis and image analysis - things that people are starting to raise privacy concerns over, as violations of the right to privacy. I think what we're going to have to start looking at is, similar to the GDPR concept, how do we build better laws, better compliance procedures, better regulatory procedures - not necessarily just law, but procedures in place - to make sure that there's less ambiguity over whether or not this is being used responsibly. It's a very fine line, and it's a very hard line to define where we are today.

Speaker 2:

Consumers probably have no input into a government system doing that, but if we're dealing with one of our favorite suppliers - whether it be a music recommendation or a video recommendation or a shopping recommendation - what sort of questions should be asked of any AI system before it can be trusted to do more than predict your next impulse buy? What should consumers be aware of, and should there be warning stickers placed on websites?

Speaker 3:

I think there has to be some communication to end users when - I would say - a trusted AI solution is being put in place.

Speaker 3:

As an end user, I want to know what my data is being used for.

Speaker 3:

When I sign up somewhere, when I'm creating or logging into an app, I need to know: is my data being used responsibly, and how?

Speaker 3:

And I think we're starting to see more of that. In terms of what questions we need to be asking of the AI system: unfortunately, based on that prior example I gave, it's a very expansive set of questions that spans my data, how that data was curated, ETLed and normalized, and how I'm creating and building machine learning models. But it doesn't stop there. Once I have deployed that machine learning model into production, the lifecycle and the runtime view of it are equally important, because that live production data is going to cause it to change. So, as a tech provider or a company that's building ML and AI platforms and creating offerings around them, you have to be conscious that this is not just a data science problem, and ask: am I tackling the end-to-end lifecycle challenges that machine learning and AI pose, and making sure that, every step of the way, I feel comfortable that I'm hitting the responsible use of it?

Speaker 2:

I want to look at the role of big tech. If we look at big tech, I look at the FAANGs: Facebook, Apple, Amazon, Netflix and Google. They all have AI deeply baked into what they do, or complete teams running AI programs. Who's watching the AI masters, what should consumers and regulators be doing, and can we trust the FAANGs to regulate themselves?

Speaker 3:

We need to be. We enter into these providers, right? We sign up for Facebook, we're actively logging into Netflix. Part of the onus is on us to better understand how they're using our data, and to ask. And then, I think, twofold: we are starting to see more national-level laws around data privacy, and I think that's important. I think that needs to surface at a national level to create better guidelines. I don't think all governments are going to have an answer, per se, because, as we were saying earlier, they need to have technologists at the table as well, but we need to foster the right level to have that conversation, and it can't just be at the Facebook, Netflix, Apple level. We need a somewhat higher-order conversation that deals with basic rights around privacy and what AI can and can't be doing to infringe them.

Speaker 2:

At a TED Talk in April 2022, Elon Musk suggested that the algorithm that determines how tweets are promoted and demoted could be uploaded to the software hosting platform GitHub, making it available to people outside of the company to look at. He's committed to improving the social network by, amongst other things, making the algorithms open source to increase trust. But is open-sourcing code the way to solve an AI ethics issue?

Speaker 3:

Yes and no, and it depends on what's being open-sourced. So, for example, if we start to look at trustworthy AI and some of the pillars around it - fairness, bias, explainability, some of these core tenets of building more trustworthy AI - if we start to open-source that, which companies are, this becomes a mechanism to simplify adoption and start to help build standards that companies can use, so that you have a starting point. Coming back to an earlier question on where and how people can get started: open source is a great way for companies who don't have the skill in-house to figure out how to build trusted AI products or solutions. There are very, very strong offerings in the market. Now, that being said, that's a very specific set of open source capabilities.

Speaker 3:

What Elon might be describing is a little bit different. Just because I open-source an algorithm and make it open and transparent doesn't necessarily mean that equates to more ethical AI. I think if we can open-source some of the core standards and tenets of what makes AI trustworthy, that becomes a great starting point to level the playing field, so that it's not just the Facebooks, the Apples, the IBMs and the Googles who can afford to create and build trustworthy AI; you lower the barrier to adoption for any company or organization that's using machine learning and AI to very quickly put something in place.

Speaker 2:

I'd argue that we teach young people at school about the value of money so they can go out into the world and be proficient with financial issues. Should schools and universities be teaching AI ethics courses?

Speaker 3:

Absolutely. In fact, I try to talk as much as I can at some of these local universities, and especially my alma maters, because of how important it is. AI, obviously, as we're starting to see, is becoming more prevalent in coursework, and the ethical and responsible use of AI is equally as important. Those case studies need to be taught early as we start to graduate students into the workforce.

Speaker 2:

I think you're right, because then they're the ones asking the questions of those higher up: "Have we thought about the ethical issues? This is something I learned in my Ethics 101 course - why don't you do something about that?" I think we need to look to the new generation of leaders to start challenging some of these decisions that are being made for us by technology. Would you agree?

Speaker 3:

Absolutely. Coming back to the rate and pace of change of these algorithms and state-of-the-art techniques: in just the year that's transpired since the last graduating class left college and entered the workforce, there are probably two to three different techniques on how to build, deploy and manage AI responsibly. So we need the rate and pace of change in tech to meet the rate and pace of change of the skilled workforce that's exiting, and I think that's just as important.

Speaker 2:

So the question I'm sure my listeners will have today is: okay, this is now an issue - what can I do today to come up to speed? Is there a book, is there a website, is there a podcast that I can listen to, to come up to speed quickly?

Speaker 3:

I think there are a few books out there. What I would urge is to start by educating yourself from the big data providers. They're being forced to look at these issues. I would say look at a diverse set of them - each one has a different take and a different perspective, and they're right in their own regard - and create a holistic view from the different positions out there.

Speaker 3:

I'd also urge people to get deeper into the tech. This black box around AI is easy to talk about and easy to sort of understand, but if you want to do something about it, you really need to understand some of the core tenets underneath it, like explainable AI, biases - and why some bias is good and some is bad - and drift. These exist in papers, so I strongly suggest people look at arXiv. Then, third, look at the national ethics boards that are being formed and start forming opinions based on what's coming onto the agendas in those meetings, which are being published. And then, finally, I would say look at open source as a great place to try - not just read, not just educate, but get familiar and try by implementing.

Speaker 2:

I always encourage my audience to be digitally curious, and I think those hearing today that there are major issues around ethics should be very uncomfortable and should look at some of the resources you suggested to really do a mini audit: are we taking this seriously? If you're brave out there, go to your chief risk officer and ask: one, are you personally up to speed on the ethical AI risks, and secondly, who's looking at those? Do you think enough boards are asking their risk departments about AI, or is it that, because it's a black box, they're blindsided by it all at the moment?

Speaker 3:

I think in financial services and insurance - some of the more regulated industries - they have to, to begin with, and they're forcing that conversation, and it's why you see more expansion around, for example, MRM teams. I think we need to do more, though, I really do. And trusted AI products, thankfully, are coming into market, so you have both proprietary and open source tools and products available for people to start using. I don't think we're having the conversation enough, even in the big tech companies. We need to force that. We really need to make this front and center. The good news is, as these products and tools continue to advance, it's getting easier to build and deploy trustworthy AI, but there's always more advancement that we can be making in this space.

Speaker 2:

Where does AtScale play in this whole ecosystem? Where are you making AI decisions better and ethical decisions better?

Speaker 3:

So for us, we create this semantic view of all of your data, and what we start with is the business-vetted actuals. For me and the products that we build, what we're providing developers, data scientists, ML engineers and data engineers is visibility into what the business has defined and is reporting on from a BI standpoint. So there's no misinformation, there's no incongruity with what is being developed from a development standpoint: we know what the business actuals are. In addition, what we provide with AtScale is the ability to write back and timestamp all of the attributes that developers are creating as part of the ML lifecycle, so that you can publish that either to AtScale proper or potentially to catalogs, and measure lineage, traceability and auditability. Everything that's happening in the machine learning and AI pipeline becomes transparent and visible to the business, not just the developer community.

Speaker 2:

So we're almost out of time, but I want to go to my favorite part of the show. I want to run you through a quickfire round, so a quick answer is a good answer. iPhone or Android? iPhone. Window or aisle? Aisle. Online or in the room? In the room. The app you use most on your phone?

Speaker 3:

My news app.

Speaker 2:

What's the best piece of advice you've ever received? Be yourself. What are you reading at the moment?

Speaker 3:

Re-reading The Picture of Dorian Gray, one of my all-time favorites.

Speaker 2:

Who should I invite next onto the podcast?

Speaker 3:

Elon Musk, if you can get him.

Speaker 2:

And the final quickfire question: how do you want to be remembered?

Speaker 3:

As a great dad, husband and, from a tech standpoint, someone who helped make an impact.

Speaker 2:

So, as this is the Actionable Futurist podcast, what three actionable things should our audience do today when it comes to learning more about AI ethics?

Speaker 3:

I think the first thing they should do is think about how AI impacts their daily life and find the right reasons to understand why responsible AI matters to them - whether they're an individual consumer, a VP of data science at a large company or organization, or a government leader. Second, I would say companies should actually start evaluating trusted AI platforms; start in the open source, and familiarize yourself with the papers that exist out there if you need to go deeper into how this tech works. And then, finally, I would say familiarize yourself with the regulations in your country, because if they're not there, they're coming.

Speaker 2:

You're clearly someone very passionate about your work. How can people find out more about you and the company?

Speaker 3:

Please visit atscale.com - you'll see a lot of the writing we have in this space - and you can also follow me on LinkedIn or Medium.

Speaker 2:

What a fantastic discussion today. Thank you so much for your time.

Speaker 3:

Thank you, Andrew, for having me.

Speaker 1:

Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode. You can find out more about Andrew and how he helps corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.

Speaker 2:

Thank you.

What AtScale does
The risk of AI delivering bad decisions
Is there a need for quality control of AI systems?
Positive uses of AI
The rise of deepfakes
Why AI Ethics is a movement
Which departments should be across AI ethics?
Where can you source information on AI ethics?
What do the regulators need to do to keep up with the AI disruptors?
The issue of conscious bias in AI models
The ethical challenges Governments face with AI
What questions should consumers ask of AI systems?
The role of Big Tech and AI - can we trust them?
Should we use open source AI models to help with transparency?
Teaching students about AI ethics
Ethical and responsible uses of AI
What can you do today to come up to speed on AI ethics?
Are boards asking their risk teams about AI?
Where AtScale fits in the AI ecosystem
Quickfire round
Three actionable things to learn more about AI ethics
More on Gaurav