Digitally Curious
Digitally Curious is a show all about the near-term future, with actionable advice from a range of global experts. Order the book that showcases these episodes at https://curious.click/order
Your host is leading Futurist and AI Expert Andrew Grill, a dynamic and visionary tech leader with over three decades of experience steering technology companies towards innovative success.
Known for his captivating global keynotes, Andrew offers practical and actionable advice, making him a trusted advisor at the board level for companies such as Vodafone, Adobe, DHL, Nike, Nestle, Bupa, Wella, Mars, Sanofi, Dell Technologies, and the NHS.
His new book “Digitally Curious”, from Wiley, delves into how technology intertwines with society’s fabric and provides actionable advice for any audience across a broad range of topics.
A former Global Managing Partner at IBM, five-time TEDx speaker, and someone who has performed more than 550 times on the world stage, he is no stranger to providing strategic advice to senior leaders across multiple industries.
Andrew’s unique blend of an engineering background, digital advocacy, and thought leadership positions him as a pivotal figure in shaping the future of technology.
Find out more about Andrew at actionablefuturist.com
S5 Episode 22: Navigating the World of General Artificial Intelligence with Peter Voss from Aigo.ai
Can machines really think like humans? What is the future of General Artificial Intelligence (GAI) when machines more closely resemble human behaviour than ever before?
In this episode, Peter unveils his journey into the realm of General Artificial Intelligence (GAI) and his vision of machines possessing the ability to think, learn, and reason like humans. We look at the intricacies of General AI and how it sets itself apart from narrow AI.
The episode also looks at how companies are tackling the immense challenges associated with crafting machines with general intelligence - from understanding the significance of concept formation in artificial general intelligence to discussing the role of quantum computing and resources in AI development.
Peter also provides his views on the potential of machines developing empathy and the role of AI in ethical and moral debates, and answers the questions I've always wanted to ask: can AI feel empathy and love?
Finally, we take a peek into the future as Peter shares how Aigo.ai is harnessing the power of conversational AI to revolutionize personalised experiences.
More on Peter
Peter on LinkedIn
Peter on Medium
Aigo website
Peter Voss is the world's foremost authority in Artificial General Intelligence. He coined the term 'AGI' in 2001 and published a book on Artificial General Intelligence in 2002.
He is a serial AI entrepreneur and technology innovator who has been dedicated to advancing Artificial General Intelligence for the past 20 years. His career spans roles as an entrepreneur, engineer and scientist.
His experience includes founding and growing a technology company from zero to a 400-person IPO. For the past 20 years, his focus has been on developing AGI (artificial general intelligence). In 2008 Peter founded Smart Action Company, which offers the only call automation solution powered by an AGI engine.
He is now CEO & Chief Scientist at Aigo.ai Inc., which is developing and selling increasingly advanced AGI systems for large enterprise customers. Peter also has a keen interest in the inter-relationship between philosophy, psychology, ethics, futurism, and computer science.
Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your Host is Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
Speaker 1:Every episode answers the question "What's the future of...?", with voices and opinions that need to be heard. Your host is international keynote speaker and Actionable Futurist, Andrew Grill.
Speaker 2:Today's guest is Peter Voss, the founder and CEO of Aigo.ai. Aigo.ai created the world's first intelligent cognitive assistant, which currently manages millions of customer service inquiries for household brands. Peter is the world's foremost expert in AGI, has been an AI innovator for over 20 years, and helped coin the term 'artificial general intelligence'. Welcome, Peter. Thanks for having me, Andrew. Much to cover today on a topic that I'm actually fascinated by, and I'm gonna learn a lot from you today, hopefully. Let's learn a bit more about your journey. Tell me more about your background.
Speaker 3:I started off as an electronics engineer and started my own company. Then I fell in love with software and my company turned into a software company. I developed various frameworks, including a programming language, a database system and an ERP software system. That became quite successful; my company grew very rapidly and we actually did an IPO, so that was super exciting.
Speaker 3:When I was able to exit that company, I had enough time and money on my hands to really pursue something that had been interesting me for a long time, and that is: how can we make software more intelligent?
Speaker 3:Because software typically isn't too smart: you program it, and if it hits some condition you didn't think of, it'll just give you an error condition or crash or whatever. So I really wanted to figure out how we can build intelligent software. I took off five years to really deeply study what intelligence entails, starting with philosophy, epistemology, the theory of knowledge: how do we know anything, what is our relationship to reality, what do IQ tests measure, how do children learn, how does our intelligence differ from animal intelligence, and so on, and then, of course, also finding out what else had been done in the field of AI. After doing this for five years, I basically came up with a design for a cognitive engine, or cognitive architecture, and then started an AI company, hired about 12 people, and we spent quite a few years just in R&D mode, turning my ideas into various prototypes and seeing what worked and what didn't. Then, over a number of years, we developed a commercial product from that, which we initially launched in 2008.
Speaker 2:I'm so glad you spent all that time understanding how we think, because I've always wondered, with general artificial intelligence, if it's going to be closer to what a human can do (and we'll talk about that in a minute), it has to come from a deep understanding of how we think and act and feel. You helped coin the phrase 'artificial general intelligence', and I first came across it on the Gartner hype cycle. In the last few years they've started to put it on there, on that steep curve where it's hyped. Some people have said we're 50 years away; I'll get your view on that in a minute. What is general AI, and how does it differ from the AI that we know at the moment?
Speaker 3:Yes, a good question. The term artificial intelligence was coined some 60-plus years ago, and the original intent was really to build a thinking machine, a system that can think and reason and learn the way humans do, and that turned out to be really, really hard. So over the years, over the decades, AI morphed into narrow AI, and really what we've seen for the last 50 years has been narrow AI. What I mean by that is, basically, you identify one particular problem that sort of requires some intelligence to solve, but what's really happening is that it's the programmer or the data scientist who uses their intelligence to figure out how they can solve this problem programmatically. So it's the external intelligence, the intelligence of the programmer or the data scientist, that really solves the problem. One shining example is IBM's Deep Blue beating the world chess champion: it was the ingenuity of the engineers that figured out how they could use a computer to beat the world chess champion.
Speaker 3:Now, in 2001, I got together with some other people who thought the time was ripe to actually get back to the original dream of AI, the original vision to build thinking machines, and so we wanted to publish a book on the topic and put our ideas down. We felt that hardware and software had advanced sufficiently to go back to tackling that problem, so we were looking for a term to describe this general intelligence, and that's how we came up with AGI, artificial general intelligence: a system that by itself has the intelligence embedded in it so that it can learn how to solve different problems, adapt to different circumstances and so on. That's basically what I've been working on for the last 20-plus years.
Speaker 2:So will we ever get to be as smart as a human, or is that an unfair question?
Speaker 3:Well, of course, in certain narrow things AI is already superhuman, but in terms of general intelligence, yes, absolutely. I see no reason why we couldn't build machines that will have the general thinking, learning and reasoning ability of humans. Absolutely.
Speaker 2:But where does it start from, for a computer to think? Often, when I was at IBM, I'd say to clients who expected IBM Watson to cure cancer the next day with a credit card that the AI we know about is like a twelve-year-old: you have to teach it, and so if it's going to be an oncology expert, you have to have the world's best oncologists teach it. But does general AI have to be taught by humans, or can it teach itself? I just don't know; it's such a foreign concept for me, a machine being able to think like a human. Where do you have to start differently with general AI versus the AI we're looking at at the moment?
Speaker 3:That's a very good question, and I think one of the reasons we haven't seen a lot of progress in AGI, in really having intelligent machines, is that most of the people working in the field are mathematicians, statisticians and software engineers, and their approach is really that mathematical, logical approach, whereas to solve the problem of intelligence you really need to start from cognitive psychology.
Speaker 3:You really need to start from understanding what intelligence is, what it entails, and then figure out how you can build a machine that has those capabilities. Once you build an AGI like that, in principle it could then hit the books and learn things by itself. Now, it may need clarification, in the same way that if an intelligent person studies a new field, they might read a lot of books on the topic, but there may be some practical experience that they need, or some insights that are not explained in the books or articles they can find. So ultimately that is how an AGI will learn, but you have a kind of bootstrapping problem: how do you get it to be intelligent enough to be able to hit the books, to be able to learn by itself? And that is the task we are tackling: to build an initial framework that is intelligent enough to learn by itself.
Speaker 2:Yeah, that bootstrapping is what I've always worried about. How do you give it that push start, like a bobsled? The bobsled is a very fast piece of machinery which slides down the ice ramp, but you need to push it to get it going. How far away are we from general AI being able to emulate a human of any sort, even if it's a five-year-old or a 10-year-old or a 50-year-old? Are we 50 years away? Are we five years away? And will quantum computing help accelerate that, because it's just going to program things faster?
Speaker 3:I usually answer this question not in terms of how much time it will take, but how much money or effort it will take, because I have seen so little support for it. The main reason, over the last 10 years, is that deep learning and machine learning have been so successful, but if nobody works on AGI we will never get it; people who just try to continue with deep learning and machine learning will never get to AGI. So it's really a question of how soon the tide will turn and more people actually work on other approaches, cognitive architectures, what others call the third wave of AI, and I can talk more about that. It's only when we see more resources being thrown at it that we will start seeing progress.
Speaker 3:I think we could have human-level intelligence in less than 10 years if enough effort was put into it. I don't think there are any inherent hardware or software limitations that can't be overcome with some significant, focused effort. I certainly don't think we need quantum computing to solve this problem. Quantum computers by themselves are still very much unproven; I have a big question mark over them in terms of what, ultimately, they will really be able to do and what kind of problems they will be able to solve effectively.
Speaker 2:So here's a question, maybe a meta-question: why couldn't general AI work out how to make the fastest computer in the world?
Speaker 3:Well, it could, of course. It's like DeepMind says: their mission is to solve intelligence, and once you do that, it can solve all other problems.
Speaker 2:You say that we've got to throw a lot of resources and money at this. Again, having been at IBM, I've seen firsthand the first-rate research teams they have around the world. IBM is looking at this problem, and so are Google and Microsoft. Who's going to win?
Speaker 3:What I see, from a bigger perspective, is that humanity is going to win. But yes, of course there's competition between the enterprises. I don't actually believe that any of the big companies are going to win, are going to produce AGI, and the reason I say this is that they're like big oil tankers: they're not going to turn around quickly, and all of the big companies are focused on big data, machine learning, deep learning. That's what they have; that's their strength. They have a lot of computing power, they have a lot of data, and the people they hire, the top management, the whole teams, everyone, they're basically statisticians, logicians and software engineers, and I don't see that they are going to start using the right approaches to solve AI. I think it's going to come from some startup company, in the same way that who would ever have thought that a little startup like Google could dominate the search space, or a little startup like Amazon online retail? There are many examples like that. The existing large companies are often quite blindsided by the changes that are required to open up a new market.
Speaker 2:Well, there are also commercial considerations. I mean, you've got to pay the bills, and if you're just doing research, it's hard to get that to market. Let me just go back to something you said at the beginning that fascinated me: in the five years you took off, you really deeply studied how humans work and think, and you've spent the last 15 years studying what intelligence is and how it develops in humans. What have you learned, and what are we getting wrong as a species?
Speaker 3:It's kind of interesting, because I've hired a lot of smart people on my team over the years, and the ones who can ultimately really help with AGI are those who can think about the problem from a cognitive psychology perspective and also understand it from a software engineering point of view, and put those together. Typically, software engineers aren't that comfortable with cognitive psychology, and vice versa. It's really a deep understanding of what intelligence entails, what the essentials of intelligence are that you need to engineer into artificial general intelligence, and there are quite a number of technical things; I'll just mention two of them. One of them is the importance of concept formation and exactly what that entails. Humans are able to form concepts, and concepts of concepts, basically abstract concept formation, and exactly what those concepts need to look like and how they need to function is, I think, really important.
Speaker 3:The second point is metacognition. I spent a year helping to develop a new test, not really an IQ test, but a cognitive process profile test, and one of the things I learned there was that metacognition is incredibly important. That's basically thinking about thinking, or being able to use the right cognitive approach for any given problem. Some problems require a very systematic, logical approach; other problems require a more intuitive, sort of fuzzy view, because they don't have a specific solution, and so on. So metacognition is really important. It's a number of technical things like that that I began to understand much better as I was researching this.
Speaker 2:So, thinking about thinking. One question I've always wondered about: can machines have empathy, and could general AI learn to love?
Speaker 3:Very interesting question. Certainly they can have empathy in the sense that a good psychologist can understand other people's emotions very accurately and respond appropriately to them. But the machines won't themselves feel that emotion like we do, in our gut or in our raised heart rate or whatever; it won't be visceral to them. So they can certainly understand emotions and be empathetic in their responses, but it's not something they will feel, unless we went to a lot of trouble to actually give them some kind of body, or simulate a body with all of the physiological attributes that are part of our emotional experience.
Speaker 2:The problem, I think, with a lot of AI at the moment, as you pointed out, is that it's developed by programmers, and so there's a conscious bias that's built in. Where do you stand on ethics and conscious bias when it comes to AI, and will this become more of a problem with general AI?
Speaker 3:No, it will be much less of a problem, because AGI will be able to learn a much broader perspective and the reasons behind certain instructions or business rules that you might give, and be able to help us figure out better ways of being moral, being ethical. So I think it will be a great help for us to think more clearly about these things, to apply bias where bias should be applied and not to apply it where it shouldn't be applied. Yeah, AI will help us in really every aspect of life, ultimately, once we get to sort of human-level AI and beyond.
Speaker 2:Can the machines then maybe overrule the humans and say, "You're not being very fair there, Peter. You need to really think, because you've got your own bias there and I can sense that, and I've looked at all of the other stats and you're not being very fair"? Could we be overruled by the machines?
Speaker 3:Whether it's overruling or not, ultimately, when we design the machines, we'll decide where we want to have the final say. But yes, absolutely, an AI should alert us to aspects where we are going against the values that the system has been taught or has learned, or something that is inconsistent. So, absolutely, it will alert us to situations where we are not being fair or rational.
Speaker 2:Let's just move on to what you're doing at the moment. We've talked about some theory; now let's talk about the practice. Tell me more about Aigo.ai, and what problem are you trying to solve?
Speaker 3:We are trying to, sort of, bootstrap and get our system smarter and smarter. Obviously that takes money, and there's actually another good reason for not just doing academic research, and that is that the practical experience you get by actually having a commercial product is invaluable. The first six years or so we spent pretty much in R&D mode, and there you kind of create your own problems that you then solve. Once you have a commercial product, you have that really fantastic reality check of what the system really needs to be able to solve in reality. So having a commercial company as well as our own development allows us to basically do both.
Speaker 3:Now, the commercial product we're focusing on is conversational AI, and there's just a tremendous demand for that in many, many different areas: really anywhere where you want some kind of intelligent and/or hyper-personalized conversation. That could be in customer service, whether it's sales or support, whether it's for a retail company or a financial institution or a cable company or whatever it might be. For all of those kinds of customer support, we can really offer a hyper-personalized experience, where the artificial agent will remember what the previous conversations were and what your preferences are. So you're not just a number, you're not a demographic; you are an individual who is getting serviced.
Speaker 3:But there are also many other applications, such as in healthcare, for example, to help people manage diabetes, or to give a brain to a robot. If you have a robot in a hospital or a hotel, you want to be able to talk to it and you expect it to understand you: "Go to the pantry, pick up this order and deliver it to room 43 on the third floor", or, in a hotel, "Bring me a shower cap" and "Tomorrow morning I want two eggs over easy". You want to be able to have those kinds of conversations. There are also applications in gaming and VR and AR; again, anywhere where you actually have a natural-language conversation. Those are the markets that we are addressing and commercializing.
Speaker 2:Peter, it's a fascinating area. I'm sure we'll hear much more about this and much more about Aigo.ai. How can people find out more about you and your work?
Speaker 3:Our website, aigo.ai. Also, I've written quite a few articles on these topics; you can find me on medium.com. Look for my name, Peter Voss, on Medium.
Speaker 2:Peter, thank you so much for your time and thanks for being on the show.
Speaker 3:Yeah, thanks for having me. It was great.
Speaker 1:Thank you for listening to the Actionable Futurist podcast. You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode. You can find out more about Andrew and how he helps corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com. Until next time, this has been the Actionable Futurist podcast.