What is General Artificial Intelligence?

Updated on March 30, 2021
24 min read

We're kicking off 2021 with a new interview series: GOTO Unscripted, with our first round of interviews recorded back when we could still meet in person. GOTO Unscripted takes our conference speakers off the big stage and brings them behind the scenes for an intimate conversation on topics they know best.

Are you still wondering what artificial intelligence is and if it can actually be applied in real life? Join the discussion between Doug Lenat, CEO of Cycorp, and Danny Lange, SVP of AI at Unity, recorded at GOTO Chicago 2019, to understand what the status quo is, where AI is already being used, the struggles, and whether or not we should fear it.

What got you started with artificial intelligence?

Jørn Larsen: My name is Jørn Larsen, and I'm from the GOTO organization. We have two guests here in our little studio: we have Doug, and we have Danny. Please tell us a little bit about yourselves, your backgrounds, and your passions in this space, which is, of course, artificial intelligence.

Doug Lenat: Sure. Well, I was a physics major and got bachelor's and master's degrees in physics, and I got far enough in physics and mathematics to realize that I wouldn't be the world's greatest physicist or mathematician. And even if I made progress in those areas, it would be a long time, if ever, before they really had a positive impact on the world. I looked around and tried to think about what would improve the world in a significant way. I realized there were really two paths: one involving molecular biology and genetic engineering, and one involving AI. At the time, which was the late 1960s, there were already a lot of very smart people working in the biology area. But AI was clearly like the Wild West in America, an open frontier. That was the path to increasing human intelligence, the path to really leaving a positive mark on the world.

So I went into AI, having never taken any kind of computer science or AI class, got a degree in computer science, became a professor of computer science at Stanford, and did AI work in machine learning, natural language understanding, expert systems, and even a little bit in robotics. But I and my colleagues kept hitting this brick wall: our programs had the veneer of intelligence, but they wouldn't scale up because they didn't have common sense.

I realized around 1980 that somebody had to do something about this, that it would take thousands of person-years of effort, and that I wasn't going to live to see the end of it with five or six graduate students. So I moved to the wilds of Austin, Texas, and formed a company where we could have 10 times as many people. And now, 35 or 40 years later, we're just at the end of that long journey, and we've actually got something which, I think, is the core of general artificial intelligence.

Jørn Larsen: Thank you. Danny?

Danny Lange: Yes, I'm VP of AI at Unity Technologies. We are a game engine company; we're the most popular game engine in the world, powering 60 percent of all games. AI has a very, very interesting role to play with game engines. It can drive the behavior of NPCs, that is, non-player characters, inside games. You can also use it to test games. And thirdly, you can use the game engine to provide simulation data for algorithms to learn from.

I think a common theme of what I've been doing in my life is always operating at scale. When I was head of machine learning at Uber, we built a very scalable cloud platform for machine learning for the company. I ran elastic machine learning when I was GM for machine learning at Amazon, and I also launched the first AWS service for machine learning. It's all about scale.

That's where I believe Unity can provide a distinct capability to AI researchers: 1 machine, 10 machines, 1,000 machines generating massive amounts of training data at extremely low cost.

What is artificial intelligence?

Jørn Larsen: Thank you. So we'll kick off the discussion here by trying to define what AI is, and see if you agree on that or have two different versions. Doug, would you mind starting by sharing your view?

Doug Lenat: It's very slippery to try to define artificial intelligence, but maybe a better way to think about it is: how will you know it when you have it, or how will you know it when it exists in the world? I think the right way to think of it is as a kind of mental amplification or mental prosthesis that will make people smarter, more creative, able to do more things in parallel, able to misunderstand each other less, and so on. As a result, we as individuals and we as a species will get qualitatively smarter. A good analogy is what electrification did for our muscles 100 years ago: we're able to travel faster than horses' legs can carry us, we're able to communicate farther than we can shout, and so on. It's that kind of ubiquitous amplification of muscle power that really changed the world.

If you look back at the last time human intelligence was really amplified, it was all the way back at the development and discovery of language. We look back on pre-linguistic cavemen with a sort of mixture of fear and loathing and essentially say they weren't really quite human, were they? And I think once AI is here, people will look back on us with exactly that same kind of pity and say, "Well, it's nice that they developed AI, but they weren't really human, were they?"

Jørn Larsen: So it's something that can extend us and enhance us rather than something that will stand alone?

Doug Lenat: Absolutely. There will inevitably be, and in fact there already are, standalone AIs. But the combination of human intelligence coupled with and amplified by machine intelligence is really going to define the future of humanity.

Jørn Larsen: Danny?

Danny Lange: Yes, I have a somewhat similar definition of AI, at least when you start out saying that you know it when you see it. I basically take a step back and say: instead of the artificial part, let's just look at intelligence. Intelligence is something we see around us in nature, not just in animals, but in most living beings, which show some level of intelligence in surviving. When you put the "artificial" in front of it, I think it's when we have computer systems that demonstrate some of those same traits, with one very important amplifier, and I use that term too, which is what I call the feedback loop. I think it's very important for artificial intelligence that there's a feedback loop: then it's not just a smart algorithm that's implemented, compiled, and shipped; rather, it's an algorithm that generates data, consumes data, learns from it, and gets better and better and better.
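Danny's feedback loop is easy to make concrete. Below is a minimal sketch in plain Python with NumPy, purely as an illustration of the idea (it is not anything from Unity or Cycorp): the model acts on the world, observes the outcome, and updates itself from the error, so every pass through the loop makes it a little better.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])  # the hidden structure of the "world"
w = np.zeros(2)                 # the model's current understanding
lr = 0.1                        # how strongly feedback updates the model

for step in range(1000):
    x = rng.normal(size=2)                        # a new situation arrives
    y_pred = w @ x                                # the model acts on its beliefs
    y_true = true_w @ x + rng.normal(scale=0.01)  # the world responds
    error = y_pred - y_true                       # feedback: how wrong were we?
    w -= lr * error * x                           # learn from the feedback (SGD step)

print(w)  # converges toward [2, -3]
```

The point is not the tiny model but the loop: the same program, shipped once, keeps consuming the data it generates and keeps improving.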

Doug Lenat: I think there's one additional element of intelligence that's important to call out, which is something you might think of as self-awareness or consciousness or self-modeling. But it doesn't have to be consciousness as we normally think of it. Just think of it as ensuring that the AI will have a model of its own self, its own capabilities, its own situatedness in the world, and its own functioning, and that it'll be able to correctly answer questions and make decisions about what it was doing and why and when and so on. It's something which, from our point of view, looking at another person or looking at an AI, we might ascribe consciousness to.

But I think that kind of self-awareness can be built without any kind of mystical input added to the purely mechanistic representation of knowledge, the purely mechanistic and algorithmic capturing of the sorts of processes you're ascribing to almost all living and biological systems: to defend themselves and procreate, to explore, to understand that if they're going to accomplish their goals they're going to have to learn more about their environment, and to exhibit things which, again, from our point of view, we might label as curiosity or innovativeness. But there's nothing magical or mystical about that. It can all be represented explicitly, and programs can do those sorts of things.

Jørn Larsen: Ok. You both received very high ratings at our conference, so thank you for really bringing the quality. Thank you for that.

Danny Lange: Thank you for having us.

Doug Lenat, Danny Lange & Jørn Larsen interview

Jørn Larsen: Yes, you're welcome. Actually, the reason I bring it up is that when the attendees here at GOTO vote for the sessions, we actually plant a tree. So this is kind of the reward function we have for our attendees, because we'd like to have the feedback. My quick question, following from what you just said, is: is a tree intelligent?

Doug Lenat: I would say not in an interesting sense, even though it carries out some of the biological...

Jørn Larsen: Feedback.

Doug Lenat: Yes, some of the biological processes of all living systems. There was a good book by Miller called "Living Systems" that actually talks about different levels of biological organization, from the organelle to the cell, to the organism, to society, and so on, and talks about how at each level all of these different functions are carried out, and how different solutions have been found for offense and defense, for getting, storing, and converting energy, for sensing the environment, and so on. It's a very interesting idea. But from my point of view, while trees do all those things, they have found solutions which are, in some sense, uninteresting because they are very slow. They found solutions that work for them, but they work in ways that make them uninteresting and unable to adapt in interesting ways to solve novel problems. The way they solve novel problems is to hope that other trees elsewhere survive.

Jørn Larsen: Yes, Danny?

Danny Lange: I wouldn't use the word uninteresting. I would say that it's hard to understand, hard to interpret. I think trees and plants do have a degree of intelligence. It's just really, really hard for us animals to understand...

Jørn Larsen: To communicate.

Danny Lange: ... the timescale is different, the problems are different. Remember, we can always run; they can't run. So they've built a completely different system to stay around and defend themselves, yes? I think it actually raises another problem, which is: when you see or experience intelligence you hadn't really seen before, how do you recognize it?

Doug Lenat: I would say that that's actually what I meant by not being interesting: they may be intelligent in one sense, but it would be boring for me to try to have a conversation with a tree. The way I recognize intelligence, say, in you two is that I know we are having a lively conversation. If there were a fourth member of our group which was a tree, I believe even you would agree that it had the least interesting contribution to the session.

Danny Lange: Yes, I know.

What is the potential of artificial intelligence? 

Jørn Larsen: But they might have seen more and lived longer. Let's continue. Okay, we've talked about AI and intelligence. So, getting back to the artificial part, the man-made part of the intelligence: what is the real-world potential?

Danny Lange: I think the potential is huge because I think we're gonna have systems that are essentially smarter than we are, and we will need that. If you wanna build a self-driving car, it cannot be a robot; it cannot be developed in C, engineered like a robot, engineered for every case it's going to meet in real life. Not happening, right? It has to have cognitive skills. It has to be able to navigate and understand problems; it has to have seen many, many problems before, learned from them, and learned how to solve a problem in general, Jørn. I think it's an illusion to think that you can sort of hotwire a self-driving car and put it on the street. It needs to interact with bicyclists and pedestrians; it needs to understand children's behavior versus adults'.

Jørn Larsen: Bags on the road.

Danny Lange: It needs to deal with weather, etc., etc. I don't think you can take a little machine learning, a little C engineering, and a little robot programming and achieve that. It's just impossible. We have to have much more human-level cognitive abilities to have a car drive in the streets. And I could mention many other examples.

Doug Lenat: Since I completely agree with what Danny said, I'll find some other way to answer your question. To see the potential for AI, look at what the real impact of computing was. Yes, it's true that at the end of World War II and in the next decade, there was some impact from a small number of large computers. But the real impact came when computing became ubiquitous and omnipresent. In many ways, the real potential for AI is when it becomes omnipresent, universal, and ubiquitous, when it transforms the world we live in into what Allen Newell might have called the land of fairy: a world where every single person growing up in it, when they walk up to a door, will just assume they could talk to the door if they wanted to. They could ask it questions, and the door would know things that a door in that place ought to know. They'll be able to talk to more or less any object in the world and have essentially whatever reasonable conversation they need to have with that object, as if it were an animated thing. Growing up and living in that world, especially a generation or two from now, people will have a completely different experience of life than we do. They'll have a life that's really more like characters in fairy tales.

Danny Lange: So what about when all those things start talking to each other?

Doug Lenat: Well, I think that's going to be a good example of why we want to build it in from the beginning, rather than letting it evolve. We want to build in a model of what it means to be moral and ethical, what it means to be altruistic, to look out for the common good, to cooperate, to be just and fair, and so on. If we let those things evolve, then maybe they'll evolve as well as humans did, or better, but maybe they'll evolve as badly as humans did, or worse. And, you know, have you met people? So I think it's one of those things where we have a responsibility to not just invent technology which can become intelligent; we have a responsibility, just like any responsible...

Recommended talk: The Promise and Limitations of AI • Doug Lenat

Jørn Larsen: We've got some rules...

Doug Lenat: Well, like any responsible parents, we have a responsibility to bring our AIs up right so that, in fact, they have the best chance of being responsible citizens when they interact with each other and with people.

Jørn Larsen: So we want to give them a moral compass?

Doug Lenat: Yes.

What is the most advanced application of artificial intelligence today? 

Jørn Larsen: Yes. So what can it do right now? What's the most advanced thing that you know of that we use AI for today?

Danny Lange: Playing Go, playing chess, playing computer games. That's what we're looking at today, and we shouldn't underestimate it; there's very complex stuff going on in there. But it's mostly what I would consider somewhat simplistic single-agent systems. There's no collaboration, there are no collaboration patterns in there; everything has been engineered. But these are the roots of the core entities that we're going to work with to take AI to the next level. So these are reasonably smart systems, like beating the human at playing Go; there's very complex decision-making in there. But what if I have 100 of those systems, not playing Go, but learning to cooperate, creating their own collaboration mechanisms, creating their own communication patterns so that they can solve a problem of some kind? That's when you're gonna see the real impact of this. So, today, we are at this very basic entity level, where we have some very basic intelligence and it's doing impressive things, but it's not necessarily changing the world.

Jørn Larsen: What about in medicine, Doug? You had some examples in your talk from medicine and how you're helping to make maybe better decisions or second opinions?

Doug Lenat: Yes. One of the things that humans suffer from is various sorts of cognitive limitations, things like confirmation bias. Even if you're the world's expert, when you come up with some hypothesis, it's very hard to objectively put it aside and think of the second-best answer and the third-best answer, and so on. There are many other cognitive biases like that that humans suffer from, like the regression-to-the-mean phenomenon, and I could go on and on. But the applications I find the most powerful and most interesting, which AI is already able to help with, are almost a kind of counseling application, which essentially says, "That's really nice, but are you sure there isn't another way this could happen? Let's generate a bunch of scenarios. And even if you don't like any of these, maybe some of them will spark another idea in you," and so on.

Doug Lenat, Danny Lange & Jørn Larsen interview

So I think there are AIs out there already which are helping people to be more creative and to be less brittle in terms of their approaches to problems. We see that in, for example, scenario generation for counterterrorism, where you really don't want to just imagine and defend against the most likely attack. Even if it really is the most likely attack that's going to happen next, you really want to defend everywhere. You really need to imagine many different scenarios and think about preparing for all of them. So I think that the best applications of AI today are ones that help scientists and engineers and doctors and intelligence officers and so on to think outside the box a little bit and to come up with solutions that they otherwise would have missed.

Jørn Larsen: I guess also pattern recognition, if you just have a lot of data. I mean, not data that you have created, but, say, looking at the stars, or the CERN experiments where you collide particles and generate a lot of data. So I guess those are also useful applications.

Danny Lange: One of the things we found with deep learning is the ability of a deep neural net to learn very complex functions, such as models of physics. Basically, for predicting and computing a lot of trajectory information: by giving it enough data, it will actually learn the physics, it'll actually learn the formulas behind the physics. Not that you can go in and look inside your machine learning model and say "there's the function," but it learns very complex functions. That has been one of the breakthroughs: the ability, through very large amounts of data, to learn these, what you call, patterns...

Jørn Larsen: Yes, patterns.

Danny Lange: ... and turn them into functions, very, very, very complex functions.
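What Danny describes can be sketched in a few lines. The following is an illustration only (generic PyTorch, not Unity's tooling; the projectile-range formula and network shape are assumptions chosen for the demo): a small network learns a physical relationship purely from sampled examples, without ever being shown the formula itself.

```python
import math
import torch
import torch.nn as nn

# Synthetic "physics" data: projectile range as a function of launch
# speed and angle. The network only ever sees (input, output) examples.
g = 9.81
v = torch.rand(4096, 1) * 10.0        # launch speed, 0-10 m/s
theta = torch.rand(4096, 1) * 1.4     # launch angle, 0-1.4 rad
x = torch.cat([v, theta], dim=1)
y = v**2 * torch.sin(2 * theta) / g   # ground-truth range formula

model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):              # fit the net to the sampled trajectories
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# The learned function approximates the physics it was never told about:
print(model(torch.tensor([[8.0, 0.7]])).item())   # network's prediction
print(8.0**2 * math.sin(2 * 0.7) / g)             # true formula: ~6.43 m
```

As Danny notes, you cannot open the trained model and point at the formula, but functionally it has absorbed it from the data.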

What are the current struggles in artificial intelligence? 

Jørn Larsen: Okay. So those were some applications and some of the potential. Now let's talk about where it struggles.

Doug Lenat: Well, following up on what you just said, it struggles in the very thing you identified: it doesn't actually understand, in a way that we can understand and articulate, the intermediate concepts and intermediate clusters it's forming. As a result, it has a difficult time articulating, in ways that we would understand, why it has come up with the answers it has.

In some cases, the decisions are so important to us that it's very, very difficult for us to trust an entity that can't explain itself. We've all had the experience, I think, of having gone to some physicians who were very good at explaining their reasoning, versus some physicians who basically just said, "Trust me, I'm the expert." And the visceral, qualitative reaction to those two different kinds of physicians, at least in my case, is very, very extreme. Possibly because I'm an engineer at heart, I really want to understand, and I have difficulty trusting an agent, even one with a good track record of correctness, if it isn't able to explain, in ways that I can understand, how it got to some conclusion.

Recommended talk: On the Road to Artificial General Intelligence • Danny Lange

Danny Lange: Think about machine learning today: we have convolutional neural nets, we have recurrent nets, we have some basic mechanisms. I would say that the state of the art today, if I want to look for an analogy, is maybe like the programming languages of the early '60s. We have spent enormous resources building the equivalent of an if statement, the equivalent of an assignment, for loops, while loops. And now, over the next 10 to 20 years, we're gonna put all that together.

The machine learning models we see today are very, very naive; they're very basic, and they solve a singular problem. One of the things I think is going to happen is that we're going to see multi-agent systems, which are essentially large aggregations of machine learning models, where some models will deal with collaboration, some models will deal with anticipation, and you will have models competing for control, models suggesting ideas, and models explaining what's going on if that's needed. I just think today we are just scratching the surface.

Doug Lenat: I hope you're right. That's going to be very exciting. I look forward to seeing that.

When will a self-driving car be a normal thing?

Jørn Larsen: So let's get back to the self-driving car, because we have been talking about this for many years. And when you're in California, you see these weird-looking cars with all kinds of stuff on them. 

When do you think we will actually see that happening in the real world on a larger scale, not just as experiments, but as a normal thing, where we will be among self-driving cars?

Danny Lange: Do you want to give us a year?

Doug Lenat: Well, that's what I was going to ask you. You have a background of having done this for Uber, so I'm very interested in your answer to this. Whatever it is, I'll disagree with it.

Danny Lange: Yes, good. I like that. I don't really want to put a year on it. I'm gonna keep that to myself. No, just kidding. I think it's a very difficult question to answer directly, so I will answer it this way: there are certain things that need to happen before we can put these cars on the street, and there is technology that needs to be invented to do that.

Jørn Larsen: Such as?

Danny Lange: Such as massive simulation, where the simulation technology itself becomes a challenge. We need to simulate an experience that corresponds to the accumulated human experience of driving vehicles around for the last 120, 130, 140 years.

Jørn Larsen: Would it make sense to first build a generation of cars that picks up all the information it can from the driver, from the traffic, from the environment, to collect more information?

Danny Lange: No, you can never do that in the real physical world. It has to be a virtual world; it has to be massive generation of driving data that are all synthetic. And the art is going to be in creating synthetic data for training these vehicles. You have to make sure that these vehicles have seen everything that is to be seen, and a little more, understood it, and learned to generalize and deal with it. That's what I call cognitive skills.
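One common way to realize what Danny describes is domain randomization: every simulated episode draws its conditions from wide distributions, so the driving policy is forced to generalize rather than memorize. A toy sketch follows; the parameter names and ranges are invented for illustration, and a real pipeline would feed each sampled scenario into a simulator such as a game engine.

```python
import random

# Each training episode gets a freshly randomized scenario, so over many
# episodes the learner "has seen everything that is to be seen, and a little more".
def sample_scenario():
    return {
        "weather":       random.choice(["clear", "rain", "fog", "snow"]),
        "time_of_day":   random.uniform(0.0, 24.0),   # hours
        "n_pedestrians": random.randint(0, 30),
        "n_cyclists":    random.randint(0, 10),
        "road_friction": random.uniform(0.3, 1.0),    # wet vs. dry asphalt
        "sensor_noise":  abs(random.gauss(0.0, 0.02)),
    }

for episode in range(5):
    scenario = sample_scenario()
    # a simulator would build this scene and let the driving policy act in it
    print(episode, scenario)
```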

Jørn Larsen: And that's despite the fact that human beings are actually not very good at driving cars; traffic kills a lot of people every day. So how perfect does it have to be before we will do it?

Doug Lenat: I have a chilling story about that. When I was at the Japanese fifth-generation computing conference in Tokyo, I believe in 1989, a despondent worker at an auto plant committed suicide by crawling under the safety chains and essentially getting himself crushed to death by a machine that he normally operated. Instead of this being a minor story of the form "despondent worker commits suicide," it was front-page news with the title "Robot Kills Worker," because the public, the media, and the government are just waiting for stories of the form "AI kills photogenic family of four," lurid photos, and so on.

So I believe that even if we built autonomous vehicles that were safer on an annual basis than the cars we have today, even the small number of unnecessary fatal crashes, as with airplanes, would be what the media and the public focus on and what the government acts against.

So that's in addition to the difficulty of building autonomous vehicles that are better than people today. To answer your question, I think there really is a continuum of what we could safely use autonomous vehicles for. In the very near future, like one to two years from now, we could see truck convoys on long-haul interstate routes with ten 18-wheelers where humans are only in the first and last ones, and all the intermediate ones drive completely autonomously. But if you're talking about driving around, say, the streets of Chicago, that's different: it takes programs that truly have common sense, programs that have experience, not just driving experience, but the experience of knowing what object is likely to be in that McDonald's bag (oh, what the hell? I'll run it over) versus in that Home Depot bag (I'm not gonna run that over, because I know what I buy when I go to Home Depot), and so on.

So without something which has that knowledge, what you were calling the simulation knowledge, covering not just the driving world but in some sense almost all aspects of our everyday life, I think there is always going to be this small probability of unlikely, novel occurrences that mess those programs up. For that reason, I think the common stereotype of autonomous vehicles is both metaphorically and literally an accident waiting to happen. And when it happens, the media and government are going to jump on it in a negative way.

Doug Lenat, Danny Lange & Jørn Larsen interview

Jørn Larsen: Sure, sure. That's the case. But I would like to ask you anyway: is that intelligent? Because with a system that's not 100% perfect we could still save hundreds, thousands of lives, but we don't do it because we fear the media. Is that intelligent?

Doug Lenat: Well, it depends on what kind of short- versus long-term view you're taking. If, in fact, we try to do that and it has this negative backlash, it ends up therefore not saving all those lives, because people have essentially stopped using it or prohibited it, and so on.

Whereas if we had just waited 5 or 10 more years, it could have come out in a safe way. In some ways, it's the analog of what happened with Google Glass, right? Google Glass is obviously the right idea for the future. If Google had just waited about five more years to come out with it, I think it could have dominated our entire world. But now, it's gonna take an extra 5 or 10 years longer because of the backlash that happened against it.

Jørn Larsen: Danny?

Danny Lange: Yeah, I'm convinced we're gonna see self-driving cars. And I really believe that they have to be so much safer than anything we have; I think we're probably gonna expect them to be almost as safe as airplanes. So it's gonna require a lot of innovation. I think simulation is the key, because you basically have to surpass where nature got us today, which is not good enough. So I think it's gonna happen, and it's gonna happen through simulation.

Doug Lenat: I would just modify slightly what you're saying. I would say there has to be the equivalent of a model of the world which is sufficient to power those autonomous vehicles, effectively doing the equivalent of a real-time simulation of what could be happening given what the car is able to see in front of it, and so on.

So whether that model is built up completely autonomously through machine learning, or partly through what you might think of as explicit, declarative modeling of things we know to be true about the world, in the end you have to have a model which is good enough to power these autonomous vehicles, and we are very far from having that today.

Should we be afraid of artificial intelligence? 

Jørn Larsen: Thank you. Just one last question. Should we fear artificial intelligence?

Doug Lenat: Oh, absolutely not because, remember, we work in artificial intelligence, and they'll remember who their friends are.

Jørn Larsen: Good.

Doug Lenat: And the alternative is people doing things and making decisions and having power over us. And have you met people?

Danny Lange: Yes, we don't fear it. We know where the plug is, correct? I think we should respect it: it's gonna be a powerful tool, and all powerful tools should be respected. And I think we have many, many more urgent problems, climate change being one of them, that we should fear much more than we should probably fear AI, and a lot of other things first.

Doug Lenat: To give you a slightly more serious answer to the question: I think the answer is, to some extent, yes, we should fear it, but AI will actually provide more good than harm. It will take away a lot of the things we currently fear about the world because we don't understand it, can't control it, don't know how to cope, don't know the right things to do. So if AI makes us smarter, if AI makes us better able to deal with the world, then even though it may cause some problems, it will solve many more problems than it causes, just like electricity did 100 years ago.

Jørn Larsen: Thank you so much for coming.

Danny Lange: Yeah, thank you for having us and thank you for your interest.

Doug Lenat: Yes, and thank you for inviting us to this great conference.

Jørn Larsen: Thank you. You're welcome.

About the speakers

Doug Lenat, Ph.D., contributor, is a pioneer in artificial intelligence. Dr. Lenat is the founder of the long-standing Cyc project and CEO of Cycorp, a provider of semantic technologies for unprecedented common sense reasoning.

Dr. Danny Lange is vice president of AI and machine learning at Unity Technologies where he leads multiple initiatives in the field of applied artificial intelligence. Unity is the creator of a flexible and high-performance end-to-end development platform used to create rich interactive 2D, 3D, VR and AR experiences. Previously, Danny was head of machine learning at Uber, where he led the efforts to build a highly scalable machine learning platform to support all parts of Uber’s business from the Uber app to self-driving cars. Before joining Uber, Danny was general manager of Amazon Machine Learning providing internal teams with access to machine intelligence.

Recommended talk

Artificial General Intelligence in 6 Minutes • Danny Lange



Related content

Reinforcement Learning - ChatGPT, Playing Games, and More • GOTO Chicago 2023
Small is the New Big: Designing Compact Deep Learning Models • GOTO Chicago 2020
Taking Machine Learning Research to Production: Solving Real Problems • GOTO Copenhagen 2019
Breaking Language Barriers with AI • GOTO Berlin 2019