Machine Ethics, Artificial Intelligence & Humanity with Nell Watson & Priyanka Vergadia

Updated on January 19, 2021
11 min read


Listen to this episode on:

Apple Podcasts | Google Podcasts | Spotify | Overcast | Pocket Casts


We're kicking off 2021 with a new interview series: GOTO Unscripted, with our first round of interviews recorded back when we could still meet in person. GOTO Unscripted takes our conference speakers off the big stage and brings them behind the scenes for an intimate conversation on topics they know best.

Dive into the first GOTO Unscripted interview on machine ethics, AI and humanity featuring Nell Watson, co-founder of QuantaCorp, and Priyanka Vergadia, developer advocate at Google.

Jørn Larsen: Welcome to Amsterdam. Today we are going to talk about humanity and AI. We have Nell Watson and Priyanka Vergadia in the studio, here in Amsterdam. First, I would like to hear a little bit about you.

Nell Watson: Hi. I'm Nell Watson. I am a machine intelligence engineer who grew up in Northern Ireland. I developed some quite interesting machine vision technology a few years ago for body measurement. Since then I've segued into the realm of machine ethics: creating rules for naughty machines and also beginning to teach them about our culture and about our values as well.

Priyanka Vergadia: Hi. I'm Priyanka Vergadia. I am currently a developer advocate at Google. I was born in India, so I come from a background with a large number of cultures, and growing up I saw a large number of languages spoken around me. I moved to the U.S. to do my master's in computer science, and the first job that I got was in the space of conversation AI. At the time I did not know that it was conversation AI; the term has evolved over time. The job was at an IVR company, interactive voice response as we know it, which falls under conversation AI. So that was the beginning of me getting introduced to it and working through it, and now at Google I play a lot with conversation AI products.

Jørn Larsen: We have a few interesting questions coming up here. So are you ready for the first one?

Nell Watson: Sure.

What is artificial intelligence?

Jørn Larsen: Okay. So the standard question, the opening question is, what is AI? So in your personal opinion.

Nell Watson: In my view, AI originally was about trying to replicate tasks the way a biological organism would do them. That was generally hand-coded. In more recent years we've moved toward something that's a bit more like management by objectives. You illustrate to the machine what you're trying to do and enable it to figure out how to get there for you. So it's a little bit like having a genie in your pocket.

Priyanka Vergadia: I agree with how Nell has defined AI. Just to add to what she was saying: to me, it has always been this tool or machine that can do the things I would rather not do myself, the repetitive tasks I would rather have a machine handle while I do something smarter with that time. That's how I think of AI.


Priyanka Vergadia, Nell Watson & Jørn Larsen interview

How should we introduce AI to companies?

Jørn Larsen: Tech companies introduced AI to people without them even asking for it. It comes into companies and it's presented to people. But if we wanted to do this the right way, how should we introduce it to humanity, and to people?

Nell Watson: We're seeing a lot of developments recently in areas such as meta-learning: instead of trying to get a machine to do one task very well, we train it to be a novice at a lot of different things, and this makes it closer to a more generalizable form of intelligence. We're not quite there yet, but it's heading in that direction. Machines are getting more sophisticated, but human beings are still very complex creatures. Our culture is very difficult to understand, and of course, different people from different parts of the world often think and act in very different ways. I'm working with an NGO called EthicsNet to try to create a data set of examples of different behaviors in different cultures which are generally prosocial. So, how to be a nice person, whatever culture or situation you happen to be in, and I'm hoping that this is one good way of introducing humanity to AI.

Priyanka Vergadia: From a tech perspective, in the interactions I have with companies, users and the people who are going to be consuming AI, my way of introducing them to AI has always been to help them understand that there is this technology, this piece of machinery, that can help solve a problem they have. In most cases they can identify it: yes, this is a problem I have, you identified it right, and yes, I need a solution for it, but I never realized it. Most of the good things that have come out of technology are the ones we never knew we needed; once you use them you realize, "Oh, wow. This is something that I was missing and I should be using it." So that's how I have been introducing AI to people, and I've found it to be a really good way for them to understand what it is. And mostly, if you don't even talk about AI and just say this is a tool that helps solve problem X, that goes a lot further than introducing it as an AI and ML type of application.

Morality issues and the future of AI 

Jørn Larsen: So that was a good way to introduce AI to people. If we talk about whether there are any morality issues with AI, how you see it being introduced in the world today, and what you might fear for the future, do you have any comments on that?

Nell Watson: I think this is one of the greatest questions of our age, and it's going to be a challenge, one that many people from all around the world are going to have to coordinate on and work together on very closely to try to solve. As human beings we don't tend to learn morality from morality lessons in school; we don't sit down and get taught, you know, "don't steal things." We tend to learn morality from things as simple as Saturday morning cartoons, right? Where you have the goodies and the baddies, and you infer the virtue or villainy of one action or group versus another. I think examples like these are going to be very important for machines to begin to understand what virtue, good behavior and prosocial actions look like.

Priyanka Vergadia: In terms of the tech companies, the ones who are building the AI applications or platforms, I have seen that it's up to the user to decide how they are going to use it and what they are going to do with it. But what we can do as good human beings is to make sure that we take a stand on where our tech is going to be used.

There are companies who are doing that. I work for one where we have created AI principles and we stick by them. For every single use case that we try to solve with AI, we make sure that we stand by those principles. That is just one example, but I feel that as we get into this age, the tech companies do have the responsibility to set the precedents for where AI should be used, and we should be good citizens in making those decisions.

Priyanka Vergadia & Nell Watson interview

Should we fear artificial intelligence?

Jørn Larsen: We have been doing GOTO conferences for almost 25 years now, and we have been following topics like AI over a fairly long period of time; three or four times a year we ask, where are we with AI now? What I have seen in the past one or two years is that it seems to be speeding up. The capabilities are advancing faster and faster. This year we had a speaker from Unity Studios and we asked him: should we fear AI? I want to ask you the same question.

Nell Watson: From my perspective, the greatest fear from AI is not terminators running in the streets or some utility maximizer turning us all into paperclips. For me, the greatest fear of AI is that it may be used as a weapon. Not in the sense of killing people, but in the sense of messing up people's minds, whether that's creating content which is unbelievably believable, counterfeit evidence for example, fake news if you will, or whether it's simply messing with people in simple ways that drive them mad, because systems just aren't working in the ways they expect, yet in a way where you can't really prove that somebody is messing with you. We are also living in a time of intense political polarization, and the fusion of things like payment processors with online systems and with AI is, I think, quite a dangerous mix for the future. We need to be very clear in creating strong international rules on how to use these technologies in a way that is fair and equitable to all.

Priyanka Vergadia: I totally agree with all of that. The only thing I would add is that with every technology, we're always fearful when something happens for the first time, when it's just starting out. Over time we develop an understanding of how it is useful to us, how it can help us and how it can make us better at what we do. With AI we are still at that really nascent, beginning stage, where we are learning what is bad about it and what is good about it. We are going to build an understanding over time of how to use it better and how to curtail some of its bad aspects.

The NGO that you mentioned is definitely one of those kinds of initiatives, right? And the other one that I mentioned was the AI principles that, for example, Google has set out: we are only going to work on certain types of AI use cases.

I think those are some of the starting points, markers I would say, in the development of AI and how we are pursuing it. So I definitely don't think we should be fearful of it. In the sense of what is going to happen with this technology, am I going to lose my job, or is it going to affect my job? Yes, it is, and you will have to adapt, and you will have to learn some new skills in order to stay relevant. But at the same time, on the positive side, AI will help us with the monotonous things that we do, the point I made earlier. So we're going to have to look at all those positives and start to learn what the negative impacts are and how we curtail them. That is what I would say. We shouldn't fear it, but we should learn from it and move forward.

How early in our lives should we know more about artificial intelligence?

Jørn Larsen: One last question to follow up on this. We should debate and have open discussions on where to apply AI. But how early in our lives should we know about AI? How soon in life should we learn about it and start teaching AI what we want it to do for us?

Nell Watson: I think it's never too early to begin. There are some fantastic tools out there, such as Scratch and the different versions of it that are oriented towards different age groups. These are fantastic ways of learning the basics of programming, simple things like if-else statements. From this you can build up knowledge that will take you into other areas and eventually into tools like Python, which can be readily applied to a number of different data and machine learning capabilities, and also, you know, to making games and things like that. I think there's a wonderful role for children as well in teaching machines about the world, because often we know that we understand something when we're able to explain it to somebody else. It's nice when a child has an opportunity to explain the world as it sees it to machines, and that can be a wonderful symbiosis, I think.
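To make that progression concrete, here is a minimal Python sketch of the two styles Nell contrasts: a hand-coded if-else rule next to a model that learns a similar rule from labelled examples. It assumes scikit-learn is installed, and the function name and threshold are purely illustrative, not anything from the interview.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def is_big_flower(petal_length_cm):
    # Hand-coded rule: the kind of if-else logic you first meet in Scratch or Python.
    if petal_length_cm > 4.5:
        return "big"
    return "small"

print(is_big_flower(5.1))  # prints "big"

# Learned rule: show the machine labelled examples and let it infer the thresholds itself.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy on unseen flowers:", model.score(X_test, y_test))
```

The point is the shift from spelling out the rule yourself to showing the machine what you want and letting it figure out how to get there, the "genie in your pocket" idea from earlier.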

Priyanka Vergadia: Yes. The thing I would add here is really on the game side of things: kids and students at a very early age these days are playing with things or games that are designed with AI. I'll give an example: Quick, Draw! is a tool that we've built within Google and it's available for kids to play with. Really, all you're doing is making strokes, and with machine learning the game identifies whether you drew a cat, or a dog, or a ball, or a bed, right? A person of any age could play this game, but it goes to show that kids are already using it, already enjoying it and trying to learn from it. So we can introduce this at an early age, explain what it means and what it is, have them comprehend it for themselves, as Nell said, and have them see the effects of it firsthand. Then we can give them the opportunity to decide whether they want to expand their understanding and learning of it. At that point they can make games, they can start to build some of these things themselves, and learn to code through Python and things like that. That's the way I see it: wherever you are, you're already using AI in some way, shape, or form, so start there.
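For the curious, here is a toy illustration of the "guess what was drawn" loop Priyanka describes; it is not how Quick, Draw! is actually built. As a stand-in for sketch data it uses scikit-learn's small built-in 8x8 digit images and a simple nearest-neighbour classifier.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Small built-in 8x8 images stand in for sketch bitmaps; each row is 64 pixel values.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Which known drawing does this new one look most like?": nearest-neighbour voting.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("guess for the first held-out image:", clf.predict(X_test[:1])[0])
print("accuracy on held-out images:", round(clf.score(X_test, y_test), 3))
```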

Jørn Larsen: Thank you so much on behalf of GOTO for coming. And thank you for your time in this interview. I think you gave us some brilliant answers. Thank you so much.

Nell Watson: Thank you.

Priyanka Vergadia: Thank you for having us.

Nell Watson: It's been a pleasure.

Care to learn more about Machine Ethics and Artificial Intelligence? Watch the related videos listed below!

Related content

Empowering Consumers: Evolution of Software in the Future (GOTO Unscripted)
Conversation AI, the new User Experience (GOTO Amsterdam 2019)
Accelerating Machine Learning DevOps with Kubeflow (GOTO Chicago 2019)
Exploring StackOverflow Data (GOTO Chicago 2018)