Anthropomorphising AI Is an Impediment to a Stable Society

>>Good morning everybody. It is my great pleasure to welcome Joanna Bryson to
Microsoft Research today, for her to give her talk, and she’s going to be
spending the day here talking to a bunch of us. She is an Associate Professor
at the University of Bath, and she is a transdisciplinary researcher focusing on many
different problems, many problems that are
very interesting to us at the intersection
of AI, psychology, autonomy, the ethics of robotics, and human cooperation
and collaboration. So, I’ll welcome her to talk about her fascinating research, and if you have questions during the talk please feel free
to ask. Thank you.>>Well, thank you. I’m
not using that microphone. Okay. Thanks for having me here. I have as usual a lot of slides. So, I’ll just get into
it, please do ask. But thanks very much to everybody
involved in inviting me. Originally Jamie, but it’s
become very much a team effort. So, yeah, you may think it’s weird that I’m talking about the European Union. But I’m just taking it as another interesting
political entity, and they have these very
specific goals, which are human dignity
and flourishing. I don’t know about you, when I first heard about
these goals, it sounded like, you know, vaporware, right? Like, what does that even mean? Right? But I’ve come to
realize that sort of what it means is
that everybody needs at least enough stability in their lives so that
they can plan, so that they can
build a business. So, they can decide
whether to have a family. Not that everybody needs
a family but that you can have these kinds of
things as options, right? So, there’s something that we
need in order to flourish, and we’ve been doing a pretty
good job of flourishing. Do I get a laser? I
could just point. So, this is the log
scale population in millions of humans. It started from 10,000 years ago. Does anyone have a laser that
they’re carrying around? Oh, here it is. I found one. So, yeah. So, what happened
10,000 years ago, up until 10,000 years ago, we were being
out-competed by monkeys. Monkeys evolved more
recently than humans. There were more macaques
than hominids, and then all of a
sudden, we got writing, agriculture, urbanization, and
then doctrinal religion. As an AI person, I’d like to think the writing
is the key thing. But I like to talk
about writing as the first AI because it
was off-board memory. It allowed us to innovate and then all of a
sudden we get this. Amazing, so we’re flourishing. Right. As we know, sometimes you might
also think about this, an exponential, that’s
super-intelligence, right? As we’ve all heard from my fellow immigrant
colleague in the UK, there are sometimes
unanticipated sub-goals of super-intelligence, and one is the wiping out of all the other large animals. You can see that even before
that, 100,000 years ago, when humans got to a new continent
it was sometimes bad news, but actually, some of that is because there was another bit of
global warming right then. So, maybe the reason we
were moving so much was because there were
reasons to go, and that was hard. But look at this. There’s actually more biomass now, especially since
the industrial revolution — we’re actually taking energy out
of the ground and out of the sky and
creating more biomass, but we’re converting it entirely into people
and livestock. So, this is another
unanticipated sub-goal. It’s the green boxes there. This is from a few years ago. That’s all the wild mammals that are
left, okay. Of course, another
unanticipated sub-goal is that we have started helping produce even more climate
change and that kind of thing brings all kinds
of things we don’t really necessarily want. All right. I’ve got to say it, all that is not only
because of climate, there’s lots of things
that drive that change. One of the things is the economy. So, sometimes we
have these periods with very high inequality, and that tends to correlate with very high political polarization
and identity politics. So, periods of
very great volatility. So, this is World War One, does anyone want to guess
why we have numbers going back to World War One? There was this crazy thing
where we outlawed alcohol, and when we outlawed alcohol, we needed more tax so
we created income tax. So, since the innovation
of the income tax, we know what proportion of wealth has gone to
the top one percent, and we can see that
that correlates pretty well with the amount of
political polarization. This is measured by whether or not, in both
houses of Congress, both parties’
names are on bills. Okay, so that’s
a particular measure due to Nolan McCarty and
actually his PhD supervisor; he’s now a professor
at Princeton. Anyway, these are
pretty well correlated and there’s some interesting
things going on there, but we, in the 30s, in both the US and the UK, we managed to get
our act together and reduce both inequality
and the polarization, and we kept it down
until about 1978 and then we’ve
started shooting up again on both of those measures. I think this is history. But I think that part
of the reason there’s more of a gap here than there is because this is measuring income inequality, but probably wealth inequality is
what’s doing the driving. All right. So, we don’t have income data for a longer
time but we do have polarization since the Civil War, and more broadly we can look at what people believe
about inequality. I don’t know if I
believe the Y axis here and that you can really
match it up this way. Famously, there are some papers here, Scheidel’s, claiming
that you have to have war and mayhem or else you don’t get reductions of inequality. But I think that things
like this matter — see that shift there? That was between wars,
when we did that. So, I think we can do
things with policy to reduce inequality and
polarization as well. So, yeah, why do we care? One reason we care and this has just come out
in the last year, it’s been a big question. We know there’s a correlation. What the causation is
has been a big question. One of the things we know is
that extreme positions in politicians are driven by the extreme positions of
major single donors. So, one of the things that
inequality does is increase this number of people who can be such major single donors. Right? Because there’s
relatively few people with large amounts of money and this has been changing spectacularly in
the last couple of decades. All right. So, you’ve probably
wondered by this point, “Is she going to talk
about AI or not?” Right? So, mostly what I’ve been talking about so far is setting the groundwork with the impediment to
a stable society part. Now, I’m not going
to talk about AI, I’m just going to talk
about intelligence. So, I realized this is Microsoft Research,
I don’t mean to pander, but this is a slide
I already had and I was lazy, but also, I think it’s
interesting to see that you can pitch it this way. So, I like to remind people that intelligence relies on
computation, not math. A lot of people just assume that intelligence
can be spontaneous, that AI is necessarily going to be
unbiased and neutral, and they’re mistaking
algorithms for math, and math for computation. So, computation is
a physical process, it’s a transformation
of information. Any physical process takes
time, energy, and space. We all know this is sort
of the Patrick Winston view — not that
there’s anything wrong with that version of AI — but finding the right thing
to do at the right time, which is the fundamental problem
of intelligence, is a form of search. The cost of search is the number of options raised to the power of the number of acts. So, that’s why, again,
you guys know this, that means that if
you have a robot that can do 100 things, and then it gets a two-step plan, there’s 10,000 possible
plans already. But believe me, you give this slide to politicians, even people who are pretty well
informed, and they’re stunned. The whole thing about looking 35 moves
ahead in chess, not moving the rook
back and forth — people don’t understand this. So, you guys, being at Microsoft, are probably thinking, “Hasn’t she heard about the Cloud yet?”
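To make the combinatorial point concrete, here is a minimal sketch; the branching factor and plan lengths are only illustrative numbers chosen to match the example in the talk, not anything from the slides.

```python
# Minimal sketch of the combinatorial cost of planning: with a uniform
# branching factor b (actions available at each step), the number of
# candidate plans of length n is b**n. The figures below are illustrative.

def plan_space(branching_factor: int, plan_length: int) -> int:
    """Number of distinct action sequences of the given length."""
    return branching_factor ** plan_length

if __name__ == "__main__":
    for steps in range(1, 5):
        print(f"{steps}-step plans for a robot with 100 actions: "
              f"{plan_space(100, steps):,}")
    # 2-step plans: 10,000 -- the example given in the talk.
```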
Concurrency can save on real time, but obviously, if you halve the time, you double the amount of
space you require, and the energy requirements
are constant. Weirdly, people aren’t even talking about what’s
going on with quantum. Again, it’s supposed to
be this magic thing that’s going to solve our problems. It does seem to save on space, at least for some subset
of problems — you can solve things
in a single computer. But so far, it looks
like the energy cost is more than the space gain, although there’s not enough
good work done on this. So, this is from my personal
captive quantum physicist, but I’ve given
me I’m wrong about this. But basically, nobody is doing math on the energy
problems of quantum. So, yes. Omniscience is not actually a real threat.
That’s not the problem. Again, I probably don’t need
to tell anyone in this room, but I have been in DC where
people were talking about, “What if one country finds the machine learning algorithm, and suddenly knows
all the things,” and that somehow there’d be no way to recover from that. Natural intelligence
is not general. Again, I rewrote this slide yesterday thinking nobody
had said this to me yet — nobody has so far, though three people I’ve talked
to have mentioned AGI. But anyway, I’ll mention it. Early in psychology, we thought that any stimulus could
provoke any response. We were wrong. It turns out pigeons can
learn to peck for food, and flap their wings
to avoid shock. Incidentally, not
every pigeon can learn this. Most pigeons don’t
peck that much. It’s actually the genius
pigeons that were used for all those
experiments you’ve seen. People tell you, “Oh, pigeons.” But anyway, pigeons can
learn to peck for food, but they cannot learn
to peck in order to avoid shock or to flap
their wings for food. That doesn’t make sense
if you’re a pigeon. Those kinds of things are stuff you would never need to know. So, if those contingencies
aren’t there in your niche, your brain doesn’t provide you the bias to do
that kind of learning. So, yeah. This is the advantage of science over some other
sources of knowledge. Okay. Now, that we
understand combinatorics, this is one of the big questions I had as an undergraduate. Why is it that the brain, given that there’s
evolution and everything, why hasn’t the brain settled
down on one architecture? Why haven’t we figured
out the best way to represent information? Why is there different activity in different parts of the brain? Well, different parts
of the brain are solving different problems. So, there’s going to be different optimal solutions to
these different problems. So, of course, you’re going
to get different activity in different
representations and things. Nobody really, when you
think about it hard, expects that the whole brain is working like one giant retina. Come on, it’s just
doing different stuff. So, that also explains
why some level of bias, even prejudice, is
a necessary part of AI. Again, I think you guys know about this work,
but I don’t know. How many of you people know
about my Science paper, the one about how word
embeddings have prejudice? Okay. How many people know about your guys’ paper that
came out that, well, hit arXiv at
the same time but went to NIPS? Okay. So, for those of you who don’t know, that was an entirely
separate, non-overlapping group. All right. So, I will do this a little bit. So, if you just use
word embeddings, which are basically
giant vectors — I’m going to show
you a picture later — so you basically just sit around and count whether, or how often, words co-occur
next to each other, you recover the things that scientists call implicit biases. So, we took every single
study we could find. This is because
the first author was a Princeton postdoc and the final author was up for tenure this year
at Princeton. Every single paper that had the implicit association
test in it and used words, and used them on
the general population. Because we were using the Web, and we couldn’t tell who had written what part of the Web, so, we could only use
general population results. But we matched
every single one of them. So, this for example, is showing, you have a bunch of women’s
names representing women, and men’s names representing men, and family words representing
the domestic sphere, and career words representing
the corporate sphere. Then you find, if the world
was a perfectly fair place, if you give them the task
of matching words up, they should be just as fast at matching up women to
career and men to family, as the other way around, but in fact they’re much faster
at doing women to career, to family, and men to career. So, this is them being
faster, like regular people, and d is a measurement in
terms of standard deviations. Because we don’t have a theory
of how long that should take, we just look and say, “Are these two things different?” We see the normal
distributions and ask, “How far apart are those
two normal distributions?” These effects are giant.
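As a minimal sketch of what an effect size like d is — a standardized difference between two distributions, essentially Cohen’s d — here is a toy computation. The reaction-time numbers and group labels are made up for illustration; they are not from the study.

```python
# Minimal sketch of a standardized effect size (Cohen's d) between two
# groups of measurements, e.g. reaction times for congruent vs.
# incongruent pairings in an implicit association test.
# The sample data below are invented for illustration.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Difference of means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = (((na - 1) * stdev(group_a) ** 2 +
                   (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_b) - mean(group_a)) / pooled_var ** 0.5

congruent = [620, 605, 640, 598, 633]      # ms, illustrative
incongruent = [710, 695, 740, 688, 725]    # ms, illustrative
print(f"d = {cohens_d(congruent, incongruent):.2f}")
```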
This is huge. When people ask, “Is this a real effect?” — what are they talking about? This is one of the biggest
effects in psychology, the implicit association test, and people say
it’s fake news because it doesn’t correlate perfectly with
explicit prejudice. I think that’s because
people don’t know the difference between implicit and explicit memory
anymore, which we got taught, if you’re as old
as I am, in college. Anyway, so the point is
that they were saying this, and I still get people who hammer me when I give that talk. I didn’t think I needed
to give that talk here, saying, “Don’t you know that
you can’t trust psychology?” Totally different method —
word counting, large-corpus linguistics,
the same stuff as latent semantic analysis,
the way that we do search for web pages. We were able to replicate, well, replicate is not
quite the right word. Also find evidence of
bias in the words. The headlines all said AI is prejudiced, and
the good headlines said, “It is because of us.” But that wasn’t
really the finding. The real finding was
that words, or word use — what we pass on to other people, as well as to machines —
contain the image not only of our implicit
biases, but actually, and this is something completely down to that final author I mentioned, Arvind
Narayanan, he said, “I want to see what happens
with something that’s real.” So, the Y axis here is the same awful prejudice word
embeddings that gave us that women are closer to the domestic sphere than
to the work sphere. So, those exact same
things, but then, we went out and looked at
the 2015 US labor statistics, which unfortunately,
weren’t all single terms, and this only works
for single terms. So, we had to just throw
away the adjectives. So, that’s why there’s
some of the noise, but: 90 percent correlation. Again, if you’re in psychology,
that’s a huge deal. Ninety percent correlation in predicting how male
or female a word was.
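As a rough sketch of the kind of correlation being described — an embedding-derived gender-association score per occupation against the percentage of women in that occupation — here is a toy computation. All the occupation words, scores, and percentages below are invented for illustration; they are not the 2015 labor statistics or the paper’s actual values.

```python
# Minimal sketch: Pearson correlation between a hypothetical
# embedding-derived gender-association score per occupation word and a
# hypothetical "percent women" figure per occupation. Requires Python 3.10+.
from statistics import correlation

# hypothetical association scores (negative = closer to male terms)
embedding_score = {"nurse": 0.9, "teacher": 0.6, "doctor": -0.1,
                   "programmer": -0.6, "engineer": -0.8}
# hypothetical percentage of women in each occupation
percent_women = {"nurse": 90, "teacher": 74, "doctor": 40,
                 "programmer": 22, "engineer": 14}

words = list(embedding_score)
r = correlation([embedding_score[w] for w in words],
                [percent_women[w] for w in words])
print(f"Pearson r = {r:.2f}")
```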
This one here is “programmer,” unfortunately. When I was a programmer, we women were 13 percent
of the population, and this is the ’80s. It went up and peaked around 18-19 percent, and then it crashed. It’s four or five percent now;
it’s just appalling. I don’t know why, and there are lots of
women in the audience. Programming is fun.
I love programming. I don’t know what’s happened. Anyway. So, yeah. This is like being a nurse
or something up here. Fortunately, being a doctor,
is over here somewhere. But anyway, I learned
a lot last summer. But the point is, that our implicit biases turn out to reflect the current reality in some cases, not all of them. That’s interesting. It’s cool. Because going back to the
implicit, explicit divide, that means that we can
explicitly decide, “Oh, we want to employ women. We think it’s good for women
to be able to have jobs, and we can try to shift
to something new.” But the implicit biases seem to be bootstrapping off
of observed reality, or historic reality,
or something. So, go ahead and ask
me more questions. But anyway, that wasn’t going to be the point at this talk. Sorry. So, the point is that, when you’re a member of
a cognitive species, then, it makes sense for you to exploit the concurrent computation
that others are doing. Again, I didn’t have time, I put way too many
slides into this, but, I have a beautiful video, ask me if you want it in Q&A, of tortoises learning
from each other, and they’re not very smart. These tortoises are not smart, but when they see
a tortoise solve a problem, they solve
the problem too. It’s a problem they
couldn’t solve for a month. So, it just makes sense. This is the way to think
about what culture is; it’s the exploitation of
computation that’s already been done. So, we can do this explicitly with students
or apprentices, I just mentioned you can have
implicit social learning, even in
really stupid things like reptiles and fish. Sorry I didn’t put
a reference in there, just ask me for
the references if you want. One of the things that’s
going on is we have an implicit sensitivity
to social examples, and I think that’s
what’s going on with the overactive
anthropomorphism of AI, okay? So, now, I’ll come down
and start actually talking about artificial
intelligence, all right? I think, yeah. Not quite. Almost. Just before AI. I was asked to give a keynote at RoboPhil which you’ll
probably have heard of it, but it’s actually a fairly
big meeting of philosophers and people working
on social robots. So, things that are supposed
to look like people and participate in
human society, okay? This is the fourth or
fifth time that it’s met; I think it is biennial. But it’s the first
time they met in Vienna and it’s the first time they invited more general
scientists to it. They titled it this year, Politics, Power, and
the Public Space. Okay, speaking of public spaces, this is a statue
by Seward Johnson. He produced a whole lot
of statues in the ’70s and put them
into public spaces. In the ’70s, things
were getting a little dicey in some of
these public spaces. It turned out that if
you put something that looked like a person into
a public space, you made that space safer. People felt more like there was another person
around, all right? That’s the statue, okay? A lot of philosophers had
this huge thing about intuition like you can tell
that a robot is a person, you can tell that. How could you be
denying something, denying rights to
something that you can feel is definitely a person? You feel the statue is
a person too, okay? You can show that you
feel that because it alters your behavior, all right? So, “over-scaled statues
fascinate yet diminish us” — that’s something
Seward Johnson actually said. So, he got bored of making
human scale and started taking pieces of art
and blowing them up. It’s really cool.
Those are my parents hanging out at the statue. This is a place in New Jersey
called Grounds for Sculpture. So, yeah. There’s a concept from
psychology, again, relevant to this: the supernormal
stimulus. If you have something
that you evolved to attend to, and you then encounter
something more extreme than anything in your evolutionary history —
a greater example of that thing — you can actually over-respond
to it, and your biology doesn’t shut
you down from doing that. Arguably, we’ve been doing things like this for a long time. So, what happens with AI? This is how I got into AI ethics. So, I have a couple
degrees in psychology, but I also have a couple of
degrees in AI. When I started my PhD, I was on this project, this robot, which — at the time
that I was on it, and at the time this picture
was taken — did not work. This is basically a sculpture
made out of motors, okay? It turns out there were all these processors
that were supposed to be its brain, and it turned out they
weren’t properly grounded. So, anyway, the
whole time I was on the project, it didn’t work. Yet people would walk by and say, “It would be unethical
to unplug that.” They were proud to know that they had this
certain knowledge that, I couldn’t unplug
this thing, all right? They generalized from feminism and the civil rights movement. They’d found out that
the most unlikely things turn out to be people, right? So they could just
say, “Oh, yeah. Well, obviously, if
a woman could be a person, then this is too, right?” So, anyway. Yeah.
There’s something really strange going on here. They really, really get angry if you say that this
is not a person. They say
that we have a right to fall in love
with these things, we have a right to have our
best friends that we can buy; that we can buy and
sell citizens, right? So, I think this is both
sick and dangerous. I say that as somebody who seriously thought
she was going to dedicate an entire PhD to making this into
a child, all right? So, I realize, I get
it, I’ve been there. But, you can read some
of the papers about this Kathleen Richardson
particular talks about this in
the feminist context. So, this is me back then — I was young, I know. But the robot on the right didn’t work at the time; it
did eventually work. The one on the left
already worked, but nobody ever said, “Oh, it would be unethical to unplug
the insect robots.” Even though they actually worked, they were smarter in
some sense, right? So, yeah, and I’ve
got an unplug code. My supervisor at that time — again, I didn’t finish with him — was called a conservative for saying robots might not take over. People were absolutely certain that robots were going
to take over, right? And this is not, again,
random people. This was people on
a panel he was on, of other scientists and
philosophers and AI builders. Okay. So, because of all this, I wrote with the only
philosopher I knew at the time. We wrote this in 1996. It took us forever to get it into the crappiest possible workshop. Then, finally, we did a rewrite and reboot and put it
into IJCAI more recently. But this is 1998 text. “We believe exaggerated
fears of and hopes for AI are symptomatic
of a larger problem, a general confusion
about the nature of humanity and the role
of ethics in society. Our thesis is that
these exaggerated fears are false concerns which
can distract us from the real dangers of
AI technologies.” Often, I get cast as
somebody who is saying, “Oh, there’s nothing to
worry about with AI.” I don’t know, you’re worrying
about the wrong things with AI, right? “The real dangers are
no different though from those of other artifacts
so potential for misuse, either through carelessness or malevolence by the people
who control them.” All right. So this is a slide that I wrote when
I started giving talks because of the things
people were saying about our AI bias paper
that came out, right? So, a lot of people
said, “Oh, look, this is terrible
because society is bad, therefore AI is going to be bad.” So, yes, you can have
biases in this case — I don’t like to use
the word “biases,” but anyway — that are absorbed automatically by machine learning
from ordinary culture. That’s true, that’s
one source of them. There’s also
accidental bias that’s being introduced
through ignorance by, for example — this is Kate Crawford’s point — an insufficiently diverse
development team. Yes, that’s also a problem. But the main thing I
worry about is that someone could deliberately
introduce bias. Somebody could say, “Hey, I’ve been elected only
by the people from the East Coast or
the east side of town, so let’s figure out a way to save money and let’s just
take it out of the west side, because they weren’t
voting for me anyway, so we don’t need to worry
about that,” right? You can go and do that. If you forget that AI is
something somebody built, then nobody is going to guess, nobody is going to notice, right? Nobody was talking about this, they were all sitting there in the New York Times talking about a Silicon Valley’s
problem with lack of diversity which is an issue
but it’s not the issue. It’s an issue. Right. Yeah, by the way, back to
what I was saying. Yeah, this got the most press
I’ve had, by an order of magnitude, ever, and the malevolent AI book
has come out since then. But, yeah, malevolence, right? Malevolence, deliberate,
right. So, anyway, ethical instincts
and ethics itself we claimed was rooted
in identity, right? That’s something
a lot of people said. We humans mistakenly place language, mathematics,
and reason at the core of humanity because they discriminate us from animals and then we can use
them and stuff, right? We thought back in 1996, when we wrote this, that once we have
empirical experience of AI this confusion is
probably going to go away. We’re talking about
like, how can we get children the experience
of building AI, young so they can be
smarter about this? Right. Then, maybe we’d even start informing our human ethics. So, the sound is not coming on. So, this is just undergraduates finishing a course on Intelligent Control
and Cognitive Systems. But they didn’t develop
these robots to be social; they had just been working in
a room, but listen to them.>>It’s incredible.>>I like how it does.>>I don’t know if
you’re picking this up, but it’s pretty
clear to me that these guys are identifying with the robots they have built. They are projecting
themselves into the space and it’s as
if they were there, that they’re falling over. They’re saying these things,
they’re seeing themselves in their robots,
they’re running around. I was talking to them at the same time. Those robots were built by them individually for
their projects, and then we had this thing called the Beauty
Competition at the end of the year, to decide
which robots don’t have to be dismantled but rather can be used to attract freshmen to the University of Bath and
live for another year. For the Beauty Competition
we bring in three naive people who don’t
know anything about AI — one year we used the theory of computation group — to look at the robots and see which one looks most intelligent from the outside. Okay. So, anyway, the point is that the students are seeing their robots doing
something they’ve never done before
which is interact with other robots and they
project themselves into it. We get this final year student at a leading computer science
undergraduate to create, in Britain, who has built and programmed
to run it themselves, talking about loving this robots. Okay. So, I’m sure you can make some kind
of association here. I think that at least
for developers, we could hope for something like
the character–actor dissociation you get in movies, but it’s not that clear to me; in fact, you know, anyone who
is an actor knows that not everybody can make
the movie character–actor dissociation anyway. Right, it’s not clear
what’s happening to people when we put
robots out there that they’re able to tell
that there’s this narrative that we like to play and then there’s the reality
of the robot. Yes.>>I don’t think
it’s just AI though, I’ve done a study of programmers debugging distributed systems
and they say things like, “The server says this” and then “The client says that”, right? I don’t know if they’re
anthropomorphizing it, so much as just it’s
a useful vocabulary, like it’s just a useful metaphor for describing the technology?>>That’s certainly true. That’s actually
a really difficult problem especially if you start
becoming known for this stuff. You will hear, I have
to go back through my proofs and make sure that I haven’t done that all the time, because of course
it was easier to say the car knew
this or whatever. So, that is ordinary
language. You’re right. But, when it becomes something
that an awful lot of ordinary culture has told
people really is a person, then I think it starts to become confusing and so that’s why I do go back to my proofs and
try to make that change. But, yeah. I remember when we
used to say that multi-agent systems were
going to be a new way to do software engineering, because
they help people think better — especially in
real-time, proactive contexts — about the responsibilities of the
different parts of the system. If they can use
social reasoning on those different aspects and there’s all this
nice anthropology, showing that we’re
better at reasoning about humans than about abstractions. So anyway, I think this is the way I currently
think, what’s going on. So, I sort of
mentioned this before, intelligence is doing
the right thing at the right time and
Artificial Intelligence. The only difference between that and natural is
that it’s an artefact, it was deliberately made. Right. So, agents are
just a vector of change. We don’t care about agency. What we care about
is moral agents. Okay? The moral agent
is the thing that a society holds responsible
for their actions. Now, a lot of people think, when you get into ethics that there is some absolute ethics that we’re all
converging towards. That somebody has written down a set of rules and
carved it in stone, and we just need
to figure it out. But, I can give you very clear examples of moral agency being different between societies which is that, what age is a child responsible
for their own actions. Okay? Massive disagreement
about a fundamental question, between cultures, even
how old do you need to be to say that you want to be married or to consent to sex. Right? This is something that we spend a lot of time
trying to figure out. So moral agency is something that a society works out for itself. While patients see similarly as the other side is what are the things we
feel responsible for? Now, the interesting thing
is that these together are moral subjects and if
you look at for example, Roman law, they sort of say, “You’re only a moral or a legal patient if
you’re a legal agent”. That the law is about us, but that’s the opposite of
psychological intuitions, so people feel that there’s more patiency where
there’s less agency. So, we all are supposed to be
helping babies, but a lot of people don’t feel obliged to adults that have addiction problems or whatever —
they’re like, “Hey, come on, they’re
adults, they should take care of it themselves.” So, these are the definitions, so that we can have
a clear conversation about what the
actual problems are. Okay. Now this is all just true. This is a weird thing
that people hate. Okay? So, my claim is that, ethics is a set of behaviors that creates and
sustains a society, including by defining
its identity. That’s why we get hung up on some of the things like okay, the British don’t eat horses. They don’t. The people in
Eastern Europe they do. But it’s a big deal. Yeah.>>Probably this is not
your main point but do you think visual illusion is
a part of intelligence?>>Sorry, visual illusions?>>Illusions, yeah.>>Well, I think it’s
a part of art. Yeah. Yes. That’s an
interesting question that’s not directly on
the current narrative, so let’s save it for
the Q and A. Okay?>>Thanks.>>All right. So, ethics is something that a society
develops to stabilize itself. There are things that will be, this includes general principles. Right? Like murder and theft and things like that tend to destabilize the society, any society, so societies
will tend to converge on those kinds of rules. But then as I said, one
of the things that helps you keep your society together is by maintaining this identity, deciding who is a
part of your society. That is actually one
of the things by which a society defends itself. I mean you could
think of it as the original part of governance, is that the main thing that
governance does is allocate resources to defining
an identity for a group, to build some kind of defenses around a village or something. Right? That’s that sort of redistribution and
security are sort of the central things
that are defining a society so it can function
in particular ways.>>Is it fair to
say that ethics are necessary but not sufficient?>>What I’m claiming here, is that all the behaviors by which we
maintain the society — I’m just calling those ethics. So, to the extent that you need behavior — and this is
kind of a weird definition, an eccentric one for communicating; it’s not what you’re going to
find in the dictionary — but to the extent that
you need behavior to help the society: do
you need other things? Is that sufficient? Well,
you need the individuals. So, there has to be a society, and society is made of something.>>Let me give you a very, very tough type of example: you can have two sets
of very divergent ethics that are collocated. In most states, people
have very different beliefs. Is there a society
or two societies?>>Right. That’s
a really interesting question, so as you say. So, to some extent there are sub societies within
that and then there is the super society which has presumably some part of
the intersection, and some other rules
too including just being enforced by
the state powers. So, yes. So, okay.>>What is the role of
change in that mix? Like, temporal change. Because this definition
seems to be seeking something more static.>>No. No, it’s not. Actually, yes, change
is a big issue. I don’t want to go
into great detail, but the whole reason that I was setting this definition up, which I don’t
actually go into that much in the rest of the talks, so I’ll go ahead
and say this here, it’s because given
that ethics is itself an artifact and the AI is
definitionally an artifact, there’s no fact of
the matter of where we stick AI into society. That’s not a scientifically
discoverable fact. It’s rather something that
we need to be optimizing. So, this comes into where your research and mine are similar: we keep doing things that improve that solution. So, the whole reason I’m saying this is to get it across to people. Because while you might
think that ethics is a static target that
you’re moving towards, that you’re just perfecting, believe me you can get
professional philosophers who are really angry about this stuff, who will get up and
say something like, “We now know better
than people knew in the 1940s or whatever.” We know some things better. I’m not one of those people who’s a complete cultural relativist. I do believe that we are
aggregating more information. But I wouldn’t say that we have the perfect ethics
or that that’s even definable because it depends
on the particular society. The societies have to be agile and able to
deal with change. So, I would expect it to change. So that’s not what
I’m going towards. In fact, that’s what I’m
trying to defend against. Okay. We’ll come to
some more of that. Again, this is not
the kinds of things I think you guys are concerned about but I’ll go ahead and
leave the slide in. When I say that
AI’s place in ethics, it doesn’t have to be a moral
agent or a moral patient. What I’m trying to
say is that we don’t have to take it as
something we have to give that kind of consideration. That doesn’t mean that we
can’t take moral action. All right. So, what
is a moral action? If you have a behavioral
context that affords more than one possible
action and first of all, the society believes that some
or one of those actions is more socially beneficial and the individual is able
to recognize that, then in that case I would
call that a moral action. When you have to
make this decision, whether you make a more
or less moral which is more or less socially
sanctioned action. I’ve deliberately again set
this up in a way to make it totally easy to build and you can talk about it
in things like monkeys. So, monkeys will punish other monkeys that aren’t
doing the right thing, that don’t give food calls, or apparently they’ll even all get a whole troop barking at a dominant monkey that’s
being too abusive of its position of power and beating up another
monkey too much. So, you can talk about that kind of action in
monkeys. Or with a pet: if you come home and
the dog’s pooed in the middle of the rug and you
think the dog knows better than this and
you shout at the dog. The dog probably thinks you knew better than to come home at 12 at night and that’s
why I pooed there. So we are both
sanctioning each other. But nobody outside of
your household is going to consider the dog to be
a moral legal agent or whatever. But you’re acting within
the society of your family, that you thought
the dog knew what its options were and that
it chose the wrong one. So, I’m not saying the AI
can’t take moral actions when I say that we don’t need
to make AI out to be moral agents or
moral patients. Okay. All right. Yes. So yes. Just to be controversial, monkeys, children, pets, women, slaves and robots can autonomously select moral
actions without being held to be moral or legal agents
by their wider society. I’m not saying I support
all those solutions. I’m not saying that I
think it’s a good idea to consider some people to not be responsible or to own
their own actions or property. But I’m saying that those
are the kinds of decisions societies have previously made. I think it’s very
important to realize that when we talk about human justice, it’s not about maintaining
some kind of balance, a lot of it is about dissuasion. So, one of the things
that I’m going here with this set of things, I’m realizing that
I’ve put too many interesting talks together and not drawn quite enough threads, but where I’m going with this is it doesn’t
make sense for the AI to be set up as a legal person, because we’re not going to
be able to sanction it. So, we can still have it taking legal actions but not
be the legal agent. So, monkeys and pets are easily dissuadable by things
like social isolation. Being isolated is torture
for a social animal, like fish that like to school, they die of stress
when you take them to an isolated situation. It is torture for humans. So, monkeys and pets are dissuaded by the kinds
of things we do. They may not care if they
lose their money but they care about being isolated
from their families. AI and shell companies — you can make them care about
money and things like that, but they aren’t dissuadable. The whole point of
a shell company is that the people who are taking the fall are not
the ones who mind. That you’ve set
something up that can go bankrupt and nobody
actually cares. So, it only makes sense
for a company to be a legal person if someone cares whether
that company goes bankrupt. I mean, if the people
who are making the executive decisions of that company care if it goes bankrupt. So AI is the ultimate
shell company, and that’s what I’m
trying to defend against in this slide. If that was a little
too fast and loose, we could go into
this really nice formal paper. These two guys are
law professors. This is just alphabetical. He did most of
the good decisions. He did most of the good writing. But this came out last year. So, my conjecture
here is that none of the things like consciousness,
language, and ethics, are the things that mandate
moral subjectivity, that we need to worry about AI as being a moral agent
or moral patient. Oops, I’m going to turn
down the sound on this. So yes, for billions of years
intelligence, consciousness, agency, language,
morality, and divinity have all been at least
seen as correlated. But as we all know, correlation is not equal to causation. It drives me crazy when
people sit here and worry about when is AI
going to become conscious. We can build it to express the different definitions
we have of consciousness. That doesn’t change
its moral status. So for example, one of
the things that makes a human incredibly valuable is the fact that they’re
not replaceable. Once someone is dead
they’re just gone. AI you can backup. So, even if you had
something and it looks exactly like a person, if you decided to
have one copy of it then you don’t have
the same kinds of ethical obligations towards it. So, yes — moral subjectivity. So, this is a complaint
about philosophers who confound these concepts
around a single term. I do actually do robots too. I’m just talking a lot here. It’s Aude Billard’s robot,
that’s my PhD student. Anyway, what’s interesting to me about consciousness is why only part of what we do is accessible to
explicit description. I think we understand that now. As I mentioned,
the combinatorics problem, you can’t be worrying about
all the things all the time. But also I think there’s a phylogeny whereby
language evolved out of the module
for planning action sequences and
things like that, and then it became
the means by which we negotiate coalition
agreements with others. This is why we can
do things like say, “Who cares that women
didn’t used to have jobs?” Now we think it would be more fair if women could have jobs. The part of us that
used to do planning, we also use for negotiating coalitions, and that’s part
of creating fairness. So, that’s cool and we could
use AI to do that but again, doesn’t mean necessarily that that has to be a moral agent. So, what we experience phenomenologically
as consciousness, seems to be a point when we update important models
and the time that we apportion to that is proportional to how
certain we are about it. Again, all of us who work in machine learning could
imagine doing this if we have bottlenecks for
Machine Learning and we allocate it to the problems
we’re currently working on. It’s not necessarily
about choosing, actually I don’t know why
I threw this slide in because this is
another great paper. I really like it, but
it’s a longer talk. So, ask me to come back
to that if you want. But my point is that
when we are conscious, when we’re able to
describe what we’ve done, we seem to be in
this particular position of updating our current models. The same thing, we can do
exactly the same actions and not remember having done them in a different kind of context. So, it’s not just about
action selection. I think it’s about learning. But again, I’m trying to
demystify consciousness and say, “Look, that’s not
the point either.” The point is moral agency
and moral patience, and those are things that
we’re collectively defining. So we’re not just trying
to discover whether AI requires that motivation. Going back to this, remember I mentioned this before, all of this stuff this
is what we’re doing. So, for those of you who
didn’t raise your hands, this is sort of what
a word vector is. These are the words you’re
trying to understand what they mean, and
then you just say, “How often did ‘hat’ occur within some window of maybe 10 words of ‘rock’ or ‘paper’ or ‘scholar’, and how often did
the word ‘obscure’ occur within those windows?”
and, of course, you do that with two million
other words or something, but I can only draw
two dimensions, and then you have vectors that come from just these counts. So, it’s a very simple thing. It’s like a giant Excel
spreadsheet or something. Then, you just measure
the cosine between these angles. It’s semantics. That’s saying how similar
in meaning are these words. How related are they? So,
that’s all that’s going on, and this is how
search engines work. That’s why you can type in
four words and get a page. It’s not all that’s going on, as you guys probably
know better than me, but it’s the basic
underlying technology that first gets that
working for you.
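A minimal sketch of that counting-and-cosine idea, assuming a toy corpus and a fixed co-occurrence window; real systems like GloVe or word2vec do considerably more than this, so treat it only as an illustration of the basic mechanism.

```python
# Minimal sketch of co-occurrence word vectors and cosine similarity.
# The corpus and window size here are toy assumptions for illustration.
from collections import Counter, defaultdict
from math import sqrt

corpus = "the scholar wore a hat the rock band wore a hat paper beats rock".split()
window = 2

# Count how often each word occurs within the window of every other word.
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(cooc["hat"], cooc["rock"]))
print(cosine(cooc["hat"], cooc["paper"]))
```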
For the Science paper, we used two different corpora. We used the Stanford one, which is the Common Crawl, because we needed to get all those names, so it was like a lot of
relatively low frequency names and we only got this from having 840 billion tokens of which
2.2 million were unique. So, we got that from Stanford, but we also did the same thing using Word2vec and Google News. Apologies, I don’t
know if Microsoft also has, I’m sure you do, word embeddings that you
distribute, and corpora. The results were the same, except
that they didn’t have enough African-American names in the corpora, so we couldn’t put that
as the main analysis in the article. Anyway, the first thing
we did was this test. This is the first implicit
association test paper. The first one about finding this way of recording
implicit bias in individuals and they knew this was going to
be really controversial, so they started with something innocuous. The first thing they
report on is looking at the names of flowers and names of insects, and pleasant things and
unpleasant things, and they ask, “Can we find
that there are implicit associations — whether or not you want to count
them as biases?” Flowers are more
pleasant than insects, and insects are more unpleasant,
relative to each other. Yes, we can find that
with corpus linguistics. All right? I think this gets
really overlooked. This is basically saying
counting words returns viscera. So, even if you’re talking about phenomenological
consciousness, we sort of have that. So, even things like, “When will AI become empathetic?” Well, when Amazon started doing
recommendation services, that was one way in which it was
becoming empathetic. It’s not empathy like we have, where we generalize
from the models in ourselves to figure out
what each other is doing, which incidentally
is biased because it makes us more able to understand people
more like ourselves, so it’s actually one
of the mechanisms by which we humans create distance from others. I really like Paul Bloom’s book
Against Empathy. It’s like, empathy shouldn’t be the foundation of
morality anyway. But, the point is
maybe we can even do better with AI because it’s not generalizing from itself from an internal model based
on some experience. It’s generalizing from models that we’ve built on other people, so we can actually go out
and find out people all over the world. What
were they doing? Which ones are most like you, and what are you probably feeling if you’re doing that — given those other people tell
us how they felt? So, what about phenomenological
consciousness? What if we build something that’s so
amazingly smart that it starts suffering like
we would and it just can’t tell us like a
person is in isolation? People really worry
about this thing. This is weird because we’re never going to
build something in chips and wires that has
the same visceral experience that cows have — and we eat cows. This is a lucky cow
that’s out in the field. So, are we obliged to build AI like that? We aren’t obliged to, and that was the thing that I
first started writing about in the papers
people actually started reading back in 2007. So, I think what actually
matters is the AI is already superhuman and I’m sorry I spent so long on the stuff I
didn’t think mattered. But, that is the stuff I
usually get asked questions about. So, I think it really matters that we already
have superhuman AI. People say, “Oh, but it can’t learn generally
like humans do.” Remember the thing
about the pigeons? We have all this stuff that
we’re set up culturally and biologically to learn quickly, and what a surprise, we
can learn it quickly. But there’s
no more general-purpose learning rule than Bayes’ rule — I mean, come on, you know that.
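For completeness, here is Bayes’ rule as the general-purpose update being referred to; the prior and likelihood numbers are made up purely to show the mechanics.

```python
# Minimal illustration of Bayes' rule: P(H|E) = P(E|H) P(H) / P(E).
# The numbers are made up; the point is only that the same update
# applies to any hypothesis given any evidence.

def posterior(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing the evidence."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# e.g. prior belief 0.1, evidence 8x more likely if the hypothesis is true
print(posterior(prior=0.1, likelihood=0.8, likelihood_if_false=0.1))  # ~0.47
```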
So, yeah, AI is already superhuman, but people aren’t
recognizing it as being so, and then we can’t see
these kinds of hazards. So, like the economic hazard — I mentioned this before — I just want to talk about
where this comes from. Why are we having
this surging inequality? I think part of the problem — well, again,
we have a paper about this, next slide — but we think what’s
going on is when you introduce technology that
reduces the cost of distance, then you’re just going
to have fewer winners. So, in the late 19th
century that was like rail, it was telegraph. So, the newspapers were
getting this and oil. Oil was lighter and cheaper. So, it’s not that it eliminated
the cost of distance, but massively reduced it. So, if, say we’re going to
go for beer after this, and say we were actually
in a town not on a campus. We wouldn’t go to the best pub
in the entire town. We would go to
some combination of a good pub and one
that’s near to us. So, that’s one of
the ways you can have lots of pubs and lots of people can make money. But when all of a sudden you can get your newspaper across
the entire country, then there are
fewer newspapers because everybody wants to go get the New York Times,
or whatever. Not my favorite newspaper
and so on, whatever. So, when the Great
Compression happens, some people say, “Oh,
there was just a war. That’s just the war.”
It’s not just the war. As I mentioned, before the war there was a massive shift
in spending. The war was part of that — the war effort was part of that — but it continued beyond that,
through the Cold War: a shift in spending that
was redistribution. Maybe it was that we were giving incredibly high wages to uneducated white males. But, somehow we were
getting money out into the economy and we were
getting less inequality. Literally in 1945, I
mean, so this is FDR. So, after one World War and
the financial crash of ’29, there was a coalition
between the elite and the proletariat in
the Democrat Party in America, and that was the New Deal and that was
massive redistribution. At the end of 1945, globally we’re sitting around in Europe agreeing that
there would no longer be wealth extraction, and the banking laws by which national wealth
extraction occurred were eliminated for about 20
years, and then we innovated around them. Because they had figured out that
the countries in Europe that fell into fascism
were the ones that had the highest inequality,
where wealth was being extracted
because they’d somewhat arbitrarily been on the side that lost
the First World War. So, we need to get everybody to realize that we want to
get more into this area, not that the ’60s and ’70s
were completely smooth either, but it wasn’t like a world war. Empirically, it turns out that the right Gini coefficient is around 0.7. We’re not talking about
complete egalitarianism here. We’re not saying, like socialism, that everybody gets the same. You do want to get more money to people who are
doing good work. We’re just talking about how
steep the inequality is. This is actually a paper that got onto arXiv within the last week. Again, I’m not going to
talk about it too much — it was just on the slide — but we started
from the assumption, as I mentioned,
this question: why are inequality and political
polarization correlated? What we assume in the model is that in-group cooperation is less likely to pay off well
but also less likely to fail. What we showed was that if
the economy is getting worse, it makes sense after
a certain point to give up the
opportunity of having a higher expected outcome in order to make
sure that you have at least enough outcome so that you maintain
your solvency. The trick here is that you can’t have a society without the individuals being
able to survive. So, you can only ask individuals to give so much to society. However, the interesting
thing is that if the economy keeps
getting even worse, so that you can’t be solvent just by doing that
in-group cooperation, then you need to actually
desperately do something, and going to work with the out-group
makes sense again. So, that’s what you see down here. Then again, in some contexts — and I really don’t like to go into this too much before we
get through peer review, but we’ve shown this on a couple of models that polarization, it’s much easier to
get gradual increase than to get gradual decrease. Again because that non-linearity
at that one point. So, that might explain
why you tend to see these big corrections
like the New Deal, rather than everything just slowly relaxing again. Anyway, I already
mentioned the legal one. The assigning responsibility or personhood to artifacts
or trying to tax them. I’m sorry I know Bill Gates
thought we should tax robots. Robots are not countable. They’re not like people. If you set up a tax regime
on the individual robots, then we’re just going
to reprogram and redesign the robots.
We all know that. I’m actually one
of the people that supports redistribution based on the shift in the amount
of money a company is worth from one
year to the next. I mean, actually, I didn’t
put the slide in. So I’ll just say this
relatively quickly. I don’t believe that we’re having free Internet
interactions. I think that we’re
bartering data. We get data, entertainment because we
like going on the Internet. Companies get information
about us from that exchange. Because it’s not denominated, it’s not taxed so income tax
is not working right there. But it can’t even be denominated. It’s often years later
that you suddenly find new value in the old data. It doesn’t make sense. Income tax is not making
sense right now in the context of Internet services. That’s why I think we have
to go back to thinking about wealth tax which
incidentally is a lot easier to have
a nice stable regulation of wealth tax than of income tax. Income tax makes
all kinds of problems. I think we really need to
think about major shifts. But anyway, this is
just talking about. In Europe, they’re talking about e-personhood
for two reasons, partly because there’s people who want to make AI that will be their friends and
lovers that they own. So, that’s the guys who are
going to go on television. But then quietly there’s things like car companies
that are worried about creating driverless cars and how the liability
is going to work, when Google and Apple are
saying, “We’ll be liable. Of course, we’ll take
all the liability on.” They have more money than
they can legally spend. If you’re like
a car company that’s been paying pensions and
everything for decades, then that sounds scary. But making them into legal
persons so that you can say, “Hey company, I have an idea, why don’t you fully
automate part of your business process and
then we’ll allow you to cap your tax and legal liabilities by whatever you pushed off onto the robot.” This is why I first got into — well, not first, but why I got ramped up again in
2007: people started talking
about ethical robots in battle, and I didn’t want people to push the command decisions onto a robot and say, “Oh, bad robot.” But now I’m thinking
about this in terms of legal liability. Anyway, the European Parliament asked the European Commission to seriously consider
making laws of e-personhood so that’s why
we rushed this paper out. We really dropped
our other projects and got this paper out
to combat that, and it turned out that the commission is obliged to take seriously
what the Parliament tells it — the EU is sort of a democracy — but they shut this thing
down in like four months. But anyway, there are still
people talking about it. For example Estonia,
and if Estonia makes legal personhood
for any artifact, all legal persons in Europe have to be recognized
by all of Europe. So that’s a problem. We
already have this problem. There’s already
shell organizations which some people if
you’re into flossmoor you might want to read [inaudible] talk
about companies as AI already. My personal nightmare, and
this slide again is from before a certain country decided
to spend 30-something billion dollars a year
building AI and its leader became
president for life. But this is a concern: people so don’t want
to die that either they will believe that this chatbot
that’s trained on their e-mail is themselves or even if they don’t believe it, they just like
the idea of extending their will as we do already
with legal wills about like how this land can be
used or my wife will lose all her wealth if she
remarries, things like that. So people are already
trying to extend their will after they die. I really worry about
people doing this with data “trusts” or
something or legal persons. Anyway, closer to home,
in software development, I used to be really angry
at several developers at Microsoft that were coming
to policy events and saying, “You can’t hold us accountable,
because that would mean we can’t use deep learning and
you’d lose the magic.” But that all changed. That changed sort
of midway in 2017. Everybody got a message about accountability
and transparency being fundamental and human standardness being
fundamental values. I think Microsoft is absolutely being
the biggest adult in the room right now about this. I have been very
clear about this. But anyway, in the worst case, the AI is as inscrutable as humans. We don’t know what
the weights are doing in human brains either, but
we still audit companies. We don’t look at
the accountant’s synapses. You can disagree. You could say, but with
an accountant at least, we can put them on
the stand — we just saw that. An accountant goes on
the stand and says, “I absolutely believe
that Manafort knew what Gates told me
to do or something.” So, you can still
talk to someone. But again, we’re
really discussing whether we believe
that person or not. In technology, we can
actually do better. The stuff we're doing is facilitating that; we could mandate transparency in this accounting mechanism. We could be doing things like keeping our software and version logs in Git, or at least in-house, and being able to show: What did we change? Why did we change it? When did we train? What library did we train our system on? Why don't we do that? We have the black boxes.
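To make that concrete, here is a minimal sketch, in Python, of the kind of append-only "black box" a robot or autonomous car could keep; the field names and file path are illustrative assumptions, not any vendor's actual format:

import json
import time

def log_decision(logfile, perception, action, reason):
    """Append one record of what the system perceived and what it did."""
    entry = {
        "t": time.time(),          # when it happened
        "perception": perception,  # what the robot saw (summarised)
        "action": action,          # what the robot did
        "reason": reason,          # which rule or priority fired
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
log_decision("blackbox.jsonl",
             perception={"pedestrian_detected": False, "speed_kph": 32},
             action="continue",
             reason="lane clear")

The only point of the sketch is that the record is timestamped and append-only; the actual schema would depend on the system.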
I mean, this is great: there hasn't been a fatal car accident in autonomous driving where we haven't found out exactly what happened within a few days, where we knew what the robot saw and what the robot did. It may take longer to find out why it did that, but we have incredible accounts, unlike when humans hit people. So, where am I going with that? We can keep that stuff, and that's already a standard, because the car industry has tons of liability and they're very good at this. But the rest of our industry could do this too, and we could prove due diligence. This is what people have
done already with AI. In Germany, there was a car case where somebody had a stroke. I don't want to go through it; it's a long story, ask about it in the Q&A if you want. But the point was that the car company was able to show that there was no way they could have told that the person had had a stroke. It was just incredibly bad luck. How much of the software industry could do that if something went wrong with AI? Could they prove that they've done everything that they could possibly have done, that they met the state-of-the-art expectations? There's no reason not to. I think we need to know which code we linked to in [inaudible] products, which data we trained with, and where that data's provenance is. We can do that.
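As a minimal sketch of what that could look like in practice, assuming a Git-tracked codebase and invented field names and paths (nothing here is prescribed in the talk), a team could write one provenance record per training run:

import json
import subprocess
from datetime import datetime, timezone

def training_provenance(data_path, data_source, libraries):
    """Return a record of which code, data, and libraries a model was trained with."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],   # which code we linked to
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "code_commit": commit,
        "training_data": data_path,     # which data we trained with
        "data_provenance": data_source, # where that data came from
        "libraries": libraries,         # which libraries, at which versions
    }

# Hypothetical usage:
record = training_provenance(
    data_path="s3://example-bucket/interactions-2018-03.csv",
    data_source="opt-in user logs, consent form v2",
    libraries={"tensorflow": "1.8.0"},
)
with open("training_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")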
I just felt obliged to say that transparency is what we actually work on at Bath; it's for real-time domestic systems like game characters and little robots. So, because we don't have enough money, we had to build this thing by hand. But this is the priorities of the system. The system has hierarchical priorities, basically behavior trees. This came out of my PhD. You can see from the lining up, in real time, what it is doing, and when we expose the priorities to people (not quite this much detail, usually just that top level), even ordinary users get a better sense of what's really going on with the robot. It's great.
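A rough sketch of that idea, assuming nothing about the actual Bath system beyond "hierarchical priorities, basically behavior trees" (the labels and triggers below are invented): the robot runs the highest priority whose trigger fires, and the label of that priority is what gets shown to the user.

from typing import Callable, List, Tuple

# Each priority: (label shown to the user, trigger condition, action to run).
Priority = Tuple[str, Callable[[], bool], Callable[[], None]]

def step(priorities: List[Priority]) -> str:
    """Run the highest-priority behaviour whose trigger fires; return its label."""
    for label, triggered, act in priorities:
        if triggered():
            act()
            return label  # this top-level label is the transparency display
    return "idle"

# Invented example priorities for a little domestic robot.
battery_level = 0.15
priorities: List[Priority] = [
    ("recharging", lambda: battery_level < 0.2, lambda: print("docking...")),
    ("greeting",   lambda: False,               lambda: print("waving")),
    ("wandering",  lambda: True,                lambda: print("roaming")),
]

print("Now:", step(priorities))  # e.g. "Now: recharging"

Exposing just that one label, rather than the whole tree, is the design choice described here: enough for an ordinary user to see what the robot is up to without drowning them in detail.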
we’re looking at now is if we get these robots
to work at all, it won’t give us
the new version of Android. It’s like three years
behind Android. But once we get
the rights to work, we’re going to see
whether the human format interferes with people’s
ability to understand. Because we do have a little
money now but meanwhile, my student couldn’t wait
because he just graduate. So, he put a little bee costume
on that robot and that already changed how people responded to the
transparence information. Anyway, active research
with my students right now, but anyway back to moral.
You guys know this right? Do I need to explain what happened with
Cambridge Analytica? I assume everybody has heard
about this at this point. This is something that
unites my two countries. It doesn’t matter, it’s just
politics. Well, it matters. I mean, this matters. This is the scariest thing I actually read in 2017, which is really saying something. I'm going to go ahead and play it. Do I have the sound turned on again? We'll see.>>As a general, as
a retired general, we have an army of
digital soldiers. What we are now,
what we call them, I call them because this
was an insurgency, folks. This was run like an insurgency. This was irregular warfare
at its finest in politics, and that story will
continue to be told here but we have what we
call citizen journalists.>>Yeah, so I think this stuff matters. That's right, yeah. So, I think it's really essential, and this is something I'm telling people who are in government: that we hold people to standards. You can't magically know how people are going to use your software, but you can say that there are standards that most people in that system followed. For example, "We won't let you see the personal data that we've collected, but if you tell us, maybe we'll show advertisements to the kind of person you're looking for; we're not going to just give you their data." Then other people say, "Yeah, sure. Here, have their data." If you can show that
80 percent of people knew that was wrong and 20 percent of people did it anyway, well, those 20 percent should be held to account. Despite the obvious reference here, I don't know for sure, and I trust the courts of law to do this. But if they find that something is wrong, I hope they get really hammered. I'm not saying eliminate them, but there's a lot of hammering that could be done and could still be afforded. I mentioned this earlier; this
slide you might recognize. The second half of that slide is about the automatic kind of bias. One thing you can do is compensate with design; I think that's what humans are, like I said, explicit versus implicit memory. So, you can just know not to extrapolate from it. Ignorance, though, this is why I threw this slide in. It's because of, look, the conversation we had earlier: diversify, test, and iterate and improve. That's the thing. When something comes out of ignorance, we can fix it, we can improve it, and that's great. Yeah. Then, the only way to handle the deliberate kind is by audits and regulation. So we've got to expose our stuff enough to help people be able to prove that stuff. Okay. So, this is the last slide. I know you guys are
dying for that by now. ICT systems have designs and have architecture; and architects, as undergraduates, learn about laws, they learn about policy, they go on and get licensed, and they learn how to work with governments and urban planners, because at some point in the past we decided this mattered. It used to be that an average person could build a building, and sometimes buildings fell on people, or they stuck one somewhere and it clogged up traffic. We don't accept that anymore, and I just think we've gotten to that stage now. We, as computer scientists, have matured to the point where our buildings are falling over on people and affecting everyone. So we need to start teaching our undergraduates how to work with government, and we need to collaborate with government in order to make the world stable enough that we can flourish. The rate of successful, sustainable innovation is what matters, not just the speed to market. So, thank you. I'm sorry that ran a little
longer than I meant.>>[inaudible] for questions?>>Believe it or not, there's still 20 minutes left.>>Maybe I'll start with one. You had a slide, and I'm probably going to make mistakes reiterating it.>>Well, please.>>You have a kind of definition for moral behavior
that was talking about selecting
actions that optimize the societal benefits
or societal value. Do you see this as a prescriptive way of
coding up AI agents?>>This is such a cool question. I have another one-hour-long talk about this question, because even with humans, one of the things, and that's why I talked about it, is that a society considers an action to be moral when it's something that the society decides it wants. So, I was doing something for
about a decade before now, starting about 2008, working on understanding why there’s
cultural variation in public goods investment. Why? Because of that stuff
I told you guys before about culture
being a public good, so I was trying to understand why only one species has language. I was working in public goods, and then actually
the US military asked me to work on cultural variation, so I put these two things together. It turns out that different regions of the world have people who tend to treat those who invest in the public good differently, and some of them actually punish those who give more to the public good than they do, if they're allowed to, because this is all secret, like a rat-lab experiment. So, if they can figure out a sneaky way, like vandalism or something, to punish the people who have been giving to the public good, they will, and they actually
have more tendency to do that in places that have
lower rule of law and lower GDP, and unfortunately those are
two correlated things, too. The economy is always
complicated like that. So, the point is that you have
to take care of yourself. There’s actually an expression
in Eastern Europe in Slavic languages that
if you don’t steal, you steal from your family. So, there’s a baseline of making sure that
you stay alive and that you need to be able
to do individual investment. So, I think part of what
society tends to do is try to get you to invest in society and so our culture
tends to do that, and so it’s probably
slightly overshooting the level at which you should be investing in the social level. So to give an immediate
example in AI, when Google had
their driverless cars going around in Mountain View, they had the strict rules of the road about what
you’re supposed to do at a four-way stop and the Google cars were
never going to go, and they had to build into
it what people always do, which is that at some point
you take the initiative, you don’t wait till
every other car has come to a full stop
and then go. You have to say, “Okay, I’ve been here long enough,”
and start going. I don’t think we necessarily have to
build our AI up to the paragon of ideal morality, but I do think we
need to, in fact, I think you necessarily have to equilibrate that
a little bit more. That’s one of the weird
interesting things that all this is going to bring
up. We actually are. This interacts badly, accountability interacts
badly with privacy, but I think it’s
a fixable problem. I’m just saying that it was
something we have to defend, but I think we are going
to become much more explicitly aware of
a lot of the things we had previously not been aware of because again explicitly
we were doing things, we’re being encouraged to do things where in some sets are natural but turned out
to be good and useful. Does that answer the question?>>Yeah, and I actually
have a follow-up question which is that kind of assumes that there is a common understanding of
what socially good is. I’m sure you thought about there could be cases
where a solution could be better for the majority and there could be
another solution that is worse for
the majority but definitely significantly
better for the minority, and this is really
real with all of the perceptive systems
where people talk about trading off percentage
they can receive for something in
return and so forth.>>Yeah. What this
means and this is, I actually wrote a paper
about this while I was still a PhD student too, that the closer AI gets
to human intelligence, the more what we’re
talking about it. That is fundamentally what political scientists talk about when they talk about democracy, defending the rights
of the minority. Half of us dropped out of undergraduate or whatever
and blindly go and just do a lot of programming
and not realizing these entire disciplines are
working on these problems. So now, we are society. We have to deal with
all those problems, the defending the rights of
the minority and the fact that there isn’t a particular
absolute correct solution. That solutions may be different
in different contexts. That’s the kind of
complexities that law and politics have had to
deal with for a millennia. Yeah, absolutely.>>It’s an ambiguous question
but bear with me.>>That’s all right.>>Should or could a moral agent is such without
being part of a society? How can we be a moral agent without being part
of that society?>>Okay.>>Or should you?>>Right. First of
all, we need to disconfirm these two terms. So, there’s an agent being
moral taking us moral actions, and there’s the moral agent,
as I just described, which is the thing
that’s being held accountable for its actions.>>Then backtracking, can
you be a moral agent?>>I don’t think, for either of those two
definitions actually, you can be without in society because the society is one that’s declaring the rules
that the person being moral is following, and for the moral agent that’s been defined
by the society, it’s basically the side of responsible actors
where obviously the society what defines it. So for both of those, I think you need a society to get the definitions for the morality. I don’t think that morality
exist in a vacuum, but that’s okay
because you’re not going to get intelligence
in a vacuum either. The microbial will
totally take it out. All of us are intelligent
cognitive species. We all have a social. If nothing else, we
come from parents. We’re all the products
of sexual reproduction. So, yeah.>>I’m really curious
about your last piece, about how to control
that third threat, I’m just like malicious AI
and specifically regulation. So, what are some of the government regulations that you foresee and what would you recommend for those of us
in the technical community to stay up-to-date on
those and get to that>>Well, the main thing
that we’re trying to do in the UK at least
which is one of the most- I’ve been in
academics since 2002. So, I’ve been an adult
since I don’t know when, but basically I’ve
been a professional in this particular
discipline since 2002, so I’m just more
familiar in Britain. But at least in Britain, we don’t feel like we
need a lot of new laws. We just need to
start applying them into ICT and into software, which is something
we hadn’t been doing before because people
didn’t understand it. So, I think the
main thing is if we maintain the current standards
of accountability, then that will encourage-
so if I just say we. If the government, which I
do feel we’re all part of, if the government part
of us maintains accountability into
the software industry and starts understanding it, and actually McCrock had a great interviewing
Wyatt about this. Of course we’re always watching new industry for a while for a few decades and then we
get involved. But anyway. So, if they start
pursuing accountability, then we, the other part of we, the software industry,
are going to be highly motivated to be able
to have transparency. At least sufficient transparency so that we can prove that
we’ve done due diligence. Right? So, because you’re not going to have
perfect transparency. I already said this
to one of you. I think it was you, today. But there’s this thing called the map of Germany problem and
I’ve no idea why it’s Germany. But the point is that a map
of Germany that was perfect, that had every detail, would be the same size as
Germany and it would be no use. This is actually
another re-framing of the whole box thing
about all models are rhymed but some
are useful, right? So, there is of course a trade-off about how much transparency you can have, how much explainability you can have, those kinds of things. But we can motivate companies to make the
trade-off themselves internally rather than
mandating the amount of transparency by just
maintaining accountability, and accountability is
actually already in the law. It just hasn’t been pursued into the technology
companies before, and now the governments are realizing technology
companies matter. So, maybe they will
just start pursuing it. So, what we’re trying
to say is that we don’t really need
so much new laws and so we need new practice and just better education in
the prosecutor’s offices. Actually, again,
talking to the people, I’ve only talked to
three people so far, I don’t crap up names but it sounds like Microsoft is already realizing this is
a big corporate entity. You can cast in the positive way, which is that you
want the software to be doing what’s
best for society. You can also say that
you want to avoid big corporate
liability cost, right? Those come for free together.>>I have a question, there’s this interesting- let’s quote the [inaudible] I
like to hear your thoughts. Does the opinion or the school
of thought that says, “Let’s create Artificial
Intelligence” and the school of
thought that says, “No, that’s fool’s errand let’s create assist of intelligence.” Something that is always
partnering with a human, cannot have
independent behaviors, it’s always at the service of [inaudible] prosthesis
if you may of a human. What do you think things should
go if you have a choice?>>Okay. So, I think that that’s a great question,
I’m glad you asked it. You’re not quite
using the language I presented here but that’s okay. Because you’re using for
Artificial Intelligence, so I briefly mentioned
a lot of people are using for Artificial
General Intelligence. By which they mean all kinds of contradictory things because
it’s not a coherent thing, and I was mostly talking
about the omniscient AGI. But you’re talking about
the human-like AGI, that there’s something
that’s really is human unconscious and all those
other kinds of things. So, a lot of this was about that and
it didn’t come together, so now I have a chance to bring it together,
so that’s great. Thank you very much
for your question. The point is that we can’t
create something out of silicone that is an agent in our society as it’s
currently run. Because the sanctions that
we do to keep each other from doing the wrong things aren’t going to
work on a machine. Right? Machines are not
going to have- I mean, you could build into a robot a bomb and a timer and the timer is set when there’s
no other person in the room and the bomb is going to
go off if there’s no person around
for five minutes. Okay. You can say, well that’s kind of like humans
that they don’t like to be isolated because- but it’s not liking to be isolated to know that you’re
just going to blow up. Right? Watch your investment
not being blown up. In humans and in fish
and everything else, there’s a systemic stress. Right? If we have well-designed AI that we can be responsible and accountable
for and all those things, you’re going to have things
like systemic stress. You are going to have modularity and you’re
going to have ways of knowing it’s going to
be more separable. So, I don’t think there’s a safe way to build
a system- basically, I think there’s no way that
we can build a system that we would regulate like we
regulate other people. That was why I just said I think even legal personhood has
been overextended right now, which is why we’re having
these problems with the Shell corporations
as long as people are building companies
deliberately to make them go bankrupt so they
can launder money. Right? So, that’s
the kind of thing that I think that we see when
we’re going out and building these other people. I mean, I understand that,
I want to do it too. Why do you think I got into AI? I understand the desire
to procreate, that’s a very fundamental part of life that we have
a desire to procreate. But I think it’s better. I mean there’s a lot
of things you can want that aren’t necessarily
good things for you. Right? I think it’s one of those things that it’s
better for us to figure out how to procreate in terms of keeping humans at the core, as the things that hold the responsibility and are the sources of the creativity. Right? So, you can
project your legacy in other ways, through your writings or even through your AI chatbot thing, right? But you don't make
that the legal person or the moral person
or the moral patient, sorry in this case,
that’s competing with your biological legacy. I don’t see that’s coherent. Now, I understand that
people think well, humanity is only going
to be around for four billion years
because that’s how long the sun is going to last. But actually, there’s been
no hominids that have lasted more than a couple of
million years as a species. But even with technology, there is like
how many software formats or data formats have been
around for 80 years. Right? Humans make it for 80 years and yet we think that AI is going
extend ourselves, this is this mathematical fallacy that I was talking before we. It’s not math, it’s not going
to be perfect and eternal. So, I think that we’re never, that was the cow slide, we’re never going to reproduce
something that’s going to have the same phenomenologies we have or even as much
as I could have thought of fruit fly let alone a
cow or another mammal. Right? But we can
build things that say, don’t shut me off or whatever. This is my earliest work so people looking at
my earliest work, it's just like: just build in the backup. Even if your best friend does turn out to be a robot, and somehow you have some way of freeing it, you don't have to. You should still, if you're in a fire, save the Shakespeare. Well, it's probably
backed up too, so you can save
a toddler or something, leave your best friend there
and your best friend comes walking up and hands you the
new build for its new body. Right? It’s not a problem. So, I think we need
to understand. Believe it or not, there were a lot of slides cut. We have to understand
that it’s okay, that our our society is based around the fact
that we’re a bunch of apes, and that our morality and our social intuitions and all those things are shared with a lot of other chimpanzees. Right? You’re not going to have something that has
the same motivations, whether social ones or ambitions or whatever, that chimpanzees have, in the thing you build. I mean, you could absorb it, like I said, from our text. Right? So, you could probably
get a blueprint that way, but it’s not going
to be empathically from the inside
motivated the same way. So, I think it’s a mistake to lead people down that direction because they do want it so badly. You said this is going
to be the last question; I thought there were seven minutes left, so that's not true. But my answer just
took six minutes. There’s only one minute left.>>Thank you very much.>>Okay.
