-
34c3 intro
-
Herald: ... please welcome Anja Dahlmann, a political scientist and researcher at
-
Stiftung Wissenschaft und Politik, a Berlin-based think tank. Here we go.
-
applause
-
Anja Dahlmann: Yeah, thanks for being here. I probably neither cut myself nor proposed, but I hope it's still interesting. I'm going to talk about preventive arms control and international humanitarian law, and their role in the international debate around autonomous weapons. This type of weapon is also referred to as Lethal Autonomous Weapons Systems, LAWS for short, or killer robots. So if I say LAWS, I mostly mean these weapons and not legal laws, just to confuse you a bit.
-
Okay. I will discuss this topic along three questions: First of all, what are we actually talking about here, what are autonomous weapons? Second, why should we even care about this, why is it important? And third, how could this issue be addressed at the international level?
-
So, I'll go through my slides. Anyway, what are we talking about here? Well, in the international negotiations no common definition has been found so far; states parties are still trying to find one. For my presentation I will just use a very broad definition of autonomous weapons, which is: weapons that can, once activated, execute a broad range of tasks or select and engage targets without further human intervention. A very broad spectrum of weapons might fall under this definition - including some existing ones, which you can't see here. That would be the Phalanx system, for example. It's been around since the 1970s. Sorry...
-
Herald: We can't hear anything on stage. Please go on.
-
Dahlmann: Sorry. So, the Phalanx system has been around since the 1970s. It's a US ship-based air defense system that defends the ship against incoming objects from the air. It has been around for quite a long time, and it may or may not fall under this LAWS definition, but it gives you an impression of how broad this range is: Today we've got, for example, demonstrators like the Taranis drone, a UK system, or the X-47B, which can, for example, autonomously land
applause
-
land on aircraft carriers and can be air-refueled and things like that, which is apparently quite impressive if you don't need a human to do that. And in the future there probably will be even more autonomous functions - so navigation, landing, refueling, all that stuff. That's, you know, old news, but at some point weapons might be able to choose their own ammunition according to the situation. They might be able to choose their target and decide when to engage it without any human intervention. And that's quite problematic; I will tell you why in a minute. Overall, you can see that there is a gradual decline of human control over weapons systems and over the use of force.
-
So that's a very short and broad impression of what we're talking about here. And talking about definitions, it's always interesting what you're not talking about, and that's why I want to address some misconceptions in the public debate. First of all, when we talk about machine autonomy, or artificial intelligence, which is the technology behind this, people - probably not you, but in the media and the broader public - often get the idea that these machines might have some kind of real intelligence or intention, or be entities in their own right, and they're just not.
It's just statistical methods, it's just math, and you know way more about this than I do, so I will leave it at that and just highlight that these machines, these weapons, have certain competences for specific tasks. They are not entities in their own right, they are not intentional. And that's important when we talk about ethical and legal challenges afterwards. Sorry. There it is.
-
In connection with this, there's another misconception, which is the plethora of Terminator references in the media as soon as you talk about autonomous weapons, mostly referred to as killer robots in this context. And just in case you intend to write an article about this: don't use a Terminator picture, please. Don't, because it's really unhelpful for understanding where the problems are. With this kind of imagery, people assume that the problems only start when we have machines with a human-like intelligence which look like the Terminator or something like this. But the problems really start way before that: when you use assisting systems, when you have human-machine teaming, or when you accumulate a couple of autonomous functions across the targeting cycle - the military steps that lead to the use of force, or to the killing of people. The Terminator is really not our problem at the moment. So please keep this in mind, because it's not just semantics to differentiate between these two things. It really manages the expectations of political and military decision-makers.
Ok, so now you've got kind of an impression of what I'm talking about here, so why should we actually talk about this? What's all the fuss about? Actually, autonomous weapons have, or would have, quite a few military advantages: They might be, in some cases, faster or even more precise than humans. And you don't need a constant communication link, so you don't have to worry about unstable communication links, about latency, or about detection or the vulnerability of this specific link. So yay! And a lot of, let's say, very interesting military options come from that. People talk about stealthy operations in shallow waters, for example, or remote missions in secluded areas, things like that. And you can get very creative with tiny robots and swarms, for example. So, shiny new options.
-
But, and of course there's a "but", it comes at a price, because you have at least three dimensions of challenges in this regard. First of all, the legal ones. These weapons might be - will be - applied in conflicts where international humanitarian law, IHL, applies. And IHL consists of quite a few very abstract principles, for example the principle of distinction between combatants and civilians, the principle of proportionality, or military necessity.
-
They are very abstract, and I'm pretty sure they really always need human judgment to interpret these principles and apply them to dynamic situations. Feel free to correct me if I'm wrong later. So that's one thing. If you remove the human from the targeting cycle, this human judgment might be missing, and therefore military decision-makers have to evaluate very carefully the quality of human control and human judgment within the targeting cycle. So that's law.
-
The second dimension of challenges are security issues. When you look at these new systems, they are cool and shiny, and like most new types of weapons they have the potential to stir an arms race between states. So they actually might make conflicts more likely, just because they are there and states want to have them and feel threatened by them. The second aspect is proliferation. Autonomy is based on software; software can be transferred easily and is really hard to control, and all the other components, or most of the components you will need, are available on the civilian market, so you can build this stuff on your own if you're smart enough. So we might have more conflicts because of these types of weapons, and it might get, well, more difficult to control the application of this technology. And the third aspect, which is especially worrying for me, is the potential for escalation within a conflict. Especially when both or more sides use these autonomous weapons, you have very complex adversarial systems, and it becomes very hard to predict how they are going to interact. They will increase the speed of the conflict, and the human might not even have a chance to process what's going on there.
-
So that's really worrying, and we can see, for example in high-frequency trading on the stock markets, where problems arise and how difficult it is for humans to understand what's going on there. So those are some of the security issues.
-
And the last, and maybe most important, dimension are ethics. As I mentioned before, when you use autonomy in weapons or machines, you have artificial intelligence, so you don't have real intention, a real entity behind this. So the killing decision might at some point be based on statistical methods, with no one involved, and that's worrying for a lot of reasons, but it could also constitute a violation of human dignity. You can argue that you can kill humans in war, but they at least have the right to be killed by another human, or at least by the decision of another human - but we can discuss this later.
So at least in this regard it would be highly unethical. And that really just scratches the surface of the problems and challenges that would arise from the use of these autonomous weapons. I haven't even touched on the problems with training data, with accountability, with verification and all that funny stuff, because I only have 20 minutes.
-
So, sounds pretty bad, doesn't it? How can this issue be addressed? Luckily, states have - thanks to a huge campaign by NGOs - noticed that there might be some problems and a necessity to address this issue. They're currently doing this in the UN Convention on Certain Conventional Weapons, CCW, where they discuss a potential ban on the development and use of these lethal weapons, or of weapons that lack meaningful human control over the use of force; there are several ideas around. Such a ban would really be the maximum goal of the NGOs there, but it is becoming increasingly unlikely that this will happen. Most states do not agree with a complete ban; they want to regulate a bit here, a bit there, and they really can't find a common definition, as I mentioned before. Because if you use a broad definition like the one I just used, you will notice that it covers existing systems that might not be that problematic, or that you just don't want to ban, and you might stop civilian or commercial developments, which you also don't want to do. So states are stuck in this regard, and they also really challenge the notion that we need preventive arms control here - that we need to act before these systems are deployed on the battlefield. At the moment this is the fourth year or so of these negotiations, and we will see how it goes this year. If states can't find common ground there, it becomes increasingly likely that the issue will move to another forum, just like with anti-personnel mines, for example, where the treaty was concluded outside of the United Nations. But yeah, the window of opportunity is really closing, and states and NGOs have to act and keep on track there.
-
Just as a side note: probably quite a few of you are members of NGOs. If you look at the Campaign to Stop Killer Robots, the big campaign behind this process, there's only one German NGO in it, which is Facing Finance. So especially if you're a German NGO and interested in AI, it might be worthwhile to look into the military dimension as well. We really need some expertise in that regard, especially on AI and these technologies.
Okay, so just in case you fell asleep in the last 15 minutes, I want you to take away three key messages: First, please be aware of the trends and the internal logic that lead to autonomy in weapons. Second, do not overestimate the abilities of autonomous machines, like intent and these things - and, because you probably all knew this already, please tell other people about this, educate them about this type of technology. And third, don't underestimate the potential dangers for security and human dignity that come from this type of weapon.
-
I hope that I could interest you a bit more in this particular issue. If you want to learn more, you can find really interesting sources on the website of the CCW, at the Campaign to Stop Killer Robots, and from a research project that I happen to work in, the International Panel on the Regulation of Autonomous Weapons; we have a few studies in that regard and are going to publish a few more. So please check this out, and thank you for your attention.
-
applause
-
Questions?
-
Herald: Sorry. So we have some time for questions and answers now. Okay, first of all I have to apologize that we had a hiccup with the sign language interpretation; the acoustics over here on the stage were so bad that she couldn't do her job, so I'm terribly sorry about that. We fixed it during the talk, and my apologies for that. We are queuing here on the microphones already, so we start with microphone number one, your question please.
Mic 1: Thanks for your talk, Anja. Don't you think there is a possibility to reduce war crimes as well, by taking the decision away from humans and by having algorithms decide, which are actually auditable?
Dahlmann: Yeah, actually, that's something that is discussed in the international debate as well - that machines might be more ethical than humans could be. And well, of course they won't just start raping women because they want to, but you can program them to do this. So you really just shift the problems. And also, maybe these machines don't get angry, but they don't show compassion either, so if you are their potential target, they just won't stop; they will just kill you and not think once about it. So you really have to look at both sides there, I guess.
-
Herald: Thanks. So we switch over
to microphone 3, please.
-
Mic 3: Thanks for the talk. Regarding autonomous cars, self-driving cars, there's a similar discussion going on regarding the ethics. How should a car react in the case of an accident? Should it protect people outside or people inside, what are the laws? So there is another discussion there. Do you work with people in this area, is there any collaboration?
-
Dahlmann: Maybe there's less collaboration than one might think there is. Of course we monitor this debate as well, and we think about possible applications of the outcomes for our work, for example from the German ethics commission on self-driving cars. But I'm a bit torn there, because when you talk about weapons, they are designed to kill people, and cars mostly are not. With this ethics committee you want to avoid killing people, or decide what happens when an accident occurs. So they are a bit different, but of course you can learn a lot from both discussions, and we are aware of that.
-
Herald: Thanks. Then we're gonna go over to the back, microphone number 2, please.
-
Mic 2: Also from me, thanks again for this talk and for infusing all this professionalism into the debate, because some of the surroundings of our, so to say, our scene like to protest against very specific things, for example the Ramstein Air Base, and in my view that's a bit misguided if you just go out and protest in a populistic way without involving the kind of expertise that you offer. So thanks again for that. And then my question: How would you propose that protests progress and develop themselves to a higher level, to be more effective on the one hand and, on the other hand, more considerate of what is at stake on all the levels and on all sides involved?
-
Dahlmann: Yeah well, first, the Ramstein issue is actually a completely different topic. That's drone warfare, remotely piloted drones, and targeted killings, so there are a lot of problems with this, but it's not about lethal autonomous weapons in particular. Well, if you want to be part of this international debate, there's of course this Campaign to Stop Killer Robots, and they have a lot of really good people and a lot of resources, sources, literature and things like that to really educate yourself about what's going on there, so that would be a starting point. And then, yeah, just keep talking to scientists about this and find out where we see the problems; I mean, it's always helpful for scientists to talk to people in the field, so to say. So yeah, keep talking.
-
Herald: Thanks for that. And the
signal angel signaled that we have
-
something from the internet.
Signal Angel: Thank you. A question from IRC: Aren't we already in a killer robot world? A botnet can attack a nuclear power plant, for example. What do you think?
Dahlmann: I really didn't understand a word, I'm sorry.
Herald: I didn't understand that either, so can you speak closer to the microphone, please?
Signal Angel: Yes. Aren't we already in a killer robot world?
Herald: Sorry, that doesn't work - we can't hear it over here. Sorry, we have to stop that here.
Signal Angel: Okay.
-
Herald: We're gonna switch over to
microphone two now, please.
-
Mic 2: I have one little question. So, in your talk you were focusing on the ethical questions related to lethal weapons. Are you aware of ongoing discussions regarding the ethical aspects of the design and implementation of less-than-lethal autonomous weapons for crowd control and similar purposes?
-
Dahlmann: Yeah, actually, within the CCW every term of "Lethal Autonomous Weapon Systems" is disputed, including the "lethal" aspect, and for regulation it might be easier to focus on lethal systems for now, because less-than-lethal weapons come with their own problems - the questions of whether they are ethical and whether IHL applies to them. But I'm not really deep into this discussion, so I'll just have to leave it there.
Herald: Thanks and back here to microphone
-
one, please.
Mic 1: Hi. Thank you very much for the talk. My question is in the context of the decreasing cost of both the hardware and the software over the next, say, 20 to 40 years, and of actors outside a nation-state context, like private forces or non-state actors, gaining use of these weapons. Do things like the UN convention or the Campaign to Stop Killer Robots apply there - are they considering private individuals trying to leverage these weapons against others?
-
Dahlmann: Not sure what the campaign says about this, I'm not a member there. The CCW mostly focuses on international humanitarian law, which is important, but I think it's not broad enough. So questions like proliferation and all this - which is connected to your question - probably won't really be part of regulation there. It's discussed only at the edges of the debates and negotiations, but it doesn't seem to be a real issue there.
-
Mic 1: Thanks.
Herald: And over to microphone six,
-
please.
Mic 6: Thank you. I have a question as a researcher: Do you know how far the development has already gone? That is, how transparent or intransparent is your view into what is being developed and researched on the side of the military people working with autonomous weapons and developing them?
Dahlmann: Well, for me it's quite intransparent, because I only have access to publicly available sources, so I don't really know what's going on behind closed doors in the military or in the industry. Of course, you can monitor civilian applications and developments, which can tell a lot about the state of the art. And, for example, DARPA, the US Defense Advanced Research Projects Agency, sometimes publishes calls for proposals - that's not quite the term - where you can see which areas they are interested in. For example, they really like this idea of autonomous killer bugs that can act in swarms and monitor or even kill people, and things like that. So yeah, we try to piece it together in our work.
Herald: We do have a little bit more time,
-
are you okay to answer more questions?
Dahlmann: Sure.
-
Herald: Then we're gonna switch over to
microphone three, please.
-
Mic 3: Yes, hello. I think we are already living in a world of Lethal Autonomous Weapon Systems if you think about these millions of landmines which are deployed. So the question is: Shouldn't it be possible to ban these weapon systems the same way as landmines, which are already banned by several countries - just include them in that definition? Because the arguments should be very similar.
Dahlmann: Yeah, it does come to mind, of course, because these mines are just lying around there and no one is interacting when you step on them, and boom! But it depends, first of all, a bit on your definition of autonomy. Some say autonomous means acting in dynamic situations, and everything else would be automated, and things like that. I really don't want to define autonomy here, but this acting in more dynamic spaces, and the aspect of machine learning and all these things - they are way more complex and they bring different problems than landmines do. Landmines are problematic, and anti-personnel mines are banned for good reasons, but they don't have the same problems, I think. So I don't think it would be sufficient to just put LAWS, the Lethal Autonomous Weapons, in there.
-
Herald: Thank you very much. I can't see anyone else queuing up, so therefore, Anja, thank you very much - it's your applause!
-
applause
-
Herald: And once again, my apologies that that didn't work.
-
34c3 outro
-
subtitles created by c3subtitles.de
in the year 2018. Join, and help us!