
34C3 - Regulating Autonomous Weapons

  • 0:00 - 0:15
    34c3 intro
  • 0:15 - 0:18
    Herald: ... Anja Dahlmann, a
    political scientist and researcher at
  • 0:18 - 0:23
    Stiftung Wissenschaft und Politik, a
    Berlin-based think tank. Here we go.
  • 0:23 - 0:30
    applause
  • 0:30 - 0:40
    Anja Dahlmann: Yeah, thanks for being
    here. I will probably neither cut myself nor
  • 0:40 - 0:44
    propose, but I hope it's still
    interesting. I'm going to talk about
  • 0:44 - 0:49
    preventive arms control and international
    humanitarian law, and their role in this
  • 0:49 - 0:53
    international debate around autonomous
    weapons. This type of weapon is also
  • 0:53 - 0:59
    referred to as Lethal Autonomous Weapons
    Systems, LAWS for short, or killer robots.
  • 0:59 - 1:04
    So if I say LAWS, I mostly mean these
    weapons and not like legal laws, just to
  • 1:04 - 1:12
    confuse you a bit. Okay. I will discuss
    this topic along three questions. First of
  • 1:12 - 1:18
    all, what are we actually talking about
    here, what are autonomous weapons? Second,
  • 1:18 - 1:22
    why should we even care about this? Why's
    it important? And third, how could this
  • 1:22 - 1:31
    issue be addressed on the international level?
    So, I'll go through my slides anyway:
  • 1:31 - 1:38
    what are we talking about here? Well,
    during the international negotiations, so
  • 1:38 - 1:45
    far no real, no common definition has been
    found. So states parties try to find
  • 1:45 - 1:50
    something or not and for my presentation I
    will just use a very broad definition of
  • 1:50 - 1:57
    autonomous weapons, which is: weapons that
    can, once activated, execute a broad range
  • 1:57 - 2:02
    of tasks or select and engage targets
    without further human intervention. And
  • 2:02 - 2:08
    it's just a very broad spectrum of weapons
    that might fall under this definition.
  • 2:08 - 2:13
    Actually, some existing ones are there as
    well which you can't see here. That would
  • 2:13 - 2:20
    be the Phalanx system for example. It's
    been around since the 1970s. Sorry...
  • 2:20 - 2:23
    Herald: We can't hear anything
    on stage. Just keep going.
  • 2:23 - 2:27
    Dahlmann: Sorry. So, the Phalanx system has
    been around since the 1970s, a US system,
  • 2:27 - 2:33
    air defense system, based on ships, and
    it's there to, just, yeah, defend the ship
  • 2:33 - 2:39
    against incoming objects from the air. So
    that has been around for quite a
  • 2:39 - 2:45
    long time, and it might even be part of
    this LAWS definition, or not, but just to
  • 2:45 - 2:49
    give you an impression how broad this
    range is: Today, we've got for example
  • 2:49 - 2:58
    demonstrators like the Taranis drone, a UK
    system, or the X-47B, which can, for
  • 2:58 - 3:05
    example, autonomously land
    applause
  • 3:05 - 3:09
    land on aircraft carriers and can be air-
    refueled and stuff like that which is
  • 3:09 - 3:14
    apparently quite impressive if you don't
    need a human to do that and in the future
  • 3:14 - 3:19
    there might be even, or there probably
    will be even more, autonomous functions,
  • 3:19 - 3:26
    so navigation, landing, refueling, all
    that stuff. That's, you know, old news, but at
  • 3:26 - 3:30
    some point, weapons might
    be able to choose their own ammunition
  • 3:30 - 3:35
    according to the situation. They might be
    able to choose their target and decide
  • 3:35 - 3:42
    when to engage with the target without any
    human intervention at some point. And
  • 3:42 - 3:45
    that's quite problematic, I will tell you
    why in a minute. Overall, you can
  • 3:45 - 3:52
    see that there's a gradual decline of
    human control over weapons systems or over
  • 3:52 - 3:58
    weapons and the use of force. So that's a
    very short and broad impression of what
  • 3:58 - 4:01
    we're talking about here. And talking
    about definitions, it's always interesting
  • 4:01 - 4:06
    what you're not talking about and that's
    why I want to address some misconceptions
  • 4:06 - 4:13
    in the public debate. First of all, when
    we talk about machine autonomy, or
  • 4:13 - 4:20
    artificial intelligence, which is
    the technology behind this,
  • 4:20 - 4:25
    people - not you probably - in the media
    and the broader public often get the idea
  • 4:25 - 4:31
    that these machines might have some kind
    of real intelligence or intention or are an
  • 4:31 - 4:36
    entity in their own right, and they're just not.
    It's just statistical methods, it's just
  • 4:36 - 4:41
    math and you know way more about this than
    I do, so I will leave it at this and just
  • 4:41 - 4:46
    highlight that these
    machines, these weapons, have certain
  • 4:46 - 4:50
    competences for specific tasks. They are
    not entities in their own right, they are
  • 4:50 - 4:55
    not intentional. And that's important when
    we talk about ethical and legal challenges
  • 4:55 - 5:07
    afterwards. Sorry, there it is. And in
    connection with this, there's
  • 5:07 - 5:11
    another one, which is the plethora of
    Terminator references in the media as soon
  • 5:11 - 5:15
    as you talk about autonomous weapons,
    mostly referred to as killer robots in
  • 5:15 - 5:20
    this context. And just in case you intend to
    write an article about this: don't use a
  • 5:20 - 5:24
    Terminator picture, please. Don't, because
    it's really unhelpful to understand where
  • 5:24 - 5:30
    the problems are. With this kind of thing,
    people assume that the problems only arise
  • 5:30 - 5:34
    when we have machines with a human-like
    intelligence which look like the
  • 5:34 - 5:40
    Terminator or something like this. And the
    problem is that they really start way
  • 5:40 - 5:47
    before that: when you use assisting systems,
    when you have human-machine teaming,
  • 5:47 - 5:51
    or when you accumulate a couple of
    autonomous functions through the targeting
  • 5:51 - 5:58
    cycle, so through the military steps
    that lead to the use of force or to
  • 5:58 - 6:04
    the killing of people. And the Terminator
    scenario is really not our problem at the
  • 6:04 - 6:08
    moment. So please keep this in mind
    because it's not just semantics
  • 6:08 - 6:15
    to differentiate between these two things.
    It really manages the expectations of
  • 6:15 - 6:21
    political and military decision-makers.
    Ok, so now you've got kind of an
  • 6:21 - 6:24
    impression what I'm talking about here so
    why should we actually talk about this?
  • 6:24 - 6:30
    What's all the fuss about? Actually,
    autonomous weapons have or would have
  • 6:30 - 6:34
    quite a few military advantages: They
    might be, in some cases, faster or even
  • 6:34 - 6:40
    more precise than humans. And you don't
    need a constant communication link. So you
  • 6:40 - 6:44
    don't have to worry about
    unstable communication links, you don't
  • 6:44 - 6:50
    have to worry about latency or detection
    or a vulnerability of this specific link.
  • 6:50 - 6:57
    So yay! And a lot of, let's say very
    interesting, military options come from
  • 6:57 - 7:03
    that. People talk about stealthy
    operations in shallow waters, for example.
  • 7:03 - 7:07
    Or you know remote missions and secluded
    areas, things like that. And you can get
  • 7:07 - 7:14
    very creative with tiny robots and
    swarms for example. So shiny new options.
  • 7:14 - 7:20
    But, and of course there's a "but", it
    comes at a price because you have at least
  • 7:20 - 7:27
    three dimensions of challenges in this
    regard. First of all, the legal ones. When
  • 7:27 - 7:31
    we talk about these weapons, they might
    be, they will be applied in conflict where
  • 7:31 - 7:38
    international humanitarian law, IHL,
    applies. And IHL consists of quite a few
  • 7:38 - 7:45
    very abstract principles. For example:
    principle of distinction between
  • 7:45 - 7:51
    combatants and civilians, principle of
    proportionality, or military necessity.
  • 7:51 - 7:58
    They are very abstract and I'm pretty sure
    they really always need human judgment
  • 7:58 - 8:06
    to interpret these principles and
    apply them to dynamic situations. Feel
  • 8:06 - 8:15
    free to correct me if I'm wrong later. So
    that's one thing. So if you remove the
  • 8:15 - 8:19
    human from the targeting cycle, this human
    judgment might be missing and therefore
  • 8:19 - 8:25
    military decision makers have to evaluate
    very carefully the quality of human
  • 8:25 - 8:32
    control and human judgement within the
    targeting cycle. So that's law. Second
  • 8:32 - 8:39
    dimension of challenges are security
    issues. When you look at these new systems
  • 8:39 - 8:44
    they are cool and shiny and as most new
    types of weapons, they have the
  • 8:44 - 8:49
    potential to stir an arms race between
    states. So they actually might
  • 8:49 - 8:54
    make conflicts more likely just because
    they are there and states want to have
  • 8:54 - 9:01
    them and feel threatened by them. Second
    aspect is proliferation. Autonomy is based
  • 9:01 - 9:05
    on software, so software can be easily
    transferred, it's really hard to control,
  • 9:05 - 9:09
    and all the other components, or most of
    the other components you will need, are
  • 9:09 - 9:12
    available on the civilian market so you
    can build this stuff on your own if you're
  • 9:12 - 9:20
    smart enough. So we might have more
    conflicts from these types of weapons and
  • 9:20 - 9:24
    it might get, well, more difficult to
    control the application of this
  • 9:24 - 9:30
    technology. And the third one, which is
    especially worrying for me, is the
  • 9:30 - 9:34
    potential for escalation within the
    conflict, especially when
  • 9:34 - 9:40
    both or more sides use these autonomous
    weapons, you have these very complex
  • 9:40 - 9:46
    adversarial systems, and it will become very
    hard to predict how they are going to
  • 9:46 - 9:52
    interact. They will increase the speed of
    the conflict, and the human might
  • 9:52 - 9:57
    not even have a chance to process
    what's going on there.
  • 9:57 - 10:02
    So that's really worrying and we can see
    for example in high-frequency trading at
  • 10:02 - 10:06
    the stock markets where problems arise
    there, and how difficult it is for humans
  • 10:06 - 10:13
    to understand what's going on there. So
    those are some of these security
  • 10:13 - 10:23
    issues there. And the last and maybe
    most important one is ethics. As I
  • 10:23 - 10:29
    mentioned before, when you use autonomy
    in weapons or machines, you have
  • 10:29 - 10:33
    artificial intelligence so you don't have
    real intention, a real entity that's
  • 10:33 - 10:38
    behind this. So the killing decision might
    at some point be based on statistical
  • 10:38 - 10:43
    methods and no one will be involved there
    and that's, well, worrying for a lot of
  • 10:43 - 10:48
    reasons but also it could constitute a
    violation of human dignity. You can argue
  • 10:48 - 10:54
    that humans have, well, you can kill
    humans in war, but they at least have
  • 10:54 - 10:59
    the right to be killed by another human or
    at least by the decision of another human,
  • 10:59 - 11:03
    but we can discuss this later.
    So at least in this regard it would be
  • 11:03 - 11:08
    highly unethical and that really just
    scratches the surface of problems and
  • 11:08 - 11:13
    challenges that would arise from the use
    of these autonomous weapons. I haven't
  • 11:13 - 11:17
    even touched on the problems with training
    data, with accountability, with
  • 11:17 - 11:23
    verification and all that funny stuff
    because I only have 20 minutes. So, sounds
  • 11:23 - 11:33
    pretty bad, doesn't it? So how can this
    issue be addressed? Luckily, states have,
  • 11:33 - 11:37
    thanks to a huge campaign of NGOs, noticed
    that there might be some problems and
  • 11:37 - 11:40
    there might be a necessity to address
    this issue. They're currently doing
  • 11:40 - 11:45
    this in the UN Convention on Certain
    Conventional Weapons, CCW, where they
  • 11:45 - 11:53
    discuss a potential ban of the development
    and use of these lethal weapons or weapons
  • 11:53 - 11:58
    that lack meaningful human control over
    the use of force. There are several ideas
  • 11:58 - 12:04
    around there. And such a ban would be
    really the maximum goal of the NGOs there
  • 12:04 - 12:09
    but it becomes increasingly unlikely that
    this happens. Most states do not agree
  • 12:09 - 12:13
    with a complete ban, they want to regulate
    it a bit here, a bit there, and they
  • 12:13 - 12:18
    really can't find a common
    definition as I mentioned before because
  • 12:18 - 12:23
    if you have a broad definition just as
    I used it, you will notice that you have
  • 12:23 - 12:26
    existing systems in there that might be
    not that problematic or that you just
  • 12:26 - 12:32
    don't want to ban, and you might stop
    civilian or commercial developments which
  • 12:32 - 12:39
    you also don't want to do. So states are
    stuck in this regard, and they also really
  • 12:39 - 12:42
    challenge the notion that we need
    preventive arms control here, so that we
  • 12:42 - 12:51
    need to act before these systems are
    applied on the battlefield. So at the
  • 12:51 - 12:56
    moment, this is the fourth year or
    something of these negotiations and we
  • 12:56 - 13:00
    will see how it goes this year and if
    states can't find a common ground there it
  • 13:00 - 13:05
    becomes increasingly
    likely that it will change to another
  • 13:05 - 13:11
    forum just like with anti-personnel mines
    for example, where the treaty was
  • 13:11 - 13:18
    concluded outside of the United Nations. But
    yeah, the window of opportunity is really
  • 13:18 - 13:25
    closing, and states and NGOs have to act
    there and yeah keep on track there. Just
  • 13:25 - 13:32
    as a side note, probably quite a few
    people are members of NGOs so if you look
  • 13:32 - 13:39
    at the Campaign to Stop Killer Robots, the
    big campaign behind this process,
  • 13:39 - 13:43
    there's only one German NGO, which is
    Facing Finance. So, especially if
  • 13:43 - 13:49
    you're a German NGO and are interested in
    AI, it might be worthwhile to look into the
  • 13:49 - 13:55
    military dimension as well. We really need
    some expertise in that regard, especially
  • 13:55 - 14:00
    on AI and these technologies. They're...
    Okay, so just in case you fell asleep in
  • 14:00 - 14:06
    the last 15 minutes I want you to take
    away three key messages: Please be aware
  • 14:06 - 14:11
    of the trends and internal logic that lead
    to autonomy in weapons. Do not
  • 14:11 - 14:16
    overestimate the abilities of
    autonomous machines, like intent and these
  • 14:16 - 14:20
    things and because you probably all knew
    this already, please tell other people about
  • 14:20 - 14:24
    this and educate them about this type of
  • 14:24 - 14:31
    technology. And third, don't underestimate
    the potential dangers for security and
  • 14:31 - 14:37
    human dignity that come from this type of
    weapon. I hope that I could interest you a
  • 14:37 - 14:41
    bit more in this particular issue.
    If you want to learn more, you can find
  • 14:41 - 14:47
    really interesting sources on the website
    of the CCW, at the Campaign to Stop Killer
  • 14:47 - 14:53
    Robots, and from a research project that I
    happen to work in, the International Panel
  • 14:53 - 14:57
    on the Regulation of Autonomous Weapons,
    we do have a few studies in that regard
  • 14:57 - 15:02
    and we're going to publish a few more. So
    please, check this out and thank you for
  • 15:02 - 15:03
    your attention.
  • 15:03 - 15:14
    Applause
  • 15:14 - 15:16
    Questions?
  • 15:21 - 15:24
    Herald: Sorry. So we have some time for
    questions and answers now. Okay, first of all
  • 15:24 - 15:28
    I have to apologize that we had a hiccup
    with the sign language interpretation; the acoustics
  • 15:28 - 15:32
    over here on the stage were so bad that she
    couldn't do her job, so I'm
  • 15:32 - 15:39
    terribly sorry about that. We fixed it during
    the talk, and my apologies for that. We are
  • 15:39 - 15:42
    queuing here on the microphones already so
    we start with microphone number one, your
  • 15:42 - 15:45
    question please.
    Mic 1: Thanks for your talk Anja. Don't
  • 15:45 - 15:49
    you think there is a possibility to reduce
    war crimes as well by taking away the
  • 15:49 - 15:54
    decision from humans and by having
    algorithms that decide, which are actually
  • 15:54 - 15:57
    auditable?
    Dahlmann: Yeah that's, actually, that's
  • 15:57 - 16:00
    something that is discussed in the
    international debate as well, that there
  • 16:00 - 16:05
    might, that machines might be more ethical
    than humans could be. And well, of course
  • 16:05 - 16:12
    they won't just start raping women because
    they want to but you can program them to
  • 16:12 - 16:18
    do this. So you just shift the
    problems really. And also maybe these
  • 16:18 - 16:23
    machines don't get angry but they don't
    show compassion either so if you are there
  • 16:23 - 16:26
    and you're a potential target, they just won't
    stop, they will just kill you and not
  • 16:26 - 16:33
    think twice about it. So you have
    to really look at both sides there I guess.
  • 16:33 - 16:38
    Herald: Thanks. So we switch over
    to microphone 3, please.
  • 16:38 - 16:45
    Mic 3: Thanks for the talk. Regarding
    autonomous cars, self-driving cars,
  • 16:45 - 16:49
    there's a similar discussion going on
    regarding the ethics. How should a car
  • 16:49 - 16:54
    react in a case of an accident? Should it
    protect people outside, people inside,
  • 16:54 - 17:02
    what are the laws? So there is another
    discussion there. Do you work with people
  • 17:02 - 17:07
    in this area, or is there any
    collaboration?
  • 17:07 - 17:10
    Dahlmann: Maybe there's less collaboration
    than one might think there is. I think
  • 17:10 - 17:17
    there is. Of course, we monitor this
    debate as well and yeah we think about the
  • 17:17 - 17:21
    possible applications of the outcomes for
    example from this German ethics
  • 17:21 - 17:27
    commission on self-driving cars for our
    work. But I'm a bit torn there because
  • 17:27 - 17:31
    when you talk about weapons, they are
    designed to kill people and cars mostly
  • 17:31 - 17:36
    are not. So with this ethics committee
    you want to avoid killing people or decide
  • 17:36 - 17:42
    what happens when an accident occurs. So
    they are a bit different but of course
  • 17:42 - 17:49
    yeah you can learn a lot from both
    discussions, and we are aware of that.
  • 17:49 - 17:54
    Herald: Thanks. Then we're gonna go over
    in the back, microphone number 2, please.
  • 17:54 - 18:00
    Mic 2: Also from me thanks again for this
    talk and infusing all this professionalism
  • 18:00 - 18:10
    into the debate because some of the
    surroundings of our, so to say, our
  • 18:10 - 18:18
    scene, they like to protest against very
    specific things like for example the
  • 18:18 - 18:24
    Ramstein Air Base, and in my view that's a
    bit misguided if you just go out and
  • 18:24 - 18:31
    protest in a populist way without
    involving these points of expertise that
  • 18:31 - 18:38
    you offer. And so, thanks again for that.
    And then my question: How would you
  • 18:38 - 18:47
    propose that protests progress and develop
    themselves to a higher level to be on the
  • 18:47 - 18:55
    one hand more effective and on the other
    hand more considerate of what is at stake
  • 18:55 - 19:00
    on all the levels and on
    all sides involved?
  • 19:00 - 19:06
    Dahlmann: Yeah well, first, the Ramstein
    issue is completely, actually a completely
  • 19:06 - 19:10
    different topic. It's drone warfare,
    remotely piloted drones, so there are a
  • 19:10 - 19:14
    lot of problems with this and with
    targeted killings, but it's not about
  • 19:14 - 19:22
    lethal autonomous weapons in particular.
    Well if you want to be a part of this
  • 19:22 - 19:25
    international debate, there's of course
    this Campaign to Stop Killer Robots, and
  • 19:25 - 19:30
    they have a lot of really good people and
    a lot of resources, literature
  • 19:30 - 19:35
    and things like that to really educate
    yourself what's going on there, so that
  • 19:35 - 19:39
    would be a starting point. And then yeah
    just keep talking to scientists about
  • 19:39 - 19:45
    this and find out where we see the
    problems and I mean it's always helpful
  • 19:45 - 19:53
    for scientists to talk to people in the
    field, so to say. So yeah, keep talking.
  • 19:53 - 19:56
    Herald: Thanks for that. And the
    signal angel signaled that we have
  • 19:56 - 19:59
    something from the internet.
    Signal Angel: Thank you. Question from
  • 19:59 - 20:04
    IRC: Aren't we already in a killer robot
    world? A botnet can attack a nuclear
  • 20:04 - 20:08
    power plant for example. What do you think?
    Dahlmann: I really didn't understand a
  • 20:08 - 20:10
    word, I'm sorry.
    Herald: I didn't understand that either,
  • 20:10 - 20:13
    so can you speak closer to
    the microphone, please?
  • 20:13 - 20:16
    Signal Angel: Yes. Aren't we already in a
    killer robot world?
  • 20:16 - 20:20
    Herald: Sorry, that doesn't work. Sorry.
    Sorry, we'll stop that here, we can't hear it
  • 20:20 - 20:22
    over here. Sorry.
    Signal Angel: Okay.
  • 20:22 - 20:26
    Herald: We're gonna switch over to
    microphone two now, please.
  • 20:26 - 20:33
    Mic 2: I have one little question. So in
    your talk, you were focusing on the
  • 20:33 - 20:39
    ethical questions related to lethal
    weapons. Are you aware of ongoing
  • 20:39 - 20:45
    discussions regarding the ethical aspects
    of the design and implementation of less
  • 20:45 - 20:53
    than lethal autonomous weapons for crowd
    control and similar purposes?
  • 20:53 - 20:57
    Dahlmann: Yeah actually within the CCW,
    every term of this Lethal Autonomous
  • 20:57 - 21:03
    Weapon Systems is disputed, also the
    "lethal" aspect and for the regulation
  • 21:03 - 21:08
    it might be easier to focus on this for
    now because less than lethal weapons come
  • 21:08 - 21:14
    with their own problems and the question
    whether they are ethical and whether
  • 21:14 - 21:19
    IHL applies to them, but I'm not really
    deep into this discussion. So I'll just
  • 21:19 - 21:23
    have to leave it there.
    Herald: Thanks and back here to microphone
  • 21:23 - 21:25
    one, please.
    Mic 1: Hi. Thank you for the talk very
  • 21:25 - 21:32
    much. My question is in the context of the
    decreasing cost of both the hardware and
  • 21:32 - 21:37
    the software, over the next, say, 20 or 40 years.
    Outside of a nation-state context like
  • 21:37 - 21:42
    private forces or non-state actors
    gaining use of these weapons, do things
  • 21:42 - 21:46
    like the UN convention or the Campaign to
    Stop Killer Robots apply? Are they
  • 21:46 - 21:52
    considering private individuals trying to
    leverage these against others?
  • 21:52 - 22:00
    Dahlmann: Not sure what the campaign says
    about this, I'm not a member there. The
  • 22:00 - 22:07
    CCW mostly focuses on international
    humanitarian law which is important but I
  • 22:07 - 22:11
    think it's not broad enough. So
    questions like proliferation and all this
  • 22:11 - 22:16
    is connected to your question, and it's not
    really, or probably won't be, part of
  • 22:16 - 22:21
    regulation there. It's discussed only on
    the edges of the debates and
  • 22:21 - 22:27
    negotiations there but it doesn't seem to
    be a real issue there.
  • 22:27 - 22:30
    Mic 1: Thanks.
    Herald: And over to microphone six,
  • 22:30 - 22:33
    please.
    Mic 6: Thank you. I have a question as a
  • 22:33 - 22:39
    researcher: Do you know how far the
    development has gone already? So how
  • 22:39 - 22:45
    transparent or intransparent is your look
    into what is being developed and
  • 22:45 - 22:51
    researched on the side of the
    military people working with
  • 22:51 - 22:55
    autonomous weapons and developing them?
    Dahlmann: Well, for me it's quite
  • 22:55 - 23:00
    intransparent because I only have
    access to publicly available
  • 23:00 - 23:05
    sources, so I don't really know what's
    going on behind closed doors in the
  • 23:05 - 23:10
    military or in the industry there. Of
    course you can monitor the
  • 23:10 - 23:15
    civilian applications or developments
    which can tell a lot about the state
  • 23:15 - 23:24
    of the art, and for example DARPA,
    the American defense research agency,
  • 23:24 - 23:30
    sometimes publishes a call for papers,
    that's not the term, but there you can see
  • 23:30 - 23:34
    which areas they are interested
    in. Then, for example, they really like this
  • 23:34 - 23:41
    idea of autonomous killer bugs that can
    act in swarms and monitor or even kill
  • 23:41 - 23:46
    people and things like that. So yeah we
    try to piece it together in
  • 23:46 - 23:49
    our work.
    Herald: We do have a little bit more time,
  • 23:49 - 23:51
    are you okay to answer more questions?
    Dahlmann: Sure.
  • 23:51 - 23:53
    Herald: Then we're gonna switch over to
    microphone three, please.
  • 23:53 - 24:00
    Mic 3: Yes, hello. I think we are living
    already in a world of Lethal Autonomous
  • 24:00 - 24:05
    Weapon Systems if you think about these
    millions of landmines which are operating.
  • 24:05 - 24:09
    And so the question is: Shouldn't it be
    possible to ban these weapon systems the
  • 24:09 - 24:14
    same way as landmines, which are already
    banned by several countries, so just
  • 24:14 - 24:19
    include them in that definition? Because
    the arguments should be very
  • 24:19 - 24:23
    similar.
    Dahlmann: Yeah it does, it does come to
  • 24:23 - 24:27
    mind of course because these mines are
    just lying around there and no one's
  • 24:27 - 24:33
    interacting when you step on them and
    boom! But, well, it depends, it
  • 24:33 - 24:39
    depends first of all a bit on your
    definition of autonomy. So some say
  • 24:39 - 24:43
    autonomous is when you act in dynamic
    situations and the other ones would be
  • 24:43 - 24:48
    automated and things like that and I think
    this autonomy aspect, I really don't want
  • 24:48 - 24:56
    to define
    autonomy here really, but this action
  • 24:56 - 25:01
    in more dynamic spaces and the aspect of
    machine learning and all these things,
  • 25:01 - 25:06
    they are way more complex and they bring
    different problems than just land mines.
  • 25:06 - 25:11
    Landmines are problematic, anti-personnel
    mines are banned for good reasons but they
  • 25:11 - 25:15
    don't have the same problems I think. So
    I don't think it would be
  • 25:15 - 25:22
    sufficient to just put LAWS in there,
    the Lethal Autonomous Weapons.
  • 25:22 - 25:26
    Herald: Thank you very much. I can't see
    anyone else queuing up so therefore, Anja,
  • 25:26 - 25:29
    thank you very much, it's your applause!
  • 25:29 - 25:32
    applause
  • 25:32 - 25:35
    and once again my apologies that
    that didn't work
  • 25:35 - 25:40
    34c3 outro
  • 25:40 - 25:57
    subtitles created by c3subtitles.de
    in the year 2018. Join, and help us!