35C3 - What is Good Technology?

  • 0:06 - 0:18
    35C3 Intro music
  • 0:19 - 0:25
    Herald Angel: We at the Congress, we not
    only talk about technology, we also talk
  • 0:25 - 0:31
    about social and ethical responsibility.
    About how we can change the world for
  • 0:31 - 0:36
    good. The Good Technology Collective
    supports the development guidelines...
  • 0:36 - 0:41
    sorry, it supports the development process
    of new technology with ethical engineering
  • 0:41 - 0:48
    guidelines that offer a practical way to
    take ethics and social impact into account.
  • 0:48 - 0:54
    Yannick Leretaille - and I hope this was
    okay - will tell you more about it.
  • 0:54 - 0:58
    Please welcome on stage with a very warm applause
    Yann Leretaille.
  • 0:58 - 1:03
    applause
  • 1:03 - 1:07
    Yannick Leretaille: Hi, thanks for the
    introduction. So before we start, can you
  • 1:07 - 1:11
    kind of show me your hand if you, like,
    work in tech building products as
  • 1:11 - 1:18
    designers, engineers, coders, product
    management? OK, so it's like 95 percent,
  • 1:18 - 1:28
    90 percent. Great. Yeah. So, today we kind
    of try to answer the question: What is
  • 1:28 - 1:34
    good technology and how can we build
    better technology? Before that, shortly
  • 1:34 - 1:40
    something about me. So I am Yann. I'm French-
    German. Kind of a hacker, among the CCC
  • 1:40 - 1:44
    for a long time, entrepreneur, like, co-
    founder of a startup in Berlin. And I'm
  • 1:44 - 1:48
    also founding member of the Good
    Technology Collective. The Good
  • 1:48 - 1:56
    Technology Collective was founded about a
    year ago, or actually a bit over a year now,
  • 1:56 - 2:02
    by a very diverse expert council
    and we kinda have like 3 areas of work.
  • 2:02 - 2:08
    The first one is trying to educate the
    public about current issues with
  • 2:08 - 2:12
    technology, then, to educate engineers to
    build better technology, and then
  • 2:12 - 2:20
    long-term hopefully one day we'll be
    able to work like in legislation as well.
  • 2:20 - 2:27
    Here is a bit of what we achieved so
    far. We have, like, 27 council members now. We
  • 2:27 - 2:31
    have several media partnerships and
    published around 20 articles, that's kind
  • 2:31 - 2:36
    of the public education part. Then we
    organized or participated in roughly 15
  • 2:36 - 2:45
    events already. And we are now publishing
    one standard, well, kind of today
  • 2:45 - 2:49
    actually, and
    applause
  • 2:49 - 2:54
    and if you're interested in what we do,
    then, yeah, sign up for the newsletter and
  • 2:54 - 3:00
    we keep you up to date and you can join
    events. So as I said the Expert Council is
  • 3:00 - 3:08
    really, really diverse. We have everything
    from people in academia, to people in
  • 3:08 - 3:13
    government, to technology makers, to
    philosophers, all sorts, journalists.
  • 3:13 - 3:22
    And the reason that is the case is that a year
    ago we kind of noticed that in our own
  • 3:22 - 3:28
    circles, like, as technology makers or
    academics, we were all talking about a lot
  • 3:28 - 3:33
    of, kind of, worrisome developments in
    technology, but no one was really kind of
  • 3:33 - 3:37
    getting together and looking at it from
    all angles. And there have been a lot of
  • 3:37 - 3:44
    very weird and troublesome developments in
    the last two years. I think we really
  • 3:44 - 3:49
    finally feel, you know like, the impact of
    filter bubbles. Something we have talked about
  • 3:49 - 3:54
    for like five years, but now it's like,
    really like, you know, deciding over
  • 3:54 - 4:01
    elections and people become politically
    radicalized and society is, kind of,
  • 4:01 - 4:06
    polarized more because they only see a
    certain opinion. And we have situations
  • 4:06 - 4:11
    that we only knew, like, from science
    fiction, just kind of, you know, pre-crime,
  • 4:11 - 4:17
    like, governments, kind of, overreaching
    and trying to use machine learning to make
  • 4:17 - 4:23
    decisions on whether or not you should go
    to jail. We have more and more machine
  • 4:23 - 4:27
    learning and big data and automation
    going into basically every single aspect
  • 4:27 - 4:32
    of our lives and not all of it has
    been positive. You know, like, literally
  • 4:32 - 4:37
    everything from e-commerce to banking to
    navigating to moving through the world now goes
  • 4:37 - 4:46
    through these interfaces, which present us
    the data and a slice of the world at a time.
  • 4:46 - 4:50
    And then at the same time we have
    really positive developments. Right? We have
  • 4:50 - 4:54
    things like this, you know, like space
    travel, finally something's happening.
  • 4:54 - 5:01
    And we have huge advances in medicine. Maybe
    soon we'll have, like, self-driving cars
  • 5:01 - 5:11
    and great renewable technology. And it kind
    of begs the question: How can it be that
  • 5:11 - 5:17
    good and bad use of technology are kind of
    showing up at such an increasing rate in
  • 5:17 - 5:25
    this, like, on such extremes, right? And
    maybe the reason is just that everything
  • 5:25 - 5:30
    got so complicated, right? Data is
    basically doubling every couple of years,
  • 5:30 - 5:36
    so no human can possibly process it anymore.
    So we had to build more and more complex
  • 5:36 - 5:41
    algorithms to process it, connecting more
    and more parts together. And no one really
  • 5:41 - 5:46
    seems to understand it anymore, it seems.
    And that leads to unintended consequences.
  • 5:46 - 5:50
    I've an example here: So, Google Photos –
    this is actually only two years ago –
  • 5:50 - 5:57
    launched a classifier to automatically go
    through all of your pictures and tell you
  • 5:57 - 6:01
    what it is. You could say "Show me the
    picture of the bird in summer at this
  • 6:01 - 6:06
    location" and it would find it for you.
    Kind of really cool technology, and they
  • 6:06 - 6:11
    released it to, like, a planetary user
    base until someone figured out that people
  • 6:11 - 6:17
    of color were always marked as gorillas.
    Of course it was a huge PR disaster, and
  • 6:17 - 6:22
    somehow no one found out about this before
    it came out... But now the interesting thing
  • 6:22 - 6:27
    is: In two years they didn't even manage
    to fix it! Their solution was to just
  • 6:27 - 6:33
    block all kinds of apes, so they're just
    not found anymore. And that's how they
  • 6:33 - 6:38
    solved it, right? But if even Google can't
    solve this... what does it mean?
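    A minimal sketch of the kind of workaround described here, assuming a
    generic image classifier that returns (label, confidence) pairs; the
    labels, the confidence threshold, and the blocklist are hypothetical
    illustrations, not Google's actual system:

        # Hypothetical: hide a sensitive label in the product instead of
        # fixing the underlying model, as described in the talk.
        from typing import List, Tuple

        BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}  # simply never shown

        def filter_predictions(predictions: List[Tuple[str, float]],
                               min_confidence: float = 0.5) -> List[Tuple[str, float]]:
            """Drop blocked labels and low-confidence guesses from classifier output."""
            return [
                (label, score)
                for label, score in predictions
                if score >= min_confidence and label.lower() not in BLOCKED_LABELS
            ]

        # The raw model output still contains the problematic label; the
        # product just never surfaces it, so searching for it finds nothing.
        raw = [("gorilla", 0.91), ("person", 0.88), ("outdoor", 0.60)]
        print(filter_predictions(raw))  # [('person', 0.88), ('outdoor', 0.6)]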
  • 6:39 - 6:42
    And then, at the same time, you know,
    sometimes we seem to have, kind of,
  • 6:42 - 6:44
    intended consequences?
  • 6:46 - 6:50
    I have an example... another example here:
    Uber Greyball. I don't know if anyone
  • 6:50 - 6:58
    heard about it. So Uber was very eager to
    change regulation and push the services
  • 6:58 - 7:03
    globally as much as possible, and kind of
    starting a fight with, you know, all the
  • 7:03 - 7:07
    taxi laws and regulation, and taxi
    drivers in the various countries around the
  • 7:07 - 7:12
    world. And what they realized, of course,
    is that they didn't really want people to
  • 7:12 - 7:19
    be able to, like, investigate what they
    were doing or, like, finding individual
  • 7:19 - 7:22
    drivers. So they built this absolutely
    massive operation which was like following
  • 7:22 - 7:27
    data in social media profiles, linking,
    like, your credit card and location data
  • 7:27 - 7:31
    to find out if you were working for the
    government. And if you did, you would just
  • 7:31 - 7:37
    never find a car. It would just not show
    up, right? And that was clearly
  • 7:37 - 7:41
    intentional, all right. So at the same
    time they were pushing, like, on, like,
  • 7:41 - 7:46
    the lobbying, political side to change
    regulation, while heavily manipulating the
  • 7:46 - 7:52
    people that they were pushing to change the
    regulation, right? Which is really not a
  • 7:52 - 7:55
    very nice thing to do, I would say.
  • 7:56 - 7:57
    And...
  • 7:58 - 8:03
    The thing that I find, kind of...
    worrisome about this:
  • 8:03 - 8:08
    No matter if it's intended or unintended,
    is that it actually gets worse, right?
  • 8:08 - 8:13
    The more and more systems we
    interconnect, the worse these consequences
  • 8:13 - 8:17
    can get. And I've an example here: So this
    is a screenshot I took of Google Maps
  • 8:17 - 8:24
    yesterday and you notice there are, like,
    certain locations... So they're kind of
  • 8:24 - 8:28
    highlighted on this map and I don't know
    if you knew it but this map and the
  • 8:28 - 8:33
    locations that Google highlights look
    different for every single person.
  • 8:33 - 8:38
    Actually, I went again and looked today
    and it looked different again. So, Google
  • 8:38 - 8:43
    is already heavily filtering and kind of
    highlighting certain places, like, maybe
  • 8:43 - 8:49
    this restaurant over there, if you can see
    it. And I would say, like, from just
  • 8:49 - 8:54
    opening the map, it's not obvious to you
    that it's doing that. Or that it's trying
  • 8:54 - 8:59
    to decide for you which place is
    interesting for you. However, that's
  • 8:59 - 9:06
    probably not such a big issue. But the
    same company, Google with Waymo, is also
  • 9:06 - 9:11
    developing this – and they just started
    deploying them: self-driving cars. They're...
  • 9:13 - 9:19
    ...still a good couple of years away from
    actually making it reality, but they are
  • 9:19 - 9:23
    really – in terms of, like, all the others
    trying it at the moment – the farthest, I
  • 9:23 - 9:29
    would say, and in some cities they started
    deploying self-driving cars. So now, just
  • 9:29 - 9:34
    think like 5, 10 years into the future
    and you have signed up for your Google...
  • 9:35 - 9:39
    ...self-driving car. Probably you don't
    have your own car, right? And you go in
  • 9:39 - 9:44
    the car and it's like: "Hey, Yann, where
    do you want to go?" Do you want to go to
  • 9:44 - 9:50
    work? Because, I mean obviously that's where
    I probably go most of the time. Do you
  • 9:50 - 9:53
    want to go to your favorite Asian
    restaurant, like the one we just saw on the
  • 9:53 - 9:57
    map? Which is actually not my favorite,
    but the first one I went to. So Google
  • 9:57 - 10:00
    just assumed it was. Do you want to go to
    another Asian restaurant? Because,
  • 10:00 - 10:07
    obviously, that's all I like. And then
    McDonald's. Because, everyone goes there.
  • 10:07 - 10:11
    And maybe the fifth entry is an
    advertisement. And you would say: Well,
  • 10:11 - 10:18
    Yann, you know, that's still kind of fine,
    but it's OK because I can still click on:
  • 10:18 - 10:25
    'No, I don't want these 5 options, give me,
    like, the full map.' But now, we went back
  • 10:25 - 10:31
    here. So, even though you are seeing the
    map, you're actually not seeing all
  • 10:31 - 10:36
    the choices, right? Google is actually
    filtering for you where it thinks you want
  • 10:36 - 10:43
    to go. So now we have, you know, the car
    like this symbol of mobility and freedom.
  • 10:43 - 10:50
    It enabled so much change in our society,
    and now it's actually reducing the part of
  • 10:50 - 10:54
    the world that you see. And because, I
    mean these days they call it AI, I think
  • 10:54 - 10:59
    it's just machine learning, because these
    machine learning algorithms all do pattern
  • 10:59 - 11:05
    matching and basically just can recognize
    similarities. When you open the map and
  • 11:05 - 11:09
    you zoom in and you select a random place,
    it would only suggest places to you where
  • 11:09 - 11:15
    other people have been before. So now the
    restaurant that opened around the corner
  • 11:15 - 11:19
    you'll probably not even discover it
    anymore. And no one will. And it will
  • 11:19 - 11:23
    probably close. And the only ones that
    will stay are the ones that are already
  • 11:23 - 11:32
    established now. And all of that without
    being really obvious to anyone who would
  • 11:32 - 11:40
    use the technology. Because it has become
    like kind of a black box. So, I do want
  • 11:40 - 11:48
    self-driving cars, I really do. I don't
    want a future like this. Right. And if we
  • 11:48 - 11:53
    want to prevent that future, I think we
    have to first ask a very simple question,
  • 11:53 - 12:00
    which is: Who is responsible for designing
    these products? So, do you know the
  • 12:00 - 12:02
    answer?
    audience: inaudible
  • 12:02 - 12:05
    Yann: Say it louder.
    audience: We are.
  • 12:05 - 12:10
    Yann: Yeah, we are. Right. That's a really
    frustrating thing about it that actually
  • 12:10 - 12:15
    gets us, right, as engineers and
    developers. You know we are always driven
  • 12:15 - 12:20
    by perfection. We want to create, like,
    the perfect code, solve this one problem
  • 12:20 - 12:25
    really, really nicely. You know. Chasing the
    next challenge over and over trying to be
  • 12:25 - 12:32
    first. But we have to realize that at the
    same time we are kind of working on
  • 12:32 - 12:37
    frontier technologies, right, on things,
    technology, that are really kind of on the
  • 12:37 - 12:42
    edge of values and norms we have in
    society. And if we are not careful and
  • 12:42 - 12:46
    just, like, focus on our small problem and
    don't look at the big picture, then we
  • 12:46 - 12:52
    have no say on which side of the coin
    the technology will fall. And probably it
  • 12:52 - 12:59
    will take a couple of years, and by that
    time we have already moved on, I guess. So.
  • 12:59 - 13:07
    It's just that technology has become so
    powerful and interconnected and impactful,
  • 13:07 - 13:11
    because we are not building stuff that
    is affecting, like, just 10 or 100 people
  • 13:11 - 13:15
    or a city but literally millions of
    people, that we really have to take a step
  • 13:15 - 13:21
    back and not only look at the individual
    problem as the challenge but also the big
  • 13:21 - 13:27
    picture. And I think if you want to do
    that we have to start by asking the right
  • 13:27 - 13:34
    questions. And the first question of
    course is: What is good technology? So,
  • 13:34 - 13:39
    that's also the name of the talk.
    Unfortunately, I don't have a perfect
  • 13:39 - 13:46
    answer for that. And probably we will
    never find a perfect answer for that. So,
  • 13:46 - 13:53
    what I would like to propose is to
    establish some guidelines and engineering
  • 13:53 - 13:58
    processes that help us to build better
    technology. To kind of ensure, the same way
  • 13:58 - 14:04
    we have quality assurance and
    project management systems and processes
  • 14:04 - 14:09
    in companies, that, like, kind of,
    what we build
  • 14:09 - 14:16
    actually has a net positive outcome for
    society. And we call it the Good
  • 14:16 - 14:22
    Technology Standard. We've kind of been
    working on that over the last year, and we
  • 14:22 - 14:27
    really wanted to make it really practical.
    And what we kind of realized is that if you
  • 14:27 - 14:32
    want to make it practical you have to make
    it very easy to use and also mostly,
  • 14:32 - 14:39
    actually what was surprising, just ask the
    right questions. So, what is important
  • 14:39 - 14:46
    though, is that if you adapt the standard,
    it has to be in all project phases. It has
  • 14:46 - 14:50
    to involve everyone. So, from, like, the
    CTO to, like, the product managers to
  • 14:50 - 14:56
    actually legal. Today, legal has this
    interesting role, where you develop
  • 14:56 - 15:00
    something and then you're like: Okay, now,
    legal, make sure that we can actually ship it.
  • 15:00 - 15:06
    And that's what usually happens. And,
    yeah, down to the individual engineer. And
  • 15:06 - 15:10
    if it's not applied globally and people
    start making exceptions then of course it
  • 15:10 - 15:18
    won't be worth very much. Generally, we
    kind of identified four main areas that we
  • 15:18 - 15:23
    think are important, kind of defining,
    kind of an abstract way, if a product is
  • 15:23 - 15:30
    good. And the first one is empowerment. A
    good product should empower its users. And
  • 15:30 - 15:36
    that's kind of a tricky thing. So, as
    humans we have very limited decision
  • 15:36 - 15:40
    power. Right? And we are faced with, as I
    said before, like, this huge amount of
  • 15:40 - 15:46
    data and choices. So it seems very natural
    to build machines and interfaces that try
  • 15:46 - 15:51
    to make a lot of decisions for us. Like
    the Google Maps one we saw before. But we
  • 15:51 - 15:56
    have to be careful because if we do that
    too much then the machine ends up making
  • 15:56 - 16:03
    all decisions for us. So often, when you
    develop something you should really ask
  • 16:03 - 16:07
    yourself, like, in the end if I take
    everything together am I actually
  • 16:07 - 16:13
    empowering users, or am I taking
    responsibility away from them? Do I
  • 16:13 - 16:18
    respect the individual's choice? When they
    say: I don't want this, or they give you
  • 16:18 - 16:24
    their preference, do we actually respect
    it or do we still try to, you know, just
  • 16:24 - 16:30
    figure out what is better for them. Do my
    users actually feel like they benefit from
  • 16:30 - 16:34
    using the product? That is something, actually,
    not a lot of people ask themselves,
  • 16:34 - 16:40
    because usually you think like in terms
    of: Are you benefiting your company? And I
  • 16:40 - 16:46
    think what's really pressing in that
    aspect: does it help the users, the humans
  • 16:46 - 16:54
    behind it, to grow in any way. If it helps
    them to be more effective or faster or do
  • 16:54 - 16:58
    more things or be more relaxed or more
    healthy, right, then it's probably positive.
  • 16:58 - 17:02
    But if you can't identify any of these,
    then you really have to think about it.
  • 17:02 - 17:09
    And then, in terms of AI and machine
    learning, are we actually kind of
  • 17:09 - 17:17
    impacting their own reasoning so that they
    can't make proper decisions anymore. The
  • 17:17 - 17:23
    second one is Purposeful Product Design.
    That one is one that's been kind of a
  • 17:23 - 17:27
    pet peeve for me for a really long time.
    So these days we have a lot of products
  • 17:27 - 17:32
    that are kind of like this. I don't have
    something specifically against Philips
  • 17:32 - 17:38
    Hue, but there seems to be, like, this
    trend that is kind of, making smart
  • 17:38 - 17:43
    things, right? You take a product, put a
    Wi-Fi chip on it, just slap it on there.
  • 17:43 - 17:48
    Label it "smart", and then you make tons
    of profit, right? And a lot of these new
  • 17:48 - 17:50
    products we've been seeing around us,
    like, everyone is saying, like, oh yeah,
  • 17:50 - 17:55
    we will have this great interconnected
    feature, but most of them are actually not
  • 17:55 - 17:58
    changing the actual product, right, like,
    the Wi-Fi connected washing machine today
  • 17:58 - 18:03
    is still a boring washing machine that
    breaks down after two years. But it has
  • 18:03 - 18:09
    Wi-Fi, so you can see what it's doing when
    you're in the park. And we think we should
  • 18:09 - 18:16
    really think more in terms of intelligent
    design. How can we design it in the first
  • 18:16 - 18:22
    place so it's intelligent, not smart. That
    the different components interact in a
  • 18:22 - 18:27
    way, that it serves a purpose well, and
    the kind of intelligent by design
  • 18:27 - 18:34
    philosophy is, when you start designing your
    product you kind of try to identify the
  • 18:34 - 18:41
    core purpose of it. And based on that, you
    just use all the technologies available to
  • 18:41 - 18:44
    rebuild it from scratch. So, instead of
    building a Wi-Fi connected washing machine,
    you would actually try to build a better
    would actually try to build a better
    washing machine. And if it ends up having
  • 18:47 - 18:51
    Wi-Fi, then that's good, but it doesn't
    have to. And along each step actually try
  • 18:51 - 18:58
    to ask yourself: Am I actually improving
    washing machines here? Or am I just
  • 18:58 - 19:06
    creating another data point? And yeah, a
    good example for that is, kind of, a
  • 19:06 - 19:10
    watch. Of course it's very old and old
    technology, it was invented a long time
  • 19:10 - 19:14
    ago. But back when it was invented it was
    for something you could have on your arm
  • 19:14 - 19:20
    or in your pocket in the beginning and it
    was kind of a natural extension of
  • 19:20 - 19:25
    yourself, right, that kind of enhances
    your senses because it's never in the way, you
  • 19:25 - 19:30
    don't really feel it. But when you need it
    it's always there and then you can just
  • 19:30 - 19:34
    look at it and you know the time. And that
    profoundly changed how, like, we humans
  • 19:34 - 19:38
    actually worked in society, because we
    could now meet in the same place at the
  • 19:38 - 19:43
    same time. So, when you build a new
    product try to ask yourself what is the
  • 19:43 - 19:47
    purpose of the product, who is it for.
    Often I talk to people and they tell me
  • 19:47 - 19:51
    for one hour, like, literally the
    details of how they solved the problem but
  • 19:51 - 19:55
    they can't tell me who their customer is.
    Then does this product actually make
  • 19:55 - 20:00
    sense? Do I have features that
    distract my users, that I maybe just don't
  • 20:00 - 20:04
    need? And can I find more intelligent
    solutions by kind of thinking outside of
  • 20:04 - 20:10
    the box and focusing on the purpose of it.
    And then of course what is the long term
  • 20:10 - 20:13
    product vision, like, where do we want this
    to go, this kind of technology I'm
  • 20:13 - 20:20
    developing, in the next years? The next one
    is kind of, Societal Impact, that goes
  • 20:20 - 20:28
    into what I talked about in the beginning
    with all the negative consequences we have
  • 20:28 - 20:31
    seen. A lot of people these days don't
    realize that even if you're, like, in a
  • 20:31 - 20:35
    small startup and you're working on, I
    don't know, a technology, or robots, or
  • 20:35 - 20:40
    whatever. You don't know if your
    algorithm, or your mechanism, or whatever
  • 20:40 - 20:45
    you build, will be used by 100 million
    people in five years. Because this has
  • 20:45 - 20:50
    happened a lot, right? So, even when just
    starting to build it you have to think: If
  • 20:50 - 20:54
    this product would be used by 10 million,
    100 million, maybe even a billion people, like
  • 20:54 - 20:58
    Facebook, would it have negative
    consequences? Right, because then you get
  • 20:58 - 21:03
    completely different effects in society,
    completely different engagement cycles and
  • 21:03 - 21:09
    so on. Then, are we taking advantage of
    human weaknesses? So this is arguably
  • 21:09 - 21:16
    something that's just there in technology. A
    lot of products these days kind of try to
  • 21:16 - 21:20
    hack your brain, because we understand
    really well how, like, engagement works
  • 21:20 - 21:25
    and addiction. So a lot of things, like
    social networks, actually have been
  • 21:25 - 21:28
    focusing, you know, and also built by
    engineers, you know, trying to get a
  • 21:28 - 21:35
    little number from 0.1% to 0.2%, which can mean
    that you just do extensive A/B testing,
  • 21:35 - 21:38
    create an interface that no one can stop
    looking at. You just continue scrolling,
  • 21:38 - 21:42
    right? You just continue, and two hours
    have passed and you haven't actually
  • 21:42 - 21:49
    talked to anyone. And this attention
    grabbing is kind of an issue and we can
  • 21:49 - 21:54
    see that Apple actually now implemented
    Screen Time and they actually tell you how
  • 21:54 - 21:57
    much time you spend on your phone. So
    there's definitely ways to build
  • 21:57 - 22:02
    technology that even helps you to get away
    from these. And then for everything that
  • 22:02 - 22:06
    involves AI and machine learning, you
    really have to take a really deep look at
  • 22:06 - 22:11
    your data sets and your algorithms because
    it's very, very easy to build in biases
  • 22:11 - 22:17
    and discrimination. And again, if it is
    applied to all of society many people who
  • 22:17 - 22:20
    are less fortunate, or more fortunate, or
    they're just different, you know they just
  • 22:20 - 22:25
    do different things, kind of fall out of
    the grid and now suddenly they can't,
  • 22:25 - 22:31
    like, [unintelligible] anymore. Or use
    Uber, or Airbnb, or just live a normal
  • 22:31 - 22:35
    life, or do financial transactions. And
    then, kind of what I said in the
  • 22:35 - 22:40
    beginning, not only look at your product
    but also, if you combine it with other
  • 22:40 - 22:44
    technologies that are upcoming, are there
    certain combinations that are dangerous?
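    The earlier point about taking a deep look at your data sets and your
    algorithms can be made concrete with a very small check, for example
    comparing a model's error rate across user groups before shipping. This
    is a minimal hypothetical sketch; the group names, the toy data, and the
    tolerance value are assumptions, not part of the standard itself:

        from collections import defaultdict

        def error_rates_by_group(records):
            """records: iterable of (group, y_true, y_pred); returns error rate per group."""
            totals, errors = defaultdict(int), defaultdict(int)
            for group, y_true, y_pred in records:
                totals[group] += 1
                if y_true != y_pred:
                    errors[group] += 1
            return {g: errors[g] / totals[g] for g in totals}

        data = [
            ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
            ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0),
        ]
        rates = error_rates_by_group(data)
        print(rates)  # {'group_a': 0.333..., 'group_b': 1.0}
        # A large gap between groups is a signal to go back to the data and
        # the model before the product is applied to all of society.
        if max(rates.values()) - min(rates.values()) > 0.2:  # assumed tolerance
            print("warning: error rates differ strongly between groups")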
  • 22:44 - 22:49
    And for that I kind of recommend to do,
    like, some techno litmus test to just
  • 22:49 - 22:59
    try to come up with the craziest scenario
    that your technology could entail. And if
  • 22:59 - 23:05
    it's not too bad, then it's probably good. The
    next thing is, kind of, sustainability. I
  • 23:05 - 23:11
    think in today's world it really should be
    part of a good product, right. The first
  • 23:11 - 23:17
    question is of course kind of obvious. Are
    we limiting product lifetime? Do we maybe
  • 23:17 - 23:20
    have planned obsolescence, or if we
    build something that is so dependent on so
  • 23:20 - 23:24
    many services and we're not only going to
    support it for one year anyways, that
  • 23:24 - 23:29
    basically it will have to be thrown in the
    trash afterwards. Maybe it would be
  • 23:29 - 23:34
    possible to add a standalone node or a
    very basic fallback feature so that at
  • 23:34 - 23:38
    least the products continues to work.
    Especially if you talk about things like
  • 23:38 - 23:44
    home appliances. Then, what is the
    environmental impact? A good example here
  • 23:44 - 23:49
    would be, you know, crypto currencies who
    are now using as much energy as certain
  • 23:49 - 23:57
    countries. And when you consider that just
    think like is there maybe an alternative solution
  • 23:57 - 24:01
    that doesn't have such a big impact. And
    of course we are still capitalism, it has
  • 24:01 - 24:05
    to be economically viable, but often there
    aren't, often it's again just really small
  • 24:05 - 24:13
    tweaks. Then of course: Which other
    services are you working with? But for
  • 24:13 - 24:18
    example I would say, like, as european
    companies, we're in Europe here, maybe try
  • 24:18 - 24:22
    to work mostly with suppliers from Europe,
    right, because you know they follow GDPR
  • 24:22 - 24:28
    and strict rules, and in a sense the US.
    Or check your supply chain if you build
  • 24:28 - 24:33
    hardware. And then for hardware
    specifically that's because also I have,
  • 24:33 - 24:38
    like, we also do hardware in my company, I
    always found that interesting. We're kind
  • 24:38 - 24:42
    of in a world where everyone tries to
    save, like, the last little bit of money
  • 24:42 - 24:46
    out of every device that is built and
    often makes the difference between plastic
  • 24:46 - 24:52
    and metal screws like half a cent, right.
    And at that point it doesn't really change
  • 24:52 - 24:57
    your margins much. And maybe as an
    engineer, you know, just say no and say:
  • 24:57 - 25:01
    You know, we don't have to do that. The
    savings are too small to redesign
  • 25:01 - 25:06
    everything and it will impact upon our
    quality so much that it just breaks
  • 25:06 - 25:13
    earlier. These are kind of the main four
    points. I hope that makes sense. Then we
  • 25:13 - 25:17
    have two more, kind of, additional
    checklists. The first one is data
  • 25:17 - 25:24
    collection. So really, just if, especially
    like in terms of like IOT, you know,
  • 25:24 - 25:29
    everyone focuses on kind of collecting as
    much data as possible without actually
  • 25:29 - 25:34
    having an application. And I think we
    really have to start seeing that as a
  • 25:34 - 25:40
    liability. And instead try to really
    define the application first, define which
  • 25:40 - 25:44
    data we need for it, and then really just
    collect that. And we can start collecting
  • 25:44 - 25:49
    more data later on. And that can really
    prevent a lot of these negative cycles we
  • 25:49 - 25:53
    have seen. By just having machine learning
    organisms run on of it kind of
  • 25:53 - 25:59
    unsupervised and seeing what comes out.
    Then also kind of really interesting I
  • 25:59 - 26:03
    found that, many times, like, a lot of
    people are so fascinated by the amount of
  • 26:03 - 26:09
    data, right, just try to have as many data
    points as possible. But very often you can
  • 26:09 - 26:14
    realize exactly the same application with a
    fraction of data points. Because what you
  • 26:14 - 26:18
    really need is, like, trends. And that
    usually also makes the product more
  • 26:18 - 26:24
    efficient. Then how privacy intrusive is
    the data we collect? Right. There's a big
  • 26:24 - 26:28
    difference between, let's say, the
    temperature in this building and
  • 26:28 - 26:32
    everyone's individual movements here. And
    if it is privacy intrusive then we should
  • 26:32 - 26:36
    really, really think hard if we want to
    collect it. Because we don't know how it
  • 26:36 - 26:44
    might be used at a later point. And then,
    are we actually collecting data without
  • 26:44 - 26:48
    people realizing it, right,
    especially if we look at Facebook and
  • 26:48 - 26:53
    Google. They're collecting a lot of data
    without really explicit consent. But of
  • 26:53 - 26:59
    course at some point you like all agreed
    to the privacy policy. But it's often not
  • 26:59 - 27:04
    clear to you when and which data is
    collected. And that's kind of dangerous
  • 27:04 - 27:12
    and kind of in the same way if you kind of
    build dark patterns into your app, they
  • 27:12 - 27:18
    kind of fool you into sharing even more
    data. I had, like, an example that someone
  • 27:18 - 27:25
    told me yesterday. I don't know if you know
    Venmo which is this American system where
  • 27:25 - 27:27
    you pay each other with your smartphone.
    Basically to split the bill in a
  • 27:27 - 27:33
    restaurant. By default, all transactions
    are public. So, like 200 million public
  • 27:33 - 27:39
    transactions which everyone can see,
    including the description of it. So for
  • 27:39 - 27:45
    some of the more maybe not so legal
    payments that was also very obvious,
  • 27:45 - 27:51
    right? And it's totally un-obvious when
    you use the app that that is happening. So
  • 27:51 - 27:57
    that's definitely a dark pattern that
    they're employing here. And then the next
  • 27:57 - 28:03
    point is User Product Education and
    Transparency. Is a user able to understand
  • 28:03 - 28:08
    how the product works? And, of course, we
    can't really ever have a perfect
  • 28:08 - 28:16
    explanation of all the intricacies of the
    technology. But these days for most people
  • 28:16 - 28:21
    almost all of the apps, the interfaces,
    the underlying technology and tech, this is
  • 28:21 - 28:25
    a complete black box and no one is really
    making an effort to explain it to them, while
  • 28:25 - 28:30
    most companies advertise it like this
    magical thing. But that just leads to kind
  • 28:30 - 28:36
    of this immunization where you just look at
    it and you don't even try to understand
  • 28:36 - 28:46
    it. I'm pretty sure that no one ever,
    like, these days is still opening up a PC
  • 28:46 - 28:50
    and trying to look at the components,
    right, because everything is a tablet and
  • 28:50 - 28:57
    it's integrated and it's sold to us like
    this magical media consumption machine.
  • 28:57 - 29:02
    Then, are users informed when decisions
    are made for them? So we had that in
  • 29:02 - 29:08
    Empowerment, that we should try to reduce
    the amount of decisions we make for the
  • 29:08 - 29:12
    user. But sometimes, that's a good thing
    to do. But then, is it transparently
  • 29:12 - 29:18
    communicated? I would be totally fine with
    Google Maps filtering out for me the
  • 29:18 - 29:22
    points of interest if it would actually
    tell me that it's doing that. And if you
  • 29:22 - 29:26
    can't understand why it made that decision
    and why it showed me this place. And maybe
  • 29:26 - 29:30
    also have a way to switch it off if I
    want. But today we seem to kind of assume
  • 29:30 - 29:34
    that we know better than the people, that
    we found the perfect algorithm
  • 29:34 - 29:38
    that has a perfect answer. So we don't
    even have to explain how it works, right?
  • 29:38 - 29:42
    We just do it and people will be happy.
    But then what we end up with is very negative
  • 29:42 - 29:49
    consequences. And then, that's more like a
    marketing thing, how is it actually
  • 29:49 - 29:55
    advertised? I find it, for example, quite
    worrisome that things like Siri and
  • 29:55 - 30:00
    Alexa and Google Home are, like, sold as
    these magical AI machines that make your
  • 30:00 - 30:03
    life better, and are your personal
    assistant. When in reality they are
  • 30:03 - 30:10
    actually still pretty dumb pattern
    matching. And that also creates a big
  • 30:10 - 30:14
    disconnect. Because now we have children
    growing up who actually think that Alexa
  • 30:14 - 30:21
    is a person. And that's kind of dangerous.
    And I think we should try to prevent that
  • 30:21 - 30:27
    because for these children, basically, it
    kind of creates this veil and it's
  • 30:27 - 30:33
    humanized. And that's especially dangerous
    if then the machine starts to make
  • 30:33 - 30:37
    decisions for them. And suggestions
    because they will take them as if a human
  • 30:37 - 30:49
    did it for them. So, what is that? So,
    these are kind of the main areas. Of course
  • 30:49 - 30:55
    it's a bit more complicated. So we just
    published the standard today in the first
  • 30:55 - 31:01
    draft version. And it's basically three
    parts: a concise introduction, kind of the
  • 31:01 - 31:05
    questions and checklists that you just saw.
    And then actually how to implement it in
  • 31:05 - 31:10
    your company, which processes to have, at
    which point you basically should have
  • 31:10 - 31:16
    kind of a feature gate. And I would kind of
    ask everyone to go there, look at it,
  • 31:16 - 31:23
    contribute, share it with people. We hope
    that we'll have a final version ready kind
  • 31:23 - 31:40
    of in Q1 and that by then people can start
    to implement it. Oh, yeah. So, even though
  • 31:40 - 31:45
    we have this standard, right, I want to
    make it clear having such a standard and
  • 31:45 - 31:51
    implementing it in your organization or
    for yourself or your product is great. It
  • 31:51 - 31:56
    actually doesn't remove your
    responsibility, right? This can only be
  • 31:56 - 32:02
    successful if we actually all accept that
    we are responsible. Right? If today I
  • 32:02 - 32:07
    build a bridge as a structural engineer
    and the bridge breaks down because I
  • 32:07 - 32:10
    miscalculated, I am responsible. And I
    think, equally, we have to accept that if
  • 32:10 - 32:19
    we build technology like this we also have
    to, kind of, assume that responsibility.
  • 32:19 - 32:25
    And before we kind of move to Q&A, I'd
    like to kind of read this quotation. This
  • 32:25 - 32:31
    is Chamath Palihapitiya, former Facebook
    executive, from the really early times.
  • 32:31 - 32:35
    And also, around a year ago when we
    actually started the GTC, he said this at a
  • 32:35 - 32:40
    conference: "I feel tremendous guilt. I
    think in the back, in the deep recesses
  • 32:40 - 32:44
    of our minds, we knew something bad could
    happen. But I think the way we defined it
  • 32:44 - 32:48
    is not like this. It is now literally at a
    point where I think we have created
  • 32:48 - 32:54
    tools that are ripping apart the social
    fabric of how society works." And
  • 32:54 - 33:03
    personally, and I hope the same for you, I
    do not want to be that person that five
  • 33:03 - 33:08
    years down the line realizes that they
    built that technology. So if there is one
  • 33:08 - 33:14
    take-away that you can take home from this
    talk, it is to just start asking yourself:
  • 33:14 - 33:19
    What is good technology, what does it mean
    for you? What does it mean for the
  • 33:19 - 33:25
    products you build and what does it mean
    for your organization? Thanks.
  • 33:25 - 33:30
    applause
  • 33:30 - 33:38
    Herald: Thank you. Yann Leretaille. Do we
    have questions in the room? There are
  • 33:38 - 33:45
    microphones, microphones number 1, 2, 3,
    4, 5. If you have a question please speak
  • 33:45 - 33:49
    loud into the microphone, as the people in
    the stream want to hear you as well.
  • 33:49 - 33:53
    I think microphone number 1 was the fastest.
    So please.
  • 33:53 - 33:58
    Question: Thank you for your talk. I just
    want to make a short comment first and
  • 33:58 - 34:02
    then ask a question. I think this last
    thing you mentioned about offering users
  • 34:02 - 34:07
    the options to have more control of the
    interface there is also a problem that
  • 34:07 - 34:11
    users don't want it. Because when you look
    at the statistics of how people use online
  • 34:11 - 34:17
    web tools, only maybe 5 percent of them
    actually use that option. So companies
  • 34:17 - 34:22
    remove them because for them it seems like
    it's something not so efficient for user
  • 34:22 - 34:26
    experience. This was just one thing to
    mention and maybe you can respond to that.
  • 34:26 - 34:34
    But what I wanted to ask you was, that all
    these principles that you presented, they
  • 34:34 - 34:40
    seem to be very sound and interesting and
    good. We can all accept them as
  • 34:40 - 34:45
    developers. But how would you propose to
    actually sell them to companies. Because
  • 34:45 - 34:50
    if you adopt a principle like this as an
    individual based on your ideology or the
  • 34:50 - 34:54
    way that you think, okay, it's great it
    will work, but how would you convince a
  • 34:54 - 34:59
    company which is driven by profits to
    adopt these practices? Have you thought of
  • 34:59 - 35:05
    this and what's your idea about this?
    Thank you.
  • 35:05 - 35:11
    Yann: Yeah. Maybe to the first part.
    First, that giving people choice is
  • 35:11 - 35:17
    something that people do not want and
    that's why companies removed it. I think
  • 35:17 - 35:22
    if you look at the development process
    it's basically like a huge cycle of
  • 35:22 - 35:26
    optimization and user testing geared
    towards a very specific goal, right, which
  • 35:26 - 35:31
    is usually set by leadership which is,
    like, bringing engagement up or increase
  • 35:31 - 35:38
    user amount by 200 percent. So I would say
    the goals were, or are today, mostly
  • 35:38 - 35:42
    misaligned. And that's why we end up with
    interfaces that are in a very certain way,
  • 35:42 - 35:46
    right? If we set the goals
    differently, and I mean that's why we have
  • 35:46 - 35:51
    like UI and UX research. I'm very sure we
    can find ways to build interfaces that are
  • 35:51 - 35:59
    just different. And still engaging, but
    also give that choice. To the second
  • 35:59 - 36:06
    question. I mean it's kind of interesting.
    So I wouldn't expect a company like Google
  • 36:06 - 36:11
    to implement something like this, because
    it's a bit against that. That is more beside
  • 36:11 - 36:16
    the point, probably. But I've met a lot of,
    like, also high level executives already,
  • 36:16 - 36:23
    who were actually very aware of kind of
    the issues of technology that they built.
  • 36:23 - 36:28
    And there is definitely interest there.
    Also, more like industrial side, and so
  • 36:28 - 36:34
    on, especially, it seems like self-driving
    cars to actually adopt that. And in the
  • 36:34 - 36:40
    end I think, you know, if everyone
    actually demands it, then there's a pretty
  • 36:40 - 36:44
    high probability that it might actually
    happen. Especially, as workers in the tech
  • 36:44 - 36:51
    field, we are quite flexible in the
    selection of our employer. So I think if
  • 36:51 - 36:56
    you give it some time, that's definitely
    something that's very possible. The second
  • 36:56 - 37:02
    aspect is that, actually, if we looked at
    something like Facebook, I think they
  • 37:02 - 37:09
    overdid it. They, like, optimized it so far and
    pushed the engagement machine and kind of
  • 37:09 - 37:13
    triggering like your brain cells to
    never stop being on the site and keep
  • 37:13 - 37:18
    scrolling, that people got too much of it.
    And now they're leaving the platform in
  • 37:18 - 37:22
    droves. And of course Facebook would not
    go down, they own all these other social
  • 37:22 - 37:27
    networks. But for the product itself, as
    you can see, long term it's not even
  • 37:27 - 37:34
    necessarily a positive business outcome.
    And everything we are advertising here
  • 37:34 - 37:39
    still allows you to have very profitable businesses,
    right, just tweaking the right screws.
  • 37:39 - 37:43
    Herald: Thank you. We have a question from
    the interweb.
  • 37:43 - 37:48
    Signal Angel: Yes. Fly asks a question
  • 37:48 - 37:55
    that goes into a similar direction. In
    recent months we had numerous reports
  • 37:55 - 37:59
    about social media executives forbidding
    their children to use the products they
  • 37:59 - 38:05
    create at work. I think these people know
    that their products are made addictive
  • 38:05 - 38:11
    deliberately. Do you think your work is
    somewhat superfluous because big companies
  • 38:11 - 38:16
    are doing the opposite on purpose?
    Yann: Right. I think that's why you have
  • 38:16 - 38:23
    to draw the line between intentional and
    unintentional. If we go to intentional
  • 38:23 - 38:27
    things like what Uber did and so on. At
    some point it should probably become a
  • 38:27 - 38:32
    legal issue. Unfortunately we are not
    there yet and usually regulation is kind
  • 38:32 - 38:39
    of lagging way behind. So I think for now
    we should focus on, you know, the more
  • 38:39 - 38:45
    unintentional consequences, of which there
    are plenty, and kind of appeal to the
  • 38:45 - 38:52
    good in humans.
    Herald: Okay. Microphone number 2 please.
  • 38:52 - 39:00
    Q: Thank you for sharing your ideas about
    educating the engineer. What about
  • 39:00 - 39:05
    educating the customer, the consumer who
    purchases the product.
  • 39:05 - 39:12
    Yann: Yeah. So that's a really valid
    point. Right. As I said I think
  • 39:12 - 39:20
    [unintelligible] like part of your product
    development. And the way you build a
  • 39:20 - 39:25
    product should also be how you educate
    your users on how it works. Generally, we
  • 39:25 - 39:31
    have a really big kind of technology
    illiteracy problem. Things have been
  • 39:31 - 39:35
    moving so fast in the last year that most
    people haven't really caught up and they
  • 39:35 - 39:40
    just don't understand things anymore. And
    I think again that's like a shared
  • 39:40 - 39:44
    responsibility, right? You can't just do
    that in the tech field. You have to talk
  • 39:44 - 39:49
    to your relatives, to people. That's why
    we're doing, like, this series of articles
  • 39:49 - 39:55
    and media partnerships to kind of explain
    and make these things transparent. One
  • 39:55 - 40:02
    thing we just started working on is a
    children's book. Because for children,
  • 40:02 - 40:07
    like, the entire world just exists with
    these shiny glass surfaces and they don't
  • 40:07 - 40:11
    understand at all what is happening. But
    it's also primetime to explain to them,
  • 40:11 - 40:15
    like, really simple machine learning
    algorithms. How they work, how like,
  • 40:15 - 40:19
    filter bubbles work, how decisions are
    made. And if you understand that from an
  • 40:19 - 40:25
    early age on, then maybe you'll be able to
    deal with what is happening in a way
  • 40:25 - 40:32
    better, more educated way. But I do think
    that is a very long process, and so the sooner
  • 40:32 - 40:37
    we start and the more work we invest in
    that, the earlier people will be better
  • 40:37 - 40:41
    educated.
    Herald: Thank you. Microphone number 1
  • 40:41 - 40:45
    please.
    Q: Thanks for sharing your insights. I
  • 40:45 - 40:51
    feel like, while you presented these rules
    along with their meaning, the specific
  • 40:51 - 40:56
    selection might seem a bit arbitrary. And
    for my personal acceptance and willingness
  • 40:56 - 41:02
    to implement them it would be interesting
    to know the reasoning, besides common
  • 41:02 - 41:07
    sense, that justifies this specific
    selection of rules. So, it would be
  • 41:07 - 41:13
    interesting to know if you looked at
    examples from history, or if you just sat
  • 41:13 - 41:19
    down and discussed things, or if you just
    grabbed some rules out of the air. And so
  • 41:19 - 41:26
    my question is: What influenced you for
    the development of these specific rules?
  • 41:26 - 41:34
    Yann: It's a very complicated question. So
    how did we come up with this specific selection
  • 41:34 - 41:39
    of rules and also, like, the main building
    blocks of what we think good
  • 41:39 - 41:47
    technology should be. Well, let's say first what
    we didn't want to do, right. We didn't
  • 41:47 - 41:51
    want to create like a value framework and
    say, like, this is good, this is bad,
  • 41:51 - 41:55
    don't do this kind of research or
    technology. Because this would also be
  • 41:55 - 42:00
    outdated, it doesn't apply to everyone. We
    probably couldn't even agree in the expert
  • 42:00 - 42:05
    council on that because it's very diverse.
    Generally, we tried to get everyone around the
  • 42:05 - 42:12
    table. And we talked about issues we had,
    like, for example me as an entrepreneur, when
  • 42:12 - 42:19
    I was, like, developing products with
    our own engineers. Issues we've seen in terms
  • 42:19 - 42:27
    of public perception. Issues we've seen,
    like, on a more governmental level. Then
  • 42:27 - 42:32
    we also have, like, cryptologists in
    there. So we looked at that as well and
  • 42:32 - 42:43
    then we made a really, really long list
    and kind of started clustering it. And a
  • 42:43 - 42:50
    couple of things did get cut off. But
    generally, based on the clustering, these
  • 42:50 - 42:58
    were kind of the main themes that we saw.
    And again, it's really more of a tool for
  • 42:58 - 43:04
    yourself as a company, for developers,
    designers and engineers to really
  • 43:04 - 43:09
    understand the impact and evaluate it. Right.
    This is what these questions are
  • 43:09 - 43:13
    aimed at. And we think that for that they
    do a very good job.
  • 43:13 - 43:19
    From microphone 1: Thank you.
    Herald: Thank you. And I think. Microphone
  • 43:19 - 43:22
    number 2 has a question again.
    Q: Hi. I was just wondering how you've
  • 43:22 - 43:27
    gone about engaging with other standards
    bodies, that perhaps have a wider
  • 43:27 - 43:33
    representation. It looks largely like from
    your team of the council currently that
  • 43:33 - 43:37
    there's not necessarily a lot of
    engagement outside of Europe. So how do
  • 43:37 - 43:42
    you go about getting representation from
    Asia. For example.
  • 43:42 - 43:52
    Yann: No, at the moment you're correct the
    GTC is mostly a European initiative. We
  • 43:52 - 43:58
    are in talks with other organizations who
    work on similar issues and regularly
  • 43:58 - 44:04
    exchange ideas. But, yeah, we thought we
    should probably start somewhere. And Europe
  • 44:04 - 44:09
    is actually a really good place to start,
    like, a societal discourse about technology
  • 44:09 - 44:14
    and the impact it has and also to have
    change. But I think, for example
  • 44:14 - 44:20
    compared to places like Asia or the US,
    where there is a very different perception of
  • 44:20 - 44:25
    privacy and technology and progress and
    like the rights of the individual, Europe
  • 44:25 - 44:29
    is actually a really good place to do
    that. And we can also see things like GDPR
  • 44:29 - 44:36
    regulation, that actually, ... It's kind
    of complicated. It's also kind of a big
  • 44:36 - 44:41
    step forward in terms of protecting the
    individual from exactly these kind of
  • 44:41 - 44:47
    consequences. Of course though, long term
    we would like to expand this globally.
  • 44:47 - 44:53
    Herald: Thank you. Microphone number 1
    again.
  • 44:53 - 44:57
    Q: Hello. Just a short question. I
    couldn't find a donate button on your
  • 44:57 - 45:04
    website. Do you accept donations? Is money
    a problem? Like, do you need it?
  • 45:04 - 45:13
    Yann: Yes, we do need money. However it's
    a bit complicated because we want to stay
  • 45:13 - 45:20
    as independent as possible. So we are not
    accepting project related money. So you can't
  • 45:20 - 45:22
    like say we want to do a certain research
    project with you, it has to be
  • 45:22 - 45:30
    unconditional. And the second thing we do
    is for the events we organize. We usually
  • 45:30 - 45:34
    have sponsors that provide, like, venue
    and food and logistics and things like
  • 45:34 - 45:39
    that. But that's an, ... for the event.
    And again, they can't, like, change the
  • 45:39 - 45:44
    program of it. So if you want to do that
    you can come into contact with us. We
  • 45:44 - 45:49
    don't have a mechanism yet for individuals
    to donate. We might add that.
  • 45:49 - 45:54
    Herald: Thank you. Did you think about
    Patreon or something like that?
  • 45:54 - 46:04
    Yann: We thought about quite a few
    options. Yeah, but it's actually not so
  • 46:04 - 46:09
    easy to not fall into the trap that,
    like, other organizations in this space have, where,
  • 46:09 - 46:15
    like, Google at some point sweeps in and
    it's like: Hey, do you want all this cash.
  • 46:15 - 46:19
    And then very quickly you have a big
    conflict of interest. Even if you don't
  • 46:19 - 46:26
    want that to happen it starts happening.
    Herald: Yeah right. Number 1 please.
  • 46:26 - 46:33
    Q: I was wondering how do you unite the
    second and third points in your checklist.
  • 46:33 - 46:38
    Because the second one is intelligence by
    design. The third one is to take into
  • 46:38 - 46:43
    account future technologies. But companies
    do not want to push back their
  • 46:43 - 46:49
    technologies endlessly to take into
    account future technologies. And on the
  • 46:49 - 46:52
    other hand they don't want to compromise
    their own design too much.
  • 46:52 - 47:00
    Yann: Yeah. Okay. Okay. Got it. So you
    were saying, if we should always think through
  • 47:00 - 47:04
    these, like, future scenarios and the
    worst case and everything and incorporate
  • 47:04 - 47:08
    every possible thing that might happen in
    the future we might end up doing nothing
  • 47:08 - 47:14
    because everything looks horrible. For
    that I would say, like, we are not like
  • 47:14 - 47:21
    technology haters. We are all from areas
    working in tech. So of course the idea is
  • 47:21 - 47:26
    that you can just take a look at what is
    there today and try to make an assessment
  • 47:26 - 47:30
    based on that. And the idea is, if you look
    at it and meet the standard, that over
  • 47:30 - 47:35
    time, actually, when you add
    new major features, you look back at your
  • 47:35 - 47:40
    assessment from before and see if it
    changed. So the idea is you kind of create
  • 47:40 - 47:47
    a snapshot of how it is now. And this kind
    of document that you end up with as part of
  • 47:47 - 47:51
    your documentation kind of evolves over
    time as your product changes and the
  • 47:51 - 47:57
    technology around it changes as well.
    Herald: Thank you. Microphone number 2.
  • 47:57 - 48:03
    Q: So thanks for the talk and especially
    the effort. Just to echo back the
  • 48:03 - 48:07
    question that was asked a bit before on
    starting with Europe. I do think it's a
  • 48:07 - 48:14
    good option. What I'm a little bit worried
    about is that it might be the only option. It might
  • 48:14 - 48:20
    become irrelevant rather quickly because
    it's easy to do, it's less hard to
  • 48:20 - 48:26
    implement, maybe, in Europe now. Okay. The
    question is: it might work in Europe now
  • 48:26 - 48:31
    but if Europe doesn't have the same
    economic power, it cannot weigh in as much
  • 48:31 - 48:37
    politically with, let's say, China or the US
    and Silicon Valley. So will it still be
  • 48:37 - 48:41
    possible and relevant if the economic
    balance shifts?
  • 48:41 - 48:52
    Yann: Yes, I mean we have to start
    somewhere, right? Just saying "Oh,
  • 48:52 - 48:59
    the economic balance will shift anyway,
    Google will invent the singularity, and that's
  • 48:59 - 49:02
    why we shouldn't do anything" is, I think,
    one of the reasons why we actually got
  • 49:02 - 49:08
    here, why it kind of is this assumption
    that there is like this really big picture
  • 49:08 - 49:14
    that is kind of working against us, so we
    all do our small part to fulfill that
  • 49:14 - 49:21
    kind of evil vision by not doing anything.
    I think we have to start somewhere and I
  • 49:21 - 49:27
    think for having operated for one year, we
    have been actually quite successful so far
  • 49:27 - 49:32
    and we have made good progress. And I'm
    totally looking forward to making it a bit
  • 49:32 - 49:36
    more global and to start traveling more. I
    think we had, like, one event outside Europe
  • 49:36 - 49:40
    last year, in the US, and that will
    definitely increase over time, and we're
  • 49:40 - 49:46
    also working on making kind of our
    ambassadors more mobile and kind of expand
  • 49:46 - 49:50
    to other locations. So it's definitely on
    the roadmap, but it's not like, yeah, we're just
  • 49:50 - 49:54
    staying here. But yeah, you have to start
    somewhere and that's what we did.
  • 49:54 - 50:02
    Herald: Nice, thank you. Number 1 please.
    Mic 1: Yeah. One thing I haven't found was
  • 50:02 - 50:08
    – how all those general rules you formulated
    fit into the more general rules of
  • 50:08 - 50:16
    society, like the constitutional rules.
    Have you considered that and it's just not
  • 50:16 - 50:25
    clearly stated and will be stated, or did
    you develop them more from the bottom up?
  • 50:25 - 50:33
    Yann: Yes, you are completely right. So we
    are defining the process and the questions
  • 50:33 - 50:39
    to ask yourself, but we are actually not
    defining a value framework. The reason for
  • 50:39 - 50:43
    that is that societies are different, as I
    said there are widely different
  • 50:43 - 50:48
    expectations towards technology, privacy,
    how society should work, and all the
  • 50:48 - 50:54
    rest of it. The second one is that every
    company is also different, right, every
  • 50:54 - 50:58
    company has their own company culture and
    things they want to do and they don't want
  • 50:58 - 51:05
    to do. If I would say, for example, we
    would have put in there "You should not
  • 51:05 - 51:08
    build weapons or something like that",
    right, that would mean that all these
  • 51:08 - 51:13
    companies that work in that field couldn't
    try to adopt it. And while I don't want
  • 51:13 - 51:17
    them to build weapons maybe in their value
    framework that's OK and we don't want to
  • 51:17 - 51:21
    impose that, right. That's why I said in
    the beginning we actually, we're called
  • 51:21 - 51:25
    the Good Technology Collective, we are not
    defining what it is and I think that's
  • 51:25 - 51:29
    really important. We are not trying to
    impose our opinion here. We want others to
  • 51:29 - 51:34
    decide for themselves what is good and
    we want to support them and guide them in
  • 51:34 - 51:36
    building products that they believe are
    good.
  • 51:36 - 51:45
    Herald: Thank you. Number two .
    Mic 2: Hello, thanks for sharing. As
  • 51:45 - 51:52
    engineers we always want users to spend
    more time using our product, right? But
  • 51:52 - 51:59
    I'm working at a mobile game company. Yep.
    We are making, we are making a world where
  • 51:59 - 52:06
    users love our product. So we want users
    to spend more time in our game. So we may
  • 52:06 - 52:14
    make a lot of money, yeah, but when users
    spend time to play our game they may lose
  • 52:14 - 52:20
    something. Yeah. You know. So how do we
    think about the balance in a game, mobile
  • 52:20 - 52:25
    game. Yeah.
    Yann: Hmm. It's a really difficult
  • 52:25 - 52:32
    question. So the question was like
    specifically for mobile gaming. Where's
  • 52:32 - 52:38
    kind of the balance between trying to
    engage people more and, yeah, basically
  • 52:38 - 52:45
    making them addicted and having them spend
    all their money, I guess. I personally
  • 52:45 - 52:54
    would say it's about intent, right? It's
    totally fine to have a business model where
  • 52:54 - 52:58
    you make money with a game. I mean that's
    kind of good and people do want
  • 52:58 - 53:09
    entertainment. But if you actively use,
    like, research in how, like, you know,
  • 53:09 - 53:15
    like the brain actually works and how it
    get super engaged, and if you basically
  • 53:15 - 53:19
    build in, like, gamification and
    lotteries, which a lot of them, I think, have
  • 53:19 - 53:22
    done, where basically your game becomes a
    slot machine, right, you always want to
  • 53:22 - 53:28
    see the next opening of a crate
    and see what you got. Kind of making it a
  • 53:28 - 53:33
    luck based game, actually. I think if you
    go too far into that direction, at some
  • 53:33 - 53:36
    point you cross the line. Where that line
    is you have to decide yourself, right,
  • 53:36 - 53:40
    some of it could be a good game
    dynamic, but there are definitely some games
  • 53:40 - 53:45
    out there, I would say, with quite a reason
    to say that they pushed it to the limit quite
  • 53:45 - 53:48
    a bit too far. And if you actually look
    at how they did it, because they wrote about
  • 53:48 - 53:53
    it, they actually did use very modern
    research and very extensive testing to
  • 53:53 - 53:58
    really find out these, all these patterns
    that make you addicted. And then it's not
  • 53:58 - 54:02
    much better than an actual slot machine.
    And that probably we don't want.
  • 54:02 - 54:08
    Herald: So it's also an ethical question
    for each and every one of us, right?
  • 54:08 - 54:11
    Yann: Yes.
    Herald: I think there is a light and I
  • 54:11 - 54:14
    think this light means the interwebs has a
    question.
  • 54:14 - 54:22
    Signal angel: I, there's another question
    from ploy about practical usage, I guess.
  • 54:22 - 54:25
    Are you putting your guidelines to work in
    your company? You said you're an
  • 54:25 - 54:30
    entrepreneur.
    Yann: That's a great question. Yes, we
  • 54:30 - 54:38
    will. So we kind of just completed some
    and there was kind of a lot of work to get
  • 54:38 - 54:42
    there. Once they are finished and released
    we will definitely be one of the first
  • 54:42 - 54:48
    adopters.
    Herald: Nice. And with this I think we're
  • 54:48 - 54:50
    done for today.
    Yann: Perfect.
  • 54:50 - 54:54
    Herald: Yann, people, warm applause!
  • 54:54 - 54:56
    applause
  • 54:56 - 54:57
    postroll music
  • 54:57 - 55:19
    subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!