-
35C3 Intro music
-
Herald Angel: We at the Congress, we not
only talk about technology, we also talk
-
about social and ethical responsibility.
About how we can change the world for
-
good. The Good Technology Collective
supports the development guidelines...
-
sorry, it supports the development process
of new technology with ethical engineering
-
guidelines that offer a practical way to
take ethics and social impact into account.
-
Yannick Leretaille - and I hope this was
okay - will tell you more about it.
-
Please welcome on stage with a very warm applause
Yann Leretaille.
-
applause
-
Yannick Leretaille: Hi, thanks for the
introduction. So before we start, can you
-
kind of show me your hand if you, like,
work in tech building products as
-
designers, engineers, coders, product
management? OK, so it's like 95 percent,
-
90 percent. Great. Yeah. So, today we kind
of try to answer the question: What is
-
good technology and how can we build
better technology. Before that, shortly
-
something about me. So I am Yann. I'm French-
German. Kind of a hacker, part of the CCC
-
for a long time, entrepreneur, like, co-
founder of a startup in Berlin. And I'm
-
also founding member of the Good
Technology Collective. The Good
-
Technology Collective was founded about a
year ago or almost over a year now actually
-
by a very diverse expert council
and we kinda have like 3 areas of work.
-
The first one is trying to educate the
public about current issues with
-
technology, then, to educate engineers or
to build better technology, and then
-
long-term hopefully one day we'll be
able to work like in legislation as well.
-
Here is a bit of what we achieved so
far. We've like 27 council members now. We
-
have several media partnerships and
published around 20 articles, that's kind
-
of the public education part. Then we
organized or participated in roughly 15
-
events already. And we are now publishing
one standard, well, kind of today
-
actually, and
applause
-
and if you're interested in what we do,
then, yeah, sign up for the newsletter and
-
we keep you up to date and you can join
events. So as I said the Expert Council is
-
really, really diverse. We have everything
from people in academia, to people in
-
government, to technology makers, to
philosophers, all sorts, journalists.
-
And the reason that is the case is that a year
ago we kind of noticed that in our own
-
circles, like, as technology makers or
academics, we were all talking a lot
-
about, kind of, worrying developments in
technology, but no one was really kind of
-
getting together and looking at it from
all angles. And there have been a lot of
-
very weird and troublesome developments in
the last two years. I think we really
-
finally feel, you know like, the impact of
filter bubbles. Something we have talked
-
about for like five years, but now it's like,
really like, you know, deciding over
-
elections and people become politically
radicalized and society is, kind of,
-
polarized more because they only see a
certain opinion. And we have situations
-
that we only knew, like, from science
fiction, just kind of, you know, pre-crime,
-
like, governments, kind of, over-arching
and trying to use machine learning to make
-
decisions on whether or not you should go
to jail. We have more and more machine
-
learning and big data and automation
going into basically every single aspect
-
of our lives, and not all of it has
been positive. You know, like, literally
-
everything from e-commerce to banking to
navigating to moving through the world now goes
-
through these interfaces, which present us
the data and a slice of the world at a time.
-
And then at the same time we have
really positive developments. Right? We have
-
things like this, you know, like space
travel, finally something's happening.
-
And we have huge advances in medicine. Maybe
soon we'll have, like, self-driving cars
-
and great renewable technology. And it kind
of begs the question: How can it be that
-
good and bad use of technology are kind of
showing up at such an increasing rate in
-
this, like, on such extremes, right? And
maybe the reason is just that everything
-
got so complicated, right? Data is
basically doubling every couple of years,
-
so no human can possibly process it anymore.
So we had to build more and more complex
-
algorithms to process it, connecting more
and more parts together. And no one really
-
seems to understand it anymore.
And that leads to unintended consequences.
-
I've an example here: So, Google Photos –
this is actually only two years ago –
-
launched a classifier to automatically go
through all of your pictures and tell you
-
what it is. You could say "Show me the
picture of the bird in summer at this
-
location" and it would find it for you.
Kind of really cool technology, and they
-
released it to, like, a planetary user
base until someone figured out that people
-
of color were always marked as gorillas.
Of course it was a huge PR disaster, and
-
somehow no one had found out about this before
it was released... But now the interesting thing
-
is: In two years they didn't even manage
to fix it! Their solution was to just
-
block all kinds of apes, so they're just
not found anymore. And that's how they
-
solved it, right? But if even Google can't
solve this... what does it mean?
-
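One practical takeaway from this example: before shipping a classifier, you can at least measure its error rate per subgroup of users and treat a large gap as a release blocker. Here is a minimal sketch of such an audit; the function, group names, and data are all hypothetical, not Google's actual system:

```python
from collections import defaultdict

def per_group_error_rates(samples):
    """Compute the misclassification rate for each subgroup.

    `samples` is a list of (group, true_label, predicted_label) tuples,
    e.g. a held-out test set annotated with the subgroup each image
    belongs to. A large gap between groups is a red flag that the
    training data under-represents someone.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in samples:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: (subgroup, true label, model output)
test_set = [
    ("group_a", "person", "person"),
    ("group_a", "person", "person"),
    ("group_a", "person", "person"),
    ("group_a", "person", "person"),
    ("group_b", "person", "person"),
    ("group_b", "person", "animal"),
    ("group_b", "person", "animal"),
    ("group_b", "person", "person"),
]

rates = per_group_error_rates(test_set)
print(rates)  # group_b fails far more often than group_a
```

A check like this on a representative test set is cheap compared to the PR disaster it prevents.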
And then, at the same time, you know,
sometimes we seem to have, kind of,
-
intended consequences?
-
I have an example... another example here:
Uber Greyball. I don't know if anyone
-
heard about it. So Uber was very eager to
change regulation and push the services
-
globally as much as possible, and kind of
starting a fight with, you know, all the
-
taxi laws and regulation, and taxi
drivers in the various countries around the
-
world. And what they realized, of course,
is that they didn't really want people to
-
be able to, like, investigate what they
were doing or, like, finding individual
-
drivers. So they built this absolutely
massive operation which was, like, combing
-
data from social media profiles, linking,
like, your credit card and location data
-
to find out if you were working for the
government. And if you did, you would just
-
never find a car. It would just not show
up, right? And that was clearly
-
intentional, all right. So at the same
time they were pushing, like, on the
-
lobbying, political side to change
regulation, while heavily manipulating the
-
people that were pushing to change the
regulation, right? Which is really not a
-
very nice thing to do, I would say.
-
And...
-
The thing that I find, kind of...
worrisome about this:
-
No matter if it's intended or unintended,
is that it actually gets worse, right?
-
The more and more systems we
interconnect, the worse these consequences
-
can get. And I've an example here: So this
is a screenshot I took of Google Maps
-
yesterday and you notice there are, like,
certain locations... So they're kind of
-
highlighted on this map and I don't know
if you knew it but this map and the
-
locations that Google highlights look
different for every single person.
-
Actually, I went again and looked today
and it looked different again. So, Google
-
is already heavily filtering and kind of
highlighting certain places, like, maybe
-
this restaurant over there, if you can see
it. And I would say, like, from just
-
opening the map, that's not obvious to you
that it's doing that. Or that it's trying
-
to decide for you which place is
interesting for you. However, that's
-
probably not such a big issue. But the
same company, Google with Waymo, is also
-
developing this – and they just started
deploying them: self-driving cars. They're...
-
...still a good couple of years away from
actually making it reality, but they are
-
really – in terms of, like, all the others
trying it at the moment – the farthest, I
-
would say, and in some cities they started
deploying self-driving cars. So now, just
-
think like 5, 10 years into the future
and you have signed up in your Google...
-
...self-driving car. Probably you don't
have your own car, right? And you go in
-
the car and it's like: "Hey, Yann, where
do you want to go?" Do you want to go to
-
work? Because, I mean, obviously that's where
I probably go most of the time. Do you
-
want to go to your favorite Asian
restaurant, like the one we just saw on the
-
map? Which is actually not my favorite,
but the first one I went to. So Google
-
just assumed it was. Do you want to go to
another Asian restaurant? Because,
-
obviously, that's all I like. And then
McDonald's. Because, everyone goes there.
-
And maybe the fifth entry is an
advertisement. And you would say: Well,
-
Yann, you know, that's still kind of fine,
but it's OK because I can still click on:
-
'No, I don't want these 5 options, give me,
like, the full map.' But now, we went back
-
here. So, even though you are seeing the
map, you're actually not seeing all
-
the choices, right? Google is actually
filtering for you where it thinks you want
-
to go. So now we have, you know, the car
like this symbol of mobility and freedom
-
that enabled so much change in our society,
and now it's actually reducing the part of
-
the world that you see. And, I
mean, these days they call it AI; I think
-
it's just machine learning, because these
machine learning algorithms all do pattern
-
matching and basically just can recognize
similarities. When you open the map and
-
you zoom in and you select a random place,
it would only suggest places to you where
-
other people have been before. So now the
restaurant that opened around the corner
-
you'll probably not even discover it
anymore. And no one will. And it will
-
probably close. And the only ones that
will stay are the ones that are already
-
established now. And all of that without
being really obvious to anyone who would
-
use the technology. Because it has become
like kind of a black box. So, I do want
-
self-driving cars, I really do. I don't
want a future like this. Right. And if we
-
want to prevent that future, I think we
have to first ask a very simple question,
-
which is: Who is responsible for designing
these products? So, do you know the
-
answer?
audience: inaudible
-
Yann: Say it louder.
audience: We are.
-
Yann: Yeah, we are. Right. That's a really
frustrating thing about it that actually
-
gets us, right, as engineers and
developers. You know we are always driven
-
by perfection. We want to create, like,
the perfect code. Solve one problem,
-
really, really nicely. You know. Chasing the
next challenge over and over trying to be
-
first. But we have to realize that at the
same time we are kind of working on
-
frontier technologies, right, on things,
technology, that are really kind of on the
-
edge of values and norms we have in
society. And if we are not careful and
-
just, like, focus on our small problem and
don't look at the big picture, then we
-
have no say on which side of the coin
the technology will fall. And probably it
-
will take a couple of years, or by that
time we already moved on, I guess. So.
-
It's just that technology has become so
powerful and interconnected and impactful,
-
because we are no longer building stuff
that affects like 10 or 100 people
-
or a city but literally millions of
people, that we really have to take a step
-
back and not only look at the individual
problem as the challenge but also the big
-
picture. And I think if you want to do
that we have to start by asking the right
-
questions. And the first question of
course is: What is good technology? So,
-
that's also the name of the talk.
Unfortunately, I don't have a perfect
-
answer for that. And probably we will
never find a perfect answer for that. So,
-
what I would like to propose is to
establish some guidelines and engineering
-
processes that help us to build better
technology. To kind of ensure, the same
-
way we have quality assurance and
project management systems and processes
-
in companies, that what we build
actually has a net positive outcome for
society. And we call it the good
-
technology standard. We've kind of been
working on that over the last year, and we
-
really wanted to make it really practical.
And what we kind of realized is that if you
-
want to make it practical you have to make
it very easy to use and also mostly,
-
actually what was surprising, just ask the
right questions. So, what is important
-
though, is that if you adopt the standard,
it has to be in all project phases. It has
-
to involve everyone. So, from, like, the
CTO to, like, the product managers to
-
actually legal. Today, legal has this
interesting role, where you develop
-
something and then you're like: Okay, now,
legal, make sure that we can actually ship it.
-
And that's what usually happens. And,
yeah, down to the individual engineer. And
-
if it's not applied globally and people
start making exceptions then of course it
-
won't be worth very much. Generally, we
kind of identified four main areas that we
-
think are important, kind of defining,
in kind of an abstract way, if a product is
-
good. And the first one is empowerment. A
good product should empower its users. And
-
that's kind of a tricky thing. So, as
humans we have very limited decision
-
power. Right? And we are faced with, as I
said before, like, this huge amount of
-
data and choices. So it seems very natural
to build machines and interfaces that try
-
to make a lot of decisions for us. Like
the Google Maps one we saw before. But we
-
have to be careful because if we do that
too much then the machine ends up making
-
all decisions for us. So often, when you
develop something you should really ask
-
yourself, like, in the end if I take
everything together am I actually
-
empowering users, or am I taking
responsibility away from them? Do I
-
respect the individual choice? When users
say: I don't want this, or they give you
-
their preference, do we actually respect
it or do we still try to, you know, just
-
figure out what is better for them. Do my
users actually feel like they benefit from
-
using the product? Actually, that's a question
not a lot of people ask themselves,
-
because usually you think like in terms
of: Are you benefiting your company? And I
-
think what's really pressing in that
aspect: does it help the users, the humans
-
behind it, to grow in any way. If it helps
them to be more effective or faster or do
-
more things or be more relaxed or more
healthy, right, then it's probably positive.
-
But if you can't identify any of these,
then you really have to think about it.
-
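Questions like these only help if they are actually asked during development. One lightweight way to make that happen, sketched below purely for illustration (the published Good Technology Standard defines its own process), is to encode them as a review checklist that a release step can gate on:

```python
# A lightweight review gate: every empowerment question must have a
# recorded justification before a feature ships. Illustrative only;
# the actual standard's questions and process may differ.

EMPOWERMENT_QUESTIONS = [
    "Am I empowering users, or taking responsibility away from them?",
    "Do I respect the individual choice and stated preferences?",
    "Do users actually feel like they benefit from the product?",
    "Does it help the humans behind it to grow in any way?",
]

def review_gate(answers):
    """Return (passed, unanswered).

    `answers` maps each question to a free-text justification; a
    question with no (or blank) justification blocks the release.
    """
    unanswered = [q for q in EMPOWERMENT_QUESTIONS
                  if not answers.get(q, "").strip()]
    return (len(unanswered) == 0, unanswered)

# Hypothetical usage: a design review records an answer per question.
answers = {q: "discussed in design review" for q in EMPOWERMENT_QUESTIONS}
passed, missing = review_gate(answers)
print(passed)  # True once every question has a recorded justification
```

The point is not the code but the habit: the questions become an explicit artifact of the project rather than something one engineer happened to think about.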
And then, in terms of AI, in machine
learning, are we actually kind of
-
impacting their own reasoning so that they
can't make proper decisions anymore. The
-
second one is Purposeful Product Design.
That one has been kind of a
-
pet peeve for me for a really long time.
So these days we have a lot of products
-
that are kind of like this. I don't have
something specifically against Philips
-
Hue, but there seems to be, like, this
trend that is kind of, making smart
-
things, right? You take a product, put a
Wi-Fi chip on it, just slap it on there.
-
Label it "smart", and then you make tons
of profit, right? And a lot of these new
-
products we've been seeing around us,
like, everyone is saying, like, oh yeah,
-
we will have this great interconnected
feature, but most of them are actually not
-
changing the actual product, right, like,
the Wi-Fi connected washing machine today
-
is still a boring washing machine that
breaks down after two years. But it has
-
Wi-Fi, so you can see what it's doing when
you're in the park. And we think we should
-
really think more in terms of intelligent
design. How can we design it in the first
-
place so it's intelligent, not smart. That
the different components interact in a
-
way, that it serves a purpose well, and
the kind of intelligent by design
-
philosophy is: when you start designing your
product, you kind of try to identify the
-
core purpose of it. And based on that, you
just use all the technologies available to
-
rebuild it from scratch. So, instead of
building a Wi-Fi-connected washing machine
-
you would actually try to build a better
washing machine. And if it ends up having
-
Wi-Fi, then that's good, but it doesn't
have to. And along each step actually try
-
to ask yourself: Am I actually improving
washing machines here? Or am I just
-
creating another data point? And yeah, a
good example for that is, kind of, a
-
watch. Of course it's very old
technology, it was invented a long time
-
ago. But back when it was invented it was
for something you could have on your arm
-
or in your pocket in the beginning and it
was kind of a natural extension of
-
yourself, right, that kind of enhances
your senses because it's never in the way, you
-
don't really feel it. But when you need it
it's always there and then you can just
-
look at it and you know the time. And that
profoundly changed how, like, we humans
-
actually worked in society, because now we
could agree to meet in the same place at the
-
same time. So, when you build a new
product try to ask yourself what is the
-
purpose of the product, who is it for.
Often I talk to people and they talk to me
-
for one hour about, like, literally the
details of how they solved the problem, but
-
they can't tell me who their customer is.
Then does this product actually make
-
sense? Do I have features that
distract my users and that I maybe just don't
-
need. And can I find more intelligent
solutions by kind of thinking outside of
-
the box and focusing on the purpose of it.
And then of course what is the long term
-
product vision like, where do we want this
to go? This kind of technology I'm
-
developing in the next years. The next one
is kind of, Societal Impact, that goes
-
into what I talked about in the beginning
with all the negative consequences we have
-
seen. A lot of people these days don't
realize that even if you're, like, in a
-
small start up and you're working on, I
don't know, a technology, or robots, or
-
whatever. You don't know if your
algorithm, or your mechanism, or whatever
-
you build, will be used by 100 million
people in five years. Because this has
-
happened a lot, right? So, already when
starting to build it you have to think: If
-
this product would be used by 10 million,
100 million, maybe even a billion people, like
-
Facebook, would it have negative
consequences? Right, because then you get
-
completely different effects in society,
completely different engagement cycles and
-
so on. Then, are we taking advantage of
human weaknesses? So this is arguably
-
something specific to today's technology. A
lot of products these days kind of try to
-
hack your brain, because we understand
really well how, like, engagement works
-
and addiction. So a lot of things, like
social networks, actually have been
-
focusing on that, you know, built by
engineers trying to get a
-
little number from 0.1% to 0.2%, which can mean
that you just do extensive A/B testing and
-
create an interface that no one can stop
looking at. You just continue scrolling,
-
right? You just continue, and two hours
have passed and you haven't actually
-
talked to anyone. And this attention
grabbing is kind of an issue and we can
-
see that Apple actually now implemented
screen time and they actually tell you how
-
much time you spend on your phone. So
there's definitely ways to build
-
technology that even helps you to get away
from these. And then for everything that
-
involves AI and machine learning, you
really have to take a really deep look at
-
your data sets and your algorithms because
it's very, very easy to build in biases
-
and discrimination. And again, if it is
applied to all of society, many people who
-
are less fortunate, or more fortunate, or
they're just different, you know they just
-
do different things, kind of fall out of
the grid and now suddenly they can't,
-
like, [unintelligible] anymore. Or use
Uber, or Air B'n'B, or just live a normal
-
life, or do financial transactions. And
then, kind of what I said in the
-
beginning, not only look at your product
but also, if you combine it with other
-
technologies that are upcoming, are there
certain combinations that are dangerous?
-
And for that I kind of recommend to do,
like, some technology litmus test to just
-
try to come up with the craziest scenario
that your technology could entail. And if
-
it's not too bad then, probably good. The
next thing is, kind of, sustainability. I
-
think in today's world it really should be
part of a good product, right. The first
-
question is of course kind of obvious. Are
we limiting product lifetime? Do we maybe
-
have planned obsolescence? Or do we
build something that is so dependent on so
-
many services, and we're only going to
support it for one year anyway, that
-
basically it will have to be thrown in the
trash afterwards. Maybe it would be
-
possible to add a standalone node or a
very basic fallback feature so that at
-
least the products continues to work.
Especially if you talk about things like
-
home appliances. Then, what is the
environmental impact? A good example here
-
would be, you know, cryptocurrencies, which
are now using as much energy as certain
-
countries. And when you consider that just
think like is there maybe an alternative solution
-
that doesn't have such a big impact. And
of course we still live in capitalism, it has
-
to be economically viable. But often there
are alternatives; often it's again just really small
-
tweaks. Then of course: Which other
services are you working with? But for
-
example I would say, like, as european
companies, we're in Europe here, maybe try
-
to work mostly with suppliers from Europe,
right, because you know they follow GDPR
-
and strict rules, instead of the US.
Or check your supply chain if you build
-
hardware. And then for hardware
specifically, because, like,
-
we also do hardware in my company, I
always found this interesting: We're kind
-
of in a world where everyone tries to
save, like, the last little bit of money
-
out of every device that is built and
often the difference between plastic
-
and metal screws is like half a cent, right.
And at that point it doesn't really change
-
your margins much. And maybe as an
engineer, you know, just say no and say:
-
You know, we don't have to do that. The
savings are too small to redesign
-
everything, and it will impact our
quality so much that it just breaks
-
earlier. These are kind of the main four
points. I hope that makes sense. Then we
-
have two more, kind of, additional
checklists. The first one is data
-
collection. So really, especially
in terms of, like, IoT, you know,
-
everyone focuses on kind of collecting as
much data as possible without actually
-
having an application. And I think we
really have to start seeing that as a
-
liability. And instead try to really
define the application first, define which
-
data we need for it, and then really just
collect that. And we can start collecting
-
more data later on. And that can really
prevent a lot of these negative cycles we
-
have seen. By just having machine learning
algorithms run on it kind of
-
unsupervised and seeing what comes out.
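The "define the application first, then collect only what it needs" idea can be made mechanical: keep an explicit allow-list of fields per application and drop everything else at ingestion. A minimal sketch, with all field and application names made up for illustration:

```python
# Data minimization: collect only the fields the declared application
# actually needs, and drop everything else at the point of ingestion.
# All field and application names here are hypothetical.

ALLOWED_FIELDS = {
    # Application -> the minimal set of fields it was designed around.
    "heating_control": {"room_temperature", "timestamp"},
    "usage_stats": {"device_id", "timestamp"},
}

def minimize(application, raw_event):
    """Return a copy of raw_event containing only allow-listed fields."""
    allowed = ALLOWED_FIELDS.get(application, set())
    return {k: v for k, v in raw_event.items() if k in allowed}

event = {
    "room_temperature": 21.5,
    "timestamp": "2018-12-27T12:00:00Z",
    "occupant_location": (52.52, 13.40),  # privacy-intrusive, not needed
}

print(minimize("heating_control", event))
# The occupant's location never enters storage.
```

Because the allow-list is data, adding a new application later forces an explicit decision about which extra fields it may collect, instead of silently inheriting everything.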
Then also kind of really interesting I
-
found that, many times, like, a lot of
people are so fascinated by the amount of
-
data, right, just try to have as many data
points as possible. But very often you can
-
realize exactly the same application with a
fraction of data points. Because what you
-
really need is, like, trends. And that
usually also makes the product more
-
efficient. Then how privacy intrusive is
the data we collect? Right. There's a big
-
difference between, let's say, the
temperature in this building and
-
everyone's individual movements here. And
if it is privacy intrusive then we should
-
really, really think hard if we want to
collect it. Because we don't know how it
-
might be used at a later point. And then,
are we actually collecting data without
-
people realizing it, right,
especially if we look at Facebook and
-
Google. They're collecting a lot of data
without really explicit consent. But of
-
course at some point you like all agreed
to the privacy policy. But it's often not
-
clear to you when and which data is
collected. And that's kind of dangerous
-
and kind of in the same way if you kind of
build dark patterns into your app. They
-
kind of fool you into sharing even more
data. I had, like, an example that someone
-
told me yesterday. I don't know if you know
Venmo which is this American system where
-
you pay each other with your smartphone.
Basically to split the bill in a
-
restaurant. By default, all transactions
are public. So, like 200 million public
-
transactions which everyone can see,
including the description of it. So for
-
some of the more maybe not so legal
payments that was also very obvious,
-
right? And it's totally not obvious when
you use the app that that is happening. So
-
that's definitely a dark pattern that
they're employing here. And then the next
-
point is User Product Education and
Transparency. Is a user able to understand
-
how the product works? And, of course, we
can't really ever have a perfect
-
explanation of all the intricacies of the
technology. But these days for most people
-
almost all of the apps, the interfaces,
the underlying technology and tech are
-
a complete black box, and no one is really
making an effort to explain it to them, while
-
most companies advertise it like this
magical thing. But that just leads to kind
-
of this resignation where you just look at
it and you don't even try to understand
-
it. I'm pretty sure that no one ever,
like, these days is still opening up a PC
-
and trying to look at the components,
right, because everything is a tablet and
-
it's integrated and it's sold to us like
this magical media consumption machine.
-
Then, are users informed when decisions
are made for them? So we had that in
-
Empowerment, that we should try to reduce
the amount of decisions we make for the
-
user. But sometimes, that's a good thing
to do. But then, is it transparently
-
communicated? I would be totally fine with
Google Maps filtering out for me the
-
points of interest if it would actually
tell me that it's doing that, and if I
-
could understand why it made that decision
and why it showed me this place. And maybe
-
also have a way to switch it off if I
want. But today we seem to kind of assume
-
that we know better than the people,
that we found the perfect algorithm
-
that has a perfect answer. So we don't
even have to explain how it works, right?
-
We just do it and people will be happy.
But what we then end up with is very negative
-
consequences. And then, that's more like a
marketing thing, how is it actually
-
advertised? I find it, for example, quite
worrisome that things like Siri and
-
Alexa and Google home are, like, sold as
these magical AI machines that make your
-
life better, and are your personal
assistant. When in reality they are
-
actually still pretty dumb pattern
matching. And that also creates a big
-
disconnect. Because now we have children
growing up who actually think that Alexa
-
is a person. And that's kind of dangerous.
And I think we should try to prevent that
-
because for these children, basically, it
kind of creates this veil and the machine is
-
humanized. And that's especially dangerous
if then the machine starts to make
-
decisions for them. And suggestions
because they will take them as if a human
-
did it for them. So,
these are kind of the main areas. Of course
-
it's a bit more complicated. So we just
published the standard today in the first
-
draft version. And it's basically three
parts: an introduction, kind of the
-
questions and checklists that you just saw.
And then actually how to implement it in
-
your company, which processes to have, at
which point you basically should have
-
kind of a feature gate. And I would kind of
ask everyone to go there, look at it,
-
contribute, share it with people. We hope
that we'll have a final version ready kind
-
of in Q1 and that by then people can start
to implement it. Oh, yeah. So, even though
-
we have this standard, right, I want to
make it clear having such a standard and
-
implementing it in your organization or
for yourself or your product is great. It
-
actually doesn't remove your
responsibility, right? This can only be
-
successful if we actually all accept that
we are responsible. Right? If today I
-
build a bridge as a structural engineer
and the bridge breaks down because I
-
miscalculated, I am responsible. And I
think, equally, we have to accept that if
-
we build technology like this we also have
to, kind of, assume that responsibility.
-
And before we kind of move to Q&A, I'd
like to kind of share this quote. This
-
is Chamath Palihapitiya, former Facebook
executive, from the really early times.
-
And also, around a year ago when we
actually started the GTC he said this in a
-
conference: "I feel tremendous guilt. I
think in the back, in the deep recesses
-
of our mind we knew something bad could
happen. But I think the way we defined it
-
is not like this. It is now literally at a
point where I think we have created
-
tools that are ripping apart the social
fabric of how society works." And
-
personally, and I hope the same for you, I
do not want to be that person that five
-
years down the line realizes that they
built that technology. So if there is one
-
take-away that you can take home from this
talk, then to just start asking yourself:
-
What is good technology, what does it mean
for you? What does it mean for the
-
products you build and what does it mean
for your organization? Thanks.
-
applause
-
Herald: Thank you. Yann Leretaille. Do we
have questions in the room? There are
-
microphones, microphones number 1, 2, 3,
4, 5. If you have a question please speak
-
loud into the microphone, as the people in
the stream want to hear you as well.
-
I think microphone number 1 was the fastest.
So please.
-
Question: Thank you for your talk. I just
want to make a short comment first and
-
then ask a question. I think this last
thing you mentioned about offering users
-
the options to have more control of the
interface there is also a problem that
-
users don't want it. Because when you look
at the statistics of how people use online
-
web tools, only maybe 5 percent of them
actually use that option. So companies
-
remove them because for them it seems like
it's something not so efficient for user
-
experience. This was just one thing to
mention and maybe you can respond to that.
-
But what I wanted to ask you was, that all
these principles that you presented, they
-
seem to be very sound and interesting and
good. We can all accept them as
-
developers. But how would you propose to
actually sell them to companies. Because
-
if you adopt a principle like this as an
individual based on your ideology or the
-
way that you think, okay, it's great it
will work, but how would you convince a
-
company which is driven by profits to
adopt these practices? Have you thought of
-
this and what's your idea about this?
Thank you.
-
Yann: Yeah. Maybe to the first part.
First, that giving people choice is
-
something that people do not want and
that's why companies removed it. I think
-
if you look at the development process
it's basically like a huge cycle of
-
optimization and user testing geared
towards a very specific goal, right, which
-
is usually set by leadership which is,
like, bringing engagement up or increase
-
user amount by 200 percent. So I would say
the goals were, or are today, mostly
-
misaligned. And that's why we end up with
interfaces that are in a very certain way,
-
right? If we set the goals
differently, and I mean that's why we have
-
like UI and UX research. I'm very sure we
can find ways to build interfaces that are
-
just different. And still engaging, but
also give that choice. To the second
-
question. I mean it's kind of interesting.
So I wouldn't expect a company like Google
-
to implement something like this, because
it's a bit against their interests. That's beside
-
the point probably, but I've met a lot of,
like, also high-level executives already,
-
who were actually very aware of kind of
the issues of technology that they built.
-
And there is definitely interest there.
Also, on the more industrial side, and so
-
on, especially, it seems, in areas like self-driving
cars, there is interest to actually adopt that. And in the
-
end I think, you know, if everyone
actually demands it, then there's a pretty
-
high probability that it might actually
happen. Especially, as workers in the tech
-
field, we are quite flexible in the
selection of our employer. So I think if
-
you give it some time, that's definitely
something that's very possible. The second
-
aspect is that, actually, if we looked at
something like Facebook, I think they
-
overdid it. They, like, optimized it so far and
pushed the engagement machine, kind of
-
triggering, like, your brain cells to
never stop being on the site and keep
-
scrolling, that people got too much of it.
And now they're leaving the platform in
-
droves. And of course Facebook would not
go down, they own all these other social
-
networks. But for the product itself, as
you can see, long term it's not even
-
necessarily a positive business outcome.
And with everything we are advertising here
-
you can still have very profitable businesses,
right, just by tweaking the right screws.
-
Herald: Thank you. We have a question from
the interweb.
-
Signal Angel: Yes. Fly asks a question
-
that goes into a similar direction. In
recent months we had numerous reports
-
about social media executives forbidding
their children to use the products they
-
create at work. I think these people know
that their products are made addictive
-
deliberately. Do you think your work is
somewhat superfluous because big companies
-
are doing the opposite on purpose?
Yann: Right. I think that's why you have
-
to draw the line between intentional and
unintentional. If we go to intentional
-
things like what Uber did and so on. At
some point it should probably become a
-
legal issue. Unfortunately we are not
there yet and usually regulation is kind
-
of lagging way behind. So I think for now
we should focus on, you know, the more
-
unintentional consequences, of which there
are plenty, and kind of appeal to the
-
good in humans.
Herald: Okay. Microphone number 2 please.
-
Q: Thank you for sharing your ideas about
educating the engineer. What about
-
educating the customer, the consumer who
purchases the product.
-
Yann: Yeah. So that's a really valid
point. Right. As I said I think
-
[unintelligible] like part of your product
development. And the way you build a
-
product should also be how you educate
your users on how it works. Generally, we
-
have a really big kind of technology
illiteracy problem. Things have been
-
moving so fast in the last year that most
people haven't really caught up and they
-
just don't understand things anymore. And
I think again that's like a shared
-
responsibility, right? You can't just do
that in the tech field. You have to talk
-
to your relatives, to people. That's why
we're doing, like, this series of articles
-
and media partnerships to kind of explain
and make these things transparent. One
-
thing we just started working on is a
children's book. Because for children,
-
like, the entire world just exists with
these shiny glass surfaces and they don't
-
understand at all what is happening. But
it's also prime time to explain to them,
-
like, really simple machine learning
algorithms. How they work, how like,
-
filterbubbles work, how decisions are
made. And if you understand that from an
-
early age on, then maybe you'll be able to
deal with what is happening in a
-
better, more educated way. But I do think
that is a very long process, and so the earlier
-
we start and the more work we invest in
that, the sooner people will be better
-
educated.
Herald: Thank you. Microphone number 1
-
please.
Q: Thanks for sharing your insights. I
-
feel like, while you presented these rules
along with their meaning, the specific
-
selection might seem a bit arbitrary. And
for my personal acceptance and willingness
-
to implement them it would be interesting
to know the reasoning, besides common
-
sense, that justifies this specific
selection of rules. So, it would be
-
interesting to know if you looked at
examples from history, or if you just sat
-
down and discussed things, or if you just
grabbed some rules out of the air. And so
-
my question is: What influenced you for
the development of these specific rules?
-
Yann: It's a very complicated question. So
how did we come up with this specific selection
-
of rules and also, like, the main building
blocks of what we think good
-
technology should be. Well, let's say first what
we didn't want to do, right. We didn't
-
want to create like a value framework and
say, like, this is good, this is bad,
-
don't do this kind of research or
technology. Because this would also be
-
outdated, it doesn't apply to everyone. We
probably couldn't even agree in the expert
-
council on that because it's very diverse.
Generally, we try to get everyone at the
-
table. And we talked about issues we had,
like, for example me as an entrepreneur, when
-
I was, like, developing products with
our own engineers. Issues we've seen in terms
-
of public perception. Issues we've seen,
like, on a more governmental level. Then
-
we also have, like, cryptologists in
there. So we looked at that as well and
-
then we made a really, really long list
and kind of started clustering it. And a
-
couple of things did get cut off. But
generally, based on the clustering, these
-
were kind of the main themes that we saw.
And again, it's really more of a tool for
-
yourself as a company, for developers,
designers and engineers, to really
-
understand the impact and evaluate it. Right.
This is what these questions are
-
aimed at. And we think that for that they
do a very good job.
-
From microphone 1: Thank you.
Herald: Thank you. And I think. Microphone
-
number 2 has a question again.
Q: Hi. I was just wondering how you've
-
gone about engaging with other standards
bodies, that perhaps have a wider
-
representation. It looks largely, from
the makeup of your council currently, that
-
there's not necessarily a lot of
engagement outside of Europe. So how do
-
you go about getting representation from
Asia. For example.
-
Yann: No, at the moment you're correct, the
GTC is mostly a European initiative. We
-
are in talks with other organizations who
work on similar issues and regularly
-
exchange ideas. But, yeah, we thought we
should probably start somewhere. Europe
-
is actually a really good place to start,
like, a societal discourse about technology
-
and the impact it has and also to have
change. And I think if you for example
-
compare it to places like Asia or the US,
where there is a very different perception of
-
privacy and technology and progress and,
like, the rights of the individual, Europe
-
is actually a really good place to do
that. And we can also see things like GDPR
-
regulation, that actually, ... It's kind
of complicated. It's also kind of a big
-
step forward in terms of protecting the
individual from exactly these kind of
-
consequences. Of course though, long term
we would like to expand this globally.
-
Herald: Thank you. Microphone number 1
again.
-
Q: Hello. Just a short question. I
couldn't find a donate button on your
-
website. Do you accept donations? Is money
a problem? Like, do you need it?
-
Yann: Yes, we do need money. However it's
a bit complicated because we want to stay
-
as independent as possible. So we are not
accepting project-related money. So you can't,
-
like, say we want to do a certain research
project with you, it has to be
-
unconditional. And the second thing we do
is for the events we organize. We usually
-
have sponsors that provide, like, venue
and food and logistics and things like
-
that. But that's, ... only for the event.
And again, they can't, like, change the
-
program of it. So if you want to do that
you can come into contact with us. We
-
don't have a mechanism yet for individuals
to donate. We might add that.
-
Herald: Thank you. Did you think about
Patreon or something like that?
-
Yann: We thought about quite a few
options. Yeah, but it's actually not so
-
easy to not fall into the trap that,
like, other organizations in this space have fallen into,
-
where, like, Google at some point sweeps in and
it's like: Hey, do you want all this cash?
-
And then very quickly you have a big
conflict of interest. Even if you don't
-
want that to happen it starts happening.
Herald: Yeah right. Number 1 please.
-
Q: I was wondering how do you unite the
second and third points in your checklist.
-
Because the second one is intelligence by
design. The third one is to take into
-
account future technologies. But companies
do not want to push back their
-
technologies endlessly to take into
account future technologies. And on the
-
other hand they don't want to compromise
their own design too much.
-
Yann: Yeah. Okay. Okay. Got it. So you
were saying, if we should always think through
-
these, like, future scenarios and the
worst case and everything and incorporate
-
every possible thing that might happen in
the future we might end up doing nothing
-
because everything looks horrible. For
that I would say, like, we are not like
-
technology haters. We are all from areas
working in tech. So of course the idea is
-
that you can just take a look at what is
there today and try to make an assessment
-
based on that. And the idea is, if you take
it up and meet the standard, that over
-
time you actually try, ... when you add
new major features, to look back at your
-
assessment from before and see if it
changed. So the idea is you kind of create
-
a snapshot of how it is now. And this kind
of document that you end up with as part of
-
your documentation kind of evolves over
time as your product changes and the
-
technology around it changes as well.
Herald: Thank you. Microphone number 2.
-
Q: So thanks for the talk and especially
the effort. Just to echo back the
-
question that was asked a bit before on
starting with Europe. I do think it's a
-
good option. What I'm a little bit worried
about is it might be the only option. It might
-
become irrelevant rather quickly, because
it's easy to do, it's less hard to
-
implement, maybe, in Europe now. Okay, the
question is: It might work in Europe now,
-
but if Europe doesn't have the same
economic power it cannot weigh in as much
-
politically with let's say China or the US
in Silicon Valley. So will it still be
-
possible and relevant if the economic
balance shifts?
-
Yann: Yes, I mean we have to start
somewhere, right? Just saying "Oh,
-
economic balance will shift anyway,
Google will invent the singularity, and that's
-
why we shouldn't do anything" is, I think,
one of the reasons why we actually got
-
here: it's kind of this assumption
that there is, like, this really big picture
-
that is kind of working against us, so we
all do our small part to fulfill that
-
kind of evil vision by not doing anything.
I think we have to start somewhere and I
-
think for having operated for one year, we
have been actually quite successful so far
-
and we have made good progress. And I'm
totally looking forward to making it a bit
-
more global and to start traveling more. We had, I
think, like, one event outside Europe
-
last year in the US and that will
definitely increase over time, and we're
-
also working on making kind of our
ambassadors more mobile and kind of expanding
-
to other locations. So it's definitely on
the roadmap but it's not like yeah, just
-
staying here. But yeah, you have to start
somewhere and that's what we did.
-
Herald: Nice, thank you. Number 1 please.
Mic 1: Yeah. One thing I haven't found was
-
how all those general rules you formulated
fit into the more general rules of
-
society, like the constitutional rules.
Have you considered that and it's just not
-
clearly stated and will be stated, or did
you develop them more from the bottom up?
-
Yann: Yes, you are completely right. So we
are defining the process and the questions
-
to ask yourself, but we are actually not
defining a value framework. The reason for
-
that is that societies are different; as I
said, there are widely different
-
expectations towards technology, privacy,
how society should work, all of the
-
above. The second one is that every
company is also different, right, every
-
company has their own company culture and
things they want to do and they don't want
-
to do. If, for example, we
had put in there "You should not
-
build weapons" or something like that,
right, that would mean that all these
-
companies that work in that field couldn't
try to adopt it. And while I don't want
-
them to build weapons maybe in their value
framework that's OK and we don't want to
-
impose that, right. That's why I said in
the beginning we actually, we're called
-
the Good Technology Collective, we are not
defining what it is and I think that's
-
really important. We are not trying to
impose our opinion here. We want others to
-
decide for themselves what is good, and
we want to support them and guide them in
-
building products that they believe are
good.
-
Herald: Thank you. Number two.
Mic 2: Hello, thanks for sharing. As
-
engineers we always want users to spend
more time using our product, right? But
-
I'm working at a mobile game company. Yep.
We are making, we are making a world where
-
users love our product. So we want users to
spend more time in our game. So we may
-
make a lot of money, yeah, but when users
spend time playing our game they may lose
-
something. Yeah. You know. So how do we
think about the balance in a game, a mobile
-
game? Yeah.
Yann: Hmm. It's a really difficult
-
question. So the question was like
specifically for mobile gaming. Where's
-
kind of the balance between trying to
engage people more and, yeah, basically
-
making them addicted and having them spend
all their money, I guess. I personally
-
would say it's about intent, right? It's
totally fine to have a business model where
-
you make money with a game. I mean that's
kind of good and people do want
-
entertainment. But if you actively use,
like, research in how, like, you know,
-
like the brain actually works and how it
gets super engaged, and if you basically
-
build in, like, gamification and
lotteries, which a lot of them, I think, have
-
done, where basically your game becomes a
slot machine, right, you always want to
-
see the next opening of a crate
and see what you got. Kind of making it a
-
luck-based game, actually. I think if you
go too far into that direction, at some
-
point you cross the line. Where that line
is you have to decide yourself, right,
-
some of it could be good game
dynamics, but there are definitely some games
-
out there where, I would say, there is quite a reason
to say that they pushed the limit quite
-
a bit too far. And if you actually look
at how they did it, because they wrote about
-
it, they actually did use very modern
research and very extensive testing to
-
really find out these, all these patterns
that make you addicted. And then it's not
-
much better than an actual slot machine.
And that probably we don't want.
-
Herald: So it's also an ethical question
for each and every one of us, right?
-
Yann: Yes.
Herald: I think there is a light and I
-
think this light means the interwebs has a
question.
-
Signal angel: There's another question
from ploy about practical usage, I guess.
-
Are you putting your guidelines to work in
your company? You said you're an
-
entrepreneur.
Yann: That's a great question. Yes, we
-
will. So we kind of just completed them
and it was kind of a lot of work to get
-
there. Once they are finished and released
we will definitely be one of the first
-
adopters.
Herald: Nice. And with this I think we're
-
done for today.
Yann: Perfect.
-
Herald: Yann, people, warm applause!
-
applause
-
postroll music
-
subtitles created by c3subtitles.de
in the year 2020. Join, and help us!