-
rC3 preroll music
-
Herald: All right, CWA - three simple
letters, but what stands behind them is
-
not simple at all. For various reasons. The
Corona-Warn-App has been one of the most
-
talked-about digital projects of the year.
Behind its rather simplistic facade there
-
are many considerations that went into the
App's design to protect its users and
-
their data. While they might not be
visible to most users, these goals had a
-
direct influence on the software
architecture. For instance, the risk
-
calculation. Here today to talk about some
of these backend elements is one of the
-
solution architects of the Corona-Warn-App
- Thomas Klingbeil. And I'm probably not
-
the only one here at rC3, who is an active
user. And I'm pretty curious to hear more
-
about what's going on behind the scenes of
the App. So without further ado, let's
-
give a warm virtual welcome to Thomas
Klingbeil. Thomas, the stream is yours.
-
Thomas Klingbeil: Hello, everybody. I'm
Thomas Klingbeil, and today in the
-
session, I would like to talk about the
German Corona-Warn-App and give you a
-
little tour behind the scenes of the App
development, the underlying technologies
-
and which things are invisible to the end
user, but still very important for the App
-
itself. First, I would like to give you a
short introduction to the App, the
-
underlying architecture and the technologies
used, for example, the Exposure
-
Notification Framework. Then I would like
to have a look at the communication
-
between the App and the backend and
look at which possible privacy threats
-
could be found and how we mitigated them,
of course. And then I would like to dive a
-
little bit into the risk calculation of
the App to show you what it actually
-
means if there is a red or a green
screen visible to the end user. First of
-
all, we can ask ourselves the question,
what is the Corona-Warn-App, actually? So,
-
here it is. This is the German Corona-Warn-App,
you can download it from the App stores and
-
once you have onboarded onto the App, you
will see the following: up here it shows
-
you that the exposure logging is active,
which means this is the currently active
-
App. Then you have this green card. Green
means it's low risk because there have
-
been no exposures so far. The logging has
been permanently active and it has just
-
updated this afternoon. So everything is
all right. Let's say you have just been
-
tested at a doctor's, then you could click
this button here and you get to the
-
screen, where you're able to retrieve your test
result digitally. To do this, you can scan
-
a QR code, which is on the document you
received from your doctor, and then you
-
will get an update as soon as the test
result is available. Of course, you can
-
also get more information about the active
exposure logging when you click the button
-
up here. Then you get to this screen, and
there you can learn more about the
-
transnational exposure logging, because
the German Corona-Warn-App is not alone.
-
It is connected to other Corona-Apps of
other countries within Europe. So users
-
from other countries can meet and they
would be informed mutually about possible
-
encounters. So just to be sure, I would
like to quickly dive into the terminology
-
of the exposure notification framework. So
you know what I'm talking about during
-
this session. It all starts with a
Temporary Exposure Key which is generated
-
on the phone and which is valid for 24
hours. From this Temporary Exposure Key,
-
several things are derived. First, for
example, there is the Rolling Proximity
-
Identifier Key and the Associated
Encrypted Metadata Key. This part down
-
here, we can skip for the time being and
look at the generation of Rolling
-
Proximity Identifiers. Those Rolling
Proximity Identifiers are only valid for
-
10 minutes each because they are regularly
exchanged once the Bluetooth MAC-Address
-
change takes place. So the Rolling
Proximity Identifier is basically the
-
Bluetooth payload your phone uses when
the Exposure Notification Framework is
-
active and broadcasting. When I say
broadcasting, I mean every 250
-
milliseconds your phone sends out its own
Rolling Proximity Identifiers, so other
-
phones around, which are scanning for
signals in the air, can basically catch them
-
and store them locally. So let's look at
the receiving side. This is what we see
-
down here and now, as I've already
mentioned, we've got those Bluetooth low
-
energy beacon mechanics sending out those
Rolling Proximity Identifiers and they're
-
received down here. This is all a very
simplified schematic, just to give you an
-
impression of what's going on there. So
now we've got those Rolling Proximity
-
Identifiers stored on the receiving phone
and now, somehow, this other phone needs
-
to find out that there has been a match.
This happens by transforming those
-
Temporary Exposure Keys into Diagnosis
Keys, which is just a renaming. But as
-
soon as someone has tested positive and a
Temporary Exposure Key is linked to a
-
positive diagnosis, it is called a Diagnosis
Key, and these are uploaded to the server.
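A rough sketch of the derivation described here, following the published Exposure Notification cryptography specification (simplified; the framework does all of this internally, and the Kotlin names below are only illustrative):

    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import javax.crypto.Cipher
    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Temporary Exposure Key -> Rolling Proximity Identifier Key via HKDF-SHA256,
    // then one Rolling Proximity Identifier per 10-minute interval via AES-128.
    fun hkdfSha256(ikm: ByteArray, info: ByteArray, length: Int): ByteArray {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(ByteArray(32), "HmacSHA256"))  // extract step, zero salt
        val prk = mac.doFinal(ikm)
        mac.init(SecretKeySpec(prk, "HmacSHA256"))            // expand step, single block
        mac.update(info)
        mac.update(byteArrayOf(0x01))
        return mac.doFinal().copyOf(length)
    }

    fun rollingProximityIdentifiers(tek: ByteArray, intervals: IntRange): List<ByteArray> {
        val rpik = hkdfSha256(tek, "EN-RPIK".toByteArray(), 16)
        val aes = Cipher.getInstance("AES/ECB/NoPadding")
        aes.init(Cipher.ENCRYPT_MODE, SecretKeySpec(rpik, "AES"))
        return intervals.map { interval ->
            val padded = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN)
                .put("EN-RPI".toByteArray())   // 6-byte label
                .put(ByteArray(6))             // zero padding
                .putInt(interval)              // 10-minute interval number
                .array()
            aes.doFinal(padded)                // 16-byte identifier broadcast over BLE
        }
    }

On the receiving side, deriving the identifiers for every interval covered by a downloaded Diagnosis Key and intersecting them with the locally stored identifiers is essentially the matching step described next.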
-
And I'm drastically simplifying here. So
they are received by the other phone here: they're
-
downloaded, and all those Diagnosis Keys are
extracted again. And as you can see, the
-
same functions are applied, again HKDF, then
AES, and we get a lot of Rolling Proximity
-
Identifiers for matching down here. And
those are the ones we have stored and now
-
we can match them and find out which of
those Rolling Proximity Identifiers we
-
have seen so far. And, of course, the
receiving phone can also make sure that
-
the Rolling Proximity Identifiers
belonging to a single Diagnosis Key, which
-
means they belong to one single other
phone, are connected to each other. So we
-
can also track exposures which have lasted
longer than 10 minutes. So, for example,
-
if you are having a meeting of 90 minutes,
this would allow the exposure
-
notification framework to get together
those up to nine Rolling Proximity
-
Identifiers and transform them into a
single encounter, which is then
-
enriched with those associated encrypted
metadata, which is basically just the
-
transmit power. As a summary, down here. So
now that we know which data are being
-
transferred from phone to phone, we can
have a look at the actual architecture of
-
the App itself. This gray box here is the
mobile phone, and down here is the German
-
Corona-Warn-App, it's a dashed line, which
means there's more documentation available
-
online. So I can only invite you to go to
the GitHub repository, have a look at our code
-
and, of course, our documentation. So
there are more diagrams available. And as
-
you can see, the App itself does not store
a lot of data. So those boxes here are
-
storages. So it only stores something
called a Registration Token and the
-
contact journal entries for our most
recent version, which means that's all the
-
App stores itself. What you can see here
is that it's connected to the operating
-
system API/SDK for the exposure
notifications, so that's the exposure
-
notification framework to which we
interface, which takes care of all the key
-
connecting, broadcasting and key matching
as well. Then there's a protocol buffer
-
library which we need for the data
transfer, and we use the operating system
-
cryptography libraries or, basically, the
SDK. So we don't need to include external
-
libraries for that. What you can see here
is the OS API/SDK for push messages. But
-
this is not remote push messaging, but
only local. So the App triggers local
-
notifications and to the user, it appears
as if the notification or push message
-
came in remotely, but actually it only
uses local messages. But what would the
-
App be without the actual backend
infrastructure? So you can see here,
-
that's the Corona-Warn-App server, that's
the actual backend for managing all the
-
keys. And you see the upload path here.
The keys are aggregated, then provided through a
-
content delivery network and downloaded by
the App here. But we've got more. We've
-
got the verification server, which has the
job of verifying a positive test result.
-
And how does it do that? There are basically
two ways. It can either get the information
-
that a positive test is true through a so-
called teleTAN, which is the most basic
-
way, because people call up the hotline,
get one of those teleTANs, enter it into the
-
App and then they are able to upload the
Diagnosis Keys or, if people use the fully
-
digital way, they get their test result
through the App. And that's why we have
-
the test results server up here, which can
be queried by the verification server
-
so users can get the test result through
the infrastructure. But that's not all,
-
because as I've promised earlier, we've also
got the connection to other European
-
countries. So down here is the European
Federation Gateway Service, which gives us
-
the possibility to a) upload our own
national keys to this European Federation
-
Gateway Service, so other countries can
download them and distribute them to their
-
users, but we can also request foreign
keys and, it gets even better, we can be
-
informed if new foreign keys are available
for download through a callback mechanism,
-
which is just here on the right side. So
once the app is communicating with the
-
backend, what would actually happen if
someone is listening? So we've got our
-
dataflow here. And let's have a look at
it, so in step one, we are actually
-
scanning the QR code with the camera of the
phone and extracted from the QR code would
-
be a GUID, which is then fed into the
Corona-Warn-App. You can see here it is
-
never stored within the app. That's very
important, because we wanted to make sure
-
that as little information as possible needs
to be stored within the app and also that
-
it's not possible to connect information
from different sources, for example, to
-
trace back a Diagnosis Key to a GUID to
allow personification. It was very
-
important that this step is not possible.
So we had to take care that no data is
-
stored together and data cannot be
connected again. So in step one, we get
-
this GUID. And this is then hashed on the
phone before being sent to the verification
-
server, which in step three generates a
so-called Registration Token and stores it
-
together. So it stores the hash(GUID) and
the hash(Registration Token), making sure
-
that the GUID can only be used once, and
returns the unhashed Registration Token to
-
the App here. Now the App can store the
Registration Token and use it in step five
-
for polling for test results, but the test
results are not available directly on the
-
verification server, because we do not
store them here. But the verification server
-
connects to the test results server by
using the hash(GUID), which it can get from
-
the hash(Registration Token) here, and
then it can ask the test results server. And
-
the test results server might have a data
set connecting the hash(GUID) to the test
-
result. And this check needs to be done
because the test results server might also
-
have no information for this hash(GUID),
and this only means that no test result
-
has been received yet. This is what happens
here in step A: the Lab Information
-
system, the LIS, can supply the
test results server with a package of
-
hash(GUID) and the test result - so it's
stored there. And if it's available already
-
on the test results server, it is returned to the
verification server here in step 7 and
-
accordingly in step 8 to the App. You
might have noted that the test result is also
-
neither cached nor stored here on the
verification server, which means if the
-
user then decides to upload the keys, a
TAN is required to be passed on to the backend
-
for verification of the positive test. A
similar flow needs to be followed. So in
-
step 9, again, the Registration Token
is passed to the TAN endpoint; the
-
verification server once more needs to
check with the test results server
-
that it's actually a positive test result.
The result gets back here in step 11, and a TAN is
-
generated in step 12. You can see the TAN
is not stored in plaintext, but it's
-
stored as a hash, but the plaintext is
returned to the App, which can
-
then bundle it with Diagnosis Keys
extracted from the exposure notification
-
framework and upload it to the Corona-
Warn-App server or more specifically, the
-
submission service. But this also needs to
verify that it's authentic, so it takes it in
-
step 15 to the verification server on the
verify endpoint, where the TAN is
-
validated and validation means it is
marked as used already, so at the same
-
time it cannot be used twice, and then the
response is given to the backend here,
-
which can then, if it's positive, which
means if it's an authentic TAN, store the
-
Diagnosis Keys in its own storage. And as
you can see, only the Diagnosis Keys are
-
stored here, nothing else. So there's no
correlation possible between Diagnosis
-
Keys, Registration Tokens or even GUIDs,
because they are stored completely separately. But
-
still, what could be found out about users
if someone were to observe the network
-
traffic going on there? An important
assumption at the beginning: the content
-
of all the messages is secure because only
secure connections are being used and only
-
the size of the transfer is observable. So
we can, from a network sniffing
-
perspective, observe that a connection is
created. We can observe how many bytes are
-
being transferred back and forth, but we
cannot learn about the content of the
-
message. So here we are, we've got the
first communication between App and server
-
in step two, because we can see: OK, if
someone is requesting something from the
-
Registration Token endpoint, this person
has been tested maybe on that specific
-
day. Then there is the next communication
going on in step five, because this means
-
that the person has been tested. I mean,
we might know that from step two already,
-
but this person has still not received the
test result. So it might still be positive
-
or negative. If we can observe that the
request to the TAN endpoint takes place in
-
step 9, then we know the person has
been tested positive. So OK, this is
-
https, so we cannot actually learn which
endpoint is being queried, but there
-
might be specific sizes to those
individual requests which might allow us
-
to learn about the direction the request
is going into. Just as a thought. OK, and
-
then, of course, we've got also the
submission service in step 14 where users
-
upload their Diagnosis Keys and a TAN, and
this is really, really without any
-
possibility for discussion, because if an
App contacts the Corona-Warn-App server
-
and builds up a connection - this must
mean that the user has been tested
-
positive and is submitting Diagnosis Keys.
Apart from that, once the user submits
-
Diagnosis Keys, and the App talks to the
Corona-Warn-App backend - it could also be
-
possible to relate those keys to an origin
IP-address, for example. Could there be a
-
way around that? So what we need to do in
this scenario and what we did is to
-
establish plausible deniability, which
basically means we generate so much noise
-
with the connections we build up that it's
not possible to identify individuals who
-
actually use those connections to query their
test results, to receive the test result,
-
if it's positive, to retrieve a TAN or to
upload the Keys. So generating noise is
-
the key. So what the App actually does is:
simulate the backend traffic by sending
-
those fake or dummy requests according to
a so-called playbook. So we've got... we
-
call it playbook, from which the App takes
which requests to do, how long to wait,
-
how often to repeat those requests and so
on. And it's also interesting that those
-
requests might either be triggered by a real
event or they might be triggered by just
-
some random trigger. So scanning a QR code
or entering a teleTAN also triggers this
-
flow. A little bit different, but it still
triggers it, because if you then get your
-
Registration Token, retrieve your test
results and the retrieval of your test
-
results stops at some point, this must
mean, OK, there has been the test result -
-
negative or positive. If it's then
observable that you communicate to the
-
submission service - this would mean that
it has been positive. So what the App
-
actually does is: even if it is negative,
it continues sending out dummy requests to
-
the verification server and it might also,
so that's all based on random decisions
-
within the App, it might also then
retrieve a fake TAN and it might do a fake
-
upload of Diagnosis Keys. So in the end,
you're not able to distinguish between an App
-
actually uploading real data or an App just
doing playbook stuff and creating noise.
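A minimal sketch of this plausible-deniability mechanism, with hypothetical names (the real playbook, its timing and the exact header marking dummy requests live in the published Corona-Warn-App code):

    import kotlin.random.Random

    // Real and dummy submissions share one code path: the payload is padded to a
    // fixed size and a header tells the backend to skip any database work while
    // still answering after a comparable delay.
    data class Request(val path: String, val body: ByteArray, val headers: Map<String, String>)

    fun paddedBody(payload: ByteArray, fixedSize: Int = 1024): ByteArray =
        payload.copyOf(fixedSize)   // same size on the wire, real or fake

    fun submissionRequest(diagnosisKeys: ByteArray?, tan: String?): Request {
        val isDummy = diagnosisKeys == null
        return Request(
            path = "/submission",                                // path name illustrative
            body = paddedBody(diagnosisKeys ?: Random.nextBytes(200)),
            headers = mapOf(
                "cwa-fake" to (if (isDummy) "1" else "0"),       // dummy marker, name assumed
                "cwa-authorization" to (tan ?: java.util.UUID.randomUUID().toString())
            )
        )
    }

The playbook fires such requests both after real events and on purely random triggers, with randomized delays and repetitions, so an observer cannot tell the two apart.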
-
So users really uploading the Diagnosis
Keys cannot be picked out from all the
-
noise. And to make sure that our backend
is not just swamped with all those fake
-
and dummy requests, there's a special
header field, which informs the backend to
-
actually ignore those requests. But if you
would just ignore them and not send a
-
response - it could be implemented on the
client, but then it would be observable
-
again that it's just a fake request. So
what we do is - we let the backend skip
-
all the interaction with the underlying
database infrastructure, do not modify any
-
data and so on, but there will be a delay
in the response and the response will look
-
exactly the same as if it were a response
to a real request. Also, the data in both
-
directions from the client to the server
and from the server to the client, gets
-
some padding, so it's always the same
size, no matter what information is contained
-
in this data packages. So observing the
data packages... so the size does not help
-
in finding out what's actually going on.
Now, you could say, OK, if there's so much
-
additional traffic because there are fake
requests being sent out and fake uploads
-
being done and so on, this must cost a lot
of data traffic to the users. That's a
-
good point. It is all zero-rated with
German mobile operators, which means it's
-
not charged to the end customers, but it's
just being paid for. Now, there is still that
-
thing with the extraction of information from
the metadata while uploading the Diagnosis
-
Keys and this metadata might be the source
IP address, it might be the user agent
-
being used. So then you can distinguish
Android from iOS and possibly you could
-
also find out about the OS version. To
prevent that, we introduced an intermediary
-
server, which removes the metadata from
the requests and just forwards the plain
-
content of the packages basically to the
backend service. So the backend service,
-
the submission service, is not able to tell
where this package came from. Now,
-
for risk calculation, we can have a look
at which information is available here. So
-
we've got the information about
encounters, which is calculated on the device
-
receiving the Rolling Proximity
Identifiers, as mentioned earlier, and this
-
information comes to us in 30 minute
exposure windows. So I mentioned earlier
-
that all the Rolling Proximity Identifiers
belonging to a single Diagnosis Key - so a
-
single UTC day, basically - can be
related to each other. But what the
-
exposure notification framework then does
is split up those encounters into 30 minute
-
windows. So the first scan instance, where
another device has been identified, starts
-
the exposure window and then it's filled
up until the 30 minutes are full. And if
-
there are more encounters with the same
Diagnosis Key basically, a new window is
-
started and so on. The single exposure
window only contains a single device. So
-
it's a one-to-one mapping. And within that
window we can find the number of scan
-
instances. So scans take place every three
to five minutes and within those scan
-
instances, there are also multiple scans.
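Roughly, the data handed to the App per encountered device looks like this (field names modeled on the framework's ExposureWindow and ScanInstance types, simplified here):

    // One window covers at most 30 minutes of encounters with a single device.
    data class ScanInstance(
        val minAttenuationDb: Int,        // strongest signal seen during this scan
        val typicalAttenuationDb: Int,    // average attenuation during this scan
        val secondsSinceLastScan: Int     // scans happen roughly every 3 to 5 minutes
    )

    data class ExposureWindow(
        val dateMillisSinceEpoch: Long,   // UTC day only, no time of day
        val scanInstances: List<ScanInstance>
    )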
-
And we get the minimum and
the average attenuation
-
per instance, and the attenuation is
actually the reported transmit power of
-
the device minus the signal strength when
receiving the signal. So it basically
-
tells us how much signal strength got lost
on the way. If we talk about a low
-
attenuation, this means the other device
has been very close. If the attenuation is
-
higher, it means the other device is farther
away. And, the other way around, so
-
through the Diagnosis Keys, which have been
uploaded to the server, processed on the
-
backend, provided on the CDN and came to us
through that way, we can also get
-
information about the infectiousness of
the user, which is encoded in something we
-
call Transmission Risk Level, which tells
us how big the risk of infection from that
-
person on that specific day has been. So,
the Transmission Risk Level is based on
-
the symptom status of a person and the
symptom status means: Is the person
-
symptomatic, asymptomatic, does the
person want to tell about the symptoms or
-
maybe do they not want to tell about the
symptoms, and in addition to that, if
-
there have been symptoms, it can also be
clarified whether the symptom onset was on a
-
specific day, whether it has been a range
of multiple days when the symptoms
-
started, or people could also say: "I'm
not sure about when the symptoms started,
-
but there have been symptoms definitely".
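The options just described boil down to a small set of inputs, which could be modeled roughly like this (types are illustrative, not the App's actual code):

    import java.time.LocalDate

    // What the user can tell the App about symptoms before uploading keys.
    sealed class SymptomsOnset {
        object NoInformation : SymptomsOnset()            // does not want to share
        object NoSymptoms : SymptomsOnset()               // positive test, never had symptoms
        object SymptomaticUnknownOnset : SymptomsOnset()  // had symptoms, onset unknown
        data class OnsetDate(val date: LocalDate) : SymptomsOnset()
        data class OnsetRange(val start: LocalDate, val end: LocalDate) : SymptomsOnset()
    }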
So this is the first case: people can
-
specify when the symptoms started and we
can say that the symptoms start down here
-
and around that date of the onset of
symptoms, the risk of infection is basically
-
evenly spread: red means high risk,
blue means low risk. See, when you move
-
around that symptom onset day, the
infectiousness also moves around and there's
-
basically a matrix from where this
information is derived. Again, you can
-
find that all in the code. And there's
also the possibility to say, OK, the
-
symptoms started somewhere within the last
seven days. That's the case up here. See,
-
it's spread a little bit differently.
Users could also specify it started
-
somewhere from one to two weeks ago. You
can see that here in the second chart and
-
the third chart is the case for when the
symptoms started more than two weeks ago.
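A sketch of how such a matrix could map the distance to the symptom onset to a Transmission Risk Level; the numbers below are made up for illustration, the real vectors ship with the App's configuration:

    // Hypothetical example values: level per day relative to symptom onset (day 0).
    val trlByDaysSinceOnset = mapOf(
        -2 to 6, -1 to 7, 0 to 8, 1 to 8, 2 to 8, 3 to 7, 4 to 6,
        5 to 5, 6 to 4, 7 to 3, 8 to 2, 9 to 2, 10 to 1
    )

    fun transmissionRiskLevel(daysSinceOnsetOfSymptoms: Int): Int =
        trlByDaysSinceOnset[daysSinceOnsetOfSymptoms] ?: 1   // low risk far from onset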
-
Now, here's the case where users specify
that they just received a positive test
-
result. So they're definitely Corona
positive, but they have never had
-
symptoms, which might mean they are
asymptomatic or presymptomatic. And,
-
again, you see around the submission,
there is an increased risk, but all the
-
time before here only has a low
transmission level assigned. If users want
-
to specify that they can't remember when
the symptoms started, but they definitely
-
had symptoms, then it's all spread a
little bit differently. And equally, if
-
users do not want to share the
information, whether they had symptoms at
-
all. So now we've got this big risk
calculation chart here, and I would like
-
to walk you quickly through it. So on the
left, we've got the configuration which is
-
being fed into the exposure notification
framework by Apple / Google, because
-
there's also some mappings which the
framework needs from us. There is some
-
internal configuration because we have
decided to do a lot of the risk
-
calculation within the App instead of
doing it in the framework, mainly because
-
we have decided we want eight levels,
transmission risk levels, instead of the
-
only three levels, so low, standard and
high, which Apple and Google provide to
-
us. For the sake of having those eight
levels, we actually sacrifice the
-
parameters of infectiousness, which is
derived from the parameter days since
-
onset of symptoms and the report type,
which is always a confirmed test in Europe.
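Conceptually, the eight levels get packed into the two fields the framework does transport, along these lines (the concrete lookup table is part of the CWA configuration; the pairs generated below are placeholders):

    // Eight levels are packed into two small transported fields (report type and
    // the infectiousness / days-since-onset value); the real mapping is a
    // configured table and these generated pairs are only placeholders.
    data class EncodedTrl(val reportType: Int, val infectiousness: Int)

    val trlEncoding: Map<Int, EncodedTrl> = (1..8).associateWith { trl ->
        EncodedTrl(reportType = (trl - 1) / 4, infectiousness = (trl - 1) % 4)
    }

    fun decodeTrl(encoded: EncodedTrl): Int =
        trlEncoding.entries.first { it.value == encoded }.key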
-
So we got those three bits actually, which
we can now use as a Transmission Risk
-
Level, which is encoded on the server in
those two fields, added to the Keys, provided via
-
the content delivery network, downloaded
by the App and then passed through the
-
calculation here. So it comes in here. It
is assembled from those two parameters,
-
Report Type and Infectiousness, and now it
goes along. So first, we need to look
-
whether the sum of the durations at below
73 decibels - so that's our first threshold -
-
has been less than 10 minutes. If it has
been less than 10 minutes, we just drop the
-
whole exposure window. If it has been more
than or equal to 10 minutes, we might use it,
-
depending on whether the Transmission Risk
Level is larger than or equal to three, and we use
-
it. And now we actually calculate the
relevant time. And times between 60...
-
between 55 and 63 decibels are only counted
half, because that's a medium distance and
-
times at below 55 decibels - that's up here -
are counted full, then added up. And
-
then we've got the weighted exposure time
and now we've got this transmission risk
-
level, which leads us to a normalization
factor, basically. And this is multiplied
-
with the weighted exposure time. What we get
here is the normalized exposure time per
-
exposure window and those times for each
window are added up for the whole day. And
-
then there's the threshold of 15 minutes,
which decides whether the day had a high
-
risk of infection or a low risk. So now
that you all know how to do those
-
calculations, we can walk through it for
three examples. So the first example is
-
here: it's a transmission risk level of
seven. You can see those are all pretty
-
close. So, our magic thresholds are here at
73. That's for whether that's counted or
-
not. Then at 63, it's this line. And at
55. So we see, OK, there's been a lot of
-
close contact going on and some medium
range contact as well. So let's do the
-
pre-filtering, even though we already see
it has been at least 10 minutes below 73
-
decibels. Yes, definitely, because each of
those dots represents three minutes. So,
-
for this example calculation, I just
assumed the scan windows are three minutes
-
apart. Is it at least transmission risk
level three? Yes, it's even seven. So now
-
we do the calculation. It has been 18
minutes at a low attenuation, so at a
-
close proximity, so that's 18 minutes, and
nine minutes - those and those, the three dots
-
here - at a medium attenuation. So a little
bit farther apart, they count as four and
-
a half minutes. We've got a factor here.
Adding it up, it gets us to 22.5 minutes,
-
multiplied by 1.4 giving us 33... 31.5
minutes, which means red status. Already
-
with a single window. Now, in this
example, we can see that it's always pretty
-
far away and there's been one close
encounter here, transmission risk level
-
eight even, pre-filtering: has it been at
least 10 minutes below 73 decibels? Nope.
-
OK, then we already drop it. Now that's
the third one. Transmission risk level
-
eight again. It has been a little bit
away, but there's also been some close
-
contact, so we do the pre-filtering: has
it been at least 10 minutes below 73? Now
-
we already have to look closely. So, yes.
It is below 73, this one as well. OK, so
-
we've got four dots below 73 decibels.
Gives us 12 minutes, yes. At least transmission
-
risk level three? OK, that's easy. Yes.
And now we can do the calculation. It has
-
been six minutes at the low attenuation -
those two dots here. OK, they count full
-
and zero minutes at the medium
attenuation. You see this part is empty
-
and the transmission risk level eight
gives us a factor of 1.6. If we now
-
multiply the six minutes by 1.6, we get
9.6 minutes. So if this has been the only
-
encounter for a day, that's still
green. But if, for example, you had two
-
encounters of this kind, so with the same
person or with different people, then it
-
would already turn into red because then
it's close to 20 minutes, which is above
-
the 15 minute threshold. Now, I would like
to thank you for listening to my session,
-
and I'm available for Q&A shortly.
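Putting the calculation walked through above into a small sketch (the 73/63/55 dB thresholds, the half weighting, the 15-minute limit and the factors 1.4 and 1.6 are the ones mentioned in the talk; the remaining factors are placeholders, and the authoritative values are part of the published risk configuration):

    // Per-exposure-window calculation, simplified.
    data class Scan(val typicalAttenuationDb: Int, val minutes: Double)

    // Normalization factor per transmission risk level; 1.4 (TRL 7) and 1.6 (TRL 8)
    // are quoted in the talk, the other values are placeholders.
    val trlFactor = mapOf(3 to 0.6, 4 to 0.8, 5 to 1.0, 6 to 1.2, 7 to 1.4, 8 to 1.6)

    fun normalizedExposureMinutes(scans: List<Scan>, trl: Int): Double {
        val minutesBelow73 = scans.filter { it.typicalAttenuationDb < 73 }.sumOf { it.minutes }
        if (minutesBelow73 < 10.0 || trl < 3) return 0.0   // pre-filtering: drop the window
        val close = scans.filter { it.typicalAttenuationDb < 55 }.sumOf { it.minutes }
        val medium = scans.filter { it.typicalAttenuationDb >= 55 && it.typicalAttenuationDb < 63 }
            .sumOf { it.minutes }
        return (close + 0.5 * medium) * (trlFactor[trl] ?: 1.0)
    }

    fun dayIsHighRisk(windows: List<Pair<List<Scan>, Int>>): Boolean =
        windows.sumOf { (scans, trl) -> normalizedExposureMinutes(scans, trl) } >= 15.0

    // First example from the talk: 18 minutes close (< 55 dB) plus 9 minutes at
    // medium range (55-63 dB) with TRL 7 -> (18 + 4.5) * 1.4 = 31.5 -> red.
    val example = listOf(Scan(50, 18.0), Scan(60, 9.0))
    // normalizedExposureMinutes(example, trl = 7) == 31.5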
-
Herald: OK, so thank you, Thomas. This was
a prerecorded talk and the discussion was
-
very lively in the IRC during the talk,
and I'm glad that Thomas will be here for
-
the Q&A. Maybe to start with the first
question by MH in IRC on security and
-
replay attacks: Italy and the Netherlands
published TEKs / DKs so early that they are
-
still valid today. We learned that yesterday, in
the time since the presentation. How is this
-
handled in the European cooperation and
can you make them adhere to the security
-
requirements? This is the first question
for you, Thomas.
-
Thomas: OK, so thank you for this
question. The way we handle Keys coming
-
in from other European countries -
that's through the European Federation
-
Gateway Service - is that they are handled
-
as if they were national keys,
which means they are put in some kind of
-
embargo for two hours until... so two
hours after the end of their validity to
-
make sure that replay attacks are not
possible.
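In other words, roughly (a sketch with illustrative names; the two-hour figure is the one just mentioned):

    import java.time.Duration
    import java.time.Instant

    // A key - national or federated - is only distributed once its validity period
    // has been over for at least two hours, so captured identifiers cannot be replayed.
    fun isDistributable(rollingPeriodEnd: Instant, now: Instant): Boolean =
        now.isAfter(rollingPeriodEnd.plus(Duration.ofHours(2)))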
-
Herald: All right, I hope that answers
this actually. OK, and then there was
-
another one on international
interoperability: is it EU only or is
-
there also cooperation between the EU and,
for example, Switzerland?
-
Thomas: So so far, we've got the cooperation
with other EU countries from [audio glitches]
-
the European Union, which interoperates
already, and regarding the integration of
-
non-EU countries, that's basically a
political decision that has to be made
-
from this place as well. So that's nothing
I as an architect can drive or control. So
-
so far, it's only EU countries.
Herald: All right. And then I have some
-
comments and also questions on community
interaction and implementation of new
-
features, which seems a little slow for
some. There was, for example, a proposal
-
for functionality called Crowd Notifier
for events and restaurants to check in by
-
scanning a QR code. Can you tell us a bit
more about this or what's there? Are you
-
aware of this?
Thomas: So I've personally seen that there
-
are proposals online, and there is also a
lively discussion on those issues, but
-
what you need to keep in mind is that we
are also... we have the task of developing
-
this App for the Federal Ministry of
Health, and they are basically the ones
-
requesting features and then there's some
scoping going on. So personally - and
-
to say that again, I am the architect - I
can't decide which features are going to
-
be implemented. It's just as soon as the
decision has been made that we need a new
-
feature, so after we've been given
the task, then I come in and prepare the
-
architecture for that. So I'm not aware of
the current state of those developments,
-
to be honest, because that's out of my
personal scope.
-
Herald: All right. I mean, it's often the
case, I suppose, with great projects, with
-
huge project. But overall, people seem to
be liking the fact that everything is
-
available on GitHub. But some people are
really dedicated and seem to be a bit
-
disappointed that interaction with the
community on GitHub seems a bit slow,
-
because some issues are not answered as
people would hope they would be. Do you know
-
about some ideas on adding dedicated
community managers to the GitHub community
-
around the App? So the people we speak
with, that was one note in IRC, actually
-
seem to be changing every month. So are
you aware of this kind of position of
-
community management?
Thomas: So there are people definitely
-
working on the community management,
there's also a lot of feedback and
-
comments coming in from the community, and
I'm definitely aware that there are people
-
working on that. And, for example, I get
asked by them to jump in on certain
-
questions where verification was needed
from an architectural point of view. And
-
that's... if you look at GitHub, there's
also some issues I've been answering, and
-
that's because our community team has
asked me to jump in there. But the
-
feedback that people are not fully
satisfied with the way the community
-
is handled, is something I would
definitely take back to our team
-
internally and let them know about it.
Herald: Yeah, that's great to know,
-
actually. So people will have some answers
on that. Maybe one last very concrete
-
question by duffman in the IRC: Is the
inability of the App to show the time/day
-
of exposures a limitation of the
framework or is it an implementation
-
choice? And what would be the privacy
implications of introducing such a
-
feature? Actually, a big question, but
maybe you can cut it short.
-
Thomas: Yeah, OK, so the only information,
the Exposure Notification Framework by
-
Google / Apple can give us - is the date
of the exposure, and date always relates
-
to UTC there. And so we never get the time
of the actual exposure back. And when
-
moving to the exposure windows, we also do
not get the time back of the exposure
-
window. And the implications if you were
able to tell the exact time of the
-
encounter, would be that people are often
aware where they've been at a certain
-
time. And let's say at 11:15, you were
meeting with a friend and you get a
-
notification that at 11:15, you had that
exact encounter, it would be easy to tell
-
whom you've met, who's been infected. And
that's something not desired, that you can
-
trace it back to a certain person. So the
personification would basically then be
-
the thing.
Herald: All right, and I hope we have time
-
for this last question asked on IRC:
have you considered training a machine
-
learning method to classify the risk
levels instead of the rule-based
-
method?
Thomas: So, I mean, classifying the risk
-
levels through machine learning is
something I'm not aware of yet. So the
-
thing is, it's all based on basically a
cooperation with the Fraunhofer Institute,
-
where they have basically reenacted
certain situations, did some measurements
-
and that's what has been transferred into
the risk model. So all those thresholds
-
are derived from, basically, practical
tests. So no ML at the moment.
-
Herald: All right, so I suppose this was
our last question and again, Thomas, a
-
warm round of virtual applause to you and
thank you again, Thomas, for giving this
-
talk, for being part of this first remote
Chaos Experience and for giving us some
-
insight into the backend of the Corona-
Warn-App. Thank you.
-
Thomas: I was happy to do so. Thank you for
having me here.
-
rC3 postroll music
-
Subtitles created by c3subtitles.de
in the year 2021. Join, and help us!