preroll music
Vasilios: Hello everyone, thanks for coming today. I'm going to introduce the ultrasound ecosystem, which is an exotic and rather little-known ecosystem. I'd like to start with a short story about a product, which is also our motivation for this work. Some time ago, there was a product that worked in the ultrasound spectrum, which cannot be perceived by humans. The product was actually an interesting idea, very promising and everything, but it also had a fatal flaw. Now that I've done this introduction, I'd like to tell you more about the story of the product: how it came to be and what its lifecycle was. In 2012, a startup called SilverPush was founded in India, and they had this ultrasound device-tracking product.
I'll go into the technical details later. For a couple of years they worked on that product, and it wasn't until 2014 that they got some serious funding, millions from venture capital and angel investors. A few months after they got funded, they also got press coverage, with pretty positive reviews in newspapers and articles about what the product could do. At the same time, they were doing what most companies do, publishing patents about their technology. But about a year and a half later, things started to go not so well
for them. The security community noticed
and there was some press coverage about
the product that was not so positive
anymore. So this is one of the very first emails that appeared on the Web regarding the product. It's from a W3C working group: a researcher there is notifying the other members of the group that there is this product, that there may be transparency issues, and that the users are certainly not aware of what exactly is going on, so let's keep an eye on it. This was one of the very first things published about the product from the privacy and
security perspective. So what happened
then was that the press took notice, and there were all those headlines urging users to be very careful: this thing is evil, watch out, people are eavesdropping on you. This of course also led the FTC to take action. They organized a workshop on cross-device tracking in general, I think, with specific mentions of ultrasound cross-device tracking. Don't worry if you're not familiar with these terms; I'm going to define everything later. So
what they were basically raising was transparency issues: how do we protect ourselves, how does that thing work? Then the users, of course,
started to react. Many people were unhappy and complaining: what is this, I don't want that thing. Some people were actually suggesting solutions, and solutions that made sense. And of course, you always have the users who are completely indifferent to whatever you put in front of them. So what
happened about five months later is that the FTC took much more serious action regarding this specific product. It sent a letter to the developers, and the letter essentially said: you're using this framework in your app, we've seen it in the Google Play store; it's not enough that you are asking for the microphone permission, you should let the users know that you are tracking them if you are doing so, otherwise you are violating rule X, Y, Z and you're not allowed to do that. This was pretty
serious, I would say. And what happened
next is that the company withdrew from the US market and said: we have nothing to do with the US market, this product is not active there, you shouldn't be concerned. So, end of story: the product is no longer out there, in the US at least. Are we safe? It
seemed that everyone assumed this was an isolated security incident. And to be fair, very little became known about the technology at this point. The press moved on to the other hot topics of the time, and people went quiet; if people are not using it, it's fine. So everyone seemed happy. But we're curious people, so
we had lots of questions that were not
answered. Our main questions were: Why were they using ultrasounds? We'll see that what they are doing, you can also do with other technologies. How do such frameworks work? We had no idea; there was no technical coverage out there. Are there other such products? We were aware of one, and all the articles were referring to that one product, but we were not sure whether there were others doing the same thing. And of course, we were looking for a report about the whole ecosystem and how it works, and there was nothing. So what do you do when there are no technical resources? Basically, we decided to do our own research and come up with this report that we were lacking. So we're done with
motivation so far. We were pretty pumped
up about looking into what's out there. The rest of the presentation will go as follows: first I'm going to introduce ultrasound tracking and the rest of the terminology, then I'm going to go into the attack details, and indeed we have an attack against the Tor Browser. Then we'll do a formal security analysis of the ecosystem and try to pinpoint the things that went wrong. And then we'll introduce our countermeasures and advocate for proper practices. To begin
with, I'm Vasilis. I've done this work
with other curious people: Yanick Fratantonio, Christopher Kruegel and Giovanni Vigna from UC Santa Barbara, and Federico Maggi from Politecnico di Milano. Let's now start with the
ecosystem. Apparently ultrasounds are used in a lot of places, and they can be utilized for different purposes, among them cross-device tracking, which I already referred to, audience analytics, synchronized content, proximity marketing and device pairing. You can do some other things with them too, but you'll see those later. To
begin with what cross-device tracking is: cross-device tracking is basically the holy grail for marketers right now, because you're using multiple devices, smartphone, laptop, computer, maybe your TV, and to them you appear as different people. They all want to be able to link those devices, to know that you're the same person, so that they can build their profiles more accurately. For instance, if you're watching an ad on TV, they want to be able to know that it's you, so that they can push relevant or follow-up ads to your smartphone.
This is employed by major advertising networks, and there are two ways to do it: deterministically or probabilistically. The deterministic approach is much more reliable, you get essentially 100 percent accuracy, and it works as follows. If you are Facebook, the users are heavily incentivized to log in from all their devices, so you immediately know that this user has these three devices and you can push relevant content to all of them. However, if you are not Facebook or Google, it's much less likely that users will want to log into your platform from their different devices, so you have to look for alternatives. And one tool for building those alternatives is ultrasound beacons.
Ultrasound tracking products use ultrasound because, while it may sound exotic, it has two key features. First, you can encode a sequence of symbols at very high frequencies that are inaudible to humans. Second, those frequencies can be emitted by most commercial speakers and captured by most commercial microphones, for instance the one found on your smartphone. So the technical
details are the following. I know there are a lot of experts in these kinds of things here, so I'm averaging out how the companies are doing it right now; I'm not saying this is the best way to do it, but it's more or less what they're doing. Of course they have patents, so each one of them does a slightly different thing so they don't overlap. They use the near-ultrasound spectrum between 18 kHz and 20 kHz, which is usually inaudible to adults. They divide it into smaller chunks: if you divide it into chunks of 75 Hz each, you get about 26 chunks, and you can assign a letter of the alphabet to each one of them. And then
what they usually do is, within four to five seconds, emit sequences of characters, usually four to six characters, which encode a unique ID corresponding to the source they attach the beacon to. There is no ultrasound beacon standard, as I said previously, but there are lots of patents, so each one of them does a slightly different thing; but this is the basic principle. We did some
experiments, and we found that within seven meters you get pretty good accuracy and a low error rate. Of course this depends on exactly how you encode things, but with applications found on Google Play this worked at up to seven meters. We couldn't find computer speakers that were unable to emit near-ultrasound frequencies and work with this technology. It's also well known that frequencies like these cannot penetrate physical objects, but that is not a problem for their purposes. And we did some experiments with our research assistant, and we can say that they are audible to animals. So if you combine cross-device
tracking and ultrasound beacons, you get ultrasound cross-device tracking. Now, what
can you do with this? This is actually a pretty good idea, because it offers high accuracy and you don't ask the users to log in, which is a very demanding thing to ask for. You can embed those beacons in websites or TV ads. This technology, however, requires a fairly sophisticated backend infrastructure; we're going to see more about that later. You also need a network of publishers who are willing to incorporate beacons in their content, whatever that content is. And then, of course, you need an ultrasound cross-device tracking framework that runs on the user's mobile device, a smartphone. These frameworks are essentially distributed as advertising SDKs that developers can use to display ads in their free apps. So it's not that developers deliberately incorporate the ultrasound framework; they incorporate an advertising SDK, with varying degrees of understanding of what it does. So here is how ultrasound cross-device
tracking works. In step one we have the advertising client, who just wants to advertise his products. He goes to the ultrasound cross-device tracking provider, who has the infrastructure set up, and sets up a campaign. The provider associates a unique ultrasound beacon with this campaign and then pushes this beacon to content publishers, who incorporate it into their content, depending on what the advertising client is trying to achieve; this is steps three and four. A user then accesses one of those pieces of content, whether it's a TV ad or a website on the Internet, and once this content is loaded or displayed by, say, your TV, the device's speakers emit the ultrasounds. If you have an ultrasound cross-device tracking framework on your phone, which is usually listening in the background, it picks up the beacon, and in step six it submits it back to the service provider, which now knows that this user has watched this TV ad, or whatever it is, adds that to his profile, and pushes targeted ads back to his device. Of course, by doing all this they're just trying to improve their conversion rate and get more customers.
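The generic beacon scheme described earlier (an 18-20 kHz band split into 75 Hz chunks, one letter per chunk, four to six letters per beacon) can be sketched in a few lines of Python. This is only a toy illustration of the principle; the band edges and chunk width come from the talk, while the helper names and the example ID are made up, not any vendor's patented encoding.

```python
# Toy sketch of the generic beacon scheme: the 18-20 kHz near-ultrasound
# band is split into 75 Hz chunks, one letter of the alphabet per chunk,
# and a beacon is a short letter sequence encoding a campaign/source ID.

BAND_START = 18_000  # Hz, lower edge of the near-ultrasound band
CHUNK_HZ = 75        # width of each frequency chunk -> ~26 chunks in 2 kHz
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def letter_to_freq(letter: str) -> int:
    """Centre frequency of the chunk assigned to a letter."""
    idx = ALPHABET.index(letter.lower())
    return BAND_START + idx * CHUNK_HZ + CHUNK_HZ // 2

def encode_beacon(beacon_id: str) -> list[int]:
    """Map a 4-6 character beacon ID to the tone frequencies to emit."""
    return [letter_to_freq(c) for c in beacon_id]

def decode_beacon(freqs: list[int]) -> str:
    """Inverse mapping: recover the ID from the detected tone frequencies."""
    return "".join(ALPHABET[(f - BAND_START) // CHUNK_HZ] for f in freqs)

tones = encode_beacon("adnw")  # "adnw" is a made-up campaign ID
print(tones)                   # [18037, 18262, 19012, 19687]
assert decode_beacon(tones) == "adnw"
```

A real implementation would still have to detect those tones in a noisy recording (for example with an FFT per time window), which is where the bandwidth and error-rate constraints discussed later come from.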
Another use of ultrasounds currently in practice is proximity marketing. Venues set up multiple ultrasound emitters, which is a fancy name for speakers; that's the nice thing about ultrasound, you just need speakers. They put these in multiple locations in their venue, whether it's a supermarket or a stadium, and then there is a customer app: if you're a supermarket, a supermarket app; if you're an NBA team, which we'll see later, a fan application that the fans of your team can download and install on their smartphones. Once installed, this app listens in the background, picks up the ultrasounds and submits them back to the company. The main purpose of this is to study user behavior, and to provide real-time notifications like: you are in this aisle of the supermarket, but if you just walk two meters down you're going to see this product at a discount. Or, the third point, which incentivizes the users more: offering reward points to users visiting your store. And there is actually a product on the market doing exactly that. So some
other uses are device pairing, which relies on the fact that ultrasounds do not penetrate through objects. If you have a smart TV, say, or a Chromecast, it can emit a random PIN through ultrasound; your device picks it up and submits it back to the device through the Internet, and now you've proved that you are in the same physical location as the Chromecast or whatever your TV is. Also, Google recently acquired SlickLogin, who are also using ultrasounds for authentication; it's not entirely clear what their product is about, though. And then you have audience measurement and analytics. What they do is basically: if you incorporate multiple beacons in an ad, you can track the reactions and the behavior of the audience, in the sense that, first, you know how many people have watched your ad, and second, you know what happened. If a viewer switches channels in the middle, their device submits only the first beacon of the two, so you can also track their behavior. OK, so
we've seen all these technologies, and then we started wondering: how secure is that thing? What security measures are the companies applying? So I'm going to start right away with the exploitation of the technology. To do that, we just need a computer with speakers and the Tor Browser, a smartphone with an ultrasound-enabled app, and a state-level adversary. I'm going to say more about the state-level adversary later, but just keep in mind that it's within the Tor threat model. I have a video of the attack; I'm going to pause it in different places to explain some more things. Yeah, OK, so
I'm going to set up the scene first. Let's make the assumption that we have a whistleblower who wants to leak some documents to a journalist, but he doesn't know that the journalist is working with the government, whose main intent is to deanonymize him. So the journalist does the following: he asks the whistleblower to upload the documents to a Tor hidden service or a website that he owns. And the whistleblower, thinking that he's safe to do that through Tor, loads the page. So now I have the demo, which implements exactly that scenario. The whistleblower opens the Tor Browser. The setup is the following: we have the phone next to the computer. It could be up to seven meters away, but for the practical purposes of the demo it's next to the computer. So we have the Tor Browser; what are we going to do first? For the purposes of the demo, we use a smartphone listening framework that's visible to the user; normally those ultrasound cross-device tracking apps run in the background. We now set it to listening mode so that it starts listening. Of course, with a normal framework the user doesn't have to do that part, but we want to show what's happening. So now the whistleblower is going to load the innocuous web page suggested by the journalist, and let's see what happens. OK, now we've loaded the page and the phone is listening, in reality in the background, so let's see what happens.
OK, this looks pretty bad. We have lots of information about the user visiting our hidden service; I assume you already have some clues about how this happened. The information that we have is the following. First of all, we have his IP address and phone number (don't call this phone number, it isn't real), his IMEI, his Google account email, and his location, of course. This is enough to say that we essentially deanonymized him; even if we had only the IP address, that would have been enough. So before I explain exactly how the attack worked, I'm going to introduce some tools that the attackers have at their disposal. The first one is beacon injection. What you can essentially do
is craft your own ultrasound beacons and push them to devices listening for beacons; the devices are going to treat them like valid beacons and submit them back to the company's backend. Along the same lines, you can also replay ultrasound beacons, meaning that you can capture them at various locations, and this is actually happening in the wild at a large scale for one specific application. Once you capture those beacons, you can replay them back to the company's backend through users' devices. To give you a clue: there is a company that incentivizes users to visit stores by giving them offers and points when they visit, and people are capturing the beacons and replaying them to their devices from home; they are selling the beacons over the Internet so that they don't have to go to the actual stores. The problem here is that the framework handles every beacon; it has no way to distinguish between valid and maliciously crafted beacons. And my favorite
tool for the attackers is the beacon trap: a code snippet that, once loaded, reproduces one or more inaudible beacons that the attacker chose. This can happen in lots of ways; in the demo I used the first one: you build a website with some JavaScript that simply plays the ultrasounds in the background. What else can you do? You can exploit a cross-site scripting vulnerability on any random website and inject beacons into the visitors of that website; or mount a man-in-the-middle attack, just adding your JavaScript snippet to the user's traffic; or you send an audio message to the victim. So how did the
Tor deanonymization attack work? It's the following. First, the adversary needs to set up a campaign. Then, once he obtains the beacon associated with that campaign, he builds a beacon trap, and in step three he lures the user to visit it; this is what the journalist did to the whistleblower in our scenario. Then the user loads the resource, having no idea this is possible, and the page emits the ultrasound beacon. If your smartphone has such a framework, it's going to pick it up and submit it back to the provider. And I don't know about you, but when I'm using Tor, I'm not connecting my phone to the Internet through the Tor network; my phone is connected through my normal Wi-Fi. So now the ultrasound service provider knows that this smartphone device picked up that specific beacon. And then, in step seven, the adversary, who is a state-level adversary, can simply subpoena the provider for the IP or other identifiers, of which, from what we've seen, they collect plenty.
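In the demo the beacon trap is just a web page whose JavaScript plays the tones; the inaudible audio itself is trivial to produce. Here is a sketch of that step in Python using only the standard library; the tone frequencies and file name are made up for illustration, and a real framework would of course expect its own vendor-specific encoding.

```python
# Sketch: synthesize the inaudible tones a beacon trap would play.
# Frequencies are made up (anything in the 18-20 kHz band is inaudible
# to most adults yet reproducible by commodity speakers).
import math
import struct
import wave

SAMPLE_RATE = 44_100  # standard rate; its 22.05 kHz Nyquist limit covers 20 kHz

def tone(freq_hz: float, duration_s: float, amplitude: float = 0.5) -> bytes:
    """16-bit PCM samples of a sine tone at the given frequency."""
    n = int(SAMPLE_RATE * duration_s)
    return b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
        for i in range(n)
    )

def write_beacon_wav(path: str, freqs: list[float], symbol_s: float = 0.2) -> None:
    """Write one near-ultrasound tone per beacon symbol to a mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        for f in freqs:
            w.writeframes(tone(f, symbol_s))

# Four hypothetical beacon tones in the near-ultrasound band
write_beacon_wav("trap.wav", [18_037, 18_262, 19_012, 19_687])
```

A beacon trap embedded in a page would do the same thing with the Web Audio API (one oscillator per tone) instead of a WAV file; injecting such a snippet via XSS or a man-in-the-middle position is what lets the attack scale.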
OK, so the first two elements we have already: the computer with the Tor Browser, which is easy to find; a smartphone with an ultrasound-tracking-enabled framework, fine. What about the state-level adversary? We didn't have a state-level adversary handy, so what we did is redirect the traffic from step six to the adversary's backend. And I want to stress a point here: this is not a long-shot assumption. What we saw in October is the following: I don't know how many of you realize it, but AT&T was running a spying program called Hemisphere, and it was providing paid access to governments with only an administrative subpoena, which doesn't even need to be approved by a judge. So it's pretty easy for them to get access to this kind of data, especially when we're talking about an IP address; it's very easy for them to get it. So we also came up with some
more attacks. The first one is profile corruption. Advertisers really like to build profiles about you, your interests and your behavior. What you can basically do is inject beacons into other people's phones, or even into your own, and thereby malform the profile. The exact impact of this attack depends on how the backend and infrastructure of the advertising company work, but the attack is definitely possible. And then there is an information leakage attack, which works under a similar assumption: you can eavesdrop on beacons and replay them to your own phone to make your profile similar to that of the victim. Then, based on how recommendation systems work, you're very likely to get ads and content similar to the victim's. Of course, this also depends on exactly how the recommendation system is implemented, but it's definitely possible. OK, so we've
seen certain things that make us think the ecosystem is not very secure, and we tried to find out exactly why. So we did a security evaluation, and we came up with four points. First, we realized that the threat model is inaccurate. Second, none of the beacon implementations we've seen had any security features. They also violated a fundamental security principle, and they lacked transparency when it came to the user interface. So let's
go through them one by one. The inaccurate threat model: basically, they rely on the fact that ultrasounds cannot penetrate walls and travel only up to about seven meters reliably, and because of that they assume that you cannot capture and replay beacons. What happens in practice, though, is that you can get really close using beacon traps, so their assumption is not that accurate. Also, the security capabilities of beacons are heavily constrained by the low bandwidth of the channel, by the limited time you have to reach the users (if someone is in a supermarket, he's not going to stand in one place for very long), and by the noisy environment, where you want a very low error rate. So adding crypto to the beacons may not be a workable idea, but this also results in replay and injection attacks being possible. Then there is the violation of the principle of least privilege.
So what happens is that all these apps need full access to the microphone, yet based on the way the technology works, it's completely unnecessary for them to have access to the audible frequencies. However, even if they wanted to, there's no way to gain access only to the ultrasound spectrum, on either Android or iOS; you get either access to the whole spectrum or no access at all. This, of course, means that malicious developers can at any time start abusing their access to the microphone, and that all the benign ultrasound-enabled apps are perceived as malicious by the users; I'll say more
about it later. Then, the lack of transparency; in combination with what we've just seen, this is bad. We've observed large discrepancies between apps when it comes to informing the users, and also lots of discrepancies when it comes to providing opt-out options. And there is a conflict of interest there: if you're a framework developer, you want to advocate proper practices to your customers, but you're not going to enforce them or give them an ultimatum: either you do it properly or you don't use my framework. So, because of this lack of transparency, the following happened. Signal360 is
one of those frameworks. An NBA team started using it in May, and a few months later there is a lawsuit, with someone claiming: that thing is listening in the background. What's interesting is what they are saying in the claim: OK, I gave permission through the Android permission system for them to access the microphone, but it was not explained to me exactly what they were doing. And this ties in closely with what the FTC was saying in its letter. Then, the same story again: a football team starts using such a framework, and a few months later people are complaining that they are being eavesdropped on. I think what happened here is that when the team was playing a match, the application started listening for ultrasounds; but not all your fans are going to be in the stadium, so you end up listening for ultrasounds in a church and other places. So, yeah, people were rather annoyed. OK, just to
put it into perspective how prevalent these technologies are: the ecosystem is growing, even though that one company withdrew; other companies in the ecosystem are coming up with new products as well. The number of users is relatively low, but it's also very hard to estimate right now. We could find around 10 companies offering ultrasound-related products, and the majority of them are gathered around proximity marketing. There was only one company doing ultrasound cross-device tracking, at least that we found, and this is mainly due to the infrastructure complexity; it's not easy to do all those things. Secondly, I also believe that the whole backlash from the security community has discouraged other companies from joining, because they don't want a tarnished reputation. OK, so
we have this situation right now: companies are using ultrasound. What are we going to do? This was our initial idea, what we thought of first. But we want to fix things, so we tried to come up with concrete steps we need to take to actually fix this thing and make it usable but not dangerous. We listed what's wrong with it, which we did already; we developed some quick fixes, which I'm going to present, and medium-term solutions as well; and then we started advocating for long-term changes that will make the ecosystem reliable, and there we definitely need the involvement of the community. We
developed some short- and medium-term solutions. The first one is a browser extension. Our browser extension is based on the HTML5 Web Audio API: it catches all audio sources, places a filter between each audio source and the destination on the web page, and filters out ultrasounds. To do that, we use a high-shelf filter that attenuates all frequencies above 18 kHz; it works pretty reliably, and we leave all audible frequencies intact.
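To illustrate the idea behind the extension's filter (attenuate everything above roughly 18 kHz, leave audible sound intact), here is a plain-Python sketch built around a windowed-sinc FIR low-pass filter. This is not the extension's actual code, which uses the Web Audio API's built-in filter nodes; it only demonstrates the effect on an audible versus a near-ultrasound tone.

```python
# Sketch: a low-pass filter with an 18 kHz cutoff suppresses beacon tones
# while leaving audible frequencies essentially untouched.
import math

SAMPLE_RATE = 44_100
CUTOFF_HZ = 18_000

def lowpass_taps(num_taps: int = 101) -> list[float]:
    """Windowed-sinc (Hamming) FIR low-pass filter coefficients."""
    fc = CUTOFF_HZ / SAMPLE_RATE          # normalized cutoff, cycles/sample
    mid = (num_taps - 1) / 2
    taps = []
    for i in range(num_taps):
        x = i - mid
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * i / (num_taps - 1))  # Hamming
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]          # normalize DC gain to 1

def apply_fir(signal: list[float], taps: list[float]) -> list[float]:
    """Direct-form convolution of the signal with the filter taps."""
    return [
        sum(taps[j] * signal[i - j] for j in range(len(taps)) if i - j >= 0)
        for i in range(len(signal))
    ]

def rms(sig: list[float]) -> float:
    return math.sqrt(sum(s * s for s in sig) / len(sig))

def sine(freq_hz: float, n: int = 2000) -> list[float]:
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

taps = lowpass_taps()
audible = apply_fir(sine(1_000), taps)    # a normal audible tone
beacon = apply_fir(sine(19_000), taps)    # a near-ultrasound beacon tone

# Steady-state energy: the audible tone passes, the beacon tone is suppressed
print(rms(audible[200:1800]), rms(beacon[200:1800]))
```

In the extension, the same effect comes from inserting a filter node between every audio source and its destination; the FIR above is just the textbook way to show the attenuation numerically.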
It's not going to work with obsolete legacy technologies such as Flash, though. OK, we also have an Android
permission; I think this is a somewhat more medium-term solution. What we did is develop a patch for the Android permission system. It allows fine-grained control over the audio channel: it basically separates the permission needed for listening to audible sound from the permission needed for listening to the ultrasound spectrum. So at least we force applications to specifically declare that they are going to listen for ultrasounds. And of course, on the latest Android versions users can also disable this permission, which can act as an opt-out option if the app is not providing one. We also initiated a discussion on the Tor bug tracker. Then, we are advocating for some long-term solutions: we really need some standardization here. Let's agree on an ultrasound beacon format and decide what
security features can be there; I mean, we need to figure out what's technically possible, because it's not clear. Then, once we have a standard, we can start building APIs. The APIs are a very nice idea because they would work the way the Bluetooth APIs work, meaning they would provide methods to discover, process, generate and emit ultrasound beacons, gated behind a new API-related permission. This means we would stop having overprivileged apps: they wouldn't need access to the microphone anymore, which is a huge problem right now, and of course the applications would no longer be considered spyware. There is also another problem we found while playing with those apps: if you have a framework listening through the microphone, other apps cannot access it. We were trying to open the camera app to record a video, and the camera app was crashing because the framework was locking access to the microphone. Now, we may have some framework developers saying: you know, I'm not going to use your API, I'm going to keep asking for access to the microphone. But we can force them to use the API if, by default, we filter out the ultrasound frequencies from the microphone and provide a way for the user to enable them on a per-application basis from his phone. OK, so here's what we did: we
analyzed multiple ultrasound tracking technologies, saw what's out there in the real world and reverse-engineered such frameworks. We identified quite a few security shortcomings, introduced our attacks, and proposed some usable countermeasures. And hopefully we initiated the discussion about standardizing ultrasound beacons. But there are still things left to do.
So, for the application developers: please explicitly notify the users about what your app is doing; many of them would appreciate knowing. Also, we need to improve transparency in the data collection process, because these frameworks collect lots of data, and very little information is available about exactly what kind of data they collect. We also think it's a good idea to have an opt-in option, if that's not too much to ask, or at least an opt-out, and standard security practices, as always. Framework providers basically need to make sure that the developers inform the users, and also that the users consent regularly to the listening for beacons: it's not enough to consent once and have the app still listening for ultrasound beacons a month later; you have to periodically ask the user if it's still okay to do that, ideally every time you are going to listen. And then, of course, we need to work on standardizing ultrasound beacons, which is going to be a long process, and on building the specialized API; hopefully this will be easier once we have a standard, and we can see what kind of authentication mechanisms we can have in this kind of constrained transmission channel. So..
applause
Herald: Thank you Vasilios. If you have any
questions, please do line up at the four
microphones here in the walkways and the
first question will be the front
microphone here.
Mic: Hello, and thank you for your presentation. I have a couple of questions that are technical and closely related. First of all, do you think that blocking the high frequencies at the operating-system level, for either the microphone or the speakers, is something that is technically feasible and will not add very high latency to the processing?
Vasilios: So we did that through the
permission. You are talking about the
smartphone, right?
Mic: Yeah, basically, because you have to
have real-time sound and microphone
feedback.
Vasilios: So we did that with the
permission. And I think it's not too
resource-demanding, if that's your
question. So it's definitely possible to
do that.
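For context, OS-level suppression of these frequencies is basically a low-pass filter applied to microphone samples before any app receives them. A minimal pure-Python sketch; the 12 kHz cutoff and three cascaded biquad stages are assumptions chosen for this illustration, not values from the actual Android patches:

```python
import math

def lowpass_coeffs(fs, cutoff, q=0.7071):
    """RBJ audio-EQ-cookbook low-pass biquad coefficients, normalized to a0=1."""
    w0 = 2 * math.pi * cutoff / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = (1 - cosw) / 2, 1 - cosw, (1 - cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def filter_out_ultrasound(samples, fs=48000, cutoff=12000, stages=3):
    """Pass audible content, strongly attenuate near-ultrasonic content."""
    b0, b1, b2, a1, a2 = lowpass_coeffs(fs, cutoff)
    out = list(samples)
    for _ in range(stages):              # cascade stages for a steeper roll-off
        x1 = x2 = y1 = y2 = 0.0
        y_all = []
        for x in out:
            y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            y_all.append(y)
            x2, x1, y2, y1 = x1, x, y1, y
        out = y_all
    return out
```

On a 48 kHz stream this passes a 1 kHz tone essentially unchanged while attenuating a 20 kHz beacon tone by more than 60 dB, and it costs only a handful of multiply-adds per sample, which matches the point that the filtering itself is not resource-demanding.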
Mic: And the second part is: so there is a
new market, maybe, for some companies
producing microphones and speakers that
explicitly block out ultrasounds, right?
Vasilios: Possibly, possibly. I'm not sure
if you can do this from the application
level. We developed patches for the
Android system. I think our first approach
back then was basically to try to build an
app to do that from the application side,
from userland, and I'm not sure if you
can; I actually doubt that on Android you
can filter out ultrasounds. But for the
browser we have our extension. It works on
Chrome, and you can easily use our code to
do the same thing on Firefox.
Mic: Thanks.
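The extension approach described above comes down to spotting, and then stripping, energy in the near-ultrasonic band before page scripts see the audio. As a rough pure-Python illustration of the detection half, here is the Goertzel algorithm, which measures signal power at a single target frequency; this is an assumed sketch, not the extension's actual code:

```python
import math

def goertzel_power(samples, fs, freq):
    """Normalized power of `samples` at the DFT bin nearest `freq`.
    A unit-amplitude sine at that frequency yields roughly 0.25."""
    n = len(samples)
    k = round(freq * n / fs)                 # nearest analysis bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:                        # single O(n) recurrence
        s1, s2 = x + coeff * s1 - s2, s1
    power = s2 * s2 + s1 * s1 - coeff * s1 * s2
    return power / (n * n)

def looks_like_beacon(frame, fs, freqs=(18000, 19000, 20000), thresh=0.01):
    """Flag a frame if any candidate beacon frequency carries real energy.
    The candidate frequencies and threshold are illustrative assumptions."""
    return any(goertzel_power(frame, fs, f) > thresh for f in freqs)
```

Scanning short frames (say, 10 ms at 48 kHz) this way is cheap enough to run continuously, and a frame that trips the detector can then be filtered or dropped before it reaches the page.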
Herald: The next question is from the
front right microphone.
Mic: Thank you for your talk. I have a
question about the attack requirements
against the whistleblower using Tor.
I'm curious, the attacker has access to
the app on the smartphone and also access
to the smartphone microphone. Wouldn't the
attacker then be able to just listen in on
the conversation of the whistleblower and
thereby identify him?
Vasilios: Yeah, absolutely, absolutely.
It's a major problem. The problem is that
they have access to the microphone. So
this is very real, and it's not going to
be resolved even if we had access only to
the ultrasound spectrum. What we're saying
is basically: even if we only had access
to the ultrasound spectrum, you're still
vulnerable to these attacks unless you
incorporate some crypto mechanisms that
prevent these things from happening. Does
that answer your question?
Mic: Um, well, I can still pull off the
same attack if I don't use ultrasound,
right?
Vasilios: Through the audible spectrum?
Mic: Yes.
Vasilios: You absolutely can. There is one
company doing tracking in the audible
spectrum. This is much harder to mitigate.
We're looking into ways to address it, but
there are so many ways to incorporate
beacons in the audible spectrum. The thing
is that there is not much of an ecosystem
in this area right now, so you don't have
as many frameworks out there as you have
for ultrasounds.
Mic: Thank you.
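For context, the beacons at issue here, whether ultrasonic or audible, are typically naive encodings that map each symbol to its own carrier tone. A hypothetical sketch; the 18 kHz base frequency, 75 Hz spacing, and 0.7 s symbol length are invented for illustration and match no vendor's actual format:

```python
import math

BASE_HZ = 18000.0     # assumed start of the near-ultrasonic band
STEP_HZ = 75.0        # assumed spacing between symbol frequencies
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def char_to_freq(c):
    """Each character gets its own carrier; 'z' lands at 19,875 Hz."""
    return BASE_HZ + ALPHABET.index(c) * STEP_HZ

def encode_beacon(message, fs=48000, symbol_secs=0.7):
    """Render a message as back-to-back single-frequency tone bursts."""
    samples = []
    n = int(symbol_secs * fs)
    for c in message:
        f = char_to_freq(c)
        samples.extend(math.sin(2 * math.pi * f * i / fs) for i in range(n))
    return samples
```

At 0.7 s per symbol, a four-second window carries five to six characters, roughly the data rate mentioned later in the Q&A for such naive schemes; smarter modulation (several simultaneous tones, shorter symbols with error correction) can do better at the cost of robustness.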
Herald: Our next question will be from
the Internet via our signal angel
Signal Angel: $Username is asking, have
you heard about exploiting parasitic
ultrasound emitters, like IC components?
Vasilios: Can you please
repeat the question?
Signal Angel: Yes, sure. The question is,
can you use other components on the main
board or maybe the hard disk to emit
ultrasounds and then broadcast the beacon
via this?
Vasilios: Uh, so that's a very good
question. The answer is: I don't know,
possibly, and it's very scary. Hopefully
not, but I doubt it; I think there should
be a way to do it. Maybe the problem is
that you cannot do this in a completely
inaudible way: you may be able to emit
ultrasounds, but you will also emit some
sort of sound in the audible spectrum, so
the user will know that something is going
on.
Herald: The next question
from the left microphone.
Mic: Thank you for your talk, and
especially thanks for the research. So, do
you know of any frameworks or SDKs that
cache the beacons they find? Because for
my use case, my phone is mostly offline; I
only take it online when I have to check
something, so I'm not that concerned. But
do you know if they cache the beacons and
submit them later, something like this?
Vasilios: Of course they do.
Mic: I'm not surprised, unfortunately.
Yeah, thanks.
Herald: Next question from the rear
right microphone.
Mic: Oh, what is the data rate you can
send in the ultrasound?
Vasilios: Very good question, and it's
totally relevant to the cryptographic
mechanisms we want to incorporate. From
our experiments: in four seconds you can
basically send five to six alphabet
characters. If you're willing to reduce
the range a lot, to less than seven
meters, you may be able to send more, but
the transmission is not very robust in
this sense. And these experiments were
done with the kind of naive encoding that
most of the companies are using, so if you
do the encoding in a smarter way, you can
possibly increase that.
Mic: And a small second part: what's the
energy consumption on the phone if that is
running all the time? Wouldn't I detect
that?
Vasilios: So it's not good. We saw that it
was draining the battery, and actually in
the comments (I don't know if I had that
comment here) some people were complaining
that they tried it and it was draining
their battery. There is an impact,
absolutely.
Mic: Amazon and Google Nest and all the
other products: aren't you more worried
about that? You know, the always-listening
thing from Google and Amazon; everyone is
coming up with something like that that's
always on.
Vasilios: So it's kind of strange, because
the users consent, but at the same time
they don't completely understand. So there
is a gray line there: you can say that the
users, OK, you consented to that app
starting up with your phone and listening
in the background, but at the same time
the users don't always have the best
understanding.
Mic: Thank you.
Herald: Next question from the front left
microphone
first.
Mic: Thank you for the talk. I would be
interested in how you selected your
real-world applications and how many you
found that already use such a framework.
Vasilios: What was the first part of the
question?
Mic: How you selected your real-world
applications from the marketplace, if you
had any.
Vasilios: So we tried to do a systematic
scan of the whole market, but it's not
easy, so we were not able to do that.
There are resources on the Internet;
luckily, the companies need to advertise
their product, so they basically publish
press releases saying, you know, "this NBA
team started using our product." We did
some sort of scanning through alternative
datasets, but we definitely don't have an
exhaustive list of applications. What I
can say, though, is that there are
applications using such frameworks with,
if I remember correctly, up to one million
installations. There was one notable
example; OK, I'm not entirely sure which
one I wanted to mention, but up to a
million we definitely saw.
Mic: OK, thanks.
Herald: Do we have more questions
from the Internet?
Signal Angel: Yes, E.F. is asking: are you
aware of any framework available by Google
or Apple? In other words, how do we know
that it's not, for instance, Siri or maybe
Alexa snitching on us?
Vasilios: We don't. I think that's a very
large discussion, right? It's the same
problem that these companies are having,
because, if I go back here, basically the
users are accusing them of eavesdropping.
From reverse engineering those frameworks,
we couldn't find any such activity, but
again, it's very hard to convince the
users that you are listening only to the
ultrasound spectrum. If you're accessing
the whole audible frequency range through
the microphone, you will always find
yourself in this position. So I guess it's
the same problem that Amazon has with
Alexa. But in this case, you can actually
solve it by constraining the spectrum that
you gain access to.
Herald: Next question from the front
left microphone, please.
Mic: Has anybody done an audible
demonstration of these beacons, by
transposing them down an octave or two? I
think it might be useful for your talk or
something like that.
Vasilios: So you mean a demo, but using
audible frequencies? Essentially, there is
this one company, but all of these
companies are being pretty secretive with
their technology. So they publish what's
needed for marketing purposes, like
accuracy figures sometimes, but very
limited technical details. Apart from
that, you have to get your hands on the
framework somehow and analyze it yourself.
So for this kind of overview of the
ecosystem, we had to do everything by
ourselves; the resources out there were
very limited.
Mic: Or recording it and transposing it
down and playing it back yourself, if you
know where a beacon is?
Vasilios: Possibly. I'm not entirely sure
you could. Yeah.
Herald: Another
question from our Signal Angel.
Signal Angel: Mestas is asking again:
would it be possible, even if you have a
low-pass filter, to use, for instance, the
aliasing effect to transmit the beacon via
ultrasound, but in a frequency range that
is accessible for the app? So it's
basically the question: can you somehow,
via aliasing of an ultrasound signal, make
a normal signal out of it?
Vasilios: Possibly, I don't know. I think
you are much more creative than I am, so
maybe I should add more bullet points on
this countermeasures slide here.
Apparently there are many more ways to do
this, possibly like hardware emissions.
This one sounds like a good idea, too.
Herald: So next
question from the rear right microphone.
Mic: I apologize if you explained this and
I didn't understand, but is sort of
drowning out the signals, like jamming,
just broadcasting white noise in that
spectrum, an effective countermeasure?
And, as a follow-up, if it is, would it
terrorize my dog?
Vasilios: So absolutely, it's effective. I
mean, the transmission works up to seven
meters, but as we said, it's fragile, so
you can do that; it's noise pollution,
though. And my dog, I don't think it was
happy. I did it for a very limited time; I
could see her ears moving, but I don't
think she would appreciate it if I had a
device at home doing this all the time.
Herald: Do we have any more questions from
the Internet?
Signal Angel: Yes, EULEX is asking: to
what extent could we use these for our own
needs? For example, people in repressive
situations, for example activists, could
use it to transmit secret encrypted
messages. Are there any efforts in this
area?
Vasilios: Yes, there are. People are
developing ultrasound modems; I think
there is even a tag on it. So I would say
yes. I'm not entirely sure about the
capabilities of this channel in terms of
bandwidth, but this is why we are not
advocating killing the technology, just
making it secure and knowing its
limitations. So you can do good stuff with
it, and this is what we want.
Herald: Next question from the rear right
microphone.
Mic: Yeah, I'm wondering if you could
transfer that technique from the
ultrasound range to the audible range, for
example by using audio watermarks; then
your permission thing with the ultrasound
permissions would be ineffective and you
could still track the user. How about
this?
Vasilios: Is it possible to use audio
watermarks in the audible spectrum? Yeah,
it's absolutely possible. Our
countermeasures are not effective against
this. It's just that, from our research,
there is only one company doing this. I
think technically it's a bit more
challenging to do that; instead, the
others are just emitting beacons in a very
basic way. So hopefully, as long as there
is a clear way to do it through
ultrasounds, they are not going to resort
to the audible spectrum. But our
countermeasures are not effective against
audible watermarks.
Herald: Yeah, thanks. Next question
from the front left microphone.
Mic: I've heard, and I don't think it's
very credible, but I've heard that there
were some experiments showing that sounds
in this spectrum can influence the mood of
humans. Is there any relevant information
about how ultrasounds could affect us?
Vasilios: Without being an expert in this
particular area, I've read similar
articles when I was looking into it. I can
tell you it's very annoying, especially if
you're listening to it through headphones:
you cannot really hear the sound, but you
can feel the pressure. So I don't know
what kind of medical condition you may
develop, but you won't be very sane
afterwards.
Herald: Do we have any
more questions?
Signal Angel: Yes, one further question:
would it be possible to use a jamming
solution to get rid of the signals?
Vasilios: Yes, but, you know, it's going
to result in noise pollution. If you are
paranoid about it, though, yes; and it's,
I think, a straightforward thing to do.
Herald: Any more questions? One more on
the front left
microphone.
Mic: You said that physical objects will
block the ultrasound. How solid do the
physical objects need to be? So, for
example, does my pocket block the
ultrasound and thus prevent my phone from
hearing the environment, and vice versa?
Vasilios: OK, well, that's a good
question. I don't think that clothing can
actually do that unless it's very thick.
Thin walls definitely block it. Thick
glass, I would say, reduces the
signal-to-noise ratio by a lot, but the
signal could go through it. You need
something quite solid, like concrete or
metal; I don't think it goes through
those.
Herald: So are there any more? Doesn't
look like it. Maybe, maybe
one more, sorry. Oh, good, Signal Angel,
go ahead.
Signal Angel: Kitty is asking: could you
name or compile a list of tracking
programs and apps?
Vasilios: So, that's a good question.
We're trying to make an exhaustive list
and to resolve this in a systematic way.
I've already listed the two main
frameworks. Three, actually: one is the
SilverPush one; there is another one
developed and used by Signal360; and then
there is the Lisnr one. These are very
popular. Each developer incorporates them
into their applications in different ways,
offering varying levels of transparency to
the users. So it's better if you start by
knowing what the frameworks are and then
try to find the applications using them,
because then you know what you're looking
for in the code, and you can develop some
queries enabling you to track which
applications are using them. What we
observed for SilverPush is basically that
after the company announced that they are
moving out of the US, because of the whole
backlash, and maybe even before that,
companies started to drop the framework.
So the older versions had the framework,
but they are not using it anymore.
Herald: I think that's it. Thank you very
much, Vasilios Mavroudis.
Subtitles created by c3subtitles.de
in the year 2021. Join, and help us!