
35C3 - Censored Planet: a Global Censorship Observatory

  • 0:00 - 0:19
    35C3 preroll music
  • 0:19 - 0:25
    Herald Angel: All right. It's my very big
    pleasure to introduce Roya Ensafi to you.
  • 0:25 - 0:31
    She's gonna talk about "Censored Planet: a
    Global Censorship Observatory". I'm
  • 0:31 - 0:36
    personally very interested in learning
    more about this project. Sounds like it's
  • 0:36 - 0:41
    gonna be very important. So please welcome
    Roya with a huge warm round of applause.
  • 0:41 - 0:43
    Thank you.
  • 0:43 - 0:49
    Applause
  • 0:49 - 0:56
    Roya: It's wonderful to finally make it to
CCC. I had planned joint talks with several of my
  • 0:56 - 1:00
    friends over the past years and the visa
    stuff never worked out. This year I
  • 1:00 - 1:06
    applied for a conference in August and the
    visa worked for coming to CCC. My name is
  • 1:06 - 1:11
Roya Ensafi and I'm a professor at the
    University of Michigan. My research
  • 1:11 - 1:18
    focuses on security and privacy with the
    goal of protecting users from adversarial
  • 1:18 - 1:28
networks. So basically I investigate
    network interference ...and somebody is
  • 1:28 - 1:56
    interfering right now. Damn it. What the
    heck. Cool, I'm good. Oh, no I'm not.
  • 1:56 - 2:08
laughter OK. In my lab we develop
techniques and systems to
• 2:08 - 2:14
detect network interference, often at
scale, and apply these frameworks and tools
• 2:14 - 2:20
to understand the behavior of
the actors that do the interference, and
• 2:20 - 2:25
use this understanding to come
up with defenses. Today I'm going to talk
  • 2:25 - 2:30
    about a project that is very dear to my
heart, one that I have spent six years
• 2:30 - 2:35
working on. In this talk I'm going
    to talk about censorship, internet
  • 2:35 - 2:41
    censorship. And by that I mean any action
    that prevents users' access to the
  • 2:41 - 2:49
    requested content. We have heard an
    alarming level of censorship happening all
  • 2:49 - 2:54
around the world. And while previously it was
only a few countries that were
• 2:54 - 3:01
capable of using deep packet inspection (DPI)
to tamper with user traffic, thanks to the
• 3:01 - 3:09
commercialization of these DPIs many
countries are now actually messing with users'
  • 3:09 - 3:17
data. From the first moment users
type CNN.com in their browsers, their
  • 3:17 - 3:22
    traffic is subject to some level of
    interference by different actors. First
  • 3:22 - 3:27
for example, there is the DNS query, where the
mapping between the domain and the IP
• 3:27 - 3:34
where the content lives can be manipulated:
the DNS answer can be a dead
• 3:34 - 3:41
IP where the content is not there. If the
DNS succeeds, then the user and the server
• 3:41 - 3:48
are going to establish a connection, a TCP
handshake, and that can be easily blocked.
  • 3:48 - 3:54
If that succeeds, then the user and the server
start actually sending the actual data back and
• 3:54 - 4:00
forth, and whether the traffic is encrypted
or not, there is enough clear text
• 4:00 - 4:06
that a DPI can detect a sensitive
keyword and send a reset packet to both
• 4:06 - 4:13
ends, basically shutting down the connection.
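A minimal client-side sketch of these three stages, assuming only Python's standard library, a hypothetical test domain and a hypothetical keyword; it observes from the client's own network, unlike the remote techniques described later in this talk.

```python
import socket

DOMAIN = "example.com"        # hypothetical test domain
KEYWORD_PATH = "/?q=keyword"  # hypothetical sensitive keyword sent in clear text

def check_dns(domain):
    """Stage 1: does the DNS lookup return an answer at all?
    (A poisoned answer may still 'succeed' but point at a dead IP;
    comparing against a trusted control resolver would catch that.)"""
    try:
        return socket.gethostbyname(domain), "ok"
    except socket.gaierror as exc:
        return None, f"DNS failed: {exc}"

def check_tcp(ip, port=80, timeout=5):
    """Stage 2: can we complete a TCP handshake with the server?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return "ok"
    except OSError as exc:
        return f"TCP handshake failed: {exc}"

def check_keyword(ip, host, path, timeout=5):
    """Stage 3: does a clear-text request containing a keyword get reset?"""
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    try:
        with socket.create_connection((ip, 80), timeout=timeout) as sock:
            sock.sendall(request.encode())
            data = sock.recv(4096)
            return "ok" if data else "connection closed without data"
    except ConnectionResetError:
        return "RST received (possible DPI keyword trigger)"
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    ip, dns_status = check_dns(DOMAIN)
    print("DNS:", ip, dns_status)
    if ip:
        print("TCP:", check_tcp(ip))
        print("Keyword:", check_keyword(ip, DOMAIN, KEYWORD_PATH))
```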
  • 4:13 - 4:18
Before I forget, let me emphasize that it's not just
governments and the policies they impose
• 4:18 - 4:25
on ISPs that lead to censorship.
Actually the server side, which provides the
• 4:25 - 4:31
data, is also blocking users, especially
if they are located in a region that
• 4:31 - 4:40
doesn't provide any revenue. We recently
investigated this issue of server-side blocking
• 4:40 - 4:49
in depth and provide more details about
what role CDNs actually play. Imagine
  • 4:49 - 4:57
    now we have how many users, how many ISPs,
    how many transit networks and how many
  • 4:57 - 5:03
websites, each of which is going to have
    their own policies of how to block users'
  • 5:03 - 5:10
access. Moreover, censorship changes from time
    to time, region to region and country to
  • 5:10 - 5:15
    country. And for that reason many
    researchers including me have been
  • 5:15 - 5:21
    interested in collecting data about
    censorship in a global way and
  • 5:21 - 5:30
    continuously. Well, I grew up under severe
censorship, be it by the university, the
• 5:30 - 5:35
government, or, more frustrating, the server
side. And I genuinely believe that
• 5:35 - 5:45
censorship takes away opportunities and
degrades human dignity. It is not just
  • 5:45 - 5:54
China, Bahrain, and Turkey that do internet
censorship. Actually, with DPIs becoming
• 5:54 - 6:02
cheaper and cheaper, many governments are
following their lead. As a result the
• 6:02 - 6:07
Internet is becoming more and more
balkanized and users around the world
• 6:07 - 6:10
are soon going to have very, very
different pictures of what this Internet
  • 6:10 - 6:16
    is. And we need to be able to collect the
    data and to be able to know what is being
  • 6:16 - 6:25
    censored, how it's being censored, where
    it's being censored and for how long. This
  • 6:25 - 6:33
    data then can be used to bring
    transparency and accountability to
  • 6:33 - 6:39
    governments or private companies that
    practice internet censorship. It can help
  • 6:39 - 6:44
us know where circumvention tools,
where the defenses, need to be deployed. It
• 6:44 - 6:49
can help us let users around the
world know what their governments are
• 6:49 - 6:59
up to, and, more importantly, provide valid and
good data for policymakers to come up
• 6:59 - 7:08
with good policies. Existing research
    already shows that if we can provide this
  • 7:08 - 7:18
    data to users they act by their own will
    to ensure Internet freedom. For many years
  • 7:18 - 7:23
    my goal has been to come up with a weather
    map, a censorship weather map where you
  • 7:23 - 7:27
    can actually see changes in censorship
    over time, how some countries are
  • 7:27 - 7:34
    different from others and do that for a
    continuous duration of time, and for all
  • 7:34 - 7:42
    over the world. Creating such a map was
impossible with the techniques, the Internet
• 7:42 - 7:47
measurement methods, that we had at that
time, and even with the common
• 7:47 - 7:54
techniques we use now. The common
method for measuring
• 7:54 - 7:59
internet censorship is to deploy
a piece of software, or give a customized
• 7:59 - 8:06
Raspberry Pi, to either a client or a
server, and based on that measure what's
• 8:06 - 8:13
happening between clients and servers.
    Well, this approach has a lot of
  • 8:13 - 8:18
    limitations. For example there are not
    that many volunteers around the whole
  • 8:18 - 8:25
world that are eager to download a
piece of software and run it. Second, the data
  • 8:25 - 8:33
    collected from this approach are often not
    continuous because the user's connection
  • 8:33 - 8:38
    can die for a variety of reasons or users
may lose interest in running the
  • 8:38 - 8:45
    software. And therefore we end up with
    sparse data where we cannot have a good
  • 8:45 - 8:53
    baseline for internet censorship studies.
Moreover, measuring domains that are sensitive
  • 8:53 - 9:00
often creates risks for local
collaborators and might end up with their
• 9:00 - 9:10
governments retaliating. These risks are
    not hypothetical. When the Arab Spring was
  • 9:10 - 9:17
    happening I was approached by many
    colleagues to recruit local friends and
  • 9:17 - 9:24
colleagues in the Middle East to
collect measurement data, at a time that
• 9:24 - 9:30
was very interesting for capturing the
behavior of the network, and most dangerous
• 9:30 - 9:36
for the locals and volunteers collecting
it. My painting actually expressed what
  • 9:36 - 9:44
I felt at the time. I just can't imagine
    asking people on the ground to help at
  • 9:44 - 9:55
    these times of unrest. In my opinion,
    conspiring to collect the data against the
  • 9:55 - 10:02
    government's interest can be seen as an
act of treason. And these governments are
• 10:02 - 10:12
often unpredictable. So it would have exposed
these volunteers to severe risk. While
  • 10:12 - 10:19
    no one has yet been arrested because of
    measuring internet censorship as far as we
  • 10:19 - 10:26
    know, and I don't know how we can know
    that on a global scale, I think the clouds
  • 10:26 - 10:34
are on the horizon. I'm still in awe of how the
Turkish government used its surveillance
• 10:34 - 10:42
data at the time of the coup and tracked down
and detained hundreds of users because
• 10:42 - 10:49
there was traffic between them and, by
chance, a messenger app that was used by the
• 10:49 - 10:57
coup organizers. These things happen.
    Before I continue, if you know OONI you
  • 10:57 - 11:08
    might ask how OONI prevents risk. Well,
with a great level of effort. And if you
  • 11:08 - 11:12
    don't know OONI, OONI is a global
    community of volunteers that collect data
  • 11:12 - 11:21
    about censorship around the world. Well,
    first and foremost they provide their
  • 11:21 - 11:28
volunteers with a very honest consent form,
    telling them that "hey, if you run this
  • 11:28 - 11:35
    software, anybody who is monitoring your
traffic knows what you're up to." They also
  • 11:35 - 11:39
    go out of their way to give freedom to
these volunteers to choose what websites
  • 11:39 - 11:46
they want to test, what data they want to
push. They establish great relationships
• 11:46 - 11:54
with local activist organizations in
the countries. Well, now that I've proved to
  • 11:54 - 11:59
    you guys that I am a supporter of OONI and
    I am actually friends with most of them; I
  • 11:59 - 12:05
    want to emphasize that I still believe
    that consistent and continuous and global
  • 12:05 - 12:12
    data about censorship requires a new
    approach that doesn't need volunteers'
  • 12:12 - 12:22
    help. I've become obsessed with solving
this problem. What if we could measure
• 12:22 - 12:29
whether a client anywhere around the
world can talk to a server, without being
• 12:29 - 12:36
close to that client; from somewhere like here,
from the University of Michigan; and see
• 12:36 - 12:42
whether the two hosts can talk to each
other, globally and remotely, off the
• 12:42 - 12:50
path. When I talked to people about
    this, honestly, everybody was like "you
  • 12:50 - 12:54
    don't know what you're talking about, it's
    really really challenging". Well, they
  • 12:54 - 13:01
    were right. The challenge is there, and
    I'm going to walk you through it. We have
  • 13:01 - 13:07
    at least 140 million IP addresses that
respond to our probe packets. This means they
• 13:07 - 13:16
speak to the world, and they blindly follow
the TCP/IP protocol. So the question
  • 13:16 - 13:24
    becomes: how can I leverage the subtle
    properties of TCP/IP to be able to detect
  • 13:24 - 13:36
    that two hosts can talk to each other?
    Well, Spooky Scan is a technique that Jed
  • 13:36 - 13:43
    Crandall from University of New Mexico and
    I developed that uses TCP/IP side channels
  • 13:43 - 13:50
    to be able to detect whether the two
    remote hosts can establish a TCP handshake
  • 13:50 - 13:57
    or not, and if not, in which direction the
    packets are being dropped. Off the path
  • 13:57 - 14:04
    and remotely. And I'm gonna start telling
    you how this works. First I have to cover
  • 14:04 - 14:11
    some background. So any connection that is
    based on TCP, one of the basic
  • 14:11 - 14:16
communication protocols we have, is that it
needs to establish a TCP handshake. So
  • 14:16 - 14:23
basically, you send a SYN, and
    in the packet you send, in the IP header,
  • 14:23 - 14:31
    you have a field called "identification
    IP_ID", and this field is used for
  • 14:31 - 14:37
fragmentation reasons, and I'm going to use
    this field a lot in the rest of the talk.
  • 14:37 - 14:42
After the other host receives a SYN, it is going
to send a SYN-ACK back, with another IP_ID
  • 14:42 - 14:48
    in it. And then, if I want to establish a
    connection I send ACK. Otherwise I send a
  • 14:48 - 14:56
    RESET (RST). Part of the protocol says
    that if you send a SYN-ACK packet to a
  • 14:56 - 15:01
    machine with a port open or closed, it's
    going to send you a RST, telling you "what
  • 15:01 - 15:05
    the heck you are sending me SYN-ACK, I
    didn't send you a SYN" and another part
  • 15:05 - 15:09
says: if you send a SYN packet to a
    machine with the port open, eager to
  • 15:09 - 15:14
    establish connection, it will send you a
    SYN-ACK. If you don't do anything, because
  • 15:14 - 15:20
    TCP/IP is reliable, it's going to send you
multiple SYN-ACKs. It depends on the operating
  • 15:20 - 15:30
    system, 3, 5, you name it. Spooky Scan
    requires some basic characteristics. For
  • 15:30 - 15:37
example, the clients, the vantage points
that we are interested in, should maintain a
  • 15:37 - 15:44
    global variable for the IP_ID. It means
    that, when they receive the packets and
  • 15:44 - 15:49
    they want to send a packet out, no matter
    who they're sending the packet to, this
  • 15:49 - 15:54
IP_ID is going to be a shared resource, as
in, it is going to be incremented by one. So by
  • 15:54 - 15:58
    just watching the IP_ID changes you can
see how noisy a machine is, how much
  • 15:58 - 16:04
    a machine is sending traffic out. A server
    should have a port open, let's say 80 or
  • 16:04 - 16:09
443, and be willing to establish a connection,
    and the measurement machine, me, should be
  • 16:09 - 16:15
able to spoof packets. That means sending
packets with a source IP different from
• 16:15 - 16:21
my own machine's. To be able to do that, you
need to talk to the upstream network and ask
  • 16:21 - 16:28
    them not to drop the packets. All of these
    requirements I could easily satisfy with a
  • 16:28 - 16:37
    little bit of effort. A Spooky Scan starts
with the measurement machine sending a SYN-ACK
  • 16:37 - 16:41
packet to one of these clients with a global
    IP_ID, at a time let's say the value is
  • 16:41 - 16:49
    7000. The client is going to send back a
    RST, following the protocol, revealing to
  • 16:49 - 16:54
me the value of its IP_ID. In the next
    step I'm going to send a spoofed SYN
  • 16:54 - 17:02
packet to a server using the client's IP. As a
    result, the SYN-ACK is going to be sent to
  • 17:02 - 17:06
    the client. Again, client is going to send
    a RST back, the IP_ID is going to be
  • 17:06 - 17:11
incremented by 1. The next time I query the IP_ID,
I'm going to see a jump of two. In a
  • 17:11 - 17:17
    noiseless model, I know that this machine
    talked to the server. If I query it again,
  • 17:17 - 17:25
    I won't see any jump. So, Delta 2, Delta
    1. Now imagine there is a firewall that
  • 17:25 - 17:33
    blocks the SYN-ACKs going from the server
    to the client. Well, it doesn't matter how
  • 17:33 - 17:37
    much of the traffic I send, it's not going
    to get there. It's not going to get there.
  • 17:37 - 17:44
    So the delta I see is 1, 1. In the third
    case when the packets are going to be
  • 17:44 - 17:50
    dropped from the client to the server:
    Well, my SYN-ACK gets there. The SYN-ACK
  • 17:50 - 17:55
    gets to the client, the client is going to
send the RST back, but it's not going to
  • 17:55 - 17:59
    get to the server. And so server thinks
    that a packet got dropped, so it's going
  • 17:59 - 18:07
to send multiple SYN-ACKs. And as a result
the RST count is going to be incremented even more. And
  • 18:07 - 18:14
    so what jump I would see is, let's say, 2,
    2. Let me put them all together. So you
  • 18:14 - 18:20
    have 3 cases. Blocking in this direction.
    No blocking and blocking in the other. And
  • 18:20 - 18:26
    you see different jumps or different
    deltas. So it's detectable. Yes, yes, in a
  • 18:26 - 18:32
    noiseless model. I know the clients talk
    to so many others and the IP_ID is going
  • 18:32 - 18:38
to change for a variety of
reasons. I call all of that noise. And
  • 18:38 - 18:43
    this is how we are going to deal with it.
    Well, intuitively thinking we can amplify
  • 18:43 - 18:48
    the signal. We can actually instead of
    sending one spoofed SYN packet we can send
  • 18:48 - 18:55
n. And for a variety of reasons packets
can get dropped, so we need to repeat this measurement.
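A rough sketch of one Spooky Scan round, assuming Scapy, root privileges, an upstream network that forwards spoofed packets, and hypothetical client and server addresses; the delta thresholds follow the idealised, noise-free reasoning above.

```python
# One idealised Spooky Scan round with Scapy (third-party package; needs root
# and an upstream network that does not filter spoofed packets).
from scapy.all import IP, TCP, send, sr1

CLIENT = "192.0.2.10"    # reflector with a global IP_ID counter (hypothetical)
SERVER = "203.0.113.80"  # site whose reachability we want to test (hypothetical)
N_SPOOFED = 5            # amplification: number of spoofed SYNs per round

def query_ipid(client):
    """Elicit a RST from the client with an unsolicited SYN-ACK and read its IP_ID."""
    reply = sr1(IP(dst=client) / TCP(sport=5555, dport=80, flags="SA"),
                timeout=2, verbose=False)
    return reply[IP].id if reply is not None else None

def spoof_syns(client, server, n):
    """Send n SYNs to the server that pretend to come from the client."""
    for _ in range(n):
        send(IP(src=client, dst=server) / TCP(sport=4444, dport=80, flags="S"),
             verbose=False)

def one_round():
    before = query_ipid(CLIENT)
    spoof_syns(CLIENT, SERVER, N_SPOOFED)
    after = query_ipid(CLIENT)
    if before is None or after is None:
        return "no answer from client"
    delta = (after - before) % 65536   # the IP_ID is a 16-bit counter that wraps
    # Noise-free interpretation (our own second probe accounts for +1):
    if delta <= 1:                     # client never saw the server's SYN-ACKs
        return "server-to-client direction blocked"
    if delta <= N_SPOOFED + 1:         # one RST per spoofed SYN-ACK got through
        return "no blocking observed"
    return "client-to-server direction blocked (server kept retransmitting SYN-ACKs)"

print(one_round())
```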
  • 18:55 - 19:04
So here is some data from a
    Spooky Scan where I used the following
  • 19:04 - 19:13
probing method. For 30 seconds I just
sent queries for the IP_ID. And then
  • 19:13 - 19:21
for another 30 seconds I also sent these 5
spoofed SYN packets. These are machines, or
  • 19:21 - 19:27
clients, in Azerbaijan, China and the United
States. And we wanted to check whether they
• 19:27 - 19:33
could reach the Tor relay that we had in
Sweden. You can see there are different
  • 19:33 - 19:40
jumps, or different level shifts, that you
observe in the second phase. And just by
  • 19:40 - 19:45
visually looking at it, or by using an auto-
regressive moving average (ARMA) model, you
  • 19:45 - 19:51
    can actually detect that. But there is an
    insight here, which is that not all the
  • 19:51 - 19:57
    clients have the same level of noise. And
for some of them, especially
  • 19:57 - 20:02
these, you could easily detect it after
five seconds of sending IP_ID queries and then
  • 20:02 - 20:11
    five seconds of spoofing. So in the
    follow-up work we tried to use this
  • 20:11 - 20:16
    insight, to be able to come up with a
    scalable and efficient technique to be
  • 20:16 - 20:25
    able to use it in a global way. And that
    technique is called "Augur". Well Augur
  • 20:25 - 20:33
    adopts this probing method. First, for four
    seconds it queries IP_ID, then in one
  • 20:33 - 20:42
second sends 10 spoofed SYN packets. Then we
look at the IP_ID acceleration, or second
  • 20:42 - 20:50
    derivative, and see whether we see a jump,
    a sudden jump at the time of perturbation,
  • 20:50 - 20:56
when we did the spoofing. How confident are we
that that jump is the result of our
• 20:56 - 21:02
own spoofed packets? Well, I'm not
    confident, run it again. I think so, run
  • 21:02 - 21:09
    it again, until you have a sufficient
    confidence. It turns out there is a
  • 21:09 - 21:15
    statistical analysis called "sequential
    hypothesis testing" that can be used to be
  • 21:15 - 21:23
    able to gradually improve our confidence
    about the case we're detecting. So I'm
  • 21:23 - 21:28
    going to give you a very, very rough
    overview of how this works. But for
  • 21:28 - 21:37
    sequential hypothesis testing we need to
    define a random variable. And we use
  • 21:37 - 21:43
    IP_ID-acceleration at the time of
perturbation, being 1 or 0, based on whether you
• 21:43 - 21:54
see a jump or not. We also need to calculate
    some empirical priors, known
  • 21:54 - 21:59
    probabilities. If you look at everything,
    what would be the probability that you see
  • 21:59 - 22:08
a jump when there is actually no blocking?
    And so on. After we put all this together
  • 22:08 - 22:16
    then we can formalize an algorithm
starting by running a trial. Update the
  • 22:16 - 22:21
    sequence of values for the random
    variables. Then check whether this
  • 22:21 - 22:27
    sequence of values belongs to the
distribution where the blocking happens
  • 22:27 - 22:33
    or not. What's the likelihood of that? If
    you're confident, if we reached the level
  • 22:33 - 22:39
that we are satisfied with, then we call it a
case.
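A toy version of that sequential test, with made-up priors and a simulated trial standing in for the real IP_ID-acceleration measurement; Augur derives its priors empirically and runs the trial against real probes.

```python
import math
import random

# Hypothetical priors (Augur estimates these empirically from control data):
P_JUMP_IF_BLOCKING = 0.8     # probability of an IP_ID-acceleration jump under blocking
P_JUMP_IF_NO_BLOCKING = 0.1  # probability of a jump from background noise alone

def run_trial():
    """One probe round: 1 if a jump in the IP_ID second derivative coincided
    with our spoofed burst, else 0. Simulated here for illustration."""
    return 1 if random.random() < 0.75 else 0

def sequential_test(threshold=math.log(100), max_trials=20):
    """Accumulate the log-likelihood ratio 'blocking vs. no blocking' until confident."""
    llr = 0.0
    for trial in range(1, max_trials + 1):
        observed_jump = run_trial()
        p_blocking = P_JUMP_IF_BLOCKING if observed_jump else 1 - P_JUMP_IF_BLOCKING
        p_clear = P_JUMP_IF_NO_BLOCKING if observed_jump else 1 - P_JUMP_IF_NO_BLOCKING
        llr += math.log(p_blocking / p_clear)
        if llr >= threshold:
            return f"blocking (decided after {trial} trials)"
        if llr <= -threshold:
            return f"no blocking (decided after {trial} trials)"
    return "undecided"

print(sequential_test())
```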
  • 22:39 - 22:48
So putting all this together, this is how Augur
works. We scan the whole IPv4 space and
find global IP_ID machines. And then we
  • 22:48 - 22:56
have some constraints: is it a stable
machine? Is it too noisy, or does it have noise
• 22:56 - 23:02
that we can deal with? We also need
to figure out what websites we are
• 23:02 - 23:09
interested in testing reachability towards,
and in which countries. So after we decide
• 23:09 - 23:18
all the inputs, we run a scheduler,
making sure that no client and server are
• 23:18 - 23:26
under measurement at the same time,
because they would mess up each other's detection.
  • 23:26 - 23:32
    And then we actually use our analysis to
    be able to call the case and summarize the
  • 23:32 - 23:39
    results. I started by saying that the
common methods have these limitations, for
• 23:39 - 23:45
example coverage, continuity and ethics.
Well, when it comes to coverage there are
• 23:45 - 23:53
more than 22 million global IP_ID
machines. These are Windows XP or its
• 23:53 - 24:03
predecessors, and FreeBSD machines, for
example. For comparison with the previous approach,
• 24:03 - 24:08
one successful project is RIPE Atlas,
and they have around 10,000 probes globally
  • 24:08 - 24:19
    deployed. When it comes to continuity we
    don't depend on the end user. So it's much
  • 24:19 - 24:29
    more reliable to use this. Well, by not
    asking volunteers to help we were already
  • 24:29 - 24:35
reducing the risk, because there are no
users conspiring against their governments
  • 24:35 - 24:43
to collect this data. But our approach is
also not zero risk. If you look, you have a
  • 24:43 - 24:50
different kind of risk here: the client
and server exchange SYN-ACKs and RSTs
• 24:50 - 24:56
without either of them giving consent. And
    we don't want to ask for consent. Because
  • 24:56 - 25:01
if we did, the dilemma would exist again. We would have to
go back, and it would be just the same as
  • 25:01 - 25:07
    asking volunteers. So, to deal with that
    and cope with that, to reduce the risk
  • 25:07 - 25:15
more, we don't use end-user IPs. We actually
go 2 hops back, to routers, which with high
• 25:15 - 25:22
probability are infrastructure
machines, and use those as vantage points.
  • 25:22 - 25:31
Even with this harsh constraint we still
have 53,000 global IP_ID routers. To test
  • 25:31 - 25:39
the framework and see whether Augur
works, we chose 2,000 of these global IP_ID
  • 25:39 - 25:45
machines, uniformly selected from all the
countries where we had vantage points. We
  • 25:45 - 25:53
selected websites from the Citizen Lab
test list. Citizen Lab is a research
• 25:53 - 25:58
organization at the University of Toronto where
they crowdsource lists of websites that are
• 25:58 - 26:03
potentially being blocked or potentially
sensitive. And then we used thousands of
• 26:03 - 26:10
websites from the Alexa top 10k. And then
we kept Augur running for 17 days and
  • 26:10 - 26:17
collected this data. One of the challenges
we had in validating Augur was:
  • 26:17 - 26:23
    So, what is the truth? What is the ground-
    truth? What would we see that makes sense?
  • 26:23 - 26:26
And this is the biggest and most
fundamental challenge for internet
• 26:26 - 26:34
censorship measurement anyway. But the first
    approach is leaning on intuition, which is
  • 26:34 - 26:40
    like no client should show blocking
    towards all the websites. No server should
  • 26:40 - 26:46
show blocking for the bulk of our clients. And
    if anything happens like that we just
  • 26:46 - 26:52
    trash it. And we should see more bias
towards the sensitive domains versus the
  • 26:52 - 27:02
    ones that are popular. And so on. And also
we hoped to replicate the anecdotes, the
  • 27:02 - 27:09
    reports out there. And we did all of
those. And that's how we validated Augur.
  • 27:09 - 27:18
So in the end Augur is a system that is
scalable, efficient and ethical, and can be
  • 27:18 - 27:25
    used to detect TCP/IP-blocking
    continuously. Yes I know that is just
  • 27:25 - 27:32
    TCP/IP. What about the other layers? Can
    we measure them remotely as well? Well,
  • 27:32 - 27:40
    let me focus on the DNS. You might ask: Is
    there a way that we can remotely detect
  • 27:40 - 27:47
    DNS poisoning or manipulation? Well let's
think out loud. From now on I'm gonna
  • 27:47 - 27:54
    give just the highlights of the papers we
worked on, for lack of time. Well, if we
  • 27:54 - 28:06
    scan the whole IPv4 we have a lot of open
    DNS resolvers, which means that they are
  • 28:06 - 28:15
    open to anybody sending a query to them to
    resolve. And these open DNS-resolvers can
  • 28:15 - 28:23
    be used as a vantage point. We can use
    open DNS-resolvers in different ISPs
  • 28:23 - 28:30
around the world to see whether the DNS
    queries are poisoned or not. Well, wait.
  • 28:30 - 28:35
    We need to make sure that they don't
    belong to the end user. So we come up with
  • 28:35 - 28:43
    a lot of checks to make sure that these
    open DNS-resolvers are organizational,
  • 28:43 - 28:51
    belonging to the ISP or infrastructure.
    After we do that then we start sending all
  • 28:51 - 28:58
    our queries to these, let's say, open DNS-
    resolvers in the ISP in Bahrain, for all
  • 28:58 - 29:04
the domains we're interested in. And capture
what we receive, what IPs we receive. The
  • 29:04 - 29:11
    challenge is then to detect what is the
    wrong answer. And so we have to come up
  • 29:11 - 29:20
    with a lot of heuristics. A set of
heuristics. For example: is the response that
• 29:20 - 29:29
we received equal to a reply we
got from our control measurements, where
• 29:29 - 29:36
we know the IP is not blocked or poisoned
and the content is there? Or we
  • 29:36 - 29:42
    can actually look at the IP that we
    received and see whether it has a valid
  • 29:42 - 29:51
HTTPS cert, with or without SNI, the
Server Name Indication, or something.
  • 29:51 - 29:56
And so on and so forth. So we came up with
lots of heuristics to detect wrong answers.
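A compressed sketch of two of those heuristics, assuming the third-party dnspython package, hypothetical resolver addresses, and a control answer from a resolver we trust; the real Satellite pipeline uses many more checks than this.

```python
import socket
import ssl

import dns.resolver  # third-party "dnspython" package (assumed available)

DOMAIN = "example.com"           # hypothetical test domain
OPEN_RESOLVER = "198.51.100.53"  # hypothetical organizational open resolver
CONTROL_RESOLVER = "8.8.8.8"     # resolver we trust for the control answer

def resolve_with(resolver_ip, domain):
    """Ask one specific resolver for the A records of a domain."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    resolver.lifetime = 5
    return sorted(record.address for record in resolver.resolve(domain, "A"))

def serves_valid_cert(ip, hostname, timeout=5):
    """Heuristic: does this IP present a TLS certificate valid for the domain (with SNI)?"""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((ip, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True   # handshake and hostname verification succeeded
    except (ssl.SSLError, OSError):
        return False

control_answer = resolve_with(CONTROL_RESOLVER, DOMAIN)
test_answer = resolve_with(OPEN_RESOLVER, DOMAIN)
if test_answer == control_answer:
    print("answer matches control: looks unpoisoned")
elif any(serves_valid_cert(ip, DOMAIN) for ip in test_answer):
    print("different IPs, but they serve a valid cert: likely a CDN, not poisoning")
else:
    print("answer differs and fails the cert check: possible DNS manipulation")
```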
  • 29:56 - 30:07
The results of all these efforts
ended up being a project called
  • 30:07 - 30:12
    "Satellite", which was started by Will
    Scott. I'm sure he is in the audience
  • 30:12 - 30:17
    somewhere. A great friend of mine and very
    good supporter of CensoredPlanet.
  • 30:17 - 30:24
Selflessly so. It has been a miracle that I
had the opportunity and fortune to meet
• 30:24 - 30:32
him. We have Satellite. Satellite automates
all the steps that I told you about. For this
• 30:32 - 30:37
work we use the science that was developed in both
of the earlier works. We call it Satellite because
• 30:37 - 30:46
of seniority, sticking with the original name. So
how much coverage does Satellite have? If you
  • 30:46 - 30:55
    scan IPv4 you end up with 4.2 million open
DNS resolvers, in every country and
• 30:55 - 31:01
territory. We actually
need to make sure the measurement is
• 31:01 - 31:09
ethical, and for that reason we put a harsh
condition. We say: let's only use the
  • 31:09 - 31:18
ones whose valid PTR record
follows this expression. Basically, let's
• 31:18 - 31:23
just use the open DNS resolvers that are
name servers, or at least whose PTR record
• 31:23 - 31:30
suggests that. This is a really harsh
    constraint. Actually, my students have
  • 31:30 - 31:34
    been adding more and more regular
expressions for the ones that we are sure
  • 31:34 - 31:43
they are organizational. But for now, just
being this harsh, we have 40,000 DNS resolvers
in almost 169 countries, I guess.
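A minimal version of that PTR-record filter; the regular expression here is illustrative, not the project's actual list.

```python
import re
import socket

# Illustrative pattern only: keep hosts whose reverse-DNS name looks like
# DNS infrastructure (name servers, resolvers, caches).
NAME_SERVER_RE = re.compile(
    r"(^|[.-])(ns\d*|nameserver|resolver|dns\d*|cache\d*)[.-]", re.IGNORECASE)

def looks_like_name_server(ip):
    """Return True if the IP has a PTR record that suggests a name server."""
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False                  # no PTR record: discard under the harsh rule
    return bool(NAME_SERVER_RE.search(ptr_name + "."))

candidates = ["198.51.100.53", "203.0.113.7"]   # hypothetical open resolvers
vantage_points = [ip for ip in candidates if looks_like_name_server(ip)]
print(vantage_points)
```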
  • 31:43 - 31:57
So censorship happens in other layers as
  • 31:57 - 32:01
well. How do we want to deal with that
remotely, with a remote side
• 32:01 - 32:13
channel? And, especially, what about
HTTP traffic, or disruption that can happen
• 32:13 - 32:30
to, you know, TLS? I hate water.
Oh no. Okay. So. scratching
• 32:30 - 32:38
noise It's well documented that many DPIs
    especially in the Great Firewall of China monitor
  • 32:38 - 32:44
the traffic, and when they see a keyword,
    a sensitive keyword like "Falun Gong".
  • 32:44 - 32:50
They act and drop the traffic or send a RST.
And as I mentioned earlier there is
• 32:50 - 32:57
enough clear text everywhere. Even in TLS
handshakes the SNI is in clear text. And for a
  • 32:57 - 33:04
    long time I was trying to come up with a
way of detecting application layer blocking using
  • 33:04 - 33:09
    this fancy side channel. Like, how can I
    detect that when the client and server
  • 33:09 - 33:15
    need to first establish a TCP handshake,
how can the side channel jump in and then
  • 33:15 - 33:23
detect the rest? We were lucky enough that
in the end we were pointed to a protocol called
  • 33:23 - 33:33
    "Echo". It's a protocol designed in 1983
    and it's for testing reasons, for the
  • 33:33 - 33:41
    debu..it is a debugging tool, basically.
    It's a predecessor to ping. And basically,
  • 33:41 - 33:50
    after you establish a TCP handshake to
port 7, whatever you send to the Echo server
  • 33:50 - 33:57
    on port 7 it's gonna echo it back. Now
think about it. How can we use Echo
  • 33:57 - 34:05
    servers to be able to detect application
layer blocking? Well, when there is no
• 34:05 - 34:08
interference, let's say I have an Echo server
    in the U.S. and a measurement machine in
  • 34:08 - 34:14
    the University of Michigan I establish a
    TCP handshake and I send a GET request
  • 34:14 - 34:19
to... using a censored keyword for
    example. It's gonna get back to me the
  • 34:19 - 34:28
same thing I sent. But now let's put in a
DPI that is gonna be triggered by it.
  • 34:28 - 34:37
    Well, for sure, either I'm going to
    receive a RST first or something else. So
  • 34:37 - 34:44
we can actually come up with an algorithm
    to be able to use Echo servers to detect
  • 34:44 - 34:48
disruptions at the application layer,
basically keyword blocking and URL blocking.
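A bare-bones version of that Echo probe, assuming a hypothetical Echo server address and a hypothetical sensitive payload; an innocuous control payload is sent first so that a failure can be attributed to the keyword rather than to the server.

```python
import socket

ECHO_SERVER = "192.0.2.7"  # hypothetical Echo server listening on port 7
CONTROL = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"        # innocuous payload
SENSITIVE = b"GET / HTTP/1.1\r\nHost: blocked.example\r\n\r\n"  # hypothetical keyword

def echo_probe(payload, timeout=5):
    """Send a payload to the Echo service and report whether it comes back intact."""
    try:
        with socket.create_connection((ECHO_SERVER, 7), timeout=timeout) as sock:
            sock.sendall(payload)
            data = b""
            while len(data) < len(payload):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
            return "echoed back" if data == payload else "reply truncated or altered"
    except ConnectionResetError:
        return "RST received (DPI likely triggered)"
    except socket.timeout:
        return "timed out (traffic possibly dropped)"
    except OSError as exc:
        return f"error: {exc}"

print("control  :", echo_probe(CONTROL))
print("sensitive:", echo_probe(SENSITIVE))
```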
  • 34:48 - 34:59
The result of all this is a tool called
Quack. And Quack actually uses Echo
  • 34:59 - 35:06
servers to detect, in a scalable
way, whether keywords are
• 35:06 - 35:14
being blocked around the world. So what
we did first was scan the whole IPv4. We
  • 35:14 - 35:23
found 47k Echo servers running around the
world. Then we needed to check
  • 35:23 - 35:27
whether or not they belong to end
    users. And that was a very challenging
  • 35:27 - 35:37
part, because there is not a clear signal;
90 percent of them are
  • 35:37 - 35:41
    infrastructure but there is still some
    portion of them that we don't know. So
  • 35:41 - 35:47
what we do is look at the Freedom House
    reports and the countries that are
  • 35:47 - 35:53
partially open or not open, "not free" or
"partly free" as they're called. This
  • 35:53 - 35:59
    is around 50... This is around 50
    countries. And for those we use... we
  • 35:59 - 36:05
    randomly select some that we want and we
use the OS detection of Nmap. And if you run it,
  • 36:05 - 36:16
it will tell us whether it's a server, it's a
switch, and so on. We use those.
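A small sketch of that filtering step, shelling out to the Nmap command-line tool's OS detection (this assumes Nmap is installed and run with root privileges); the keyword list for infrastructure-looking devices is illustrative.

```python
import re
import subprocess

# Illustrative hints only: words we treat as signs of infrastructure gear.
INFRA_HINTS = re.compile(r"router|switch|firewall|load balancer|general purpose",
                         re.IGNORECASE)

def nmap_os_lines(ip):
    """Run `nmap -O` against one host and keep the lines describing the device."""
    output = subprocess.run(["nmap", "-O", "-n", ip],
                            capture_output=True, text=True, timeout=300).stdout
    return [line for line in output.splitlines()
            if line.startswith(("Device type:", "Running:", "OS details:"))]

def looks_like_infrastructure(ip):
    return any(INFRA_HINTS.search(line) for line in nmap_os_lines(ip))

candidates = ["203.0.113.7"]   # hypothetical Echo servers in "not free" countries
keep = [ip for ip in candidates if looks_like_infrastructure(ip)]
print(keep)
```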
  • 36:16 - 36:23
So, with the help of so many collaborators, after
almost six years we ended up with three
  • 36:23 - 36:32
    systems that can capture TCP/IP blocking,
    DNS, and application layer blocking using
  • 36:32 - 36:43
    infrastructure and organizational
machines. So while it was a dream
  • 36:43 - 36:48
or a vision that we could come up with a
    better map to collect this data in a
  • 36:48 - 36:56
continuous way, thanks to the help of a lot of
    people especially my students, Will, and
  • 36:56 - 37:02
    other collaborators we now have
    CensoredPlanet. CensoredPlanet collects
  • 37:02 - 37:09
    semi-weekly snapshots of Internet
censorship using our vantage points at all
  • 37:09 - 37:18
the layers and provides this data in raw
format now on our website. We also
  • 37:18 - 37:25
provide some visualizations for people
    to be able to see how many vantage points
  • 37:25 - 37:30
    we have in each country and so on. Of
    course, this is the beginning of
  • 37:30 - 37:34
CensoredPlanet. We launched this in August
    and we have been collecting data for
  • 37:34 - 37:40
    almost four months and we have a long way
    to go. We have users right now through
  • 37:40 - 37:45
    organizations using our data and helping
us debug by finding things that don't
• 37:45 - 37:52
make sense and pointing them out to us. Any of you
that end up using this data, please
• 37:52 - 37:57
share your feedback with us, and we are
very responsive and able to change it,
• 37:57 - 38:04
if maybe not as much as you need. We have a
collective of very dedicated people
  • 38:04 - 38:11
    participating. So, now that we have this
CensoredPlanet, let me show you how it can
  • 38:11 - 38:19
    help when there is a political situation
going on. You all must remember, around
• 38:19 - 38:25
October, Jamal Khashoggi, a
    Washington Post reporter, disappeared,
  • 38:25 - 38:35
killed at the Saudi consulate in
    Turkey. At the time of this happening
  • 38:35 - 38:41
    there was a lot of media attention and
this, this news, especially two weeks in,
• 38:41 - 38:47
became very internationally spread.
    CensoredPlanet didn't know this event was
  • 38:47 - 38:53
    going to happen. So we have been
collecting this data semi-weekly for 2,000
• 38:53 - 38:58
domains or so. And so we went back and we
checked Saudi Arabia. Did we see
  • 38:58 - 39:05
    anything interesting? And yes, we saw for
    example at two weeks in, around October
  • 39:05 - 39:13
16, the domains that were in the news
category and media category, the
• 39:13 - 39:18
censorship related to those doubled. And
    let me emphasize, we didn't see like a
  • 39:18 - 39:23
block-or-not-block result over the whole country;
not all countries have homogeneous
• 39:23 - 39:28
censorship happening. We saw it in
multiple of the ISPs where we had vantage
  • 39:28 - 39:35
points. Actually I freaked out when one of
    the activists in Saudi Arabia told us that
  • 39:35 - 39:42
    "I don't see this". And we said "What ISP
    you are in?" And this wasn't the ISPs that
  • 39:42 - 39:49
    we had vantage point in. So we were
    looking for hints that "Is anybody else
  • 39:49 - 39:56
    seeing what we were seeing?". And so we
    ended up seeing there was a commander
  • 39:56 - 40:04
    lab project that also saw around October
    16 the number of malwares or whatever they
  • 40:04 - 40:10
    are testing is also doubled or tripled. I
    don't know the other. So something was
  • 40:10 - 40:17
    going on two weeks in when the news broke.
Let me emphasize, the news media that I am
• 40:17 - 40:22
talking about are the global news media
that we check, like the L.A. Times, Fox News
  • 40:22 - 40:31
    and so on. But we also checked Arab News
which, as the activists told us, is a
• 40:31 - 40:38
Saudi propaganda newspaper. That,
in one of the ISPs, was being poisoned. So
  • 40:38 - 40:50
again, censorship measurement is a very
complex problem. So where are we heading?
  • 40:50 - 40:56
    Well, having said that about side channels
    and the techniques that help us remotely
  • 40:56 - 41:02
    collect this data I have to also say that
the data we collect doesn't replicate the full
• 41:02 - 41:07
picture of internet censorship. I mean,
having root access on a volunteer's
  • 41:07 - 41:18
    machine to do a detailed test is powerful.
    So in the next step, in the next year, one
  • 41:18 - 41:28
of our goals is to join forces with OONI to
integrate the data from remote and
  • 41:28 - 41:38
    basically local measurements to provide
    the best of both worlds. Also, we have
  • 41:38 - 41:44
been thinking a lot about what would be
good visualization tools that don't end
• 41:44 - 41:51
up misrepresenting internet censorship. I
    literally hate that one. Hate it. The
  • 41:51 - 41:57
number of vantage points in each country is
not equal. We don't know whether all the
  • 41:57 - 42:01
vantage points that the data results
from are in one ISP or in all of the
  • 42:01 - 42:08
    ISPs. And then we test domains that are
like benign and, I don't know, defined
  • 42:08 - 42:14
based on some Western values of
    freedom of expression. I believe in all of
  • 42:14 - 42:19
them, but still, culture and economy might play
a role there. And then we put colors on
  • 42:19 - 42:25
    the map, rank the countries, call some
countries awful, and not give full
  • 42:25 - 42:31
    attention to the others. So something
needs to be changed, and it's on our
• 42:31 - 42:38
horizon too, to think about it more deeply.
We want to be able to have more statistical
  • 42:38 - 42:44
    tools to be able to spot when the patterns
    change. We want to be able to compare the
  • 42:44 - 42:50
    countries when for example Telegram was
being blocked in Russia. If you remember,
  • 42:50 - 42:55
    millions of IPs being blocked. If you
don't know, go to my friend Leonid's talk
  • 42:55 - 43:00
    about Russia. You're going to learn a lot
there. But anyway. So when Russia was
  • 43:00 - 43:07
    blocking Telegram, I said to everyone I
bet in the following weeks some other
  • 43:07 - 43:10
    governments are going to jump to block
    Telegram as well. And that's actually what
  • 43:10 - 43:15
    we heard, rumors like that. So we need to
    be able to do that automatically. And
  • 43:15 - 43:26
    overall, I want to be able to develop an
    empirical science of internet censorship
  • 43:26 - 43:37
    based on rich data with the help of all of
    you. CensoredPlanet is now being
  • 43:37 - 43:43
    maintained by a group of dedicated
    students, great friends that I have and
  • 43:43 - 43:50
    needs engineers and political scientists
    to jump on our data and help us to bring
  • 43:50 - 43:57
    meaning to what we are collecting. So if
    you are a good engineer or a political
  • 43:57 - 44:07
    scientist or a dedicated person who wants
to change the world, reach out to me.
  • 44:07 - 44:12
As a reference for those of you
    interested: these are the publications
  • 44:12 - 44:20
    that my talk was based on.
    And now I am open to questions.
  • 44:20 - 44:26
    applause
  • 44:26 - 44:31
Herald: Alright, perfect. Thank you so
    much, Roya, so far. We have some time for
  • 44:31 - 44:36
    questions so if you have a question in the
    room please go to one of the room
  • 44:36 - 44:40
    microphones one, two, three, four, and
    five in the very back. And if you're
  • 44:40 - 44:44
    watching the stream you can ask questions
    to the signal angel via IRC or Twitter and
  • 44:44 - 44:49
    we'll also make sure to relay those to the
    speaker and make sure those get asked. So
  • 44:49 - 44:52
    let's just go ahead and
    start with Mic two please.
  • 44:52 - 44:57
    Question: Hey, great talk. Do you worry
    that by publishing your methods as well as
  • 44:57 - 45:03
    your data that you're going to get a
    response from governments that are
  • 45:03 - 45:06
    censoring things such that it makes it
    more difficult for you to monitor what's
  • 45:06 - 45:09
    being censored? Or has
    that already happened?
  • 45:09 - 45:15
    Roya: It hasn't happened. We have control
    measures to be able to detect that. But
  • 45:15 - 45:19
    that has been... it's a really good
    question and often comes up after I
  • 45:19 - 45:25
    present. I can tell you based on my
    experience it's really hard to synchronize
  • 45:25 - 45:31
all the ISPs in all the countries to react
to the SYN-ACKs and RSTs that I'm sending.
  • 45:31 - 45:36
Like, for example for Augur, these are
unsolicited packets, and for governments to
  • 45:36 - 45:42
block them there is going to be a lot of
    collateral damage. You might say that
  • 45:42 - 45:46
    well, Roya, they're going to block the IP
of the University of Michigan, the
spoofing machine. We have a measure for
    spoofing machine. We have a measure for
    that. I have multiple places that I
  • 45:51 - 45:56
    actually have a backup if that case
    happened. But overall this is a global
  • 45:56 - 46:03
    scale measurement, and even in one city or
like across multiple ISPs, you know, it's really
• 46:03 - 46:07
hard to synchronize blocking
something and maintaining it. So it is
• 46:07 - 46:14
something that's on our mind, that we are thinking
about. But as of now it's not a worry.
  • 46:14 - 46:16
    Herald: All right then let's
    go over to Mic one.
  • 46:16 - 46:21
    Question: Thank you. I wondered, it's kind
    of similar to this question. What if you
  • 46:21 - 46:25
    are measuring from a country that is
    blocking? Do you also distribute the
  • 46:25 - 46:30
    measurements over several countries?
    Roya: Absolutely. Every snapshot that we
  • 46:30 - 46:37
collect is from all the vantage points we
have in certain countries, and a portion
• 46:37 - 46:42
of the vantage points in, like, China or the US,
    because they have millions of vantage
  • 46:42 - 46:46
    points or like thousands of vantage
    points. So basically at each snapshot,
  • 46:46 - 46:52
    which takes us three days, we collect the
data from all of the vantage points.
  • 46:52 - 46:58
    And so let's say that somebody is reacting
to us. We have benign domains that we
  • 46:58 - 47:03
check as well, like for example the domain
    example.com or random.com. So if we see
  • 47:03 - 47:09
    something going on there we actually
    double check. But good point, because now
  • 47:09 - 47:15
our effort is very manual labor and we're
    trying to automate everything so it's
  • 47:15 - 47:19
    still a challenge. Thank you.
    Herald: All right then let's go to Mic
  • 47:19 - 47:23
    three.
    Question: Hi. Have you measured how much
  • 47:23 - 47:28
IP-ID randomization
breaks your probes?
  • 47:28 - 47:35
    Roya: Oh. This is also really good. Let me
    give a shout out to [name]. He's the guy
  • 47:35 - 47:46
who in 1998 discovered this IP-ID behavior, or published
    something that I ended up reading. So like
  • 47:46 - 47:54
for example, Linux or Ubuntu in their newer
versions randomized it, but there are still
• 47:54 - 47:59
these legacy operating systems like
Windows XP and its predecessors, and FreeBSD,
  • 47:59 - 48:05
    that still have global IP-ID. So one
argument that often comes up is, what if
  • 48:05 - 48:09
    all these machines get updated to the new
operating system, which doesn't
• 48:09 - 48:14
maintain a global IP-ID? And I can tell you
    that, well, we'll come up with another
  • 48:14 - 48:20
    side channel. For now, that works. But my
    gut feeling is that if it didn't change
  • 48:20 - 48:25
    from 1998 until now with all the things
    that everybody says that global IP-ID
  • 48:25 - 48:30
    variable is a horrible idea, it's not going
    to change in the coming five years so
  • 48:30 - 48:33
    we're good.
    Question: Thank you.
  • 48:33 - 48:37
    Herald: Okay, then let's just
    move on to Mic four.
  • 48:37 - 48:41
    Question: Thank you very much for the
    great talk. When you were introducing
  • 48:41 - 48:47
    Augur I was wondering, does the detection
of the blockage between client and server
  • 48:47 - 48:52
    necessarily indicate censorship? So,
    because you were talking about validating
  • 48:52 - 48:59
    Augur I was wondering if it turns out that
    there is like a false alarm. What do you
  • 48:59 - 49:05
    think could be the potential cause?
    Roya: You're absolutely right. And I tried
  • 49:05 - 49:12
to emphasize that what we end up
collecting can be seen as a disruption.
  • 49:12 - 49:17
    Something didn't work. The SYN-ACK or RST
got disrupted. Is it
• 49:17 - 49:22
censorship, or can it be a random packet
drop? And the way to establish
  • 49:22 - 49:28
that confidence is to
aggregate the results. Do we see this
  • 49:28 - 49:34
blocking at multiple of the routers
within that country, or within that AS?
  • 49:34 - 49:39
Because if one of them is an accident,
a result that just didn't make sense or just got
• 49:39 - 49:44
dropped, what about the others? So the
    whole idea and this is another point that
  • 49:44 - 49:50
I'm so, so concerned about: most of the
reports and anecdotes that we read are based
• 49:50 - 49:56
on one VPN or one single vantage point in the
country. And then there are a lot
• 49:56 - 50:01
of conclusions drawn out of that. And you often
    can ask that well this vantage point might
  • 50:01 - 50:06
be subject to so many things other
than a government's censorship. Also I
  • 50:06 - 50:12
    emphasized that the censorship that I use
    in this talk is any action that stops
  • 50:12 - 50:17
users' access to the requested
content. I'm trying to get away from
• 50:17 - 50:23
semantics where intention is implied.
    But great question.
  • 50:23 - 50:26
    Herald: All right, then let's go back to
    Mic one right.
  • 50:26 - 50:30
    Question: Hi Roya. You mentioned that you
    have a team of students working on all of
  • 50:30 - 50:34
    these frameworks. I was wondering if your
frameworks were open source or available
  • 50:34 - 50:38
    online for collaboration? And if so, where
    those resources would be?
  • 50:38 - 50:45
    Roya: So the data is open. The code hasn't
been. One reason is I have low
• 50:45 - 50:49
confidence in sharing code. Like, I'm
    friends with Philipp Winter, Dave Fifield.
  • 50:49 - 50:54
    These people are pro open source and they
constantly blame me for not doing it. But it really
  • 50:54 - 51:01
    requires confidence to share code. So we
    are working on that at least for Quack. I
  • 51:01 - 51:06
think the code can very easily be
shared. For Augur, we spent a heck of an amount
  • 51:06 - 51:12
of time to make production-ready code,
    and for Satellite I think that is also
  • 51:12 - 51:17
    ready. I can share them personally with
you, but before sharing it with the world I want
  • 51:17 - 51:22
to actually get another person to audit it
    and make sure we're not using a curse word
  • 51:22 - 51:26
    or something. I don't know. It's just
    completely my mind being a little bit
  • 51:26 - 51:31
conservative. But I'm happy, if you send me an
e-mail, to send you the code.
  • 51:31 - 51:36
    Question: Thank you.
    Herald: All right then move to Mic two.
  • 51:36 - 51:40
    Question: Thanks again for sharing your
    great vision. I find it really
  • 51:40 - 51:47
    fascinating. Also I'm not really a data
    scientist but my question is: did you find
  • 51:47 - 51:56
any usefulness in your approaches in
    the spreading of the Internet of Things? I
  • 51:56 - 52:07
    understood that you used routers to make
    queries but did you send and maybe receive
  • 52:07 - 52:11
    back any data from
    washing machines, toasters,...?
  • 52:11 - 52:17
    Roya: I mean, I know, being ethical and
trying to not use end-user machines limits
  • 52:17 - 52:23
your access a lot. But that's
    our goal. We are going to stick with
  • 52:23 - 52:28
    things that don't belong to the end users.
    And so it's all routers, organizational
  • 52:28 - 52:32
    machines. So I want to make sure that
whatever we're using belongs to an
• 52:32 - 52:35
entity that can protect itself if
something goes wrong. They can just say
  • 52:35 - 52:40
    "Hey this is a freaking router, it
    receives and sends so many things. I mean,
  • 52:40 - 52:45
look, let me show you a tcpdump,
for example." A volunteer might not be able
• 52:45 - 52:49
to defend that, because they're already
conspiring and collecting this data. But
  • 52:49 - 52:54
good question, I wish I could,
but I won't cross that line.
  • 52:54 - 52:57
    Herald: All right. I don't see any more
    questions in the room right now. But we
  • 52:57 - 53:01
    have one from the internet
    so please, signal angel.
  • 53:01 - 53:07
    Signal Angel: Yes. Actually a question
    from koli585: I was in an African
  • 53:07 - 53:10
    country where the internet has been
    completely shut down. How can I quickly
  • 53:10 - 53:15
    and safely inform others
    about the shut down?
  • 53:15 - 53:21
Roya: So while I think local users' voices
are highly, highly needed, and they can use
  • 53:21 - 53:28
    social media like Twitter to send and say
    whatever, there is a project called IODA.
  • 53:28 - 53:37
It's a project at CAIDA at UC San Diego in the
U.S., and Philipp Winter, Alberto
  • 53:37 - 53:43
    [Dainotti] and Alistair [King] are working
    on that. They basically remotely keep
  • 53:43 - 53:52
    track of shutdowns and push them out. If
    you look at the IODA on Twitter you can
  • 53:52 - 54:03
see their live feed of where
the shutdowns happen. So I haven't
  • 54:03 - 54:09
thought about how to reach the users,
    telling them what we see or how we can
  • 54:09 - 54:19
    incorporate the users' feedback. We are
    working with a group of researchers that
  • 54:19 - 54:27
    already developed tools to receive this
data from tweets and basically use that
  • 54:27 - 54:32
    as some level of ground truth, but OONI
    does such a great job that I haven't felt
  • 54:32 - 54:37
    a need.
    Herald: Alright. Unless the signal angel
  • 54:37 - 54:44
    has another question? No?
    Roya: And let me, can I add one thing? So
  • 54:44 - 54:53
    I was listening to a talk about how
Iranians versus Arabs were sympathetic
  • 54:53 - 55:01
towards the Boston bombing in the United States
    and there were a lot of assumptions and a
  • 55:01 - 55:06
    lot of conclusions were made that, oh
    this, I'm completely paraphrasing. I don't
  • 55:06 - 55:10
remember. But: these Iranians don't care
    because they didn't tweet as much. So
  • 55:10 - 55:17
    basically their input data was a bunch of
tweets around the time of the Boston bombing.
  • 55:17 - 55:22
    After the talk was over I said: you know
    that in this country Twitter has been
  • 55:22 - 55:29
    blocked and so many people couldn't tweet.
    applause
  • 55:29 - 55:33
    Herald: Alright. That concludes our Q&A,
    so thanks so much Roya.
  • 55:33 - 55:35
    Roya: Thank you.
  • 55:35 - 55:41
    applause
  • 55:41 - 55:46
    postroll music
  • 55:46 - 56:04
    Subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!