
Towards a more Trustworthy Tor Network

  • 0:00 - 0:08
    intro music
  • 0:08 - 0:13
    Herald: This is now
    "Towards a more trustworthy Tor network"
  • 0:13 - 0:15
    by Nusenu
  • 0:15 - 0:22
    The talk will give examples of malicious
    relay groups and current issues, and how to
  • 0:22 - 0:28
    tackle those to empower Tor users for
    self-defense, so they don't necessarily
  • 0:28 - 0:32
    need to rely on the detection and removal
    of those groups.
  • 0:32 - 0:36
    So without further ado, enjoy!
  • 0:36 - 0:41
    And we will see each other
    for Q&A afterwards.
  • 0:46 - 0:50
    Nusenu: Thanks for inviting me to give a
    talk about something I deeply care about:
  • 0:50 - 0:52
    The Tor network.
  • 0:52 - 0:54
    The Tor network
    is a crucial privacy infrastructure,
  • 0:54 - 0:57
    without which,
    we could not use Tor Browser.
  • 0:57 - 1:02
    I like to uncover malicious Tor relays
    to help protect Tor users.
  • 1:02 - 1:07
    But since that does not come without
    personal risks, I'm taking steps
  • 1:07 - 1:10
    to protect myself from those
    running those malicious nodes,
  • 1:10 - 1:13
    so I can continue to fight them.
  • 1:13 - 1:18
    For this reason, this is a prerecorded
    talk without using my own voice.
  • 1:18 - 1:21
    Thanks to the people behind the scenes
  • 1:21 - 1:25
    who made it possible to
    present this talk in a safe way.
  • 1:27 - 1:29
    A few words about me.
  • 1:29 - 1:32
    I have a long-standing interest
    in the state of the Tor network.
  • 1:32 - 1:38
    In 2015, I started OrNetRadar,
    which is a public mailing list and
  • 1:38 - 1:43
    website showing reports about new
    relay groups and possible Sybil attacks.
  • 1:43 - 1:50
    In 2017, I was asked to join the private
    bad-relays Tor Project mailing list
  • 1:50 - 1:55
    to help analyze and confirm reports
    about malicious relays.
  • 1:56 - 2:01
    To get a better understanding of who runs
    what fraction of the Tor network over time
  • 2:01 - 2:08
    I started OrNetStats. It shows you also
    which operators could de-anonymize Tor
  • 2:08 - 2:13
    users because they are in a position
    to perform end-to-end correlation attacks,
  • 2:13 - 2:15
    something we will describe later.
  • 2:15 - 2:20
    I'm also the maintainer of
    ansible-relayor, which is an Ansible role
  • 2:20 - 2:23
    used by many large relay operators.
  • 2:23 - 2:28
    Out of curiosity, I also like
    engaging in some limited open-source
  • 2:28 - 2:32
    intelligence gathering on malicious
    Tor network actors, especially when
  • 2:32 - 2:36
    their motivation for running relays
    has not been well understood.
  • 2:37 - 2:40
    To avoid confusion
    with regard to the Tor Project:
  • 2:40 - 2:45
    I am not employed by the Tor Project
    and I do not speak for the Tor Project.
  • 2:48 - 2:53
    In this presentation, we will go through
    some examples of malicious actors on
  • 2:53 - 2:59
    the Tor network. They basically represent
    our problem statement that motivates us to
  • 2:59 - 3:04
    improve the "status quo". After describing
    some issues with current approaches to
  • 3:04 - 3:09
    fight malicious relays, we present a new,
    additional approach aiming at achieving a
  • 3:09 - 3:14
    safer Tor experience using trusted relays
    to some extent.
  • 3:15 - 3:18
    The primary target audience
    of this presentation are:
  • 3:18 - 3:25
    Tor users, like Tor Browser users,
    relay operators,
  • 3:25 - 3:29
    onion service operators
    like, for example, SecureDrop
  • 3:29 - 3:33
    and anyone else that cares about Tor.
  • 3:36 - 3:41
    To get everyone on the same page,
    a quick refresher on how Tor works
  • 3:41 - 3:45
    and what type of relays – also
    called nodes – there are.
  • 3:45 - 3:50
    When Alice uses Tor Browser
    to visit Bob's website,
  • 3:50 - 3:56
    her Tor client selects three Tor relays
    to construct a circuit that will be used
  • 3:56 - 4:00
    to route her traffic through the
    Tor network before it reaches Bob.
  • 4:00 - 4:04
    This gives Alice location anonymity.
  • 4:04 - 4:09
    The first relay in such a circuit
    is called an entry guard relay.
  • 4:09 - 4:15
    This relay is the only relay seeing
    Alice's real IP address and is therefore
  • 4:15 - 4:21
    considered a more sensitive type of relay.
    The guard relay does not learn that Alice
  • 4:21 - 4:26
    is connecting to Bob, though. It
    only sees the next relay as destination.
  • 4:26 - 4:31
    Guard relays are not changed frequently,
    and Alice's Tor client waits up to 12
  • 4:31 - 4:36
    weeks before choosing a new guard
    to make some attacks less effective.
  • 4:36 - 4:42
    The second relay is called a middle
    or middle-only relay. This relay
  • 4:42 - 4:47
    is in the least sensitive position, since it
    only sees other relays, but does not know
  • 4:47 - 4:51
    anything about Alice or Bob because it
    just forwards encrypted traffic.
  • 4:52 - 4:56
    And,
    the final relay is called an exit relay.
  • 4:56 - 5:01
    The exit relay gets to learn the
    destination, Bob, but does not know
  • 5:01 - 5:06
    who is connecting to Bob.
    The exit relay is also considered
  • 5:06 - 5:11
    a more sensitive relay type, since it
    potentially gets to see and manipulate
  • 5:11 - 5:20
    cleartext traffic (if Alice is not using
    an encrypted protocol like HTTPS).
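The layered encryption behind these three roles can be sketched conceptually. The toy Python example below is not Tor's real cryptography (Tor negotiates per-hop symmetric keys over TLS with proper handshakes); it only illustrates why each relay learns so little: every hop can peel exactly one layer.

```python
# Conceptual sketch of onion layering: the client wraps the message in
# three layers, one per relay; each relay peels exactly one layer and
# learns only the next hop. Toy XOR "encryption", illustration only.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, hops: list) -> bytes:
    # Apply layers innermost-first: exit layer first, guard layer last.
    payload = message
    for _name, key in reversed(hops):
        payload = xor_bytes(payload, key)
    return payload

def unwrap_one(payload: bytes, key: bytes) -> bytes:
    return xor_bytes(payload, key)

hops = [("guard", b"key-guard"), ("middle", b"key-middle"), ("exit", b"key-exit")]
cell = wrap(b"GET / HTTP/1.1", hops)
for name, key in hops:          # each relay peels exactly one layer
    cell = unwrap_one(cell, key)
print(cell)  # b'GET / HTTP/1.1' emerges only after the exit's layer is removed
```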
  • 5:20 - 5:26
    Although exit relays see the destination,
    they can not link all sites Alice visits
  • 5:26 - 5:32
    at a given point in time to the same Tor
    client, to profile her, because Alice's
  • 5:32 - 5:36
    Tor Browser instructs the Tor client to
    create and use distinct circuits for
  • 5:36 - 5:43
    distinct URL bar domains. So, although
    this diagram shows a single circuit only,
  • 5:43 - 5:49
    a Tor client usually has multiple open Tor
    circuits at the same time. In networks
  • 5:49 - 5:56
    where Tor is censored, users make use of a
    special node type, which is called Bridge.
  • 5:56 - 6:02
    Their primary difference is that they are
    not included in the public list of relays,
  • 6:02 - 6:07
    to make it harder to censor them. Alice
    has to manually configure Tor Browser if
  • 6:07 - 6:13
    she wants to use a bridge. For redundancy,
    it is good to have more than one bridge in
  • 6:13 - 6:16
    case a bridge goes down or gets censored.
  • 6:16 - 6:22
    The bridge used also gets to see Alice's
    real IP address, but not the destination.
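For illustration, a manual bridge setup in a torrc file might look like the following; the addresses and fingerprints are placeholders, not real bridges:

```
# torrc: use bridges instead of publicly listed guard relays
UseBridges 1
# More than one bridge for redundancy; address:port and fingerprints
# below are placeholders only.
Bridge 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567
Bridge 198.51.100.7:9001 89ABCDEF0123456789ABCDEF0123456789ABCDEF
```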
  • 6:25 - 6:28
    Now that we have a basic
    understanding of Tor's design,
  • 6:28 - 6:31
    we might wonder,
    why do we need to trust the network,
  • 6:31 - 6:36
    when roles are distributed
    across multiple relays?
  • 6:36 - 6:39
    So let's look into some attack scenarios.
  • 6:41 - 6:45
    If an attacker controls
    Alice's guard and exit relay,
  • 6:45 - 6:47
    they can learn that Alice connected to Bob
  • 6:47 - 6:51
    by performing
    end-to-end correlation attacks.
  • 6:51 - 6:57
    Such attacks can be passive,
    meaning no traffic is manipulated
  • 6:57 - 7:02
    and therefore cannot be detected by
    probing suspect relays with test traffic.
  • 7:03 - 7:10
    OrNetStats gives you a daily updated list
    of operators potentially in such a position.
  • 7:10 - 7:15
    There are some restrictions a default
    Tor client follows when building circuits
  • 7:15 - 7:19
    to reduce the likelihood of this occurring.
  • 7:19 - 7:25
    For example, a Tor client does not use
    more than one relay in the same /16 IPv4
  • 7:25 - 7:31
    network block when building circuits. For
    example, Alice's Tor client would never
  • 7:31 - 7:36
    create this circuit, because the guard and
    exit relays are in the same net block,
  • 7:36 - 7:46
    192.0.0.0/16. For this reason, the number of
    distinct /16 network blocks an attacker
  • 7:46 - 7:52
    has distributed its relays across is relevant
    when evaluating this kind of risk.
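The /16 restriction can be illustrated with a short Python sketch. This is a simplified model of the rule, not Tor's actual path-selection code:

```python
# Simplified illustration of Tor's "no two relays from the same /16"
# circuit-building rule; not Tor's actual path-selection code.
import ipaddress

def same_slash16(ip_a: str, ip_b: str) -> bool:
    net_a = ipaddress.ip_network(f"{ip_a}/16", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

def circuit_allowed(relay_ips: list) -> bool:
    # Reject any candidate circuit with two relays in one /16 block.
    return not any(
        same_slash16(a, b)
        for i, a in enumerate(relay_ips)
        for b in relay_ips[i + 1:]
    )

# Guard and exit share 192.0.0.0/16 -> rejected, like in the example above.
print(circuit_allowed(["192.0.1.5", "203.0.113.9", "192.0.200.7"]))   # False
print(circuit_allowed(["192.0.1.5", "203.0.113.9", "198.51.100.4"]))  # True
```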
  • 7:52 - 7:59
    Honest relay operators declare their group
    of relays in the so-called "MyFamily"
  • 7:59 - 8:05
    setting. This way they are transparent
    about their set of relays and Tor clients
  • 8:05 - 8:09
    automatically avoid using more than a
    single relay from any given family in a
  • 8:09 - 8:15
    single circuit. Malicious actors will
    either not declare relay families or
  • 8:15 - 8:18
    pretend to be in more than one family.
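A hedged torrc example of the MyFamily setting; the fingerprints are placeholders, and the same line has to be configured on every relay of the group:

```
# torrc on each relay of the same operator (placeholder fingerprints):
MyFamily $0123456789ABCDEF0123456789ABCDEF01234567,$89ABCDEF0123456789ABCDEF0123456789ABCDEF
ContactInfo operator@example.com
```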
  • 8:20 - 8:25
    Another variant of the end-to-end
    correlation attack is possible
  • 8:25 - 8:29
    when Bob is the attacker or
    has been compromised by the attacker,
  • 8:29 - 8:35
    and the attacker also happens to run
    Alice's guard relay. In this case,
  • 8:35 - 8:41
    the attacker can also determine
    the actual source IP address used by Alice
  • 8:41 - 8:43
    when she visits Bob's website.
  • 8:45 - 8:50
    In cases of large, suspicious, non-exit
    relay groups, it is also plausible that
  • 8:50 - 8:54
    they are after onion services, because
    circuits for onion services do not require
  • 8:54 - 9:02
    exit relays. Onion services provide
    location anonymity to the server side.
  • 9:02 - 9:04
    By running many non-exits,
  • 9:04 - 9:11
    an attacker could aim at finding the real
    IP address / location of an onion service.
  • 9:13 - 9:18
    Manipulating exit relays is probably
    the most common attack type
  • 9:18 - 9:23
    detected in the wild. That is also
    the easiest-to-perform attack type.
  • 9:23 - 9:30
    Malicious exits usually do not care who
    Alice is or what her actual IP address is.
  • 9:30 - 9:35
    They are mainly interested in
    profiting from traffic manipulation.
  • 9:35 - 9:41
    This type of attack can be detected
    by probing exits with decoy traffic,
  • 9:41 - 9:45
    but since malicious exits moved
    to more targeted approaches
  • 9:45 - 9:50
    (specific domains only), detection
    is less trivial than one might think.
  • 9:51 - 9:56
    The best protection against this
    kind of attack is using encryption.
  • 9:56 - 10:01
    Malicious exit relays cannot harm
    connections going to onion services.
  • 10:03 - 10:07
    Now, let's look into
    two real-world examples
  • 10:07 - 10:10
    of large scale and persistent
    malicious actors on the Tor network.
  • 10:13 - 10:20
    The first example, tracked as BTCMITM20,
    is in the malicious-exit business and
  • 10:20 - 10:27
    performs SSL-stripping attacks on its exit
    relays to manipulate plaintext HTTP traffic,
  • 10:27 - 10:32
    like Bitcoin addresses,
    to divert Bitcoin transactions to them.
  • 10:32 - 10:38
    They were first detected
    in 2020, and had some pretty large relay
  • 10:38 - 10:44
    groups. On this graph, you can see how
    much of the Tor exit fraction was under
  • 10:44 - 10:50
    their control in the first half of 2020.
    The different colors represent different
  • 10:50 - 10:56
    contact infos they gave on their relays
    to pretend they are distinct groups.
  • 10:56 - 11:00
    The sharp drops show events when
    they were removed from the network,
  • 11:00 - 11:03
    before adding relays again.
  • 11:04 - 11:12
    In February 2021, they controlled over 27%
    of the Tor network's exit capacity,
  • 11:12 - 11:16
    despite multiple removal attempts
    over almost a year.
  • 11:17 - 11:23
    At some point in the future,
    we will hopefully have HTTPS-Only mode
  • 11:23 - 11:28
    enabled by default in Tor Browser
    to kill this entire attack vector for good
  • 11:28 - 11:32
    and make malicious exits less lucrative.
  • 11:32 - 11:37
    I encourage you to test
    HTTPS-Only mode in Tor Browser
  • 11:37 - 11:42
    and notify website operators
    that do not work in that mode.
  • 11:42 - 11:46
    If a website does not work
    in HTTPS-Only mode,
  • 11:46 - 11:50
    you also know it is probably
    not safe to use in the first place.
  • 11:52 - 11:57
    The second example actor,
    tracked as KAX17,
  • 11:57 - 12:03
    is still somewhat of a mystery. And
    that is not the best situation to be in.
  • 12:03 - 12:07
    They are remarkable for:
    their focus on non-exit relays,
  • 12:07 - 12:13
    their network diversity,
    with over 200 distinct /16 subnets,
  • 12:13 - 12:19
    their size – it is the first actor I know
    of that peaked at over 100 Gbit/s
  • 12:19 - 12:26
    advertised non-exit bandwidth – and
    they have been active for a very long time.
  • 12:27 - 12:32
    Let's have a look at some KAX17
    related events in the past two years.
  • 12:32 - 12:39
    I first detected and reported them
    to the Tor Project in September 2019.
  • 12:39 - 12:44
    In October 2019,
    KAX17 relays got removed
  • 12:44 - 12:47
    by the Tor directory
    authorities for the first time.
  • 12:50 - 12:54
    In December 2019,
    I published the first blog post about them.
  • 12:54 - 12:58
    At that point, they were already
    rebuilding their infrastructure
  • 12:58 - 13:01
    by adding new relays.
  • 13:02 - 13:08
    In February 2020, I contacted an email
    address that was used on some relays that
  • 13:08 - 13:13
    did not properly declare their relay group
    using the "MyFamily" setting. At the time,
  • 13:13 - 13:19
    they said they would run bridges instead,
    so they do not have to set MyFamily.
  • 13:19 - 13:24
    Side note:
    MyFamily is not supported for bridges.
  • 13:24 - 13:30
    I was not aware that this email address
    was linked to KAX17 until October 2021.
  • 13:31 - 13:38
    In the first half of 2020,
    I regularly reported large quantities of
  • 13:38 - 13:44
    relays to the Tor Project, and they got
    removed at high pace until June 2020,
  • 13:44 - 13:48
    when directory authorities changed their
    practices and stopped removing them
  • 13:48 - 13:54
    because they didn't want to "scare away"
    potential new relay operators.
  • 13:55 - 13:59
    In July 2020, an email address joined
    a tor-relays mailing list discussion
  • 13:59 - 14:07
    I started about a proposal to limit
    large-scale attacks on the network.
  • 14:07 - 14:13
    Now we know
    that email address is linked to KAX17.
  • 14:14 - 14:18
    Since the Tor directory authorities
    no longer removed the relay groups
  • 14:18 - 14:24
    showing up, I sent the information
    of over 600 KAX17 relays
  • 14:24 - 14:27
    to the public tor-talk mailing list.
  • 14:28 - 14:32
    In October 2021, someone who asked for
    anonymity reached out to me and provided a
  • 14:32 - 14:39
    new way to detect Tor relay groups that
    do not run the official Tor software.
  • 14:41 - 14:45
    Using this methodology,
    we were able to detect KAX17
  • 14:45 - 14:48
    using a second detection method.
  • 14:48 - 14:52
    This also apparently convinced
    the Tor directory authorities,
  • 14:52 - 14:57
    and in November this year,
    a major removal event took place.
  • 14:59 - 15:04
    Sadly, the time span during which
    KAX17 was running relays without
  • 15:04 - 15:11
    limitations was rather long.
    This motivated us to come up with a
  • 15:11 - 15:17
    design that avoids this kind of complete
    dependency on Tor directory authorities
  • 15:17 - 15:19
    when it comes to safety issues.
  • 15:20 - 15:23
    And, as you might guess,
  • 15:23 - 15:27
    KAX17 is already attempting
    to restore their foothold again.
  • 15:30 - 15:33
    Here are some KAX17 properties.
  • 15:33 - 15:40
    After the release of my second
    KAX17 blog post in November 2021,
  • 15:40 - 15:44
    the media was quick with using
    words like "nation-state" and
  • 15:44 - 15:47
    "Advanced Persistent Threat".
  • 15:47 - 15:53
    But I find it hard to believe such
    serious entities would be so sloppy.
  • 15:53 - 15:58
    Since they claim to work for an ISP
    in every other email…
  • 15:59 - 16:02
    I looked into their AS distribution.
  • 16:03 - 16:07
    I guess they work for more than one ISP.
  • 16:08 - 16:11
    This chart shows the Autonomous Systems used,
  • 16:11 - 16:16
    sorted by the number of unique IP addresses
    used at each hoster. So, for example,
  • 16:16 - 16:21
    they used more than 400 IP
    addresses at Microsoft to run relays.
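A chart like this can be produced by grouping relay IP addresses by autonomous system and counting unique IPs per AS. Below is a minimal sketch with made-up data; a real analysis would resolve ASNs from a BGP or GeoIP dataset rather than a hand-written mapping:

```python
# Minimal sketch: count unique relay IP addresses per autonomous
# system. The IP -> AS mapping and the data are made up for
# illustration; real tooling would resolve ASNs from a BGP/GeoIP
# dataset.
from collections import defaultdict

relay_ip_to_as = {                # hypothetical resolved mapping
    "203.0.113.5": "AS8075 Microsoft",
    "203.0.113.9": "AS8075 Microsoft",
    "198.51.100.7": "AS24940 Hetzner",
}

unique_ips_per_as = defaultdict(set)
for ip, asn in relay_ip_to_as.items():
    unique_ips_per_as[asn].add(ip)

# Rank autonomous systems by how many unique relay IPs they host.
ranking = sorted(unique_ips_per_as.items(), key=lambda kv: len(kv[1]), reverse=True)
for asn, ips in ranking:
    print(asn, len(ips))
```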
  • 16:21 - 16:27
    These are not exact numbers,
    since the data only covers relays since 2019,
  • 16:27 - 16:33
    and there are likely more.
    If we map their IP addresses
  • 16:33 - 16:39
    to countries, we get this. Do not take
    this map too seriously, as the GeoIP
  • 16:39 - 16:45
    database used was severely outdated, and such
    databases are never completely accurate,
  • 16:45 - 16:55
    but it gives us a rough idea. To be clear,
    I have no evidence that KAX17 is
  • 16:55 - 17:01
    performing any kind of attacks against Tor
    users, but in our threat model it is
  • 17:01 - 17:06
    already a considerable risk if even a
    benevolent operator is not declaring their
  • 17:06 - 17:12
    more than 800 relays as a family. Good
    protections should protect against
  • 17:12 - 17:18
    benevolent and malicious Sybil attacks
    equally. The strongest input factor for
  • 17:18 - 17:23
    the risk assessment of this actor is the
    fact that they do not run the official Tor
  • 17:23 - 17:29
    software on their relays. There are still
    many open questions, and the analysis into
  • 17:29 - 17:39
    KAX17 is ongoing. If you have any input,
    feel free to reach out to me. After
  • 17:39 - 17:44
    looking at some examples of malicious
    actors, I want to shortly summarize some
  • 17:44 - 17:52
    of the issues in how the malicious relays
    problem is currently approached. It is
  • 17:52 - 17:57
    pretty much like playing Whack-A-Mole. You
    hit them and they come back. You hit them
  • 17:57 - 18:03
    again, and they come back again, over and
    over and while you're at it, you're also
  • 18:03 - 18:09
    training them to come back stronger next
    time. Malicious actors can run relays
  • 18:09 - 18:15
    until they get caught/detected or are
    considered suspicious enough for removal
  • 18:15 - 18:21
    by the Tor directory authorities. If your
    threat model does not match the Tor
  • 18:21 - 18:25
    directory's threat model, you are out of
    luck or have to maintain your own
  • 18:25 - 18:31
    exclusion lists. Attempts to define a
    formal set of "do not do" requirements for
  • 18:31 - 18:37
    relays that Tor directory authorities
    commit to enforce, have failed, even with
  • 18:37 - 18:46
    the involvement of a core Tor developer.
    It is time for a paradigm change. The
  • 18:46 - 18:51
    current processes for detecting and
    removing malicious Tor relays are failing
  • 18:51 - 18:57
    us and are not sustainable in the long
    run. In recent years, malicious groups
  • 18:57 - 19:06
    have become larger, harder to detect,
    harder to get removed and more persistent.
  • 19:06 - 19:12
    Here are some of our design goals. Instead
    of continuing the single-sided arms race
  • 19:12 - 19:17
    with malicious actors, we aim to empower
    Tor users for self-defense without
  • 19:17 - 19:22
    requiring the detection of malicious Tor
    relays and without solely depending on
  • 19:22 - 19:28
    Tor directory authorities for protecting us
    from malicious relays. We aim to reduce
  • 19:28 - 19:37
    the risk of de-anonymization by using at
    least a trusted guard, a trusted exit, or both. We
  • 19:37 - 19:41
    also acknowledge it is increasingly
    impossible to detect all malicious relays
  • 19:41 - 19:47
    using decoy traffic, therefore, we stop
    depending on the detectability of
  • 19:47 - 19:56
    malicious relays to protect users. In
    today's Tor network, we hope to not choose
  • 19:56 - 20:02
    a malicious guard when we pick one. In the
    proposed design, we would pick a trusted
  • 20:02 - 20:08
    guard instead. In fact, this can be done
    with today's Tor Browser, if you set
  • 20:08 - 20:14
    trusted relays as your bridge. Another
    supported configuration would be to use
  • 20:14 - 20:20
    trusted guards and trusted exits. Such
    designs are possible without requiring
  • 20:20 - 20:25
    code changes in Tor, but are cumbersome to
    configure manually, since Tor only
  • 20:25 - 20:33
    supports relay fingerprints and does not
    know about relay operator identifiers. But
  • 20:33 - 20:39
    what do we actually mean by trusted
    relays? Trusted relays are operated by
  • 20:39 - 20:45
    trusted operators. These operators are
    believed to run relays without malicious
  • 20:45 - 20:52
    intent. Trusted operators are specified by
    the user. Users assign trust at the
  • 20:52 - 20:57
    operator, not the relay level, for
    scalability reasons, and to avoid
  • 20:57 - 21:06
    configuration changes when an operator
    changes their relays. Since users should
  • 21:06 - 21:12
    be able to specify trusted operators, we
    need human-readable, authenticated and
  • 21:12 - 21:19
    globally unique operator identifiers. By
    authenticated, we mean they should not be
  • 21:19 - 21:27
    spoofable arbitrarily like current relay
    contact infos. For simplicity, we use DNS
  • 21:27 - 21:34
    domains as relay operator identifiers, and
    we will probably restrict them to 40
  • 21:34 - 21:47
    characters in length. How do Authenticated
    Relay Operator IDs, AROIs for short, work? From
  • 21:47 - 21:52
    an operator point of view, configuring an
    AROI is easy. Step one: The operator
  • 21:52 - 22:00
    specifies the desired domain under her
    control using Tor's ContactInfo option.
  • 22:00 - 22:07
    Step two: The operator publishes a simple
    text file using the IANA well-known URI
  • 22:07 - 22:13
    containing all relay fingerprints. If no
    web server is available or if the web
  • 22:13 - 22:19
    server is not considered safe enough,
    DNSSEC-signed TXT records are also an
  • 22:19 - 22:26
    option for authentication. Using DNS is
    great for scalability and availability due
  • 22:26 - 22:31
    to DNS caching, but since every relay
    requires its own TXT record, it will take
  • 22:31 - 22:37
    longer than the URI type proof when
    performing proof validation. Operators
  • 22:37 - 22:42
    that have no domain at all can use free
    services like GitHub pages or similar to
  • 22:42 - 22:50
    serve the text file. For convenience, Eran
    Sandler created this simple-to-use
  • 22:50 - 22:55
    ContactInfo generator, so relay operators
    don't have to read the specification to
  • 22:55 - 23:01
    generate the required ContactInfo string
    for their configuration. For the
  • 23:01 - 23:06
    Authenticated Relay Operator ID, the "url"
    and "proof" fields are the only relevant
  • 23:06 - 23:14
    fields. There are already over 1000 relays
    that have implemented the Authenticated
  • 23:14 - 23:22
    Relay Operator ID. OrNetStats displays an
    icon in case the operator implemented it
  • 23:22 - 23:29
    correctly. Out of the top 24 largest
    families by bandwidth, all but eight
  • 23:29 - 23:35
    operators have implemented the
    Authenticated Relay Operator ID already.
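For illustration, this sketch extracts the AROI-relevant fields from a ContactInfo string and derives where the operator's proof file should live. The field names ("url", "proof:uri-rsa") and the well-known path follow my reading of the ContactInfo sharing specification, so treat them as assumptions and check the spec before relying on them:

```python
# Sketch: parse a relay's ContactInfo string for the AROI fields and
# derive the well-known URL where the fingerprint proof should live.
# Field names and the well-known path are assumptions based on the
# ContactInfo sharing specification; verify against the spec.
from typing import Dict, Optional

def parse_contactinfo(contact: str) -> Dict[str, str]:
    fields = {}
    for token in contact.split():
        if ":" in token:
            key, _, value = token.partition(":")
            fields[key] = value
    return fields

def proof_url(fields: Dict[str, str]) -> Optional[str]:
    # Only the URI-based proof type is handled in this sketch.
    if fields.get("proof") != "uri-rsa" or "url" not in fields:
        return None
    domain = fields["url"].removeprefix("https://").rstrip("/")
    return f"https://{domain}/.well-known/tor-relay/rsa-fingerprint.txt"

ci = "email AT example dot com url:example.com proof:uri-rsa ciissversion:2"
fields = parse_contactinfo(ci)
print(proof_url(fields))
# https://example.com/.well-known/tor-relay/rsa-fingerprint.txt
```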
  • 23:35 - 23:40
    On the right-hand side, you can see a few
    logos of organizations running relays with
  • 23:40 - 23:47
    a properly set up AROI. The most relevant
    distinction between lines with the
  • 23:47 - 23:52
    checkmark icon and those without it
    is that the strings in lines
  • 23:52 - 23:59
    without the icon can be
    arbitrarily spoofed. This graph shows the
  • 23:59 - 24:06
    largest exit operators that implemented
    the AROI. I want to stress one crucial
  • 24:06 - 24:13
    point about AROIs, though: authenticated
    must not be confused with trusted.
  • 24:13 - 24:19
    Malicious operators can also authenticate
    their domain and they do. A given AROI can
  • 24:19 - 24:26
    be trusted or not. It is up to the user,
    but using AROIs instead of ContactInfo for
  • 24:26 - 24:31
    assigning trust is crucial, because
    ContactInfo cannot be trusted directly
  • 24:31 - 24:38
    without further checks. This graph shows
    what fraction of the Tor network's exit
  • 24:38 - 24:44
    capacity implemented the Authenticated
    Relay Operator ID over time. Currently, we
  • 24:44 - 24:50
    are at around 60 percent already, but
    guard capacity is a lot lower, around 15
  • 24:50 - 24:56
    percent. The reason for that is that exits
    are operated mostly by large operators and
  • 24:56 - 25:01
    organizations, while guards are
    distributed across a lot more operators.
  • 25:01 - 25:11
    There are over 1800 guard families, but
    only around 400 exit families. How does a
  • 25:11 - 25:19
    Tor client make use of AROIs? Current Tor
    versions do not know what AROIs are and
  • 25:19 - 25:25
    primarily take relay fingerprints as
    configuration inputs. So, we need some
  • 25:25 - 25:29
    tooling to generate a list of relay
    fingerprints starting from a list of
  • 25:29 - 25:37
    trusted AROIs. We have implemented a quick
    and dirty proof of concept that puts
  • 25:37 - 25:41
    everything together and performs all the
    steps shown on this slide, to demonstrate
  • 25:41 - 25:47
    the concept of using trusted AROIs to
    configure Tor client to use trusted exit
  • 25:47 - 25:54
    relays. It is not meant to be used by
    end-users; it is merely a preview for the
  • 25:54 - 25:58
    technical audience who would like to see
    it in action to achieve a better
  • 25:58 - 26:04
    understanding of the design. The current
    proof of concept performs all proof checks
  • 26:04 - 26:10
    itself without relying on third parties.
    But since there are good reasons for
  • 26:10 - 26:15
    doing proof checks centrally instead, for
    example, by the directory authorities, I
  • 26:15 - 26:21
    recently submitted a partial proposal for
    it to the Tor development mailing list to
  • 26:21 - 26:25
    see whether they would consider it before
    proceeding with a more serious
  • 26:25 - 26:31
    implementation than the current proof of
    concept. I find it important to always try
  • 26:31 - 26:36
    achieving a common goal together with
    upstream first before creating solutions
  • 26:36 - 26:41
    that are maintained outside of upstream
    because it will lead to better maintained
  • 26:41 - 26:46
    improvements and likely a more user-
    friendly experience if they are integrated
  • 26:46 - 26:53
    in upstream. Here is a link to the
    mentioned tor-dev email, for those who
  • 26:53 - 27:02
    would like to follow along. To summarize,
    after reviewing some real world examples
  • 27:02 - 27:08
    of malicious actors on the Tor network, we
    concluded that current approaches to limit
  • 27:08 - 27:16
    risks to Tor users from bad relays might not
    live up to Tor users' expectations, are not
  • 27:16 - 27:22
    sustainable in the long run and need an
    upgrade to avoid depending on the
  • 27:22 - 27:29
    detectability of malicious relays, which
    is becoming increasingly hard. We
  • 27:29 - 27:35
    presented a design to extend current
    anti-bad-relay approaches that does not rely on
  • 27:35 - 27:41
    the detection of malicious relays using
    trusted Authenticated Relay Operator IDs.
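The proof-of-concept pipeline mentioned earlier could be sketched roughly as follows: collect each trusted operator's fingerprint proof file, extract the fingerprints, and emit torrc options restricting exit selection. The fetched files are simulated here with static strings, and the exact file format is an assumption based on the AROI specification; this is not the actual tool:

```python
# Hedged sketch of "trusted AROIs -> relay fingerprints -> torrc".
# Real tooling would fetch and validate each operator's well-known
# fingerprint proof file; here the fetched content is simulated with
# static strings, and the file format is assumed to be one 40-char
# fingerprint per line.

fetched = {  # AROI -> contents of its fingerprint proof file (simulated)
    "example.com": "0123456789ABCDEF0123456789ABCDEF01234567\n",
    "relays.example.org": (
        "89ABCDEF0123456789ABCDEF0123456789ABCDEF\n"
        "# comments/blank lines are ignored in this sketch\n"
    ),
}

def fingerprints(text: str) -> list:
    out = []
    for line in text.splitlines():
        line = line.strip()
        if len(line) == 40 and not line.startswith("#"):
            out.append(line)
    return out

trusted = sorted(fp for text in fetched.values() for fp in fingerprints(text))
# ExitNodes/StrictNodes are real torrc options; changing them has
# anonymity trade-offs, as discussed at the end of the talk.
torrc_line = "ExitNodes " + ",".join(f"${fp}" for fp in trusted) + "\nStrictNodes 1"
print(torrc_line)
```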
  • 27:41 - 27:47
    We have shown that most exit capacity has
    implemented AROIs already, while guard
  • 27:47 - 27:53
    capacity is currently significantly lower,
    showing a lack of insight into who operates
  • 27:53 - 28:00
    Tor's guard capacity. When publicly
    speaking about modifying Tor's path
  • 28:00 - 28:07
    selection in front of a wide audience, I
    also consider it to be my responsibility
  • 28:07 - 28:13
    to explicitly state that you should not
    change your Tor configuration options that
  • 28:13 - 28:18
    influence path selection behavior without
    a clear need, according to your threat
  • 28:18 - 28:26
    model, to avoid potentially standing out.
    Using trusted AROIs certainly comes with
    Using trusted AROIs certainly comes with
  • 28:26 - 28:32
    some tradeoffs of its own, like for
    example, network load balancing, to name
  • 28:32 - 28:39
    only one. Thanks to many large, trusted
    exit operators, it should be feasible in
  • 28:39 - 28:43
    the near future to use trusted exits
    without standing out in a trivially
  • 28:43 - 28:49
    detectable way, because it is harder (in the
    sense that it takes longer) to statistically
  • 28:49 - 28:55
    detect that a Tor client changed its possible
    pool of exits, if it only excluded a
  • 28:55 - 29:03
    smaller fraction of exits. Detecting Tor
    clients using only a subset of all guards
  • 29:03 - 29:09
    takes a lot longer than detecting custom
    exit sets because guards are not changed
  • 29:09 - 29:16
    over a longer period of time when compared
    with exits. And finally, Tor clients that
  • 29:16 - 29:23
    make use of trusted AROIs will need a way
    to find trusted AROIs; ideally, they could
  • 29:23 - 29:30
    learn about them dynamically in a safe
    way. There is an early work in progress
  • 29:30 - 29:40
    draft specification linked on this slide.
    I want to dedicate this talk to Karsten
  • 29:40 - 29:47
    Loesing who passed away last year. He was
    the kindest person I got to interact with
  • 29:47 - 29:54
    in the Tor community. Karsten was the Tor
    metrics team lead and without his work, my
  • 29:54 - 30:00
    projects OrNetStats and OrNetRadar would
    not exist. Every time you use
  • 30:00 - 30:07
    metrics.torproject.org, for example, the
    so-called "Relay Search", you are using
  • 30:07 - 30:15
    his legacy. Thank you for listening, and
    I'm really looking forward to your
  • 30:15 - 30:20
    questions. I'm not sure I'll be able to
    respond to questions after the talk in
  • 30:20 - 30:24
    real time, but it would be nice to have
    them read out. So they are part of the
  • 30:24 - 30:29
    recording and I'll make an effort to
    publish answers to all of them via
  • 30:29 - 30:36
    Mastodon, should I not be able to respond
    in real time. I'm also happy to take tips
  • 30:36 - 30:41
    about unusual things you observed on the
    Tor network. Do not underestimate your
  • 30:41 - 30:48
    power as a Tor user to contribute to a safer
    Tor network by reporting unusual things.
  • 30:48 - 30:55
    Most major hits against bad relay actors
    were the result of Tor user reports.
  • 30:55 - 31:19
    silence
  • 31:19 - 31:28
    Herald: OK. Thank you very much for this
    very informative talk and yes so we will
  • 31:28 - 31:41
    switch over to the Q&A now. Yeah, thanks
    again. Very fascinating. So we have
  • 31:41 - 31:51
    collected several questions from our IRC
    chat, so I'm just going to start. If
  • 31:51 - 31:57
    bridges don't need the MyFamily setting,
    isn't this a wide-open gap for end-to-end
  • 31:57 - 32:04
    correlation attacks, for example if a
    malicious actor can somehow make the relay
  • 32:04 - 32:10
    popular as bridge?
    Nusenu: Yes, bridges are a concern in the
  • 32:10 - 32:15
    context of MyFamily, for that reason, it
    is not recommended to run bridges and
  • 32:15 - 32:20
    exits at the same time in current versions
    of Tor, but future versions of Tor will
  • 32:20 - 32:26
    get a new and more relay operator friendly
    MyFamily setting. That new MyFamily design
  • 32:26 - 32:34
    will also support bridges. This will
    likely be in Tor 0.4.8.x at some point in
  • 32:34 - 32:45
    2022.
    Herald: OK, thanks. Regardless of the kind
  • 32:45 - 32:55
    of attack, are there statistics on who these
    attacks are coming from, or from which country
  • 32:55 - 33:03
    most? Background here: there are rumors
    about NSA-driven exit nodes.
  • 33:03 - 33:08
    Nusenu: I don't know about any general
    statistics, but I usually include the
  • 33:08 - 33:14
    autonomous systems used by certain groups when
    blogging about them. There are some
  • 33:14 - 33:18
    autonomous systems that are notorious for
    being used by malicious groups, but
  • 33:18 - 33:23
    malicious groups also try to blend in with
    the rest by using large ISPs like Hetzner
  • 33:23 - 33:31
    and OVH.
    Herald: Thanks. Is using a bridge that I
  • 33:31 - 33:35
    host also safer than using a random guard
    node?
  • 33:35 - 33:42
    Nusenu: This is a tricky question, since
    it also depends on whether it is a private
  • 33:42 - 33:47
    bridge, a bridge that is not distributed
    to other users by BridgeDB. I would say
  • 33:47 - 33:53
    it is better to not run the bridges you
    use yourself.
  • 33:53 - 34:03
    Herald: OK. What is worse? KAX17, or a
    well-known trusted operator running 20 percent
  • 34:03 - 34:07
    of Tor's exits?
    Nusenu: Currently, I would say KAX17.
  • 34:07 - 34:17
    Herald: OK. I think that's the last one
    for now: Isn't anonymity
  • 34:17 - 34:22
    decreased or changed while using a trusted
    relay list?
  • 34:22 - 34:27
    Nusenu: Yes, this is a trade-off that
    users will need to make. This heavily
  • 34:27 - 34:37
    depends on the threat model.
    Herald: OK. So I think we have gathered
  • 34:37 - 34:43
    all the questions and they were all
    answered. So thank you again. Yes,
  • 34:43 - 34:46
    thank you again.
  • 34:46 - 34:59
    rc3 postroll music
  • 34:59 - 35:03
    Subtitles created by c3subtitles.de
    in the year 2022. Join, and help us!
Video Language:
English
Duration:
35:03
