How Uber Uses your Phone as a Backup Datacenter

  • 0:07 - 0:11
    (audience claps)
    (Man) Ok.
  • 0:11 - 0:15
Alright everybody, so let's dive in.
  • 0:15 - 0:18
    So let's talk about how Uber trips
    even happen
  • 0:18 - 0:20
    before we get into the nitty gritty
    of how
  • 0:20 - 0:23
    we save them from the clutches
    of the datacenter failover.
  • 0:23 - 0:27
    So you might have heard we're all about
    connecting riders and drivers.
  • 0:27 - 0:28
    This is what it looks like.
  • 0:28 - 0:30
    You've probably at least
    seen the rider app.
  • 0:30 - 0:32
    You get it out, you see
    some cars on the map.
  • 0:32 - 0:34
You pick where you want
your pickup location to be.
  • 0:34 - 0:37
    At the same time,
    all these guys that you're
  • 0:37 - 0:39
    seeing on the map, they have
    a phone open somewhere.
  • 0:39 - 0:41
    They're logged in waiting for a dispatch.
  • 0:41 - 0:43
    They're all pinging
    into the same datacenter
  • 0:43 - 0:45
    for the city that you both are in.
  • 0:45 - 0:48
    So then what happens is you put
    that pin somewhere,
  • 0:48 - 0:51
    and you get ready to pick up your trip,
    you get ready to request.
  • 0:52 - 0:55
    You hit request and
    that guy's phone starts beeping.
  • 0:55 - 0:58
    Hopefully if everything works out,
    he'll accept that trip.
  • 0:59 - 1:03
    All these things that we're talking about
    here, the request of a trip
  • 1:03 - 1:06
    the offering of it to a driver, him
    accepting it.
  • 1:06 - 1:09
    That's something we call a state change
    transition.
  • 1:09 - 1:12
    From the moment that you start
    requesting the trip,
  • 1:12 - 1:16
    we start creating your trip data
    in the backend datacenter.
  • 1:16 - 1:21
    And that transaction that might live
    for anything like 5, 10, 15, 20, 30,
  • 1:21 - 1:24
    however many minutes it takes you
    to take your trip,
  • 1:24 - 1:27
    we have to consistently handle that trip
    to get it all the way through
  • 1:27 - 1:31
    to completion to get you
    where you're going happily.
  • 1:31 - 1:33
    So every time this state change happens,
  • 1:37 - 1:41
    things happen in the world, so next up
    he goes ahead and shows up to you.
  • 1:42 - 1:45
    He arrives, you get in the car,
    he begins the trip.
  • 1:45 - 1:47
    Everything's going fine.
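
As a rough illustration, the trip lifecycle described here could be modeled as a small state machine; the state names and transition table below are assumptions based on the transitions the talk mentions, not Uber's actual code:

```go
package main

import "fmt"

// TripState models the trip lifecycle described in the talk. The state
// names and legal transitions are assumptions, not Uber's actual code.
type TripState int

const (
	Requested TripState = iota
	Accepted
	Arrived
	Begun
	Completed
)

// validNext encodes the state change transitions mentioned in the talk:
// request -> accept -> arrive -> begin -> end.
var validNext = map[TripState]TripState{
	Requested: Accepted,
	Accepted:  Arrived,
	Arrived:   Begun,
	Begun:     Completed,
}

func transition(cur, next TripState) error {
	if n, ok := validNext[cur]; !ok || n != next {
		return fmt.Errorf("illegal transition %d -> %d", cur, next)
	}
	return nil
}

func main() {
	fmt.Println(transition(Requested, Accepted)) // <nil>
	fmt.Println(transition(Requested, Begun))    // illegal transition 0 -> 3
}
```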
  • 1:49 - 1:52
    So this is of course, ...
    some of these state changes
  • 1:52 - 1:55
    are more or less important to everything
    that's going on.
  • 1:55 - 1:58
    The begin trip and the end trip are the
    real important ones, of course.
  • 1:58 - 2:00
    The ones that we don't want
    to lose the most.
  • 2:00 - 2:02
But all these are really important
to hold onto.
  • 2:02 - 2:07
So what happens in the case of a
failure is your trip is gone.
  • 2:07 - 2:11
    You're both back to,
    "OMG where'd my trip go?"
  • 2:11 - 2:15
Then you're just seeing empty cars again,
and he's back to an open screen
  • 2:15 - 2:18
    like where you were when you
    started/opened the application
  • 2:18 - 2:18
    in the first place.
  • 2:18 - 2:22
    So this is what used to happen for us
    not too long ago.
  • 2:22 - 2:26
So how do we fix this?
How do you fix this in general?
  • 2:26 - 2:28
    So classically you might try
    and say,
  • 2:28 - 2:32
    "Well, let's take all the data in that one
datacenter and copy it, replicate it, to a
  • 2:32 - 2:33
backup datacenter."
  • 2:33 - 2:37
    This is pretty well understood, classic
    way to solve this problem.
  • 2:38 - 2:40
You control the active datacenter
and the backup datacenter,
  • 2:43 - 2:43
    so it's pretty easy to reason about.
  • Not Synced
    People feel comfortable with this scheme.
  • Not Synced
    It could work more or less well depending
    on what database you're using.
  • Not Synced
    But there's some drawbacks.
  • Not Synced
    It gets kinda complicated
    beyond two datacenters.
  • Not Synced
    It's always gonna be subject
    to replication lag
  • Not Synced
    because the datacenters are separated by
    this thing called the internet,
  • Not Synced
    or maybe leased lines
    if you get really into it.
  • Not Synced
    So it requires a constant level of high
    bandwidth, especially if you're not using
  • Not Synced
    a database well-suited to replication,
    or if you haven't really tuned
  • Not Synced
your data model to get
the deltas really small.
  • Not Synced
    So, we chose not to go with this route.
  • Not Synced
We instead said, "What if we could solve
it by pushing it down to the driver?"
  • Not Synced
    Because since we're already in constant
    communication with these driver phones,
  • Not Synced
    what if we could just save the data there
    to the driver phone?
  • Not Synced
    Then he could failover to any datacenter,
    rather than having to control,
  • Not Synced
    "Well here's the backup datacenter for
    this city, the backup datacenter
  • Not Synced
    for this city," and then, "Oh, no no, what
    if in a failover, we fail the wrong phones
  • Not Synced
    to the wrong datacenter and now we lose
    all their trips again?"
  • Not Synced
    That would not be cool.
  • Not Synced
    So we really decided to go with this
    mobile implementation approach of saving
  • Not Synced
    the trips to the driver phone.
  • Not Synced
    But of course, it doesn't come without a
    trade-off, the trade-off here being
  • Not Synced
    you've got to implement some kind of a
    replication protocol in the driver phone
  • Not Synced
    consistently between whatever platforms
    you support.
  • Not Synced
    In our case, iOS and Android.
  • Not Synced
...But if we could, how would this work?
  • Not Synced
    So all these state transitions
    are happening when the phones
  • Not Synced
    communicate with our datacenter.
  • Not Synced
    So if in response to his request to begin
    trip or arrive, or accept, or any of this,
  • Not Synced
    if we could send some data back down to
    his phone and have him keep ahold of it,
  • Not Synced
    then in the case of a datacenter failover,
    when his phone pings into
  • Not Synced
    the new datacenter, we could request that
    data right back off of his phone, and get
  • Not Synced
    you guys right back on your trip with
    maybe only a minimal blip,
  • Not Synced
    in the worst case.
  • Not Synced
    So in a high level, that's the idea, but
    there are some challenges of course
  • Not Synced
    in implementing that.
  • Not Synced
    Not all the trip information that we would
    want to save is something we want
  • Not Synced
the driver to have access to. To be able
to restore your trip
  • Not Synced
    in the other datacenter, we'd have to have
    the full rider information.
  • Not Synced
    If you're fare splitting with some friends
    it would need to be all rider information.
  • Not Synced
    So, there's a lot of things that we need
    to save here to save your trip
  • Not Synced
    that we don't want to expose
    to the driver.
  • Not Synced
Also, you pretty much have to assume that
the driver phones are not really
  • Not Synced
    trustable, either because people are doing
    nefarious things with them,
  • Not Synced
    or people not the drivers have compromised
    them, or somebody else between you and
  • Not Synced
    the driver. Who knows?
  • Not Synced
    So for most of these reasons, we decided
    we had to go with the crypto approach
  • Not Synced
    and encrypt all the data that we store on
    the phones to prevent against tampering
  • Not Synced
    and leaking of any kind of PII.
  • Not Synced
    And also, towards all these security
    designs and also simple reliability of
  • Not Synced
    interacting with these phones, you want to
    keep the replication protocol as simple
  • Not Synced
    as possible to make it easy to [inaudible]
    easy to debug, remove failure cases,
  • Not Synced
    and you also want to minimize the extra
    bandwidth.
  • Not Synced
I kinda glossed over the bandwidth impacts
when I said backend replication
  • Not Synced
    isn't really an option.
  • Not Synced
    But at least here when you're designing
    this replication protocol,
  • Not Synced
at the application layer you can be
much more in tune with what data
  • Not Synced
    you're serializing, and what
    you're deltafying or not
  • Not Synced
    and really mind your bandwidth impact.
  • Not Synced
    Especially since it's going over a mobile
    network, this becomes really salient.
  • Not Synced
    So, how do you keep it simple?
  • Not Synced
    In our case, we decided to go with a very
    simple key value store with all your
  • Not Synced
    typical operations: get, set, delete,
    and list all the keys please,
  • Not Synced
    with one caveat being you can only
    set a key once so you can't
  • Not Synced
    accidentally overwrite a key;
    it eliminates a whole class of weird
  • Not Synced
    programming errors or out of order message
    delivery errors you might have
  • Not Synced
    in such a system.
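
A minimal sketch of such a set-once key value store; the interface names here are assumptions, not Uber's actual API:

```go
package kv

import (
	"errors"
	"sync"
)

// ErrKeyExists enforces the set-once caveat: a key can never be overwritten.
var ErrKeyExists = errors.New("key already set")

// Store is a minimal set-once key value store with the operations the
// talk lists: get, set, delete, and list all the keys.
type Store struct {
	mu   sync.Mutex
	data map[string][]byte
}

func NewStore() *Store { return &Store{data: make(map[string][]byte)} }

func (s *Store) Set(key string, value []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.data[key]; ok {
		// Rejecting overwrites eliminates a class of programming and
		// out-of-order message delivery errors.
		return ErrKeyExists
	}
	s.data[key] = value
	return nil
}

func (s *Store) Get(key string) ([]byte, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.data[key]
	return v, ok
}

func (s *Store) Delete(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.data, key)
}

func (s *Store) Keys() []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	keys := make([]string, 0, len(s.data))
	for k := range s.data {
		keys = append(keys, k)
	}
	return keys
}
```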
  • Not Synced
This did however force us to move
what we'll call "versioning"
  • Not Synced
into the keyspace.
  • Not Synced
You can't just say, "Oh, I've got a key
    for this trip and please update it
  • Not Synced
    to the new version on each state change."
  • Not Synced
    No; instead you have to have a key for
    trip and version, and you have to do
  • Not Synced
a set of the new one and a delete of the old one,
    and that at least gives you the nice
  • Not Synced
    property that if that fails partway
through, between the set and the delete,
  • Not Synced
you fail into having two things stored
rather than no things stored.
  • Not Synced
    So there are some nice properties to
    keeping a nice simple
  • Not Synced
    key value protocol here.
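
Continuing the store sketch above, the set-then-delete update might look like this; the "<tripID>:<version>" key layout is an assumption for illustration:

```go
// (continuing package kv; also needs "fmt" imported)

// UpdateTrip writes the new version of a trip under a fresh key and only
// then deletes the old one.
func UpdateTrip(s *Store, tripID string, oldVersion, newVersion int, data []byte) error {
	if err := s.Set(fmt.Sprintf("%s:%d", tripID, newVersion), data); err != nil {
		return err
	}
	// A crash here leaves both versions stored (two things rather than
	// none), which failover resolution can sort out later.
	s.Delete(fmt.Sprintf("%s:%d", tripID, oldVersion))
	return nil
}
```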
  • Not Synced
    And that makes failover resolution really
    easy because it's simply a matter of,
  • Not Synced
    "What keys do you have?
    What trips do you store?
  • Not Synced
    What keys do I have in the backend
    datacenter?"
  • Not Synced
    Compare those, and come to a resolution
between those sets of trips.
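
A rough sketch of that failover resolution, assuming it reduces to set arithmetic over trip keys:

```go
// ResolveFailover compares the trip keys reported by a phone with the
// keys the newly active datacenter already holds, returning the keys
// that must be restored from the phone. Purely illustrative.
func ResolveFailover(phoneKeys, datacenterKeys []string) (restore []string) {
	have := make(map[string]bool, len(datacenterKeys))
	for _, k := range datacenterKeys {
		have[k] = true
	}
	for _, k := range phoneKeys {
		if !have[k] {
			restore = append(restore, k)
		}
	}
	return restore
}
```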
  • Not Synced
    So that's a quick overview of how we built
    this system.
  • Not Synced
    ...Nikunj Aggarwal is going to give you
    a rundown of some more details of how we
  • Not Synced
    really got the reliability of this system
    to work at scale.
  • Not Synced
    (audience claps)
  • Not Synced
    Alright, hi! I'm Nikunj.
  • Not Synced
    So we talked about the idea
    and the motivation behind the idea,
  • Not Synced
and now let's dive into how we designed
such a solution, and what kind of trade-offs
  • Not Synced
we had to make while we were
    doing the design.
  • Not Synced
So the first thing we wanted to ensure was
that the system we built is non-blocking
  • Not Synced
but still provides eventual consistency.
  • Not Synced
    So basically...any backend application
    using this system should be able to make
  • Not Synced
forward progress, even when
    the system is down.
  • Not Synced
So the only trade-off the application
should be making is that it may
  • Not Synced
    take some time for the data to actually be
    stored on the phone.
  • Not Synced
However, using this system should not
    affect any normal business operations
  • Not Synced
    for them.
  • Not Synced
    Secondly, we wanted to have an ability
    to move between datacenters without
  • Not Synced
    worrying about data already there.
  • Not Synced
    So when we failover from one datacenter to
another, that datacenter still has state
  • Not Synced
in it, and it still has its view
    of active drivers and trips.
  • Not Synced
    And no service in that datacenter is aware
    that a failure actually happened.
  • Not Synced
    So at some later time, if we fail back to
the same datacenter, then its view
  • Not Synced
    of the drivers and trips may be actually
    different than what the drivers
  • Not Synced
    actually have and if we trusted that
    datacenter, then the drivers may get an
  • Not Synced
    [inaudible], which is
    a very bad experience.
  • Not Synced
    So we need some way to reconcile that data
    between the drivers and the server.
  • Not Synced
    Finally, we want to be able to measure
    the success of the system all the time.
  • Not Synced
So the system is only fully exercised
    during a failure, and a datacenter failure
  • Not Synced
    is a pretty rare occurrence, and we don't
    want to be in a situation where
  • Not Synced
    we detect issues with the system when we
    need it the most.
  • Not Synced
    So what we want is an ability to
    constantly be able to measure the success
  • Not Synced
    of the system so that we are confident in
it when a failure actually happens.
  • Not Synced
So, keeping all these issues in mind,
    this is a very high level
  • Not Synced
    view of the system.
  • Not Synced
    I'm not going to go into details of any
    of the services,
  • Not Synced
    since it's a mobile conference.
  • Not Synced
So the first thing that happens is that
the driver makes an update,
  • Not Synced
    or as Josh called it, a state change,
    on his app.
  • Not Synced
    For example, he may pick up a passenger.
  • Not Synced
    Now that update comes as a request to the
    dispatching service.
  • Not Synced
Now the dispatching service, depending on
the type of request, updates the trip
  • Not Synced
    model for that trip, and then it sends the
    update to the replication service.
  • Not Synced
Now the replication service will enqueue
that request in its own datastore
  • Not Synced
    and immediately return a successful
    response to the dispatching service,
  • Not Synced
    and then finally the dispatching service
    will update its own data store
  • Not Synced
    and then return a success to mobile.
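
A hedged sketch of that non-blocking hand-off: the replication service enqueues the update and acknowledges immediately, so the dispatching service never waits on the phone. All names here are assumptions:

```go
package replication

import "errors"

// Update is the replication payload for one state change.
type Update struct {
	DriverID string
	Key      string // e.g. "<tripID>:<version>"
	Payload  []byte // encrypted trip data
}

// Service enqueues updates (a buffered channel stands in for its own
// datastore) and acks immediately, isolating callers from replication
// latency and failures.
type Service struct {
	queue chan Update
}

func NewService(depth int) *Service {
	return &Service{queue: make(chan Update, depth)}
}

// Enqueue returns as soon as the update is queued; delivery to the phone
// happens in the background, so dispatch never blocks on the phone.
func (s *Service) Enqueue(u Update) error {
	select {
	case s.queue <- u:
		return nil
	default:
		return errors.New("replication queue full")
	}
}

// Run drains the queue, handing each update to the messaging service.
// Encryption, retries, and acknowledgements are omitted in this sketch.
func (s *Service) Run(send func(Update) error) {
	for u := range s.queue {
		_ = send(u)
	}
}
```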
  • Not Synced
It may also return some other information
to the mobile, because things might have
  • Not Synced
changed since the last time the mobile
pinged in.
  • Not Synced
    ...If it's an UberPool trip,
    then the driver may have to pick up
  • Not Synced
another passenger.
  • Not Synced
Or if the rider entered some destination,
we might have to tell the driver
  • Not Synced
    about that.
  • Not Synced
    And in the background, the replication
    service encrypts that data, obviously,
  • Not Synced
    since we don't want drivers to have access
    to all that,
  • Not Synced
    and then sends it to a messaging service.
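
The talk doesn't name a cipher; purely as an illustration, the payload could be sealed with AES-GCM before it leaves the datacenter, so the phone only ever holds opaque, tamper-evident bytes:

```go
package replication

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

// Seal encrypts a trip payload so the phone only stores opaque bytes.
// AES-GCM is an assumption; the talk only says data is encrypted to
// prevent tampering and leaking of PII.
func Seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// The nonce is prepended to the ciphertext; GCM authenticates, so any
	// tampering on the phone is detected when the datacenter opens it.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```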
  • Not Synced
So the messaging service is something that
we built as part of the system.
  • Not Synced
    It maintains a bidirectional communication
    channel with all drivers
  • Not Synced
    on the Uber platform.
  • Not Synced
    And this communication channel is actually
    separate from
  • Not Synced
    the original request response channel
    which we've been traditionally using
  • Not Synced
    at Uber for drivers to communicate
    with the server.
  • Not Synced
    So this way, we are not affecting any
    normal business operation
  • Not Synced
    due to this service.
  • Not Synced
    So the messaging service then sends the
    message to the phone
  • Not Synced
and gets an acknowledgement from it.
  • Not Synced
    So from this design, what we have achieved
    is that we've isolated the applications
  • Not Synced
    from any replication latencies or failures
    because our replication service
  • Not Synced
    returns immediately and the only extra
    thing the application is doing
  • Not Synced
    by opting in to this replication strategy
    is making an extra service call
  • Not Synced
    to the replication service, which is going
    to be pretty cheap since it's within
  • Not Synced
    the same datacenter,
not traveling over the internet.
  • Not Synced
    Secondly, now having this separate channel
    gives us the ability to arbitrarily query
  • Not Synced
    the states of the phone without affecting
    any normal business operations
  • Not Synced
and we can use that phone as a basic
key value store now.
  • Not Synced
    Next...Okay so now comes the issue of
    moving between datacenters.
  • Not Synced
    As I said earlier, when we failover we are
    actually leaving states behind
  • Not Synced
    in that datacenter.
  • Not Synced
So how do we deal with that stale state?
  • Not Synced
    So the first approach we tried
was actually to do some manual cleanup.
  • Not Synced
So we wrote some cleanup scripts, and every
time we failed over
  • Not Synced
from our primary datacenter to our backup
datacenter, somebody would run that script
  • Not Synced
in the primary, and it would go to the
    datastores for the dispatching service
  • Not Synced
and it would clean out
all the state there.
  • Not Synced
However, this approach had operational
pain because somebody had to run it.
  • Not Synced
Moreover, we allowed the ability to fail over
per city, so you can actually choose
  • Not Synced
    to failover specific cities instead of the
    whole world, and in those cases
  • Not Synced
    the script started becoming complicated.
  • Not Synced
    So then we decided to tweak our design a
    little bit so that we solve this problem.
  • Not Synced
    So the first thing we did was...as Josh
    mentioned earlier, the key which is
  • Not Synced
    stored on the phone contains the trip
    identifier and the version within it.
  • Not Synced
    So the version used to be an incrementing
    number so that we can keep track of
  • Not Synced
any forward progress you're making.
  • Not Synced
However, we changed that to a
modified vector clock.
  • Not Synced
So using that vector clock, we can now
    compare data on the phone
  • Not Synced
    and data on the server.
  • Not Synced
And if there is a mismatch, we can detect
any causality violations using that.
  • Not Synced
    And we can also resolve that using a very
    basic conflict resolution strategy.
  • Not Synced
    So this way, we handle any issues
    with ongoing trips.
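
A loose sketch in the spirit of that comparison; the exact structure of Uber's modified vector clock isn't given, so this only shows detecting which side is behind, with ties treated as conflicts:

```go
// Compare reports the causal relationship between two version vectors:
// +1 if a is ahead of b, -1 if behind, 0 if equal or concurrent (a
// conflict needing resolution). Uber's actual structure isn't given.
func Compare(a, b map[string]uint64) int {
	aAhead, bAhead := false, false
	for k, av := range a {
		if av > b[k] { // missing keys read as zero
			aAhead = true
		}
	}
	for k, bv := range b {
		if bv > a[k] {
			bAhead = true
		}
	}
	switch {
	case aAhead && !bAhead:
		return 1
	case bAhead && !aAhead:
		return -1
	default:
		return 0
	}
}
```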
  • Not Synced
Now next came the issue of completed trips.
  • Not Synced
    So, traditionally what we'd been doing is,
when a trip is completed, we'll delete
  • Not Synced
    all the data about the trip
    from the phone.
  • Not Synced
    We did that because we didn't want the
    replication data on the phone
  • Not Synced
    to grow unbounded.
  • Not Synced
    And once a trip is completed, it's
    probably no longer required
  • Not Synced
    for restoration.
  • Not Synced
However, that has the side effect that
mobile now has no idea that the trip
  • Not Synced
    ever happened.
  • Not Synced
    So what will happen is if we failback to
    a datacenter with some stale data
  • Not Synced
    about this trip, then you might actually
end up putting the rider and driver back
  • Not Synced
    on that same trip, which is a pretty
    bad experience because he's suddenly
  • Not Synced
now driving somebody he already
    dropped off, and he's probably
  • Not Synced
    not gonna be paid for that.
  • Not Synced
    So what we did to fix that was...
    on trip completion, we would store
  • Not Synced
    a special key on the phone, and the
    version in that key has a flag in it.
  • Not Synced
That's why I call it a modified vector clock.
  • Not Synced
    So it has a flag that says that this trip
    has already been completed,
  • Not Synced
    and we store that on the phone.
  • Not Synced
    Now when the replication service sees that
    this driver has this flag for the trip,
  • Not Synced
it can tell the dispatching service that
"Hey, this trip has already been completed
  • Not Synced
    and you should probably delete it."
  • Not Synced
    So that way, we handle completed trips.
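
A tiny sketch of that completed-trip flag, assuming the "flag in the version" is just a marker the replication service checks on restore:

```go
// Version carries the counter plus the completed flag described here,
// the reason it's called a "modified" vector clock. Field names are
// assumptions.
type Version struct {
	Counter   uint64
	Completed bool
}

// OnRestore skips rehydration and asks dispatch to delete any trip the
// phone has already marked completed. The tombstone key is tiny (no trip
// payload), so weeks of them cost about as much as one trip's data.
func OnRestore(v Version, deleteTrip, rehydrate func()) {
	if v.Completed {
		deleteTrip()
		return
	}
	rehydrate()
}
```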
  • Not Synced
    So if you think about it, storing trip
    data is kind of expensive because we have
  • Not Synced
this huge encrypted blob of JSON, maybe.
  • Not Synced
But we can store a large number
of completed trips
  • Not Synced
because there is no trip data
associated with them.
  • Not Synced
So we can probably store weeks' worth
    of completed trips in the same amount
  • Not Synced
of memory as we would store one trip's data.
  • Not Synced
    So that's how we solve stale states.
  • Not Synced
So now, next comes the issue of ensuring
four nines of reliability.
  • Not Synced
    So we decided to exercise the system
    more often than a datacenter failure
  • Not Synced
    because we wanted to get confident
    that the system actually works.
  • Not Synced
    So our first approach was
    to do manual failovers.
  • Not Synced
So basically what happened was that a
bunch of us would gather in a room
  • Not Synced
    every Monday and then pick a few cities
    and fail them over.
  • Not Synced
And after we failed them over to one of the
datacenters, we'd see...
  • Not Synced
    what was the success rate
    for the restoration
  • Not Synced
    and if there were any failures, then try
    to look at the logs and debug any issues
  • Not Synced
    there.
  • Not Synced
    However, there were several problems with
    this approach.
  • Not Synced
    First, it was very operationally painful.
    So we had to do this every week.
  • Not Synced
And for a small fraction of trips,
    which did not get restored,
  • Not Synced
we would actually have to do
    fare adjustment
  • Not Synced
    for both the rider and the driver.
  • Not Synced
    Secondly, it led to a very poor
    customer experience
  • Not Synced
    because for that same fraction,
    they were suddenly bumped off trip
  • Not Synced
    and they got totally confused,
    like what happened to them?
  • Not Synced
    Thirdly, it had a low coverage because
    we were covering only a few cities.
  • Not Synced
    However, in the past we've seen problems
    which affected only a specific city.
  • Not Synced
    Maybe because there was a new feature
    allowed in the city which was not
  • Not Synced
    [inaudible] yet.
  • Not Synced
    So this approach does not help us
    catch those cases until it's too late.
  • Not Synced
    Finally, we had no idea whether the
    backup datacenter can handle the load.
  • Not Synced
    So in our current architecture, we have a
    primary datacenter which handles
  • Not Synced
    all the requests and then backup
    datacenter which is waiting to handle
  • Not Synced
    all those requests in case
    the primary goes down.
  • Not Synced
    But how do we know that the
    backup datacenter
  • Not Synced
    can handle all those requests?
  • Not Synced
    So one way is maybe you can provision
    the same number of boxes
  • Not Synced
    and same type of hardware
    in the backup datacenter.
  • Not Synced
    But what if there's a configuration issue?
  • Not Synced
    In some of the services,
    we would never catch that.
  • Not Synced
    And even if they're exactly the same,
    how do you know that each service
  • Not Synced
    in the backup datacenter can handle
    a sudden flood of requests which comes
  • Not Synced
    when there is a failure?
  • Not Synced
    So we needed some way to fix
    all these problems.
  • Not Synced
    So then to understand how to get
    good confidence in the system and
  • Not Synced
    to measure it well, we looked at
    the key concepts behind the system
  • Not Synced
    which we really wanted to work.
  • Not Synced
    So first thing was we wanted to ensure
    that all mutations which are done
  • Not Synced
    by the dispatching service are actually
    still on the phone.
  • Not Synced
    So for example, a driver, right after he
    picks up a passenger,
  • Not Synced
    he may lose connectivity.
  • Not Synced
    And so replication data may not
    be sent to the phone immediately
  • Not Synced
    but we want to ensure that the data
    eventually makes it to the phone.
  • Not Synced
    Secondly, we wanted to make sure
    that the stored data can actually be used
  • Not Synced
    for replication.
  • Not Synced
    For example, there may be some
encryption/decryption issue with the data,
  • Not Synced
    and the data gets corrupted
and it's no longer usable.
  • Not Synced
    So even if you're storing the data,
    you cannot use it.
  • Not Synced
    So there's no point.
  • Not Synced
    Or, restoration actually involves
rehydrating the state
  • Not Synced
    within the dispatching service
    using the data.
  • Not Synced
    So even if the data is fine,
    if there's any problem
  • Not Synced
    during that rehydration process,
    some service behaving [inaudible],
  • Not Synced
    you would still have no use for that data
    and you would still lose the trip,
  • Not Synced
    even though the data is perfectly fine.
  • Not Synced
    Finally, as I mentioned earlier,
    we needed a way to figure out
  • Not Synced
    whether the backup datacenters can handle
    the load.
  • Not Synced
    So to monitor the health of the system
    better, we wrote another service.
  • Not Synced
    Every hour it will get a list of all
    active drivers and trips
  • Not Synced
    from our dispatching service.
  • Not Synced
    And for all those drivers, it will use
    that messaging channel to ask for
  • Not Synced
    their replication data.
  • Not Synced
    And once it has the replication data,
    it will compare that data
  • Not Synced
    with the data which the application
    expects.
  • Not Synced
    And doing that, we get a lot of good
    metrics around, like...
  • Not Synced
What percentage of drivers have data
successfully stored on them?
  • Not Synced
    And you can even break down metrics
    by region or by any app versions.
  • Not Synced
So this really helped us
drill into the problem.
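
A condensed sketch of that hourly audit, under the assumption that it amounts to fetching each active driver's stored keys over the messaging channel and diffing them against what dispatch expects:

```go
// AuditDrivers asks each active driver's phone, over the messaging
// channel, for its stored trip keys and reports the fraction of drivers
// whose data matches what dispatch expects. Names are assumptions.
func AuditDrivers(expected map[string][]string, fetch func(driverID string) []string) float64 {
	if len(expected) == 0 {
		return 1
	}
	ok := 0
	for driverID, want := range expected {
		got := make(map[string]bool)
		for _, k := range fetch(driverID) {
			got[k] = true
		}
		match := true
		for _, k := range want {
			if !got[k] {
				match = false
				break
			}
		}
		if match {
			ok++
		}
	}
	// Real metrics could be broken down by region or app version.
	return float64(ok) / float64(len(expected))
}
```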
  • Not Synced
    Finally, to know whether the stored data
    can be used for replication,
  • Not Synced
    and that the backup datacenter
    can handle the load,
  • Not Synced
    what we do is we use all the data
    which we got in the previous step
  • Not Synced
    and we send that to our backup datacenter.
  • Not Synced
    And within the backup datacenter
    we perform what we call
  • Not Synced
a shadow restoration.
  • Not Synced
    And since there is nobody else making any
    changes in that backup datacenter,
  • Not Synced
    after the restoration is completed,
    we can just query the dispatching service
  • Not Synced
    in the backup datacenter.
  • Not Synced
    And say, "Hey, how many active riders,
    drivers, and trips do you have?"
  • Not Synced
    And we can compare that number
    with the number we got
  • Not Synced
    in our snapshot
    from the primary datacenter.
  • Not Synced
    And using that, we get really valuable
    information around what's our success rate
  • Not Synced
    and we can do similar breakdowns by
    different parameters
  • Not Synced
    like region or app version.
  • Not Synced
    Finally, we also get metrics around
    how well the backup datacenter did.
  • Not Synced
    So did we subject it to a lot of load,
    or can it handle the traffic
  • Not Synced
    when there is a real failure?
  • Not Synced
    Also, any configuration issue
    in the backup datacenter
  • Not Synced
    can be easily caught by this approach.
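
A sketch of that shadow restoration check: replay the snapshot's replication data into the idle backup datacenter, then compare the counts it reports against the primary's snapshot. All names are assumptions:

```go
// Snapshot is the count of active entities reported by a dispatching
// service; field and function names are assumptions for illustration.
type Snapshot struct {
	Drivers, Trips int
}

// ShadowRestoreCheck replays replication data into the backup datacenter
// (restore stands in for that whole process) and returns the restoration
// success rate. Since nothing else mutates the backup, its counts can be
// compared directly against the primary's snapshot.
func ShadowRestoreCheck(primary Snapshot, restore func() Snapshot) float64 {
	backup := restore()
	if primary.Trips == 0 {
		return 1 // nothing to restore
	}
	return float64(backup.Trips) / float64(primary.Trips)
}
```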
  • Not Synced
    So using this service, we are
    constantly testing the system
  • Not Synced
    and making sure we have confidence in it
    and can use it during a failure.
  • Not Synced
'Cause if there's no confidence
    in the system, then it's pointless.
  • Not Synced
    So yeah, that was the idea
    behind the system
  • Not Synced
    and how we implemented it.
  • Not Synced
I did not get a chance to go into
different levels of detail,
  • Not Synced
    but if you guys have any questions,
    you can always reach out to us
  • Not Synced
    during the office hours.
  • Not Synced
    So thanks guys for coming
    and listening to us.
  • Not Synced
    (audience claps)