
# rC3 - Infrastructure Review

  • 0:00 - 0:12
    rC3 preroll music
  • 0:12 - 0:17
    ysf: Hello and welcome to the
    infrastructure review of the rC3 this
  • 0:17 - 0:23
    year, 2020. What the hell happened? How
    could it happen? I'm not alone this year.
  • 0:23 - 0:28
    With me is lindworm, who will help me with
    the slides and everything else I'm going
  • 0:28 - 0:36
    to say. And this is going to be a
    great fuck up like last year, maybe. We
  • 0:36 - 0:41
    have more teams, more people, more
    streams, more of everything. And the first
  • 0:41 - 0:45
    team I'm going to introduce, together with lindworm,
    is the SHOC. Are you there with me?
  • 0:45 - 0:52
    Lindworm: Oh, yeah, so I got to go to the
    SHOC. Yeah, it was kind of stressful this
  • 0:52 - 1:00
    year. We only had about 18 heralds for the
    main channels, rC1 and rC2. And we
  • 1:00 - 1:05
    introduced about 51 talks with that.
    Everybody from their home setup, which was
  • 1:05 - 1:10
    a very, very hard struggle. So we all had
    a metric ton of adrenaline and excitement
  • 1:10 - 1:16
    within us. So here you can see what you
    have seen, how a herald looks
  • 1:16 - 1:25
    from the front. And this is how it looks in the
    background. Oof. That was hard, really
  • 1:25 - 1:32
    hard for us. So you see all the different
    setups we have here. And we are very,
  • 1:32 - 1:38
    very pleased to also have set up a
    completely new operation center: the
  • 1:38 - 1:46
    Herald News Show, which I'd really, really
    like you to review on YouTube. This was
  • 1:46 - 1:54
    such a struggle. And we have about, oh,
    wait a second, so as we said, we're a
  • 1:54 - 1:59
    little bit unprepared here, I need to have
    my notes up. There were 20 members that
  • 1:59 - 2:06
    formed a new team on the first day. They
    made 23 shows, 10 hours of video
  • 2:06 - 2:13
    recording, 20 times the pizza man rang at
    the door. And 23 mate bottles had been
  • 2:13 - 2:19
    drunk during the preps because all of
    those people needed to be online the
  • 2:19 - 2:25
    whole time. So I really applaud
    them. That was really awesome, what they
  • 2:25 - 2:30
    brought together as a team and what they
    brought over the stream. And this is an
  • 2:30 - 2:39
    awesome team I hope we see more of. ysf,
    would you take it over? (ysf is muted)
  • 2:39 - 2:46
    Oh, no. My, my bad. So is the heaven
    ready? We need to go to the heaven and
  • 2:46 - 2:51
    have an infrastructure review of the
    heaven.
  • 2:51 - 3:29
    raziel: OK. Can you still hear me? Yes, hello?
    I'm raziel from the Heaven and, ehm…
  • 3:29 - 3:39
    Yeah, heaven is ready, so welcome,
    everybody. I'm raziel from heaven, and I
  • 3:39 - 3:48
    will present you the infrastructure review
    from the heaven team. We had some angel
  • 3:48 - 3:56
    statistics scraped out a few hours ago.
    And this year, we did not have as many
  • 3:56 - 4:05
    angels as last year, because we had a
    remote event, but we had a total of 1487
  • 4:05 - 4:17
    angels, of which 710 arrived and
    more than 300 angels
  • 4:17 - 4:28
    still did at least one shift. And in total the
    recorded work done up to that point was
  • 4:28 - 4:41
    roughly 17 and 75 weeks of working
    hours, and for the rC3 world we also
  • 4:41 - 4:51
    prepared a few goodies so people could
    come visit us. And so we provided them a
  • 4:51 - 5:01
    few badges there. There was, for example, a
    badge for every angel that found our
  • 5:01 - 5:09
    expired extinguisher, and one for
    extinguishing the fire in heaven. The first badge was
  • 5:09 - 5:22
    achieved by 232 of our angels, and a
    smaller but still good number of 125
  • 5:22 - 5:28
    angels managed to help us and
    extinguish the fire that broke out during
  • 5:28 - 5:38
    the event. And with those numbers checked,
    we also will jump into our heaven. So I
  • 5:38 - 5:46
    would like to show you some expressions
    and impressions from it. We had quite the
    team working to do exactly what the heaven
  • 5:46 - 5:53
    is there for: manage its people. So we needed
    our heaven office. And we also did this
  • 5:53 - 6:01
    with respect to your privacy. We
    painted our clouds white as
  • 6:01 - 6:07
    ever, so we cannot see your nicknames, and
    you could do your angel work without being
  • 6:07 - 6:13
    bothered by us asking for your names.
  • 6:13 - 6:23
    And also, we had prepared a secret
    passage to our back office. At every
  • 6:23 - 6:30
    real event, it would happen that
    some adventurers would find their way into
  • 6:30 - 6:35
    our back office. And so we needed to
    provide that opportunity as well, as you
  • 6:35 - 6:43
    can see here. And let me say that some
    adventurers tried to find the way into our
  • 6:43 - 6:50
    sacred digital back office, but only a few
    were successful. So we hope everyone found
  • 6:50 - 6:58
    their way back into the real world from our
    labyrinth. And we also did not spare any
  • 6:58 - 7:08
    expense to provide some additional updates for
    our angels as well. As you can see, we
  • 7:08 - 7:13
    tried to do some multi-instance support.
    So some of our angels also managed to
  • 7:13 - 7:21
    split up and serve more than one angel at
    a time. And that was quite awesome. And so
  • 7:21 - 7:29
    we tried to provide the same things we
    would do on Congress, but now from our
  • 7:29 - 7:39
    remote offices. And one last thing that
    doesn't… normally doesn't need to be said.
  • 7:39 - 7:48
    But in this year and with this
    different kind of event, I think it's
  • 7:48 - 7:55
    necessary, with the heaven as a
    representative for all the people trying
  • 7:55 - 8:05
    to help make this event awesome. And I
    think it's time to say the things we do
  • 8:05 - 8:12
    take for granted. And that is thank you
    for all your help. Thank you for all the
  • 8:12 - 8:20
    entities, all the teams, all the
    participants that achieved the goal of
  • 8:20 - 8:27
    bringing our real Congress, which many, many
    entities missed this year, onto a new
  • 8:27 - 8:34
    stage. We tried that online. It had its
    ups and downs. But I still think it was an
  • 8:34 - 8:40
    awesome adventure for everyone. And from
    the Heaven team I can only say thank you
  • 8:40 - 8:48
    and I hope to see you all again in the
    future at a real event. Bye! And have a
  • 8:48 - 9:07
    nice New Year.
    lindworm: Hello, hello, back again. So we
  • 9:07 - 9:18
    now are switching over to the Signal
    Angels. Are the signal angels ready?
  • 9:18 - 9:24
    Hello!
    trilader: Yeah, hello, uhm, welcome to the
  • 9:24 - 9:30
    infrastructure review for the Signal
    Angels, I have prepared some stuff for
  • 9:30 - 9:36
    you. This was for us… slides, please? This
    was for us the first time running a fully
  • 9:36 - 9:50
    remote Q&A session setup, I guess? We had
    some experience with DiVOC and had gotten
  • 9:50 - 9:54
    some help from there on how to do this,
    but just to compare, our usual procedure
  • 9:54 - 9:59
    is to have a signal angel in the room.
    They collect the questions on their laptop
  • 9:59 - 10:06
    there and they communicate with the Herald
    on stage, and they have a microphone – not like
  • 10:06 - 10:10
    the headset I'm wearing, but a studio
    microphone there – and they speak the
  • 10:10 - 10:18
    questions into it. Yeah, but remotely we
    really can't do that. Next slide. Because,
  • 10:18 - 10:23
    well, it would be quite a lot of hassle
    for everyone to set up good audio setups.
  • 10:23 - 10:30
    So we needed a new remote procedure. So we
    figured out that the signal angel and
  • 10:30 - 10:34
    the Herald could communicate via
    a pad and we could also collect the
  • 10:34 - 10:39
    questions in there. And the Herald would
    read the questions to the speaker and
  • 10:39 - 10:53
    collect feedback and stuff. So we had 175.
    No, 157 shifts, and sadly we couldn't fill
  • 10:53 - 11:03
    five of them in the beginning because
    there were not enough people there yet.
  • 11:03 - 11:08
    And also technically it was more than five
    unfilled shifts because for some reason
  • 11:08 - 11:17
    there were DJ sets and other things that
    aren't talks and also don't have Q&A. We
  • 11:17 - 11:22
    had 61 angels coordinated by four
    supporters, so me and three other people,
  • 11:22 - 11:26
    and we had 60 additional angels that
    in theory wanted to do signal angel work
  • 11:26 - 11:35
    but didn't show up to the introduction
    meeting. Next! As I've said, for each
    session, each talk, we created a pad where
  • 11:35 - 11:40
    we put in the questions from IRC,
    Mastodon, and Twitter. Well, we have a
    Mastodon, and Twitter and. Well, we have a
    bit more pads than talks we actually
  • 11:47 - 11:54
    handled, and I have some statistics about
    an estimated number of questions per talk.
  • 11:54 - 11:59
    What we usually assume is that there's a
    question per line, but some questions are
  • 11:59 - 12:03
    really long and had to be split over
  • 11:59 - 12:03
    multiple lines. There were some structured
    questions with headings and paragraphs,
  • 12:03 - 12:08
    some heralds or signal angels removed
    questions after they were done. And also
  • 12:08 - 12:13
    there was some chat and other
    communication in there. So next slide, we
    took a Python script, downloaded all the pad
  • 12:13 - 12:19
    contents, read them, counted the number of
    lines, and removed the size of the static
  • 12:19 - 12:23
    header. And in the end we had 179 pads and
    1,627 lines if we discount the static
  • 12:23 - 12:37
    header of nine lines per pad. So that in
    theory leads to about nine "questions" – in
  • 12:37 - 12:43
    quotation marks, because it's not really
    questions but lines, so it's an estimate –
  • 12:43 - 12:48
    per talk. Thank you.
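To make the counting procedure concrete, here is a minimal sketch in Python of what such a script could look like. It assumes Etherpad-style pads with a plain-text export endpoint; the nine-line header constant and the 179-pad count come from the talk, while the URL scheme and pad names are made up for illustration.

```python
# Minimal sketch of the pad-counting approach described above.
# Assumes Etherpad-style pads exposing a plain-text export at
# <pad-url>/export/txt; URLs and pad names here are hypothetical.
import urllib.request

STATIC_HEADER_LINES = 9  # fixed header present in every pad (per the talk)

def count_question_lines(pad_urls):
    """Return the total number of non-header lines across all pads."""
    total = 0
    for url in pad_urls:
        with urllib.request.urlopen(url + "/export/txt") as resp:
            text = resp.read().decode("utf-8")
        # roughly one question per line, minus the static header
        total += max(len(text.splitlines()) - STATIC_HEADER_LINES, 0)
    return total

if __name__ == "__main__":
    pads = ["https://pads.example.org/p/talk-%d" % i for i in range(1, 180)]
    lines = count_question_lines(pads)
    print("%d pads, %d lines, ~%.1f 'questions' per talk"
          % (len(pads), lines, lines / len(pads)))
```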
    ysf: ... talk and what I've learned is
  • 12:56 - 13:03
    never miss the introduction. So the next
    in line are the line producers, ha ha ha ha.
  • 13:03 - 13:33
    stb, are you there?
    stb: I am here, in fact. (singing) So
  • 13:33 - 13:39
    the people a bit older might recognize
    this melody, badly sung by yours truly and
  • 13:39 - 13:46
    other members of the line producers team,
    and I'll get to why that is relevant to
  • 13:46 - 13:53
    what we've been doing at this particular
    event. So what does, what do line
  • 13:53 - 13:58
    producers do? What does an,
    Aufnahmeleitung actually perform? It's
  • 13:58 - 14:01
    basically communication between everybody
    who's involved in the production, the
  • 14:01 - 14:07
    people behind the camera and also in front
    of the camera. And so our work started
  • 14:07 - 14:14
    really early, basically at the beginning
    of November, taking on things like prepping
  • 14:07 - 14:14
    speakers on the technical setup and
    rehearsing with them a little bit, and then
  • 14:14 - 14:18
    enabling the studios to
    actually do the production coordination on
  • 14:18 - 14:25
    the organizational side. The technical side
    was handled by the VOC, and we'll get to
  • 14:29 - 14:37
    hear about that in a minute. But getting
    all these people synced up and working
  • 14:37 - 14:43
    together well, that was quite a challenge.
    And that took a lot of Mumbles with a lot
  • 14:43 - 14:51
    of people in them. We only worked on the
    two main channels. There's quite a few
  • 14:51 - 14:58
    more channels that are run independently
    of kind of the central organization. And
  • 14:58 - 15:02
    again, we'll get to hear about the details
    of that in a minute. And so we provided
  • 15:02 - 15:07
    information. We tried to fill wiki pages
    with relevant information for everybody
  • 15:07 - 15:15
    involved. So that was our main task. So
    what does that mean specifically, the
  • 15:15 - 15:25
    production set up? We had 25 studios,
    mainly in Germany, also one in
  • 15:25 - 15:32
    Switzerland. These did produce recordings
    ahead of time for some speakers, and many
  • 15:32 - 15:39
    did live set ups for their own channels
    and also for the two main channels. And
  • 15:39 - 15:44
    I've listed everybody involved in the live
    production here. And there were 19
  • 15:44 - 15:50
    channels in total. So a lot of stuff
    happening. 25 studios, 19 channels that
  • 15:50 - 15:55
    broadcast content produced by these
    studios. So that's kind of the Eurovision
  • 15:55 - 16:00
    kind of thing, where you have different
    studios producing content and trying to
  • 16:00 - 16:06
    mix it all together. Again, the VOC took
    care of the technical side of things very
  • 16:06 - 16:11
    admirably, but getting everybody on the
    same page to actually do this was not
  • 16:11 - 16:19
    easy. For the talk program, we had over
    350 talks in total, 53 in the main channels.
  • 16:19 - 16:25
    And so handling all that, making
    sure everybody has the speaker information
  • 16:25 - 16:33
    they need and all this organizational
    stuff, that was a lot of work. So we
  • 16:33 - 16:38
    didn't have a studio for the main
    channels ourselves; the 25 studios – or the nine…
  • 16:38 - 16:44
    the live channels, the twelve – actually did
    provide the production facilities for the
  • 16:44 - 16:49
    speakers. So we can look at the next slide.
    There's a couple more numbers and of
  • 16:49 - 16:56
    course, a couple pictures from us working
    basically from today. We had 53 channel...
  • 16:56 - 17:05
    53 talks in the main channel. 18 of them
    were prerecorded and played out. We had 3
  • 17:05 - 17:10
    where people were actually on location in
    a studio and gave their talk from there.
  • 17:10 - 17:16
    And we had 32 that were streamed live like
    I am speaking to you now with various
  • 17:16 - 17:22
    technical bits that again the VOC will go
    into in a minute. And we did a lot of
  • 17:22 - 17:26
    Q&As – I don't have the numbers on how many
    talks actually had Q&As, but most of them
  • 17:26 - 17:33
    did, and those were always live. We had a
    total of 63 speakers we did prepare, at
  • 17:33 - 17:39
    least for the live Q&A session, and helped
    them set up, we helped them record their
  • 17:39 - 17:45
    talks if they wanted to prerecord them. So
    we spent anywhere between one and two
  • 17:45 - 17:50
    hours with every speaker to make sure they
    would appear correctly and in good quality
  • 17:50 - 17:56
    on the screen. And then during the four
    days, we, of course, helped coordinate
  • 17:56 - 18:00
    between the master control room and the
    twelve live studios to make sure that the
  • 18:00 - 18:04
    speakers were where they were supposed to
    be and any technical glitches could be
  • 18:04 - 18:09
    worked out and decided on the spot. If, for
    example, the line producers made a mistake
  • 18:09 - 18:15
    and a talk couldn't happen as we had
    planned because we forgot something, we
  • 18:15 - 18:20
    rescheduled and found a new spot for the
    speakers. So apologies again for that. And
  • 18:20 - 18:25
    thank you for your understanding and
    helping us bring you on screen on day two
  • 18:25 - 18:31
    and not day one. But I'm very glad that
    we could work that out. And that's
  • 18:31 - 18:40
    pretty much it from the line producers, I
    think. Next up is the VOC.
  • 18:40 - 18:45
    ysf: Thank you stb. Yes, you're right, the
    next are the VOC and kunsi and
  • 18:45 - 18:54
    JW2CAlex are waiting for us.
    Franzi: ... is Franzi from the VOC. 2020
  • 18:54 - 19:05
    was the year... Hm? Hi, this is Franzi
    from the... from VOC. 2020 was the year of
  • 19:05 - 19:12
    distributed conferences. We had 2 DiVOCs
    and the FrOSCon to learn how we are going
  • 19:12 - 19:17
    to produce remote talks. We learned a lot
    of stuff on organization, Big Blue Button
  • 19:17 - 19:24
    and Jitsi recording. We had a lot of other
    events which were just streaming, like
  • 19:24 - 19:33
    business as usual. So for rC3, we extended
    the streaming CDN with two new locations,
  • 19:33 - 19:42
    now 7 in total, with a total bandwidth of
    about 80 gigabits per second. We have two
  • 19:42 - 19:51
    new mirrors for media.ccc.de and are now
    also distributing the front end. We got
  • 19:51 - 19:58
    two new transcoder machines, and an
    enhanced Erfa setup: we now have 10 Erfas
  • 19:58 - 20:07
    with their own productions on media.ccc.de. So
    the question is, will it scale? On the
  • 20:07 - 20:10
    next slide...
    Alex: Yeah, next slide.
  • 20:10 - 20:21
    Franzi: ... we will see that it did
    scale. We did produce content for 25
  • 20:21 - 20:29
    studios and 19 channels, so we got lots and
    lots of recordings which will be published
  • 20:29 - 20:36
    on media.ccc.de in the next days and
    weeks. Some have already been published,
  • 20:36 - 20:43
    so there's a lot of content for you to
    watch. And now Alex will tell us something
  • 20:43 - 20:48
    about the technical part.
    Alex: My name is Alex, pronouns it/its. I
  • 20:48 - 20:52
    will now tell you about the technical part –
    well, more about the organization. I was
  • 20:52 - 20:57
    the link between the VOC and the line producing
    team. And now a bit about how it worked. So we
  • 20:57 - 21:02
    had those two main channels, rc-one and
    rc-two. Those channels have been produced
  • 21:02 - 21:08
    by the various studios distributed around
    the whole country. And those streams,
  • 21:08 - 21:12
    this is now the upper path in the picture,
    went to our ingest relay, to the FEM, to
  • 21:12 - 21:16
    the master control room. In Ilmenau there
    was a team of people adding the
  • 21:16 - 21:21
    translations, making the mix, making the
    mixdown, making recordings and then
  • 21:21 - 21:26
    publishing it back to the streaming
    relays. All the other studios produced their own
  • 21:26 - 21:31
    channels. Those channels also took the
    signals from different studios, made a
  • 21:31 - 21:36
    mixdown, etc., published to our CDN and
    relays, and we published the studio
  • 21:36 - 21:41
    channels. As you can see, this is not the
    typical setup we had in previous years, in presence.
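As an illustration of one hop in such a relay chain, here is a sketch in the configuration syntax of the open-source nginx-rtmp module. The talk does not specify the VOC's actual software stack, and all hostnames, address ranges, and application names below are invented.

```nginx
rtmp {
    server {
        listen 1935;

        # studios publish their program mix here
        application ingest {
            live on;
            allow publish 10.0.0.0/8;   # hypothetical studio uplink range
            deny publish all;
            # forward to the master control room for translation and mixdown
            push rtmp://mcr.example.net/program;
        }

        # the finished program comes back and fans out to the CDN edges
        application program {
            live on;
            push rtmp://edge1.example.net/live;
            push rtmp://edge2.example.net/live;
        }
    }
}
```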
  • 21:41 - 21:47
    So, on our next slide, we can
    see where this leads: lots of
  • 21:47 - 21:53
    communication. We had the line producing
    team, we had some production in Ilmenau
  • 21:53 - 21:57
    that had to be coordinated. We have the
    studios, we have the local studio helper
  • 21:57 - 22:02
    angels. We have some Mumbles there, some
    RocketChat here, some CDN people, some web team
  • 22:02 - 22:07
    where something happens. We have some
    documentation that should exist. And then we
  • 22:07 - 22:13
    started to plot out the communication
    paths. Next slide, please. If you plotted
  • 22:13 - 22:17
    all of them, it really looks like the
    world – well, this is actually the world – but
  • 22:17 - 22:20
    sometimes it feels like you're just
    getting lost in the different paths. Who do you
  • 22:20 - 22:25
    have to ask, who do you have to call?
    Where are you? What's the shortest path to
  • 22:25 - 22:33
    communicate? But let's have a look at the
    studios. First going to ChaosWest.
  • 22:33 - 22:40
    Franzi: Yes, on the next slide, you will
    see the studio set up at ChaosWest TV. So
  • 22:40 - 22:47
    thank you, ChaosWest for producing your
    channel.
  • 22:47 - 22:51
    Alex: On the next slide, you see the
    Wikipaka television and fernseh-streamen
  • 22:51 - 22:55
    (WTF) who have the internal motto:
    "Absolut nicht sendefähig" ("absolutely not
  • 22:55 - 23:01
    fit for broadcast") – chaos of recording. But even then, some
    studios do look more like studios, as
  • 23:01 - 23:08
    this time on the next slide, at the hacc.
    Franzi: Yeah, at hacc, you will also see
  • 23:08 - 23:15
    some of the bloopers we had to deal with.
    So, for example, here you can see there
  • 23:15 - 23:25
    was a cat in the camera view, so, yeah.
    And Alex, tell us about the open
  • 23:25 - 23:28
    infrastructure orbit.
    Alex: The open infrastructure orbit
  • 23:28 - 23:32
    showed, in this picture – it's
    really hard to see – how you can make a
  • 23:32 - 23:35
    studio look really nice, even if you're
    alone: feeling a bit comfier, more
  • 23:35 - 23:40
    hackish. But you also have those normal
    productions, as on the next slide: the
  • 23:40 - 23:46
    Chaosstudio Hamburg
    Franzi: Yeah, at Chaosstudio Hamburg, we
  • 23:46 - 23:53
    had two regular work cases like, you know,
    from all the other conferences, and they
  • 23:53 - 24:02
    were producing onsite in a regular studio
  • 23:53 - 24:02
    setup. And last but not least, we got
    some impressions from ChaosZone TV.
    Alex: As you can see here, also quite
  • 24:08 - 24:13
    regular studio setup. Quite regular? No.
    There was some coronavirus ongoing, and
  • 24:13 - 24:17
    so we had a lot of distancing,
    mask wearing and all the stuff so that
  • 24:17 - 24:23
    everyone is safe, but c3yellow (c3gelb)
    will tell you some facts about that. But
  • 24:23 - 24:29
    let's look at the nice things. For
    example, the minor issue: On the second
  • 24:29 - 24:34
    day, we were sitting there looking at our
    nice Grafana. Oh, we got a lot more
  • 24:34 - 24:39
    connections. The server load's increasing.
    The first question was: "Have we enabled
  • 24:39 - 24:45
    our cache?" We don't know. But the number
    of connections is growing, people are
  • 24:45 - 24:50
    watching our streams, the interest goes
    up. And we were like, well, at least the people
  • 24:50 - 24:57
    are watching the streams. If it's just the
    website, who cares, the streams work.
  • 24:57 - 25:02
    But then we suddenly got the realization:
    well, something did not really scale that
  • 25:02 - 25:10
    well. And then, on the next slide, this
    view switched pretty fast, after
  • 25:10 - 25:15
    looking at this traffic graph, from "Well,
    that's interesting" into "Well, we should
  • 25:15 - 25:19
    investigate". We got thousands of messages
    on Twitter DMs. We got thousands of
  • 25:19 - 25:23
    messages in RocketChat, IRC, and suddenly
    we had a lot of connections to handle; a
  • 25:23 - 25:28
    lot of inquiries to handle, a lot of phone
    calls, etc. to handle. And we had to
  • 25:28 - 25:31
    prioritize: for us, first the hardware, then the
    communication, because otherwise the
  • 25:31 - 25:39
    flood of information won't stop. On the next slide
    you can see what our minor issue was. So
  • 25:39 - 25:43
    at first, we got a lot of connections to
    our streaming web pages, then to load
  • 25:43 - 25:49
    balancers, and finally to our DNS servers.
    A lot of them were quite malformed. It
  • 25:49 - 25:54
    looked like a storm. But the more
    important thing we had to deal with was all
  • 25:54 - 26:00
    those passive aggressive messages from,
    from different persons who said: "Well,
  • 26:00 - 26:04
    you can't even handle streaming. What are
    you doing here?" And we worked together
  • 26:04 - 26:08
    with the c3infra team – thanks for that – on how
    to scale and decentralize a bit more just to
  • 26:08 - 26:14
    provide the people the connection power
    they need. So I think in the last years,
  • 26:14 - 26:19
    we didn't need to use more bandwidth. We
    showed we can provide even more bandwidth
  • 26:19 - 26:27
    if we need it. And then, noting everything
    down…
  • 26:27 - 26:36
    Franzi: So is it time to shut everything
    down? No, we won't shut everything down.
  • 26:36 - 26:43
    The studios can keep their endpoints, can
    continue to stream on their endpoints as
  • 26:43 - 26:48
    they wish. We want to keep in touch with
    you and the studios, produce content with
  • 26:48 - 26:57
    you, improve our software stack, improve
    other things like the ISDN, the Internet
  • 26:57 - 27:06
    Streaming Digital Node – the project for
    small camera recording setups to send
  • 27:06 - 27:13
    to speakers – which needs developers for the
    software. Also, KEVIN needs developers and
  • 27:13 - 27:21
    testers. What's KEVIN? Oh, we have
    prepared another slide or the next slide.
  • 27:21 - 27:28
    KEVIN is short for Killer Experimental
    Video Internet Noise, because we initially
  • 27:28 - 27:36
    wanted to use OBS.Ninja, but there are a
    couple of licensing issues. Not
  • 27:36 - 27:45
    everything in OBS.Ninja is open source
    like we wanted, so we decided to code our
  • 27:45 - 27:53
    own OBS.Ninja-style software. So if you
    are interested in doing so, please get
  • 27:53 - 28:02
    into contact with us or visit the wiki. So
    that's all from the VOC. And we are now
  • 28:02 - 28:11
    heading over to c3lingo.
    ysf: Exactly. c3lingo's oskar should be
  • 28:11 - 28:23
    waiting in Studio 2, aren't you?
  • 28:23 - 28:28
    oskar: Yeah, hello. Hi, yeah, I'm oskar
  • 28:28 - 28:41
    from c3lingo. We will jump straight into
    the stats on our slides. As you can see
  • 28:41 - 28:48
    here, we translated 138 talks this time.
    As you can see, it's also way fewer
  • 28:48 - 28:54
    languages than at the other chaos events
    we had, since our second-languages
  • 28:54 - 28:58
    team, which does everything that is not
    English and German, was only five people
  • 28:58 - 29:03
    strong this time. So we only managed to do
    five talks into French and three talks
  • 29:03 - 29:13
    into Brazilian Portuguese. And then on the
    next slide… We are looking at our coverage
  • 29:13 - 29:17
    for the talks and we can see that on the
    main channels we managed to cover all talks
  • 29:17 - 29:22
    that were happening from English to German
    and German to English, depending on what
  • 29:22 - 29:30
    the source language was. And then, on the
    other languages track, we only managed to
  • 29:30 - 29:35
    do 15 percent of the talks from the main
    channels. And then on the further
  • 29:35 - 29:39
    channels, which are a couple of others that
    were also provided to us, the
  • 29:39 - 29:46
    translation team, we managed to do 68% of
    the talks, but none of them were
  • 29:46 - 29:53
    translated into other languages than
    English and German. On the next slide,
  • 29:53 - 29:59
    some global stats. We had 36
    interpreters, who in total managed to
  • 29:59 - 30:06
    translate 106 hours and 7 minutes of talks
    into another language simultaneously. And
  • 30:06 - 30:11
    the maximum number of hours one person did
    was 16 hours, and
  • 30:11 - 30:17
    the average number of hours people
    did was around 3 hours of translation
  • 30:17 - 30:27
    across the entire event. All right. Then I
    also have some anecdotes to tell and
  • 30:27 - 30:31
    some mentions I want to make. We had two new
    interpreters that we want to say "hi" to,
  • 30:31 - 30:36
    and we had a couple of issues with the
    digital setup that we didn't have before with
  • 30:36 - 30:41
    regular events where people were present.
    For example, the issue that sometimes when
  • 30:41 - 30:46
    two people are translating, one person
    starts interpreting something on the wrong
  • 30:46 - 30:50
    stream. Maybe they were watching the wrong
    one. And then the partner just thinks they
  • 30:50 - 30:54
    have more delay or something. Or, for
    example, a partner having a smaller delay
  • 30:54 - 30:59
    and then thinking the partner can suddenly
    read minds because they can translate
  • 30:59 - 31:03
    faster than the other person is actually
    seeing the stream. Those are issues that
  • 31:03 - 31:09
    we usually didn't have with the regular
    stream, but only with the regular events,
  • 31:09 - 31:18
    not with remote events. And yeah, some
    hurdles to overcome. Another thing was,
  • 31:18 - 31:24
    for example, when on the r3s stage, the
    audio sometimes cut out for us. But
  • 31:24 - 31:28
    because one of our interpreters had
    already translated the talk twice, at
  • 31:28 - 31:33
    least partially – because it had
    already been canceled before – they
  • 31:33 - 31:38
    basically knew most of the content and
    could basically do a PowerPoint karaoke
  • 31:38 - 31:43
    translation, and were able to do most of the
    talk just from the slides without any
  • 31:43 - 31:54
    audio. Yeah, and then there also was...
    The last thing I want to say is actually
  • 31:54 - 31:59
    a big shout-out to
    two of our team members who weren't able
  • 31:59 - 32:02
    to interpret with us this time because
    they put their heart and soul into making this
  • 32:02 - 32:07
    event happen. And that's stb and katti,
    and that's basically everything from
  • 32:07 - 32:16
    c3lingo. Thanks.
  • 32:16 - 32:29
    ysf: (muted)
  • 32:29 - 32:37
    Hello, c3subtitles it is now. td will say
    the right text to his slides, which you already
  • 32:37 - 32:48
    saw a minute ago.
    td: OK. OK, hi, so I'm td from the
  • 32:48 - 32:55
    c3subtitles team. And next slide, please.
    So just to quickly let you know how we get
  • 32:55 - 33:00
    from the recorded talks to the released
    subtitles. Well, we take the recorded
    videos and apply speech recognition
    videos and apply speech recognition
    software to get a raw transcript. And then
  • 33:05 - 33:08
    Angels work on that transcript to correct
    all the mistakes that the speech
  • 33:08 - 33:13
    recognition software makes. And we again
    apply some autotiming magic to get some
  • 33:13 - 33:18
    raw subtitles. And then again Angels do
    quality control on these tracks to get released subtitles.
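The stage progression td describes can be pictured as a tiny state machine. The sketch below is illustrative only – the stage names follow the talk, but the code is not the actual c3subtitles tooling (the backlog hours are the figures quoted later in this talk).

```python
# Illustrative model of the c3subtitles pipeline stages; not the real tooling.
from enum import Enum

class Stage(Enum):
    TRANSCRIBING = 1     # angels fix the raw speech-recognition transcript
    TIMING = 2           # autotiming produces raw, timed subtitles
    QUALITY_CONTROL = 3  # angels review the timed subtitle tracks
    RELEASED = 4         # published alongside the recording

def advance(stage: Stage) -> Stage:
    """Move a subtitle track to the next pipeline stage (RELEASED is final)."""
    return Stage(min(stage.value + 1, Stage.RELEASED.value))

# backlog figures quoted in the talk: 41 h transcribing, 26 h timing, 51 h QC
backlog_hours = {Stage.TRANSCRIBING: 41, Stage.TIMING: 26,
                 Stage.QUALITY_CONTROL: 51}
assert advance(Stage.TIMING) is Stage.QUALITY_CONTROL
print(sum(backlog_hours.values()), "hours of material still in the pipeline")
```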
  • 33:18 - 33:24
    Next slide, please. So
    as you can see, we have various subtitle
  • 33:24 - 33:30
    tracks in different stages of completion.
    And these are seconds of material that we
  • 33:30 - 33:35
    have; you can see all the numbers are going up
    and to the right, as they should be. So
  • 33:35 - 33:42
    next slide, please. In total, we had 68
    distinct angels that worked 4 shifts on
  • 33:42 - 33:47
    average. 83 percent of our angels returned
    for a second shift. 10 percent of our
  • 33:47 - 33:55
    angels worked 12 or more shifts. And in
    sum we had 382 hours of angel work for 47
  • 33:55 - 34:01
    hours of material. So far we've had two
    releases for rC3 and hopefully more yet to
  • 34:01 - 34:06
    come, and 37 releases for all the
    congresses, mostly on the first few days
  • 34:06 - 34:11
    where we didn't have many recordings. We
    have 41 hours of material still in the transcribing
  • 34:11 - 34:17
    stage, 26 hours of material in
    the timing stage and 51 hours of material in
  • 34:17 - 34:22
    the quality control stage. So there's
    still lots of work to be done. Next slide,
  • 34:22 - 34:26
    please. When you have transcripts, you can
    do fun stuff with them. For example, you
  • 34:26 - 34:32
    can see that what's important to people in this
    talk is "people". We are working on other
  • 34:32 - 34:37
    cool features that are yet to come. Stay
    tuned for that. Next slide, please. So to
  • 34:37 - 34:42
    keep track of all these tasks, we've been
    using a state-of-the-art high-performance
  • 34:42 - 34:47
    lock-free NoSQL columnar data store,
    a.k.a. a kanban board in the previous
  • 34:47 - 34:51
    years. And because we don't have any
    windows in the CCL building anymore, we
  • 34:51 - 34:56
    had to virtualize that. So we're using
    kanban software now. At this point, I
  • 34:56 - 35:00
    would like to thank all our hard-working
    angels for the work. And next slide
  • 35:00 - 35:05
    please. If you're feeling bored between
    congresses then you can work on some
  • 35:05 - 35:09
    transcripts. Just go to c3subtitles.de. If
    you're interested in our work, follow us
  • 35:09 - 35:15
    on Twitter. And there's also a link to the
    release subtitles here. So that's all.
  • 35:15 - 35:23
    Thank you.
    ysf: Thank you, td. And before we go into
  • 35:23 - 35:29
    the POC, where Drake is waiting, I'm sure
    everyone is asking why are those guys
  • 35:29 - 35:38
    saying "next slide"? So wait. In
    the end, we have the infrastructure review
  • 35:38 - 35:44
    of the infrastructure review meeting going
    on. So be patient. Now, Drake, are you
  • 35:44 - 35:56
    ready in Studio 1?
  • 35:56 - 36:01
    Drake: OK. Hello, I'm Drake from the Phone
    Operations Center, and
  • 36:01 - 36:04
    I'd like to present to you our
    numbers and maybe some
  • 36:04 - 36:12
    anecdotes at the end of our part. So
    please switch to the next slide. Let's get
  • 36:12 - 36:21
    into the numbers first. So first off,
    first off, you registered about 1950 ...
  • 36:21 - 36:29
    5195 SIP extensions, which is about 500
    more than you registered on the last
  • 36:29 - 36:38
    congress. Also, you did about 21 000
    calls, a little bit less than on the last
  • 36:38 - 36:44
    congress. But, yeah, we are still quite
    proud of how you have used our system.
  • 36:44 - 36:50
    And yeah, it ran quite stably. And
    as you may notice on the bottom, we also
  • 36:50 - 36:57
    had about 23 DECT antennas at the congress
    or at this event. So please switch to the
  • 36:57 - 37:07
    next slide. And this is our new feature,
    it's called the... next slide ..., it
  • 37:07 - 37:12
    is called the Eventphone Decentralized
    DECT Infrastructure, which we especially
  • 37:12 - 37:19
    prepared for this event, the EPDDI. So we
    had about 23 RFPs online throughout
  • 37:19 - 37:30
    Germany with 68 DECT telephones
    registered to them. But it's not only the
  • 37:30 - 37:36
    German part that we covered. We actually
    had one mobile station walking out through
  • 37:36 - 37:42
    Austria, through Passau, I think. So
    indeed we had a European Eventphone DECT
  • 37:42 - 37:52
    decentralized infrastructure. Next slide
    please. We also have some anecdotes, so
  • 37:52 - 37:57
    maybe some of you have noticed that we had
    a public phone, a working public phone in
  • 37:57 - 38:04
    the rC3 world where you could call other
    people on the SIP telephone system and
  • 38:04 - 38:10
    also other people started to play with our
    system. And I think yesterday
  • 38:10 - 38:19
    someone started to introduce c3fire, so you
    could actually control a flamethrower
  • 38:19 - 38:25
    through our telephone system. And I'd like
    to present a video here. Next slide
  • 38:25 - 38:35
    please. Maybe you can play it. I have
    quite a delay in waiting for the video to
  • 38:35 - 38:43
    play. So what you can see here is the
    c3fire system actually controlled by a
  • 38:43 - 38:54
    DECT telephone somewhere in Germany. So
    next slide please. We also provided you
  • 38:54 - 39:04
    with SSTV servers via the phone
    number 229, where you could receive some
  • 39:04 - 39:10
    pictures from event phone, like a postcard
    basically. So basically you could call the
  • 39:10 - 39:19
    number and receive a picture, or some
    more pictures. And next
  • 39:19 - 39:28
    slide please. Yeah basically, that's all
    from the Eventphone and with that we say
  • 39:28 - 39:34
    thank you all for the nice and awesome
    event and yeah, bye from the first
  • 39:34 - 39:44
    certified assembly POC. Bye.
    ysf: Thank you, POC, and hello GSM: lynxes
  • 39:44 - 39:51
    is waiting for us.
  • 39:51 - 39:56
    lynxes: Yeah, hello, I'm lynxes, I'm from
  • 39:56 - 40:03
    the GSM team. This year was quite
    different as you can imagine. However,
  • 40:03 - 40:11
    next slide please. But we managed to
    get a small network running and also a
  • 40:11 - 40:20
    couple of SIM cards registered. So where are
    we now? So next slide please. As you can
  • 40:20 - 40:24
    see, we are just there at the red dot.
    There's not even a single line for our
  • 40:24 - 40:32
    five extensions but we managed 130 calls
    over five extensions. And next slide
  • 40:32 - 40:44
    please. So we got, so we got five
    extensions registered with four SIM cards
  • 40:44 - 40:52
    and three locations with mixed
    technologies, also two users so far, sadly.
  • 40:52 - 40:58
    And one network with more or less zero
    problems. And so let's take a look at the
  • 40:58 - 41:06
    coverage. So next slide please. So we are
    quite lucky that we managed to get an
  • 41:06 - 41:15
    international network running. So we got
    two base stations in Berlin. One in the
  • 41:15 - 41:20
    hackerspace AfRA, another one north of
    Berlin. And yeah one of our members is
  • 41:20 - 41:33
    currently in Mexico. And he's providing
    the remote chaos networks there. Yes, so
  • 41:33 - 41:43
    that's basically our network. So before we
    go to the next slide: what we
  • 41:43 - 41:51
    have done so far is, we are just two
    people instead of 10 to 20, and we had some
  • 41:51 - 42:00
    fun improving our network and
    preparing for the next congress. And next
  • 42:00 - 42:06
    slide please. And yeah, now I'm closing
    with the EDGE computing. We improved our
  • 42:06 - 42:15
    EDGE capabilities and yeah, I wish you a
    hopefully better year and yeah maybe see
  • 42:15 - 42:22
    you next year remote or in person. Have
    fun.
  • 42:22 - 42:31
    ysf: Thanks and I give a hand to lindworm
    for doing the "slide DJ" all the time, and
  • 42:31 - 42:37
    he now switches to the Haecksen, who are
    next, and they bring an image, and melzai is
  • 42:37 - 42:47
    waiting for us in Studio 3.
  • 42:47 - 42:50
    melzai: Hello, what are phones without
    people?
  • 42:50 - 42:53
    So I'm now giving an introduction
    here on how many people we needed to
  • 42:53 - 42:59
    run the whole Haecksen assembly. We had
    around 20 organizing haecksen and we had
  • 42:59 - 43:04
    around 20 speakers in our events. And we
    had in total around 40 events, but I'm
  • 43:04 - 43:10
    pretty sure that I don't even know all of
    these. As you realize, the world is pretty
  • 43:10 - 43:15
    large. So we needed around seven million
    pixels to display the whole Haecksen
  • 43:15 - 43:23
    world. And that needed around 400 commits
    at our GitHub corner of the internet.
  • 43:23 - 43:29
    Around 130 people received the fireplace
    badge in our case. And around 100 people
  • 43:29 - 43:36
    tested our swimming pool and received that
    badge. So a great year for non ???. Also
  • 43:36 - 43:43
    around 49 people showed some very deep
    dedication and checked on all memorials at
  • 43:43 - 43:47
    our Haecksen assembly. Congratulations for
    that. There were quite a lot of these.
    Our events ran on our BigBlueButton
  • 43:47 - 43:53
    from the Congress, and so we had,
    starting from day 0, no lags and were able
    starting from day 0 no lags and we're able
    to host up to 133 people in one session.
  • 44:00 - 44:05
    And that was quite stable. We also
    introduced four new members; around 13 new
  • 44:05 - 44:11
    Haecksen joined just for the Congress.
    And we increased to a size of about 440
  • 44:11 - 44:17
    Haecksen overall. Also, we got new
    Twitter accounts supporting us, so we have
  • 44:17 - 44:22
    added over 200 more Twitter accounts. And
    so, you know, our messages are getting
  • 44:22 - 44:28
    heard. But besides the virtual, we also did
    some quite physical things. First of all,
  • 44:28 - 44:33
    we distributed over 50 physical goodie
    bags to the people with microcontrollers
  • 44:33 - 44:39
    and self-sewn masks in it, as you can see
    on the picture. And also sadly, we sold
  • 44:39 - 44:44
    so many rC3 Haecksen-themed trunks that
    they are now out of stock. But they will
  • 44:44 - 44:54
    be back in January. Thank you.
    ysf: No, thank you. And I'm going to send
  • 44:54 - 45:00
    thanks to the Chaospatinnen…
    Chaospat*innen… who are waiting in Studio
  • 45:00 - 45:11
    One.
    Mike: Hi, all this is Mike from the
  • 45:11 - 45:16
    Chaospat*innen team. We've been welcoming
    new attendees and underrepresented
  • 45:16 - 45:21
    minorities to the chaos community for over
    eight years. We match up our mentees with
  • 45:21 - 45:25
    experienced chaos mentors. These mentors
    help their mentees navigate our world of
  • 45:25 - 45:30
    chaos events. DiVOC was our first remote
    event and it was a good proof of concept
  • 45:30 - 45:38
    for rc3. This year, we had 65 amazing
    mentees and mentors, two in-world
  • 45:38 - 45:43
    mentee/mentor matchup sessions, one great
    assembly event hosted by two of our new
  • 45:43 - 45:50
    mentees, and a wonderful world map
    assembly built with more than 1337
  • 45:50 - 45:58
    kilograms of multicolor pixels. Next
    slide, please. And here's a small part of
  • 45:58 - 46:04
    our assembly with our signature propeller
    hat tables. And thank you to the amazing
  • 46:04 - 46:09
    Chaospat*innen team: fragilant, jali,
    azriel and lilafish. And to our great
  • 46:09 - 46:14
    mentees and mentors. We're looking forward
    to meeting all of the new mentees at the
  • 46:14 - 46:26
    next chaos event.
  • 46:26 - 46:33
    lindworm: Yeah, I think that was my call.
  • 46:33 - 46:50
    So next up, we'll have the, let me see,
    the c3adventure! Are you ready?
  • 46:50 - 46:54
    Roang: Hello, my name is Roang
    Mewp: and I'm Mewp
  • 46:54 - 46:59
    Roang: and we will talk about the
    c3adventure, the 2D world, and what we did
  • 46:59 - 47:11
    to bring it all online. Next slide please.
    OK, so when we started out, we looked into
  • 47:11 - 47:20
    how we could bring a Congress-like
    adventure to the remote experience. And on
  • 47:20 - 47:30
    October we started with the development
    and we had some trouble in that we had
  • 47:30 - 47:36
    multiple upstream merges that gave us some
    problems. And also due to just Congress
  • 47:36 - 47:41
    being Congress, or remote experience being
    a remote experience, we needed to
  • 47:41 - 47:49
    introduce features a bit late or add
    features on the first day. So auth was
  • 47:49 - 47:58
    merged just 4:40 AM in the first day. And
    on the second day, we finally fixed the
  • 47:58 - 48:04
    instance jumps – you know, when you walk
    from one map to the next – we had some
  • 48:04 - 48:08
    problems there. But on the second day it
    all went up. And I hope you have all
  • 48:08 - 48:15
    enjoyed the badges that have finally been
    updated and brought into the world today.
  • 48:15 - 48:23
    What does that all mean? Since we started
    implementing, there have been 400 git
  • 48:23 - 48:29
    commits in our repository all-in-all,
    including the upstream merges. But I think
  • 48:29 - 48:36
    the more interesting stuff is what has
    been done since the whole thing went live.
  • 48:36 - 48:42
    We had 200 additional commits, fixing
    stuff and making the experience better for
  • 48:42 - 48:52
    you. Next slide. In order to bring this
    all online, we not only had to think about
  • 48:52 - 48:57
    the product itself, not only think about
    the world itself, but we also had to think
  • 48:57 - 49:03
    about the deployment. The first commit on
    the deployer, it's a background service
  • 49:03 - 49:10
    that brings the experience to you, has
    been done on the 26th of November. We started
  • 49:10 - 49:16
    the first instance, the first clone of the
    WorkAdventure through this deployer on
  • 49:16 - 49:23
    the 8th of December. And a couple of days
    beforehand, I was getting a bit swamped. I
  • 49:23 - 49:27
    couldn't do all of the work anymore,
    because I had to coordinate both of the
  • 49:27 - 49:32
    projects. And so my colleague took over
    for me, and helped me out a lot. So I'll
  • 49:32 - 49:39
    give over to him to explain what he did.
    Mewp: Yeah. So imagine that on Day -5 I
  • 49:39 - 49:47
    get a message from a friend that, "Hey,
    help is needed!" So I say, "OK, let's do
  • 49:47 - 49:56
    it." And Roang tells me that, "OK, so we
    can spawn an instance and scale it
  • 49:56 - 50:04
    somehow and do that." And I spawned the
    deployer and my music stopped. I streamed
  • 50:04 - 50:09
    music from the internet, and I wondered
    why it stopped. And I noticed that,
  • 50:09 - 50:16
    oh, there are a lot of logs now. Like, a
    lot. And finally, on Day -4, I noticed
  • 50:16 - 50:27
    that the deployer was spawning copies of
    itself every few seconds in the log. So
  • 50:27 - 50:33
    that was the state back then. From Day -4
    until Day 1, we have basically written the
  • 50:33 - 50:45
    thing. And that's, well… Day 1 we were
    ready. Well, almost ready. I mean, we had
  • 50:45 - 50:51
    like 14 instances deployed. And I forgot
    to mention that, when we were about to
  • 50:51 - 51:00
    deploy 200 of them at once, it wouldn't work
    because all of the things would time out.
  • 51:00 - 51:09
    So we patched things quickly, and at 13
    o'clock we had our first deployment. This
  • 51:09 - 51:17
    worked, and everything was fine, and…
    wait… Why is everybody on one instance?
  • 51:17 - 51:24
    So, it turns out that we had a bug, not in
    the deployer, in the app that would move
  • 51:24 - 51:31
    you from the lobby to the lobby on a
    different map. So during the first day, we
  • 51:31 - 51:36
    have we've had a lot of issues of people
    not seeing each other because they were on
  • 51:36 - 51:45
    different instances of the lobby. So we
    are working hard, and… next slide, please,
  • 51:45 - 51:56
    so we can see that… we are working hard to
    reconfigure that to bring you together in
  • 51:56 - 52:02
    the assembly. I think we have succeeded.
    You can see the population graph on this
  • 52:02 - 52:10
    slide. The first day was almost our most
    popular one. And the next day, it would
  • 52:10 - 52:24
    seem, was OK, not as popular, but we
    hit the peak of 1600 users that day.
  • 52:24 - 52:30
    What else about this? The most popular
    instance was lobby, of course. The second
  • 52:30 - 52:38
    most popular instance was hardware hacking
    area for a while. Then the third, I think.
  • 52:38 - 52:51
    Next slide please. We have counted, well,
    first of all, we've had in total about 205
  • 52:51 - 52:57
    assemblies. The number increased day by
    day, because people, through the whole
  • 52:57 - 53:05
    congress, they were working on their maps.
    For a while, CERT had over a thousand maps
  • 53:05 - 53:11
    active in their assembly, which led to the
    map server crashing. Some of you might
  • 53:11 - 53:19
    have noticed that. It stopped working
    quite a few times during Day 3. And they
  • 53:19 - 53:29
    have reduced the number of maps to 255.
    And that was fine. At the end of Day 3, I
  • 53:29 - 53:42
    have counted about 628 maps, and this
    is less than what was available
  • 53:42 - 53:50
    in reality, because it was the middle of
    the night (as always), and it
  • 53:50 - 53:56
    wasn't trivial to count them. But in the
    maps I have found, we have found over two
  • 53:56 - 54:02
    million used tiles. So that's something
    you can really explore. I wish I could
  • 54:02 - 54:12
    have, but deploying this was also fun.
    Next slide, please. And what… Yeah?
  • 54:12 - 54:18
    Roang: Just a quick interjection. I really
    want to thank everyone that has put work
  • 54:18 - 54:23
    into their maps and made this whole
    experience work. We, we provided the
  • 54:23 - 54:29
    infrastructure, but you provided the fun.
    And so I really want to thank everyone.
  • 54:29 - 54:34
    Mewp: Yeah, the more things happen on the
    infrastructure, the more fun we have. We
  • 54:34 - 54:43
    especially don't like to sleep. So we
    didn't. I basically alternated with Roang
  • 54:43 - 54:50
    in the way that I slept five hours during
    the night and he slept five hours in the
  • 54:50 - 54:57
    day. And the rest of the time, we were up.
    That record, though, is incorrect. Roang is
  • 54:57 - 55:05
    now up 30 hours straight, because the
    badges were too important to bring to you
  • 55:05 - 55:14
    to go to sleep. The thing you see on this
    graph is undeployed instances. We were
  • 55:14 - 55:20
    redeploying things constantly. Usually in
    the form of redeploying half of the
  • 55:20 - 55:24
    infrastructure at any given time. The way
    it was developed, you wouldn't have
  • 55:24 - 55:29
    noticed that. You wouldn't be kicked off
    your instances, but for a brief period of
  • 55:29 - 55:40
    time you wouldn't be able to enter any
    of them. But… Next slide. I have been joking
  • 55:40 - 55:46
    for a few days at the Congress that they
    have been implementing a sort of
  • 55:46 - 55:50
    Kubernetes thing, because it's
    automatically deploy things, and manage
  • 55:50 - 55:57
    things, and so on. And I have noticed by
    Day 3 that I have achieved true
  • 55:57 - 56:05
    enlightenment and true automation, because
    we have decided to deploy everything at
  • 56:05 - 56:11
    once at some point. The reason was that we
    were being DDoSed, and we had to change
  • 56:11 - 56:21
    something to mitigate that. And so we
    did that, and everything was fine. But we
  • 56:21 - 56:27
    made a typo. We made a typo and the
    deployment failed. And once the deployment
  • 56:27 - 56:39
    failed, it deleted all the servers. So,
    yeah, 405 servers got deleted by what I
  • 56:39 - 56:48
    remember was a single line. So it was
    brought back up automatically, and that wasn't
  • 56:48 - 56:55
    a problem. It was all fine, but well: to
    err is human, to automate mistakes is devops.
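The aphorism invites a small guardrail. Below is a hypothetical sketch in Python of how a deployer could refuse suspicious bulk deletions; it is not the actual c3adventure deployer, and the threshold is an arbitrary example.

```python
# Hypothetical guardrail against automating a fleet-wiping typo;
# not the actual c3adventure deployer.
MAX_DELETE_FRACTION = 0.1  # never remove more than 10% of the fleet at once

def plan_deletions(current_servers, desired_servers):
    """Return the servers to delete, refusing implausibly large wipes."""
    to_delete = set(current_servers) - set(desired_servers)
    if len(to_delete) > MAX_DELETE_FRACTION * len(current_servers):
        raise RuntimeError(
            "refusing to delete %d of %d servers: a typo in the desired "
            "state would wipe the fleet"
            % (len(to_delete), len(current_servers)))
    return sorted(to_delete)

# e.g. a typo that empties the desired state raises instead of deleting:
# plan_deletions(["s%d" % i for i in range(405)], [])  # -> RuntimeError
```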
  • 56:55 - 57:04
    Next slide? What's important is
    that these 405 servers were provided by
  • 57:04 - 57:08
    Hetzner. We couldn't have done that
    without their infrastructure, without
  • 57:08 - 57:16
    their cloud. The reason we got up so
    quickly after this was that the servers
  • 57:16 - 57:21
    were deleted, but they could be
    reprovisioned almost instantly. So the
  • 57:21 - 57:28
    whole thing took like 10 minutes to get
    back up. And, next slide. That's all.
  • 57:28 - 57:39
    Thank you all for testing our
    infrastructure, and see you next year.
  • 57:39 - 57:46
    ysf: Thank you, c3adventure! So this was
    clearly the first conference that didn't
  • 57:46 - 57:54
    clap for falling mate bottles! If that's
    not the thing, maybe we try next year? The
  • 57:54 - 58:03
    Lounge. And I know I have to ask for the
    next slide too. The rC3 Lounge artists.
  • 58:03 - 58:09
    And I was asked to read every country
    where someone was, because everyone helped
  • 58:09 - 58:17
    to make the Lounge what it was: an awesome
    experience. So there were: Berlin, Mexico City,
  • 58:17 - 58:26
    Honduras, London, Zürich, Stockholm,
    Amsterdam, Rostock, Glasgow, Leipzig,
  • 58:26 - 58:36
    Santiago de Chile, Prague, Hamburg,
    Mallorca, Krakow, Tokyo, Philadelphia,
  • 58:36 - 58:45
    Frankfurt am Main, Cologne, Moscow, Taipei
    (Taiwan), Hannover, Shanghai, Seoul… Seoul,
  • 58:45 - 58:55
    I think, sorry. Vienna, Hong Kong,
    Karlsruhe and Guatemala. Thank you guys
  • 58:55 - 59:03
    for making the Lounge. So the next is the
    Hub and they should be waiting in
  • 59:03 - 59:32
    Studio Two.
    (audible echo)
  • 59:32 - 59:35
    XXX: …software is based on Django. And
  • 59:35 - 59:41
    it's intended to be used for the next
    event. The problem is it was a new
  • 59:41 - 59:53
    software. We had to do a lot of
    integrations, yeah, live during Day 0.
  • 59:53 - 60:14
    Well, OK. No. OK, yeah, hi. I'm presenting
    the Hub, which is a software we wrote for
  • 60:14 - 60:20
    this conference. Yeah. It's based on
    different components, all of them are
  • 60:20 - 60:28
    based on Django. It's intended to be used
    on future events as well. Our main problem
  • 60:28 - 60:34
    was it's a new software. We wrote it and,
    yeah, a lot of the integrations were only
  • 60:34 - 60:42
    possible on Day 0 or Day 1. And yeah. So
    even still today on Day 4, we did a lot of
  • 60:42 - 60:47
    updates, commits to the repository, and
    even the numbers on the screens are
  • 60:47 - 60:56
    already outdated again. But yeah, as you
    could possibly see, we have a lot of
  • 60:56 - 61:02
    commits all day and all night long.
    Only a small dip at 6 AM. I am sorry for
  • 61:02 - 61:10
    that. Next slide, please. And yeah,
    because you're quite busy
  • 61:10 - 61:15
    using the platform, some of these numbers
    on the screen are already outdated again.
  • 61:15 - 61:25
    Out of the 360 assemblies which were
    registered, only 300 got accepted. Most of
  • 61:25 - 61:33
    them were, yeah, events or people wanting
  • 61:25 - 61:33
    to do a workshop and trying to register an
    assembly. Or duplicates. So, please
    organize yourselves. Events: currently we have over
  • 61:40 - 61:47
    940 in the system. You're still clicking
    events, nice. Thanks for that. The events
  • 61:47 - 61:53
    are coordinating with the studios, so we
    are integrating all of the events of all
  • 61:53 - 61:59
    the studios, and the individual ones, and
    the self-organized sessions. All of them. A new
  • 61:59 - 62:08
    feature, the badges. Currently you have
    created 411. And, yeah, from these badges
  • 62:08 - 62:18
    redeemed, we have 9269 achievements and
    19,000 stickers.
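Since the Hub is built on Django, a badge-and-achievement feature of this kind could be modeled roughly as in the sketch below. The model names and fields are illustrative guesses, not the actual Hub schema.

```python
# Illustrative Django sketch of a badge-redemption feature; the real Hub
# schema is not published here, so all names and fields are hypothetical.
from django.conf import settings
from django.db import models

class Badge(models.Model):
    # a badge belongs to an assembly and is redeemed via a secret token
    assembly = models.ForeignKey("Assembly", on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    token = models.CharField(max_length=64, unique=True)

class Achievement(models.Model):
    # one row per user who redeemed a badge
    badge = models.ForeignKey(Badge, on_delete=models.CASCADE)
    user = models.ForeignKey(settings.AUTH_USER_MODEL,
                             on_delete=models.CASCADE)
    redeemed_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        unique_together = ("badge", "user")  # each badge redeemable once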
  • 62:18 - 62:27
    Documentation, sadly, was a 404, because yeah. We were really busy
    doing stuff. Some documentation has
  • 62:27 - 62:33
    already been written, but yeah. More
    documentation will become available
  • 62:33 - 62:40
    later. We will open source the whole thing
    of course, but right now we're still in
  • 62:40 - 62:46
    production and cleaning up things. And
    yeah. Finally, for some numbers. Total
  • 62:46 - 62:54
    requests per second were about 400. In the
    night, when the world was redeploying,
  • 62:54 - 63:01
    then we only had about 50 requests per
    second, but it maxed out at 700 requests
  • 63:01 - 63:09
    per second. And the authentication for the
    world, for the 2D adventure, it was about
  • 63:09 - 63:17
    220 requests per second. More or less
    stable, despite some bugs and some
  • 63:17 - 63:23
    heavy usage. So, yeah, we appreciate that
    you used the platform, used the new Hub,
  • 63:23 - 63:35
    and hope to see you on the next event.
    Thanks.
  • 63:35 - 63:42
    ysf: Hello Hub. Thank you Hub. And the
    next is betalars waiting for us. He's from
  • 63:42 - 63:53
    the c3auti team, and he will tell us what
    he and his team did this year.
  • 63:53 - 64:04
    betalars: Hi, I'm betalars from c3auti,
    and we've been really busy this year as
  • 64:04 - 64:15
    you can probably see by the numbers on my
    next slide. We have 37 confirmed Auti-Angels
  • 64:15 - 64:25
    and today we surpassed the 200
    hours mark. We had 10 Orga Mumbles
  • 64:25 - 64:30
    leading up to the event and there are
    almost five million unique pixels in our
  • 64:30 - 64:37
    repository. I'm pretty convinced we've
    managed to create the smallest Fairydust
  • 64:37 - 64:45
    of rC3, provided by an actual space
    engineer. And the Tree of Solitude is not
  • 64:45 - 64:52
the only thing we've managed to
    contribute to this wonderful experience.
  • 64:52 - 65:02
    On our next slide, you can see that we
    also contributed six panel sessions for
  • 65:02 - 65:08
    autistic creatures to discuss their
    experiences and five Play sessions for
  • 65:08 - 65:18
    them to socialize. We helped to contribute
    a talk, a podcast, and an external panel
  • 65:18 - 65:26
    to the big streams. And on our own panels,
we've had up to 80 participants that needed
  • 65:26 - 65:33
to be split into five breakout rooms so
    they could all have a meaningful
  • 65:33 - 65:45
    discussion. And all their ideas and thoughts
were anonymized and stored in more than 1000
  • 65:45 - 65:55
    lines of markdown documentation that you can
    find on the Internet. But 1000 lines of
  • 65:55 - 66:01
    markdown wouldn't be enough for me to
    express the gratitude I have towards all
  • 66:01 - 66:09
    the amazing creatures that helped us make
    this experience happen and for all the
  • 66:09 - 66:17
    amazing teams that worked with us. I'm so
    happy to see you again soon, but now I
  • 66:17 - 66:26
    think I will need some solitude for
    myself.
  • 66:26 - 66:32
    ysf: Thank you betalars. So, lindworm, are
    you ready? The next one is the video, as
  • 66:32 - 66:46
    far as I know. It's from the C3 Inclusion
    Operation Center. I don't know the short
  • 66:46 - 66:55
name; C3IOC? And it's counting down: three,
two, one, go.
  • 66:55 - 67:19
    video without audio
  • 67:19 - 67:26
So, video is like a very difficult thing to play
these days, because we only used to do
  • 67:26 - 67:33
    stuff live. Live means a lot of pixels and
    traffic is done from this here, from this
  • 67:33 - 67:40
    glass, to all the wires and cables and
    back to the glass of your screen. And this
  • 67:40 - 67:47
    is like magic to me, somehow. Although, I.
    am only. being. a robot. to talk.
  • 67:47 - 67:58
synchronistically. with all the… It's
been enough time, I think, to
  • 67:58 - 68:05
switch back to Lindy with the video. I'll
tell you what we are going to…
  • 68:05 - 68:18
    video without audio
  • 68:18 - 68:23
    nwng: Hello everyone, I'm nwng from the
    new C3 Inclusion Operation Center. This
  • 68:23 - 68:27
    year, we've been working on accessibility
    guides that help the organizing teams and
  • 68:27 - 68:33
    assemblies improve the event for everyone,
    and especially people with disabilities.
  • 68:33 - 68:36
    We have also worked with other teams
    individually to figure out what can still
  • 68:36 - 68:40
    be improved in their specific range of
functions - but there is still a lot to
  • 68:40 - 68:45
    catch up on! Additionally, we have
    published a completely free and accessible
  • 68:45 - 68:51
    CSS design template that features dark
    mode and an accessible font selection. And
  • 68:51 - 68:56
    it still looks good without Javascript.
    100 Internet points for that! For you
  • 68:56 - 69:00
    visitors, we have been collecting your
feedback through mail or Twitter – and
  • 69:00 - 69:04
    won't stop after the Congress! If you
    stumbled across some barriers, please get
  • 69:04 - 69:11
    in touch via c3ioc.de or @c3inclusion on
Twitter to tell us about your findings!
  • 69:11 - 69:19
    Thanks a lot for having us.
    ysf: Thank you for the video. Finally,
  • 69:19 - 69:28
the technical stuff is working! We should… does
    someone know computers? Maybe? Kritis is
  • 69:28 - 69:33
    one of them, and he is waiting in Studio
    One to tell us something about C3 Yellow
  • 69:33 - 69:45
or c3gelb, as we say here.
  • 69:45 - 69:47
    Kritis: Yeah, welcome. I'm still looking
  • 69:47 - 69:50
    at this hard drive. Maybe you remember
    this from the very beginning? It has to be
  • 69:50 - 69:56
    disinfected really thoroughly, and I guess
    I can take it out by the end of the event.
  • 69:56 - 70:05
    And for… the next slide with the words,
please. We found roughly 0777 hand
  • 70:05 - 70:13
washing options and 0x3FF waste disposal
    possibilities. We checked the correct date
  • 70:13 - 70:22
    on almost all of the 175 disinfectant
    options you had around here. And because
  • 70:22 - 70:27
at a certain point in time, people from
    CERT were not reachable in the CERT room
  • 70:27 - 70:31
    because they were running around
everywhere else in this great 2D world, we
  • 70:31 - 70:34
    had the chance to bypass and channel all
    the information because there were two
  • 70:34 - 70:40
digital cats in a digital tree. And so we
got the right help to the right place.
  • 70:40 - 70:45
    Next slide, please. We have a couple of
    options ongoing. A lot of work had been
  • 70:45 - 70:51
    done before. We had all the studios with
    all the corona things going on before, but
  • 70:51 - 70:58
now we think we should really look into
    an angel disinfectant swimming basin for
  • 70:58 - 71:04
the next time, to have the maximum level
of cleanliness there. And we will talk
  • 71:04 - 71:11
with the BOC about whether we can maybe
use these Globuli maxi-cubes for the
  • 71:11 - 71:17
Tschunk in the future. Apart from
that, in order to get more Bach flower remedies and
  • 71:17 - 71:24
    everything else, we need someone who is
able to help us with the potentization
  • 71:24 - 71:32
of homoeopathic substances. So if that
sounds like you, please just drop
  • 71:32 - 71:40
    us a line to: info@c3gelb.de. Thank you
    very much and good luck.
  • 71:40 - 71:45
    ysf: Thank you Kritis. Finally happy to
    hear your voice. I only know you from
  • 71:45 - 71:51
Twitter, where we tweet our stuff
together, or I tweet yours and you don't tweet mine.
  • 71:51 - 71:57
    Maybe you're going to change it… please?
    And, talking about messages. Chaos Post
  • 71:57 - 72:06
    was here too, and trilader, whom we
    already heard earlier, has more to say.
  • 72:06 - 72:11
    trilader: OK, welcome. It's me again. I've
    changed outfits a bit. I'm not here for
  • 72:11 - 72:16
    the Signal Angels anymore, but for Chaos
    Post. So, yeah. We had an online office
  • 72:16 - 72:23
    this year again, as we had with the DiVOCs
    before. And I've got some mail numbers for
  • 72:23 - 72:29
    you that should be on the screen right
    now. If it's not, if it's on the title
  • 72:29 - 72:38
    page, please switch to the first one where
    it lists a lot of numbers. We had 576
  • 72:38 - 72:46
messages delivered in total. These numbers are
from around half past five. And 12 of them we
  • 72:46 - 72:51
    weren't able to deliver because, well,
    non-existent mailboxes or full mailboxes
  • 72:51 - 72:59
    mostly. We delivered mail to 43 TLDs, the
    most going to Germany, to .de domains,
  • 72:59 - 73:06
    followed by .com, .org, .net, and to
Austria with .at. We had a couple of
  • 73:06 - 73:12
    motifs you could choose from, the most
    popular one was "Fairydust at Sunset", 95
  • 73:12 - 73:18
    people selected that. Next slide. About
    our service quality. We had a minimum
  • 73:18 - 73:25
    delay from the message coming in, us
checking it, and it going out, of a
  • 73:25 - 73:30
    bit more than four seconds. The maximum
    delay was about seven hours. That was
  • 73:30 - 73:36
    overnight, when no agents were ready, or
    they were all asleep, or having… being
  • 73:36 - 73:41
    busy with, I don't know, the Lounge or
    something? And on average a message took
  • 73:41 - 73:47
us 33 minutes from you putting
    it into our mailbox to it getting out.
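
Those three service-quality figures are simply the minimum, maximum, and mean of the per-message delay between submission and delivery. A tiny Python sketch with invented timestamps shows how such numbers fall out of the raw data:

    # Computing min/max/mean delivery delay from (received, sent)
    # timestamp pairs. All timestamps below are made up.
    from datetime import datetime, timedelta
    from statistics import mean

    deliveries = [
        (datetime(2020, 12, 28, 14, 0, 0), datetime(2020, 12, 28, 14, 0, 4)),
        (datetime(2020, 12, 28, 23, 0, 0), datetime(2020, 12, 29, 6, 0, 0)),
        (datetime(2020, 12, 29, 10, 0, 0), datetime(2020, 12, 29, 10, 33, 0)),
    ]

    delays = [(sent - received).total_seconds()
              for received, sent in deliveries]
    print("min :", timedelta(seconds=min(delays)))   # ~4 seconds
    print("max :", timedelta(seconds=max(delays)))   # ~7 hours
    print("mean:", timedelta(seconds=mean(delays)))
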
  • 73:47 - 73:53
    Some fun facts: We had issues delivering
to T-Online on the first two days, but we
  • 73:53 - 73:58
    managed to get that fixed. A different
    mail provider refused our mail because it
  • 73:58 - 74:05
contained the string c3world, our domain,
in the mail text. And apparently new
  • 74:05 - 74:09
    domains are scary, and you can't trust
them or something. We created a ticket
  • 74:09 - 74:15
    with them, they fixed it, and it was super
    fast, super nice service. Yeah. Also, some
  • 74:15 - 74:21
people tried to send digital postcards to
    Mastodon accounts because they looked like
  • 74:21 - 74:26
    email addresses or something. Another
    thing that's not on a slide is we had
  • 74:26 - 74:32
    another new feature this time. That was
    our named recipients. So you could, for
  • 74:32 - 74:40
    example, send mail to CERT without knowing
    their address. And they also have a really
  • 74:40 - 74:44
    nice postcard wall, where you can see all
    the postcards you sent them. The link for
  • 74:44 - 74:56
    that is on our Twitter. Thank you.
    ysf: Thank you Chaos Post. lindworm, are
  • 74:56 - 75:00
    you there?
lindworm: Yes, yes. I'm here, I'm here.
  • 75:00 - 75:05
Hello, can you hear me?
    ysf: I hear you.
  • 75:05 - 75:13
    lindworm: So I have to switch some more.
It's kind of stressful for me, really.
  • 75:13 - 75:21
    ysf: You're doing an awesome job. Thank
    you for doing it. So just out of
  • 75:21 - 75:28
curiosity: did you have a problem
    accepting any cookies or so?
  • 75:28 - 75:36
    lindworm: No, not really.
ysf: I heard somewhere that some really
  • 75:36 - 75:39
    smart people had problems using the site
    because of cookies.
  • 75:39 - 75:45
    lindworm: Oh, no, that was not my problem.
    I only couldn't use the site because of
  • 75:45 - 75:54
overcrowding. That was often one of my
    little problems. And please, I hope you
  • 75:54 - 75:59
    don't see what I'm doing right now in the
    background with starting our pets and so
  • 75:59 - 76:12
    on. And what I wanted to say to all of
    you, this was the first Congress where we
  • 76:12 - 76:19
had so many women and so many non-cis
people running the show, being in
  • 76:19 - 76:25
front of the camera, and making everything
happen. I want to really thank you all. Thank you
  • 76:25 - 76:31
for making that possible. And thank you
that we're getting more and more diverse, year by
  • 76:31 - 76:39
    year.
    ysf: I can only second that. And now we
  • 76:39 - 76:43
    are switching to C3 Infrastructure.
    lindworm: Yeah, we need to.
  • 76:43 - 76:50
    ysf: I'm sure a lot of questions will be
    answered by them.
  • 76:50 - 76:58
lindworm: And I'm trying to bring up the slides
for that, but I can't find them right
  • 76:58 - 77:03
    now.
Patrick: Look mom, I'm on TV.
  • 77:03 - 77:11
    thies: Yeah. Welcome to the infrastructure
    review of the Team Infrastructure. I'm not
  • 77:11 - 77:16
    quite sure if we have the newest revision
of the slides, because my version of the
  • 77:16 - 77:22
    stream isn't loading right now. So maybe
    lindworm, is it possible to press
  • 77:22 - 77:30
control-R? If you're seeing a burning
    computer, then we have the actual slides.
  • 77:30 - 77:35
Patrick: Let's just do PowerPoint karaoke
    without the background music.
  • 77:35 - 77:44
    thies: Yeah, and without the PowerPoint
    presentation in realtime. Now I'm seeing
  • 77:44 - 77:48
myself. Let's wait a few seconds until we see
    a slide.
  • 77:48 - 77:52
    Patrick: We want to wait the entire stream
    delay.
  • 77:52 - 78:00
thies: It's just about 30 seconds to one minute.
    Patrick: Well done.
  • 78:00 - 78:10
    thies: Yeah, I'm thies and I'm waiting.
    And this is Patrick, and he's waiting too.
  • 78:10 - 78:20
    Yeah, but that's in the middle of the
    slides. Can we go… OK. Yeah. I'm now
  • 78:20 - 78:27
    seeing something in the middle of the
    slides, but it seems fine. OK, yeah. We
  • 78:27 - 78:37
    are the team C3 Infra. rC3 Infra. We are
    creating the infrastructure. Next slide.
  • 78:37 - 78:50
    We had about nine terabytes of RAM and
1,700 CPU cores. Over the whole event there was
  • 78:50 - 78:58
    only one dead SSD that died because
    everything's broken. We had five dead RAID
  • 78:58 - 79:03
    controllers, and didn't bother to replace
    the RAID controllers, just replaced them
  • 79:03 - 79:14
    with new servers. And 100 percent uptime.
Next slide. We spent about 42 hours looking at
  • 79:14 - 79:23
boot screens of enterprise servers; up to 20
minutes per boot is what HP delivered. And we
  • 79:23 - 79:32
    are now certified enterprise observers. We
    had only 27%-ish of visitors using IPv6.
  • 79:32 - 79:40
    So that's even less than Google publishes.
    And even though we had almost full IPv6
  • 79:40 - 79:48
    coverage – except some really, really shady
    out-of-band management networks – we're
  • 79:48 - 79:55
    still not at the IPv6 coverage that we are
hoping for. I'm not quite sure if those are
  • 79:55 - 80:05
    the right slides. But I'm not quite sure
    where we are in the text. Yeah, Patrick.
  • 80:05 - 80:11
    Patrick: Yeah, so before the Congress
    there was one prediction: there's no way
  • 80:11 - 80:18
it's not DNS. And it was DNS
at least once, so we checked that box. And
  • 80:18 - 80:27
    let's go over to the next topic, OS. We
    provisioned about 300 nodes, and it was an
  • 80:27 - 80:33
    Ansible-powered madness. So, yeah, there
    was full disk encryption on all nodes. No
  • 80:33 - 80:38
IPs logged in the access logs; we took
    extra care of that. And we configured
  • 80:38 - 80:43
minimal logging wherever possible, so in the
    case of some problems we only had WARNINGs
  • 80:43 - 80:51
available. There were no INFO logs, no
    DEBUG logs; just the minimal logging
  • 80:51 - 80:56
    configuration. And with some software, we
    had to pipe logs to /dev/null because the
  • 80:56 - 81:01
software just wouldn't stop logging IPs,
    and we didn't want that. So no personal
  • 81:01 - 81:07
data in logs, no GDPR headache, and
your data is safe with us – see the
sketch below. The Ansible
  • 81:07 - 81:12
    madness I've talked about was a magical
    deployment that deep bootstrapped into the
  • 81:12 - 81:18
    live system and assimilated into the rC3
    infrastructure while it's still running.
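
The "no IPs in logs" point above deserves a small illustration. The rC3 setup did this via Ansible-managed service configuration (and /dev/null where a program refused to cooperate); the hypothetical Python logging filter below shows the same idea of scrubbing IP-shaped tokens before a record is ever written:

    # Illustration only: redact IPv4/IPv6-looking tokens from log
    # records so no personal data lands on disk. Not the rC3 code.
    import logging
    import re

    IP_RE = re.compile(
        r"\b(?:\d{1,3}(?:\.\d{1,3}){3}|[0-9a-fA-F:]*:[0-9a-fA-F:]+)\b")

    class RedactIPs(logging.Filter):
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = IP_RE.sub("[redacted]", str(record.msg))
            return True  # keep the record, just scrubbed

    log = logging.getLogger("frontend")
    log.addHandler(logging.StreamHandler())
    log.addFilter(RedactIPs())
    log.warning("request from 203.0.113.7 failed")
    # -> "request from [redacted] failed"
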
  • 81:18 - 81:27
So you didn't even have to reboot your
machines; they were just running. When an OS
  • 81:27 - 81:32
    deployment was broken, it was almost
always due to network or routing issues. At
  • 81:32 - 81:37
    least the OS team claims that, and this
    claim is disputed by the network team of
  • 81:37 - 81:43
    course. One time, the deployment broke
because of a trigger-happy infra angel.
  • 81:43 - 81:52
    But let's not talk about that. Of course,
    at this point, we want to announce our
  • 81:52 - 81:58
    great cooperation with our gold sponsor
    ddos24.net, who provided an excellent
  • 81:58 - 82:06
service of handcrafted requests to our
infrastructure. There was great public
  • 82:06 - 82:14
demand, with a million
    requests per second for a while. But even
  • 82:14 - 82:22
during peak demand, we were
able to keep most of these services running. We
  • 82:22 - 82:28
provided the infrastructure to the VOC, and
    they quickly made use of the provided
  • 82:28 - 82:36
    infrastructure deployed there. Overall, an
    amazing time to market. We had six
  • 82:36 - 82:41
locations, and those six locations were
some wildly different, special snowflakes
  • 82:41 - 82:49
    overall. So we had Düsseldorf, 816 CPU
    cores there, two terabytes of RAM, and we
  • 82:49 - 82:55
    had 10 gigabits per second interconnect.
    There was also a 1 terabit per second
  • 82:55 - 83:00
InfiniBand available, but sadly, we
    couldn't use that. It would have been
  • 83:00 - 83:05
nice. The machines there had a weird and
    ancient IPMI, which made it hard to deploy
  • 83:05 - 83:10
there. And the admin on location had never
deployed bare-metal hardware to a
  • 83:10 - 83:15
datacenter, so there was also some
learning experience there. Fun fact about
  • 83:15 - 83:21
    Düsseldorf, this was the data center with
    the maximum heat. One server, seven units,
  • 83:21 - 83:30
over 9000 watts of power – 11.6 kilowatts, to be
exact. Which is why they had to take
  • 83:30 - 83:40
    some creative heat management solutions.
    Next was Frankfurt, there we had 620
  • 83:40 - 83:48
    gigabits of total uplink capacity, and we
actually only used 22 gigabits during peak
  • 83:48 - 83:54
demand. Again, courtesy of our premium sponsor:
    ddos24.net. There was zero network
  • 83:54 - 84:03
    congestion and 1.5 gigabits per second
    were IPv6. So there was no real traffic
  • 84:03 - 84:09
challenge. For the network engineers among
    you, it was a full Layer 3 architecture
  • 84:09 - 84:16
    with MPLS between the WAN routers. And
there was a night shift on the 26th and
  • 84:16 - 84:25
    27th for more servers, because some
shipments hadn't arrived yet. The fun fact
  • 84:25 - 84:30
    about this datacenter was the maximum
    bandwidth. Some servers there had 50
  • 84:30 - 84:37
gigabits of uplink configured on the server.
    It was the data center with the maximum
  • 84:37 - 84:42
    manual intervention. Of course, we had the
    most infrastructure there and it wasn't
  • 84:42 - 84:48
    oversubscribed at any point. We had some
    hardware in Stuttgart, which was basically
  • 84:48 - 84:53
    the easiest deployment. There were also
some night shifts, but thanks to
  • 84:53 - 84:59
    neuner and team this was a really easy
    deployment. It was also the most silent
  • 84:59 - 85:07
    DC, so no incident from Day -5 until now.
    So if you're currently watching from
  • 85:07 - 85:14
    Stuttgart now, you can create some issues
    because now we said it. Wolfsberg was the
  • 85:14 - 85:18
    smallest DC. We only had three servers and
    we managed to kill one hardware RAID
  • 85:18 - 85:26
controller, so we could only use two
    servers there. So, yeah. And then Hamburg
  • 85:26 - 85:31
    was the data center with the minimum
uptime. We could never deploy to this data
  • 85:31 - 85:35
    center because there was a broken netboot
    and we couldn't provision anything there.
  • 85:35 - 85:42
    And of course, the sixth data center was
the Hetzner Cloud, where we deployed to
  • 85:42 - 85:48
    all locations. Deployment fun facts: we
    received a covid warning from the data
  • 85:48 - 85:53
    center. Luckily, it didn't affect us. It
    was at another location. But thanks for
  • 85:53 - 86:00
    the heads-up and the warning. The team
    leader of a sponsor needed to install
  • 86:00 - 86:07
    Proxmox in a DC with no knowledge, without
    any clue what they were doing. We
  • 86:07 - 86:11
    installed Proxmox in the Hamburg DC, and
    no server actually wanted to talk to us,
  • 86:11 - 86:17
    so we had to give up on that. And there
    had to be a lorry relocated before we
  • 86:17 - 86:27
could deploy other servers. So that
    was standing in the way there. Now, let's
  • 86:27 - 86:35
get to Jitsi. Our peak count was 1,105
    users at the same time, on the same
  • 86:35 - 86:42
    cluster. I don't know if it was at the
    same time as the peak user count, but the
  • 86:42 - 86:44
    peak conference count was 204 conferences.
  • 86:44 - 86:50
    I hope we can still beat
    that today, but this is data from
  • 86:50 - 86:59
    yesterday. The peak conference size was 94
    participants in a single conference. And
  • 86:59 - 87:07
    let me give condolences to your computer,
    because that must have been hard on it.
  • 87:07 - 87:14
    Our peak outgoing video traffic on the
    Jitsi video bridges was 1.3 gigabits per
  • 87:14 - 87:24
second. And about three quarters of
the participants were streaming video, while
  • 87:24 - 87:32
    one quarter of them had video disabled.
    Interesting ratio. Our Jitsi deployment
  • 87:32 - 87:38
    was completely automated with Ansible, so
    it was zero to Jitsi in 15 minutes. We
  • 87:38 - 87:43
    broke up the Jitsi cluster into four
    shards to have better scalability and
  • 87:43 - 87:48
    resilience. So if one shard went down, it
    would only affect part of the conferences
  • 87:48 - 87:53
    and not all of them. Because there are
    some infrastructure components that you
  • 87:53 - 88:00
can't really scale or cluster, so we went
the sharding route (sketched below). Our Jitsi
  • 88:00 - 88:08
    video bridges were at about 42% peak usage
    – excluding our smallest video bridge,
  • 88:08 - 88:11
    which was only eight cores and eight
    gigabytes, which we added in the beginning
  • 88:11 - 88:17
    to test some stuff out, and it remained in
    there. And yes, we overprovisioned a bit.
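
The talk doesn't say how conferences were mapped onto the four shards; a common approach, shown here only as a sketch with invented shard URLs, is to hash the room name so a given conference always lands on the same shard, and a dead shard takes only its own rooms with it:

    # Sketch of consistent room-to-shard pinning for a sharded Jitsi
    # deployment. The shard URLs are hypothetical.
    import hashlib

    SHARDS = [
        "https://shard1.meet.example",
        "https://shard2.meet.example",
        "https://shard3.meet.example",
        "https://shard4.meet.example",
    ]

    def shard_for_room(room: str) -> str:
        """Deterministically map a room name to one shard."""
        digest = hashlib.sha256(room.encode("utf-8")).digest()
        return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

    print(shard_for_room("infrastructure-review"))  # stable across calls
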
  • 88:17 - 88:22
    There will also be a blog post on our
    Jitsi Meet deployment coming in the
  • 88:22 - 88:31
future. And for the
    upcoming days, we will enable 4K streaming
  • 88:31 - 88:41
    on there. So why not use that? And we want
to say thanks to the FFMEET project, who
  • 88:41 - 88:46
    contacted us after our initial load test
    and gave us some tips to handle load
  • 88:46 - 88:58
    effectively and so on. We also tried
making DECT call-out work. We spent 48
  • 88:58 - 89:07
    hours trying to get it to work, but there
    were some troubles there. So sadly, no
  • 89:07 - 89:15
    adding DECT participants to your Jitsi
    conferences for now. jitsi.rc3.world will
  • 89:15 - 89:23
    be running over New Year. So you can use
    that to get together with your friends and
  • 89:23 - 89:28
    so on over the New Year. Stay separate,
    don't visit each other please. Don't
  • 89:28 - 89:36
    contribute to covid-19 spread. You've got
    the alternative there. Now let's go over
  • 89:36 - 89:41
    to monitoring. thies.
    thies: Yeah, thanks. First of all, it's
  • 89:41 - 89:47
    really funny how you edit this page, but
    reveal.js doesn't work that way until
  • 89:47 - 89:52
lindworm reloads the page, which he
hopefully doesn't do right now. Everything's fine,
  • 89:52 - 89:58
so you can leave it as it is. Yeah,
monitoring. We had a Prometheus and
  • 89:58 - 90:05
Alertmanager setup, driven completely out
of our one and only source of
  • 90:05 - 90:14
truth: our Netbox (see the sketch at the
end of this segment). We received about
    34 858 critical alerts. It's – looking at
  • 90:14 - 90:21
    my mobile phone – it's definitely more
    right now. And about 13,070 warnings. Also
  • 90:21 - 90:30
definitely more right now. And we tended
to about 100 of them. The rest was kind of
  • 90:30 - 90:42
    useless. Next slide, please. As it's
    important to have an abuse hotline and an
  • 90:42 - 90:48
    abuse contact, we received two network
    abuse messages, both from Hetzner – one of
  • 90:48 - 90:52
    our providers – letting us know that
    someone doesn't like our infrastructure as
  • 90:52 - 91:02
    much as we do. Props to ddos24.net. And we
got one call at our abuse hotline, and it
  • 91:02 - 91:09
    was one person who wanted to buy a ticket
from us – sadly, we were out of tickets.
  • 91:09 - 91:16
    Next slide, please. Some other stuff. We
    got a premium Ansible deployment brought
  • 91:16 - 91:26
    to you by turing-complete YAML. That sounds
    scary. And we had about 130k DNS updates
  • 91:26 - 91:32
    thanks to the World team. At this point
    they're really stressing our DNS API with
  • 91:32 - 91:39
    the re-deployments. And also our DNS,
    Prometheus, and Grafana are deployed on
  • 91:39 - 91:48
    and by NixOS thanks to flüpke and head
over to flüpke's interweb thingy; he wrote
  • 91:48 - 91:55
    some blog posts about how to deploy stuff
with NixOS. And the next slide,
  • 91:55 - 92:02
    please. And the last slide from the team
    is the list of our sponsors. Huge thanks
  • 92:02 - 92:08
    to all of them. It won't be possible to
    create such a huge event and such loads of
  • 92:08 - 92:15
    infrastructure without them. And that's
    everything we have.
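
To make "driven out of our Netbox" a bit more concrete: Netbox exposes a REST API, and a small script can turn its device list into a Prometheus file_sd target file. The sketch below is an assumption about how such glue can look, not the actual rC3 tooling; the URL, token, port, and labels are all invented:

    # Sketch: generate Prometheus file_sd targets from Netbox, the
    # "single source of truth" pattern described above. Hypothetical
    # URL/token; adjust the port and labels to your exporters.
    import json
    import requests

    NETBOX = "https://netbox.example.org"
    TOKEN = "0123456789abcdef"

    resp = requests.get(
        f"{NETBOX}/api/dcim/devices/",
        headers={"Authorization": f"Token {TOKEN}"},
        params={"status": "active", "limit": 0},
        timeout=30,
    )
    resp.raise_for_status()

    targets = [
        {
            "targets": [f"{dev['name']}:9100"],  # node_exporter
            "labels": {"site": (dev.get("site") or {}).get("slug", "unknown")},
        }
        for dev in resp.json()["results"]
        if dev.get("name")
    ]

    with open("netbox_targets.json", "w") as fh:
        json.dump(targets, fh, indent=2)
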
  • 92:15 - 92:26
    ysf: Amazing. Thank you for all you've
    done. Truly incredible, and showing
  • 92:26 - 92:31
    everything to the public. So I promised
that there would be a kind of behind-the-scenes
  • 92:31 - 92:37
look at this infrastructure talk or
    review. And I really have nothing to do
  • 92:37 - 92:41
    with it. Everything was done by completely
    different people. I'm only a Herald,
  • 92:41 - 92:47
    somehow lost and tumbled into this stream.
    And so I'm just going to say switch to
  • 92:47 - 93:04
    wherever. Show us the magic.
    Karlsruhe: Three hours ago, I got the
  • 93:04 - 93:11
    call… Hello and welcome from the last part
    of the infrastructure review and greetings
  • 93:11 - 93:16
    from Karlsruhe. So three hours ago, I got
    a call from lindworm and he asked me, how
  • 93:16 - 93:23
    is it with this last talk we have? It may
    be a bit complicated. And he told me, OK,
  • 93:23 - 93:29
we have a speaker, I'm the Herald. Oh,
it's always like that. And then we realized,
  • 93:29 - 93:35
    yeah, we don't have only one speaker, we
    have 24. And so that's why we called
  • 93:35 - 93:42
    ChaosWest and built up an infrastructure
which dampfkatze will explain to you now in a
  • 93:42 - 93:48
    short minute. I think so.
    dampfkatze: Thank you. Yes. Oh, I lost the
  • 93:48 - 93:58
    sticker. OK, after we called ChaosWest, we
    came up with this monstrosity of the video
  • 93:58 - 94:09
    cluster. And we start here. The teams
    streamed via OBS.Ninja onto three
  • 94:09 - 94:20
ChaosWest studios. They were brought
    together via RTMP on our Mix1 local
  • 94:20 - 94:31
    studio, and then we pumped that into Mix2,
    which pumped it further to the VOC. The
  • 94:31 - 94:38
    slides were brought in via another
    OBS.Ninja directly onto Mix2. They came
  • 94:38 - 94:44
from lindworm. Also, the closing you will
hopefully see shortly will also come from
  • 94:44 - 94:53
    there. And ysf and lindworm were directly
    connected via OBS.Ninja onto our Mix1
  • 94:53 - 95:03
    computer. And Mix2 also has the studio
    camera you're watching right now. And for
  • 95:03 - 95:10
    the background communication, we had a
    Mumble connected with our audio matrix.
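
At its core, a hop like the Mix1-to-Mix2 leg of that chain is an RTMP relay. Here is a minimal sketch driving ffmpeg from Python, with invented URLs; the real setup mixed several sources in OBS rather than copying one stream through:

    # Sketch of a single RTMP relay hop (no re-encoding, "-c copy").
    # Both URLs are hypothetical; requires ffmpeg on the PATH.
    import subprocess

    SOURCE = "rtmp://mix1.example/live/review"
    SINK = "rtmp://mix2.example/live/review"

    subprocess.run(
        ["ffmpeg", "-i", SOURCE, "-c", "copy", "-f", "flv", SINK],
        check=True,
    )
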
  • 95:10 - 95:18
    And lindworm, ysf, and the teams, and we
    in the studio locally could all talk
  • 95:18 - 95:24
    together. And now back to the closing
    with… No, to the Herald News Show, I
  • 95:24 - 95:33
    think. lindworm will introduce it to you.
    lindworm is live.
  • 95:33 - 95:52
    lindworm: Is ysf still there? Or do you
    come with me? So it will take a second or
  • 95:52 - 96:02
    billions of years. So thank you very much
    for this review. It was as chaotic as the
  • 96:02 - 96:05
    Congress.
  • 96:05 - 96:17
    postroll music
  • 96:17 - 96:28
    Subtitles created by c3subtitles.de
    in the year 2021. Join, and help us!