rC3 preroll music
ysf: Hello and welcome to the
infrastructure review of the rC3 this
year, 2020. What the hell happened? How
could it happen? I'm not alone this year.
With me is lindworm who will help me with
the slides and everything else I'm going
to say. And this is going to be a great
fuck up like last year, maybe. We have
more teams, more people, more streams,
more of everything. And the first team,
which lindworm is going to introduce,
is the SHOC. Are you there with me?
lindworm: Oh, yeah, so I get to go to the
SHOC. Yeah, it was kind of stressful this
year. We only had about 18 heralds for the
main talks, rC1 and rC2, and we introduced
about 51 talks with them. Everybody worked
from their home setup, which was a very,
very hard struggle. So we all had a metric
ton of adrenaline and excitement within
us. Here you can see what you have seen,
how a herald looks from the front, and
this is how it looks in the background.
Oof. That was hard, really hard for us. So
you see all the different setups we have
here. And we are very,
very pleased to also have set up a
completely new operation center: the
Herald News Show, which I'd really, really
like you to review on YouTube. This was
such a struggle. And we have about, oh,
wait a second, so as we said, we're a
little bit unprepared here, I need to have
my notes up. There were 20 members that
formed a new team on the first day. They
made 23 shows and 10 hours of video
recording, the pizza man rang at the door
20 times, and 23 mate bottles were drunk
during the preps, because all of those
people needed to be online the whole time.
So I really applaud them. It was really
awesome, what they brought to the team and
what they brought over the stream. And
this is an awesome team I hope we see more
of. ysf,
would you take it over? ysf is muted
Oh, no. My, my bad. So, is the Heaven
ready? We need to go to the Heaven and
have an infrastructure review from the
Heaven.
raziel: OK. Can you still hear me? Yes,
hello? I'm raziel from the Heaven and, ehm…
Yeah, heaven is ready, so welcome,
everybody. I'm raziel from heaven, and I
will present you the infrastructure review
from the Heaven team. We had some angel
statistics scraped a few hours ago. This
year we did not have as many angels as
last year, because we had a remote event,
but we still had a total of 1487 angels,
of which 710 arrived and more than 300 did
at least one shift. And in total, the
recorded work done up to that point was
roughly 17.75 weeks of working hours. And
for the rC3 world we also
prepared a few goodies so people could
come visit us. And so we provided them a
few badges there: for example, for every
angel that found our expired extinguisher,
and also for every angel that extinguished
a fire in the Heaven. The first badge was
achieved by 232 of our angels, and a
smaller but still good number of 125
angels managed to help us and extinguish
the fire that broke out during the event.
And with those numbers checked,
we also will jump into our heaven. So I
would like to show you some expressions
and impressions from it. We had quite the
team working to do exactly what the Heaven
is there for: managing its people, so we
needed our heaven office. And we also did
this with respect for your privacy: we
painted our clouds white as ever, so we
cannot see your nicknames, and you could
do your angel work without being bothered
by us asking for your names.
And also, we had prepared some secret
passage to our back office. And every time
on the real event, it would happen that
some adventurers would find their way into
our back office. And so we needed to
provide that opportunity as well, as you
can see here. And let me say that some
adventurers tried to find the way in our
sacred digital back office, but only a few
were successful. So we hope everyone found
their way back into the real world from
our labyrinth. And we also did not spare
any expense to do some additional updates
for our angels. As you can see, we tried
to do some multi-instance support, so some
of our angels also managed to split up and
serve as more than one angel at a time.
And that was quite awesome. And so
we tried to provide the same things we
would do on Congress, but now from our
remote offices. And one last thing that
normally doesn't need to be said. But I
think this year, with this different kind
of event, it's necessary for the Heaven,
as a representative of all the people
trying to help make this event awesome, to
say the things we usually take for
granted. And that is: thank you
for all your help. Thank you for all the
entities, all the teams, all the
participants that achieved the goal of
bringing our real Congress, which many,
many entities missed this year, into a new
stage. We tried it online. It had its
ups and downs. But I still think it was an
awesome adventure for everyone. And from
the Heaven team I can only say thank you
and I hope to see you all again in the
future on a real event. Bye! And have a
nice New Year.
lindworm: Hello, hello, back again. So we
now are switching over to the Signal
Angels. Are the signal angels ready?
Hello!
trilader: Yeah, hello, uhm, welcome to the
infrastructure review for the Signal
Angels, I have prepared some stuff for
you. This was for us… slides, please? This
was the first time for us running a fully
remote Q&A setup, I guess. We had
some experience with DiVOC and had gotten
some help from there on how to do this,
but just to compare, our usual procedure
is to have a signal angel in the room.
They collect the questions on their laptop
there and they communicate with the herald
on stage, and they have a microphone – I'm
wearing a headset now, but in the room we
have a studio microphone – and we speak the
questions into it. Yeah, but remotely we
really can't do that. Next slide. Because,
well, it would be quite a lot of hassle
for everyone to set up good audio setups.
So we needed a new remote procedure. So we
figured out that the signal angel and
the herald could communicate via
a pad and we could also collect the
questions in there. And the herald would
read the questions to the speaker and
collect feedback and stuff. So we had 175.
No, 157 shifts, and sadly we couldn't fill
five of them in the beginning because
there weren't enough people yet. And
technically it was more than five unfilled
shifts, because for some reason there were
DJ sets and other things in the schedule
that aren't talks and also don't have Q&A.
We had 61 angels coordinated by four
supporters, so me and three other people,
and we had 60 additional angels that in
theory wanted to do signal angel work but
didn't show up to the introduction
meeting. Next! As I've said, for each
session, each talk, we created a pad where
we put in the questions from IRC,
Mastodon, and Twitter. Well, we have a few
more pads than talks we actually handled,
and I have some statistics about an
estimated number of questions per talk.
What we usually assume is that there's a
question per line, but some questions are
really long and had to be split over
multiple lines. There were some structured
questions with headings and paragraphs,
some heralds or signal angels removed
questions after they were done, and there
was also some chat and other communication
in there. So, next slide: we took a Python
script, downloaded all the pad contents,
read them, counted the number of lines,
and removed the size of the static header.
In the end we had 179 pads and 1,627 lines,
if we discount the static header of nine
lines per pad. So that in theory leads to
about nine "questions" per talk – in
quotation marks, because these are not
really questions but lines – but it's an
estimate. Thank you.
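For readers who want to reproduce that estimate: a minimal sketch of such a pad line count in Python, assuming hypothetical pad export URLs – the real pad names and the team's actual script are not shown in the talk.

```python
import urllib.request

# Hypothetical list of pad export URLs; the real pad names and URLs are not
# given in the talk.
PAD_URLS = [
    "https://pad.example.org/p/talk-1/export/txt",
    "https://pad.example.org/p/talk-2/export/txt",
]
STATIC_HEADER_LINES = 9  # each pad started with a fixed nine-line header

total_lines = 0
for url in PAD_URLS:
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    # Count non-empty lines and subtract the static header.
    lines = [line for line in text.splitlines() if line.strip()]
    total_lines += max(len(lines) - STATIC_HEADER_LINES, 0)

print(f"{len(PAD_URLS)} pads, {total_lines} content lines after removing headers")
print(f"~{total_lines / len(PAD_URLS):.1f} 'questions' (really: lines) per pad")
```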
ysf: ... talk and what I've learned is
never miss the introduction. So the next
in line are the line producers, ha ha.
stb, are you there?
stb: I am here, in fact. singing So
people a bit older might recognize this
melody, badly sung by yours truly and
other members of the line producers team,
and I'll get to why that is relevant to
what we've been doing at this particular
event. So what do line producers do? What
does an Aufnahmeleitung actually do? It's
basically communication between everybody
who's involved in the production, the
people behind the camera and also in front
of the camera. And so our work started
really early, basically at the beginning
of November, taking on like prepping
speakers in a technical setup and
rehearsing with them a little bit and then
enabling the studios to allow them to
actually do the production coordination on
an organizational side. The technical side
was handled by the VOC, and we'll get to
hear about that in a minute. But getting
all these people synced up and working
together well, that was quite a challenge.
And that took a lot of Mumbles with a lot
of people in them. We only worked on the
two main channels. There's quite a few
more channels that are run independently
of kind of the central organization. And
again, we'll get to hear about the details
of that in a minute. And so we provided
information. We tried to fill wiki pages
with relevant information for everybody
involved. So that was our main task. So
what does that mean specifically, the
production set up? We had 25 studios,
mainly in Germany, also one in
Switzerland. These did produce recordings
ahead of time for some speakers, and many
did live set ups for their own channels
and also for the two main channels. And
I've listed everybody involved in the live
production here. And there were 19
channels in total. So a lot of stuff
happening. 25 studios, 19 channels that
broadcast content produced by these
studios. So that's kind of the Eurovision
kind of thing, where you have different
studios producing content and trying to
mix it all together. Again, the VOC took
care of the technical side of things very
admirably, but getting everybody on the
same page to actually do this was not
easy. For the talk program, we had over
350 talks in total, 53 in the main
channels. And so handling all that, making
sure everybody has the speaker information
they need and all this organizational
stuff, that was a lot of work. We didn't
have a studio of our own for the main
channels; the 25 studios – or rather the
twelve that did live channels – actually
provided the production facilities for the
speakers. So we can look at the next slide.
There's a couple more numbers and of
course, a couple pictures from us working
basically from today. We had 53 channel...
53 talks in the main channel. 18 of them
were prerecorded and played out. We had 3
where people were actually on location in
a studio and gave their talk from there.
And we had 32 that were streamed live like
I am speaking to you now with various
technical bits that again the VOC will go
into in a minute. And we did a lot of
Q&As. I don't have the numbers on how many
talks actually had Q&As, but most of them
did, and those were always live. We had a
total of 63 speakers we prepared, at least
for the live Q&A session, and helped them
set up; we helped them record their talks
if they wanted to prerecord them. So
we spent anywhere between one and two
hours with every speaker to make sure they
would appear correctly and in good quality
on the screen. And then during the four
days, we, of course, helped coordinate
between the master control room and the
twelve live studios to make sure that the
speakers were where they were supposed to
be and any technical glitches could be
worked out and decided on the spot if, for
example, the line producers made a mistake
and a talk couldn't happen as we had
planned because we forgot something. Then
we rescheduled and found a new spot for
the speakers. So apologies again for that. And
thank you for your understanding and
helping us bring you on screen on day two
and not day one. But I'm very glad that
that we could work that out. And that's
pretty much it from the line producers, I
think. Next up is the VOC.
ysf: Thank you stb. Yes, you're right, the
next are the VOC and kunsi and
JW2CAlex are waiting for us.
Franzi: ... is Franzi from the VOC. 2020
was the year... Hm? Hi, this is Franzi
from the... from VOC. 2020 was the year of
distributed conferences. We had 2 DiVOCs
and the FrOSCon to learn how we are going
to produce remote talks. We learned a lot
of stuff on organization, Big Blue Button
and Jitsi recording. We had a lot of other
events which were just streaming, like
business as usual. So for rC3, we extended
the streaming CDN with two new locations,
now 7 in total, with a total bandwidth of
about 80 gigabits per second. We have two
new mirrors for media.ccc.de and are now
also distributing the front end. We got
two new transcoder machines. Erfas
enhanced their setups; we now have 10
Erfas with their own productions on
media.ccc.de. So
the question is, will it scale? On the
next slide...
Alex: Yeah, next slide.
Franzi: ... we will see that it did
scale. We did produce content for 25
studios and 19 channels, so we got lots
and lots of recordings which will be published
on media.ccc.de in the next days and
weeks. Some have already been published,
so there's a lot of content for you to
watch. And now Alex will tell us something
about the technical part.
Alex: My name is Alex, pronouns it/its. I
will now tell you about the technical
part, but first a bit more about the
organization. I was the link between the
VOC and the line producing team. And now a
bit about how it worked. So we had those
two main channels, rc-one and rc-two.
Those channels were produced by the
various studios distributed around the
whole country. And those streams – this is
the upper path in the picture – went via
our ingest relay to the FEM, to the master
control room. In Ilmenau there was a team
of people adding the translations, making
the mix, making the mixdown, making
recordings and then publishing it back to
the streaming relays. All the other
studios produced their own channels. Those
channels also took the signals from
different studios, made a mixdown, etc.,
published to our CDN and relays, and we
published the studio channels. As you can
see, this is not the typical setup we had
in the last years at the in-person event.
So, on our next slide, we can
see where this leads: Lots of
communication. We had the line producing
team, we had some production in Ilmenau
that had to be coordinated. We have the
studios, we have the local studio helping
angels. We have some Mumbles there, some
RocketChat here, some CDN people, some web
team where something happens. We have some
documentation that should exist. And then
we started to plot out the communication
paths. Next slide, please. If you plot all
of them, it really looks like the world –
well, this actually is the world – but
sometimes it feels like you're just
getting lost on the different paths. Who
do you have to ask, who do you have to
call? Where are you? What's the shortest
path to communicate? But let's have a look
at the studios. First going to ChaosWest.
Franzi: Yes, on the next slide, you will
see the studio set up at ChaosWest TV. So
thank you, ChaosWest for producing your
channel.
Alex: On the next slide, you see the
Wikipaka television and fernseh-streamen
(WTF), who have the internal motto:
"Absolut nicht sendefähig" ("absolutely
not fit for broadcast") – chaos of
recording. But even so, some studios look
more like real studios, as you can see
this time on the next slide, at the hacc.
Franzi: Yeah, at hacc, you will also see
some of the bloopers we had to deal with.
So, for example, here you can see there
was a cat in the camera view, so, yeah.
And Alex, tell us about the open
infrastructure orbit.
Alex: The open infrastructure orbit showed
– in this picture it's a bit hard to see –
how you can make a studio look really
nice, even if you're alone, feeling a bit
comfier, more hackish. But you also have
those normal productions, as on the next
slide: the Chaosstudio Hamburg.
Franzi: Yeah, at Chaosstudio Hamburg, we
had two regular work cases like, you know,
from all the other conferences, and they
were producing on-site in a regular studio
setup. And last but not least, we got
some impressions from ChaosZone TV.
Alex: As you can see here, also quite a
regular studio setup – quite regular? No.
There was some coronavirus going on, and
so we had a lot of distancing, mask
wearing, and all the stuff to keep
everyone safe, but c3yellow (c3gelb) will
tell you some facts about that. But
let's look at the nice things. For
example, the minor issue: on the second
day, we were sitting there looking at our
nice Grafana. Oh, we got a lot more
connections. The server load is
increasing. The first question was: "Have
we enabled our cache?" We don't know. But
the number of connections is growing,
people are watching our streams, the
interest goes up. And we thought, well, at
least the people are watching the streams.
If there is a problem with the website,
who cares, the interest is there. But then
we suddenly made the connection: well,
something did not really scale that well.
And then, on the next slide, this view: it
switched pretty fast, after looking at
this traffic graph, from "Well, that's
interesting" into "Well, we should
investigate". We got thousands of messages
in Twitter DMs. We got thousands of
messages in RocketChat and IRC, and
suddenly we had a lot of connections to
handle, a lot of inquiries to handle, a
lot of phone calls etc. to handle. And we
had to prioritize: for us, the hardware
first, then the communication, because
otherwise the flood of information won't
stop. On the next slide
you can see what our minor issue was. So
at first, we got a lot of connections to
our streaming web pages, then to the load
balancers, and finally to our DNS servers.
A lot of them were quite malformed. It
looked like a storm. But the more
important thing we had to deal with was
all those passive-aggressive messages from
different people who said: "Well, you
can't even handle streaming. What are you
doing here?" And we worked together with
the c3infra team – thanks for that – on
how to scale and decentralize a bit more,
just to provide the people the connection
capacity they need. So I think, compared
to the last years, we didn't need to use
more bandwidth, and we showed we can
provide even more bandwidth if we need it.
And then, noting everything down…
Franzi: So is it time to shut everything
down? No, we won't shut everything down.
The studios can keep their endpoints, can
continue to stream on their endpoints as
they wish. We want to keep in touch with
you and the studios, produce content with
you, improve our software stack, improve
other things like the ISDN, the Internet
Streaming Digital Node – the project for
small camera recording setups for sending
to speakers – which needs developers for
the software. Also, KEVIN needs developers and
testers. What's KEVIN? Oh, we have
prepared another slide or the next slide.
KEVIN is short for Killer Experimental
Video Internet Noise, because we initially
wanted to use OBS.Ninja, but there are a
couple of licensing issues. Not everything
in OBS.Ninja is open source like we
wanted, so we decided to code our
own OBS.Ninja-style software. So if you
are interested in doing so, please get
into contact with us or visit the wiki. So
that's all from the VOC. And we are now
heading over to c3lingo.
ysf: Exactly. c3lingo. oskar should be
waiting in Studio 2, aren't you?
oskar: Yeah, hello. Hi, yeah, I'm oskar
from c3lingo. We will jump straight into
the stats on our slides. As you can see
here, we translated 138 talks this time,
as you can see, it's also way fewer
languages than at the other chaos events
we've had, since our second-languages
team, which does everything that is not
English or German, was only five people
strong this time. So we only managed to do
five talks into French and three talks
into Brazilian Portuguese. And then on the
next slide… We are looking at our coverage
for the talks and we can see that on the
main talks we managed to cover all talks
that were happening from English to German
and German to English, depending on what
the source language was. And then, on the
other languages track, we only managed to
do 15 percent of the talks from the main
channels. And then on the further
channels, which is a couple of others that
also were provided to us in the
translation team, we managed to do 68% of
the talks, but none of them were
translated into other languages than
English and German. On the next slide,
some global stats. We had 36 interpreters,
who in total managed to translate 106
hours and 7 minutes of talks into another
language simultaneously. The maximum
number of hours one person did was 16
hours, and the average number of hours
people did was around 3 hours of
translation across the entire event. All
right. Then I
also have some anecdotes to tell and some
some mentions I want to make. We had two
new interpreters that we want to say "hi"
to, and we had a couple of issues with the
digital setting that we didn't have before
with regular events where people were
present. For example, the issue that
sometimes, when two people are
translating, one person starts
interpreting on the wrong stream. Maybe
they were watching the wrong one. And then
the partner just thinks they have more
delay or something. Or, for example, a
partner having a smaller delay and then
thinking the partner can suddenly read
minds, because they can translate faster
than the other person is actually seeing
the stream. Those are issues that we
usually didn't have at regular events,
only at remote events. And yeah, some
hurdles to overcome. Another thing was,
for example, when on the r3s stage the
audio sometimes cut out for us. But
because one of our translators had already
translated the talk twice, at least
partially – it had been cancelled before –
they basically knew most of the content,
could do a PowerPoint Karaoke translation,
and were able to do most of the talk just
from the slides without any audio. Yeah,
and then there also was...
The last thing I want to do is give a big
shout out to the
two of our team members that weren't able
to interpret with us this time because
they put their heart and soul into this
event happening. And that's stb and katti,
and that's basically everything from
c3lingo. Thanks.
ysf: muted
Hello, c3subtitles it is now. td will show
the right text to the slides you already
saw a minute ago.
td: OK. OK, hi, so I'm td from the
c3subtitles team. And next slide, please.
So just to quickly let you know how we get
from the recorded talks to the released
subtitles. Well, we take the recorded
videos and apply speech recognition
software to get a raw transcript. And then
Angels work on that transcript to correct
all the mistakes that the speech
recognition software makes. And we again
apply some autotiming magic to get some
raw subtitles. And then again Angels do
quality control on these tracks to get
released subtitles. Next slide, please. So
as you can see, we have various subtitle
tracks in different stages of completion.
And these are seconds of material; you can
see all the numbers are going up and to
the right, as they should be. So
next slide, please. In total, we had 68
distinct angels that worked 4 shifts on
average. 83 percent of our angels returned
for a second shift. 10 percent of our
angels worked 12 or more shifts. And in
sum we had 382 hours of angel work for 47
hours of material. So far we've had two
releases for rC3, with hopefully more yet
to come, and 37 releases for earlier
congresses, mostly on the first few days
when we didn't have many recordings yet.
We have 41 hours of material still in the
transcribing stage, 26 hours of material
in the timing stage, and 51 hours of
material in the quality control stage. So
there's still lots of work to be done.
Next slide, please. When you have
transcripts, you can do fun stuff with
them. For example, you can see that what's
important to people in this talk is
"people". We are working on other
cool features that are yet to come. Stay
tuned for that. Next slide, please. So to
keep track of all these tasks, we've been
using a state-of-the-art high-performance
lock-free NoSQL columnar data store,
a.k.a. a kanban board in the previous
years. And because we don't have any
windows in the CCL building anymore, we
had to virtualize that. So we're using
kanban software now. At this point, I
would like to thank all our hard-working
angels for the work. And next slide
please. If you're feeling bored between
congresses then you can work on some
transcripts. Just go to c3subtitles.de. If
you're interested in our work, follow us
on Twitter. And there's also a link to the
released subtitles here. So that's all.
Thank you.
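As an aside for readers: the stages td describes (transcribing, timing, quality control, release) can be modelled as a simple state machine. Below is a minimal Python sketch of such stage tracking – a toy illustration only, not the team's actual tooling, and the track names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    TRANSCRIBING = auto()     # angels correct the raw speech-recognition output
    TIMING = auto()           # autotiming aligns the corrected text to the video
    QUALITY_CONTROL = auto()  # second angel pass before release
    RELEASED = auto()

@dataclass
class SubtitleTrack:
    talk: str
    seconds: int
    stage: Stage = Stage.TRANSCRIBING

    def advance(self) -> None:
        # Move the track to the next stage of the pipeline, if there is one.
        stages = list(Stage)
        idx = stages.index(self.stage)
        if idx < len(stages) - 1:
            self.stage = stages[idx + 1]

# Hypothetical tracks, sized after the per-stage hour totals mentioned in the talk.
tracks = [
    SubtitleTrack("talk-a", seconds=41 * 3600),
    SubtitleTrack("talk-b", seconds=26 * 3600, stage=Stage.TIMING),
    SubtitleTrack("talk-c", seconds=51 * 3600, stage=Stage.QUALITY_CONTROL),
]
for stage in Stage:
    hours = sum(t.seconds for t in tracks if t.stage == stage) / 3600
    print(f"{stage.name:16s} {hours:5.1f} h")
```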
ysf: Thank you, td. And before we go into
the POC, where Drake is waiting, I'm sure
everyone is asking why are those guys
saying "next slide"? So wait. In
the end, we have the infrastructure review
of the infrastructure review meeting going
on. So be patient. Now, Drake, are you
ready in Studio 1?
Drake: OK. Hello, I'm Drake from the Phone
Operations Center, and
I'd like to present to you our numbers and
maybe some anecdotes at the end of our
part. So please switch to the next slide.
Let's get into the numbers first. So,
first off, you registered about 1950...
5195 SIP extensions, which is about 500
more than you registered on the last
congress. Also, you did about 21 000
calls, a little bit less than on the last
congress. But, yeah, we are still quite
proud of how you used our system. And
yeah, it ran quite stably. And
as you may notice on the bottom, we also
had about 23 DECT antennas at the congress
or at this event. So please switch to the
next slide. And this is our new feature,
it's called the... next slide ..., it is
called the Eventphone Decentralized DECT
Infrastructure, the EPDDI, which we
especially prepared for this event. So we
had about 23 RFPs online throughout
Germany with 68 DECT telephones signed up
to them. But it's not only the German part
that we covered. We actually had one
mobile station walking out towards
Austria, through Passau, I think. So
indeed we had a European Eventphone
decentralized DECT infrastructure. Next slide
please. We also have some anecdotes, so
maybe some of you have noticed that we had
a public phone, a working public phone in
the RC World where you could call other
people on the SIP telephone system and
also other people started to play with our
system. And I think about yesterday
someone started to introduce c3fire so you
could actually control a flame thrower
through our telephone system. And I like
to present here a video. Next slide
please. Maybe you can play it. I have
quite a delay in waiting for the video to
play. So what you can see here is the
c3fire system actually controlled by a
DECT telephone somewhere in Germany. So
next slide please. We also provided you
with SSTV servers via the phone
number 229, where you could receive some
pictures from Eventphone, like a postcard
basically. So you could call the number
and receive a picture, or some other
pictures, some more pictures. And next
slide please. Yeah basically, that's all
from the Eventphone and with that we say
thank you all for the nice and awesome
event and yeah, bye from the first
certified assembly POC. Bye.
ysf: Thank you, POC, and hello GSM. lynxes
is waiting for us.
lynxes: Yeah, hello, I'm lynxes, I'm from
the GSM team. This year was quite
different as you can imagine. However,
next slide please. So but we managed to
get a small network running and also a
couple of SIM cards registering, so where are
we now. So next slide please. As you can
see, we are just there in the red dot.
There's not even a single line for our
five extensions but we managed 130 calls
over five extensions. And next slide
please. So we got five extensions
registered with four SIM cards, three
locations with mixed technologies, and
also only two users so far, sadly. And one
network with more or less zero problems.
And so let's take a look at the
coverage. So next slide please. So we are
quite lucky that we managed to get an
international network running. So we got
two base stations in Berlin. One in the
hackerspace AfRA, another one north of
Berlin. And yeah one of our members is
currently in Mexico. And he's providing
the remote chaos networks there. Yes, so
that's basically our network. So before we
go to the next slide: what we have done so
far – we are just two people instead of 10
to 20 – is have some fun improving our
network and preparing for the next
Congress. And next
slide please. And yeah, now I'm closing
with the EDGE computing. We improved our
EDGE capabilities and yeah, I wish you a
hopefully better year and yeah maybe see
you next year remote or in person. Have
fun.
ysf: Thanks, and I give a hand to lindworm
for doing the "slide DJ" all the time, and
he now switches to the Haecksen, who are
next, and they bring an image, and melzai
is waiting for us in Studio 3.
melzai: Hello, what are phones without
people? So I'm now giving an introduction
here on how many people we needed to run
the whole Haecksen assembly. We had around
20 organizing Haecksen and we had around
20 speakers in our events. And we had in
total around 40 events, but I'm pretty
sure that I don't even know all of these.
As you realize, the world is pretty large.
So we needed around seven million pixels
to display the whole Haecksen world. And
that needed around 400 commits in our
GitHub corner of the internet. Around 130
people received the fireplace badge in our
case. And around 100 people tested our
swimming pool and received that badge. So
a great year for non ???. Also
around 49 people showed some very deep
dedication and checked on all memorials at
our Haecksen assembly. Congratulations for
that. There were quite a lot of these. Our
events ran on our BigBlueButton from the
Congress, and so starting from day 0 we
had no lags and were able to host up to
133 people in one session. And that was
quite stable. We also introduced four new
members; around 13 new Haecksen joined
just for the Congress, and we have grown
to a size of about 440 Haecksen overall.
Also, we got new Twitter accounts
supporting us, so we have gained over 200
more Twitter accounts. And so, you know,
our messages are getting heard. But
besides the virtual, we also did some
quite physical things. First of all, we
distributed over 50 physical goodie bags
with microcontrollers and self-sewn masks
in them to people, as you can see in the
picture. And also, sadly, we sold so many
rC3 Haecksen-themed trunks that they are
now out of stock. But they will be back in
January. Thank you.
ysf: No, thank you. And I'm going to send
thanks to the Chaospatinnen…
Chaospat*innen… who are waiting in Studio
One.
Mike: Hi, all this is Mike from the
Chaospat*innen team. We've been welcoming
new attendees and underrepresented
minorities to the chaos community for over
eight years. We match up our mentees with
experienced chaos mentors. These mentors
help their mentees navigate our world of
chaos events. DiVOC was our first remote
event and it was a good proof of concept
for rc3. This year, we had 65 amazing
mentees and mentors, two in-world
mentee/mentor matchup sessions, one great
assembly event hosted by two of our new
mentees, and a wonderful world map
assembly built with more than 1337
kilograms of multicolor pixels. Next
slide, please. And here's a small part of
our assembly with our signature propeller
hat tables. And thank you to the amazing
Chaospat*innen team: fragilant, jali,
azriel and lilafish. And to our great
mentees and mentors. We're looking forward
to meeting all of the new mentees at the
next chaos event.
lindworm: Yeah, I think that was my call.
So next up, we'll have the, let me see,
the c3adventure! Are you ready?
Roang: Hello, my name is Roang
Mewp: and I'm Mewp
Roang: and we will talk about the
c3adventure, the 2D world, and what we did
to bring it all online. Next slide please.
OK, so when we started out, we looked into
how we could bring a Congress-like
adventure to the remote experience. In
October we started with the development,
and we had some trouble in that we had
multiple upstream merges that gave us some
problems. And also, due to just Congress
being Congress, or a remote experience
being a remote experience, we needed to
introduce features a bit late or add
features on the first day. So auth was
merged just at 4:40 AM on the first day. And
on the second day, we finally fixed the
instance jumps – you know, when you walk
from one map to the next – we had some
problems there. But on the second day it
all went up. And I hope you have all
enjoyed the badges that have finally been
updated and brought into the world today.
What does that all mean? Since we started
implementing, there have been 400 git
commits in our repository all-in-all,
including the upstream merges. But I think
the more interesting stuff is what has
been done since the whole thing went live.
We had 200 additional commits, fixing
stuff and making the experience better for
you. Next slide. In order to bring this
all online, we not only had to think about
the product itself, not only think about
the world itself, but we also had to think
about the deployment. The first commit on
the deployer – a background service that
brings the experience to you – was made on
the 26th of November. We started the first
instance, the first clone of the work
adventure, through this deployer on the
8th of December, and a couple of days
before the event I was getting a bit
swamped. I couldn't do all of the work
anymore, because I had to coordinate both
of the projects. And so my colleague took
over for me and helped me out a lot. So
I'll hand over to him to explain what he
did.
Mewp: Yeah. So imagine that on Day -5 I
get a message from a friend that, "Hey,
help is needed!" So I say, "OK, let's do
it." And Roang tells me that, "OK, so we
can spawn a instance and to scale it
somehow and do that." And I spawned the
deployer and my music stops. I streamed
music from the internet, and I wondered
why did it stop? And I have noticed that,
oh, there are a lot of logs now. Like, a
lot. And I have finally Day -4 noticed
that the deployer was spawning copies of
itself each few seconds in the log. So
that was the state back then. Since Day -4
until Day 1, we have basically written the
thing. And that's, well… Day 1 we were
ready. Well, almost ready. I mean, we have
like 14 instances deployed. And I forgot
to mention that, when we were about to
deploy 200 ones at once, it wouldn't work
because all of the things would time out.
So we patched things quickly, and 13
o'clock we had our first deployment. This
worked, and everything was fine, and…
wait… Why is everybody on one instance?
So, it turns out that we had a bug, not in
the deployer but in the app, that would
move you from the lobby to the lobby on a
different map. So during the first day,
we've had a lot of issues with people not
seeing each other because they were on
different instances of the lobby. So we
were working hard, and… next slide,
please, so we can see that… we were
working hard to reconfigure that, to bring
you together in the assembly. I think we
have succeeded. You can see the population
graph on this slide. The first day was
almost our most popular one. And the next
day would seem, OK, not as popular, but we
hit the peak of 1600 users that day. What
else about this? The most popular instance
was the lobby, of course. The second most
popular instance was the hardware hacking
area for a while. Then it was the third, I
think.
Next slide please. We have counted – well,
first of all, we've had in total about 205
assemblies. The number increased day by
day, because people were working on their
maps throughout the whole congress. For a
while, CERT had over a thousand maps
active in their assembly, which led to the
map server crashing. Some of you might
have noticed that. It stopped working
quite a few times during Day 3. And they
have reduced the number of maps to 255,
and that was fine. At the end of Day 3, I
counted about 628 maps, and this is fewer
than were actually available, because it
was the middle of the night (as always)
and it wasn't trivial to count them. But
in the maps I found, we counted over two
million used tiles. So that's something
you can really explore. I wish I could
have, but deploying this was also fun.
Next slide, please. And what… Yeah?
Roang: Just a quick interjection. I really
want to thank everyone that has put work
into their maps and made this whole
experience work. We, we provided the
infrastructure, but you provided the fun.
And so I really want to thank everyone.
Mewp: Yeah, the more things happen on the
infrastructure, the more fun we have. We
especially don't like to sleep. So we
didn't. Roang and I basically took turns:
I slept five hours during the night and he
slept five hours during the day. And the
rest of the time, we were up. That record,
though, didn't hold: Roang has now been up
30 hours straight, because the badges were
too important to bring to you to go to
sleep. The thing you see on this graph is
undeployed instances. We were redeploying
things constantly, usually in the form of
redeploying half of the infrastructure at
any given time. The way it was developed,
you shouldn't have noticed that. You
wouldn't be kicked off your instance, but
for a brief period of time you wouldn't be
able to enter any one. But… Next slide. I
have been joking
for a few days at the Congress that we
have been implementing a sort of
Kubernetes thing, because it automatically
deploys things, manages things, and so on.
And I noticed by Day 3 that I had achieved
true enlightenment and true automation,
because at some point we decided to deploy
everything at once. The reason was that we
were being DDoSed, and we had to change
something to mitigate that. And so we did
that, and everything was fine. But we made
a typo. We made a typo and the deployment
failed. And once the deployment failed, it
deleted all the servers. So, yeah, 405
servers got deleted by what I remember was
a single line. So it was brought back up
automatically, and that wasn't a problem.
It was all fine, but well, to err is
human, to automate mistakes is devops.
Next slide? What's important is
that these 405 servers were provided by
Hetzner. We couldn't have done that
without their infrastructure, without
their cloud. The reason we got up so
quickly after this was that the servers
were deleted, but they could have been
reprovisioned almost instantly. So the
whole thing took like 10 minutes to get it
back up. And, next slide. That's all.
Thank you all for testing our
infrastructure, and see you next year.
ysf: Thank you, c3adventure! So this was
clearly the first conference that didn't
clap for falling mate bottles! If that's
not the thing, maybe we try next year? The
Lounge. And I know I have to ask for the
next slide too. The rc3 Lounge artists.
And I was asked to read every place where
someone was, because everyone helped to
make the Lounge what it was: an awesome
experience. So there were: Berlin, Mexico
City, Honduras, London, Zürich, Stockholm,
Amsterdam, Rostock, Glasgow, Leipzig,
Santiago de Chile, Prague, Hamburg,
Mallorca, Krakow, Tokyo, Philadelphia,
Frankfurt am Main, Cologne, Moscow, Taipei
in Taiwan, Hannover, Shanghai, Seoul…
Seoul, I think, sorry. Vienna, Hong Kong,
Karlsruhe and Guatemala. Thank you guys
for making the Lounge. So the next is the
Hub and they should be waiting in
Studio Two.
audible echo
XXX: …software is based on Django. And
it's intended to be used for the next
event. The problem is it was a new
software. We had to do a lot of
integrations, yeah, live during Day 0.
Well, OK. No. OK, yeah, hi. I'm presenting
the Hub, which is a software we wrote for
this conference. Yeah. It's based on
different components, all of them are
based on Django. It's intended to be used
on future events as well. Our main problem
was that it's new software. We wrote it
and, yeah, a lot of the integrations were
only possible on Day 0 or Day 1. And yeah,
so even still today on Day 4, we did a lot
of updates, commits to the repository, and
even the numbers on the screen are already
outdated again. But yeah, as you can
possibly see, we have a lot of commits all
day, all night long. Only a small dip
around 6 AM. I am sorry for
that. Next slide, please. And yeah,
because you're quite busy using the
platform, some of these numbers on the
screen are already outdated again. Out of
the 360 assemblies which were registered,
only 300 got accepted. Most of the rest
were, yeah, events, or people wanting to
do a workshop and trying to register an
assembly for it. Or duplicates. So, please
organize yourselves. Events: currently we
have over 940 in the system. You're still
clicking events, nice. Thanks for that.
The events are coordinated with the
studios, so we are integrating all of the
events of all the studios, and the
individual ones, and the self-organized
sessions. All of them. A new
feature, the badges. Currently you have
created 411. And yeah, from these badges
being redeemed, we have 9269 achievements
and 19,000 stickers. Documentation, sadly,
was a 404, because, yeah, we were really
busy doing stuff. Some documentation has
already been written, but more
documentation will become available later.
We will open source the whole thing of
course, but right now we're still in
production and cleaning up things. And
yeah, finally, some numbers. Total
requests per second were about 400. In the
night, when the world was redeploying, we
only had about 50 requests per second, but
it maxed out at 700 requests per second.
And the authentication for the world, for
the 2D adventure, was about 220 requests
per second. More or less stable, despite
some bugs and some heavy usage. So, yeah,
we appreciate that
heavy usage. So, yeah, we appreciate that
you used the platform, used the new Hub,
and hope to see you on the next event.
Thanks.
ysf: Hello Hub. Thank you Hub. And the
next is betalars waiting for us. He's from
the c3auti team, and he will tell us what
he and his team did this year.
betalars: Hi, I'm betalars from c3auti,
and we've been really busy this year as
you can probably see by the numbers on my
next slide. We have 37 confirmed
Auti-Angels, and today we surpassed the
200 hours mark. We had 10 orga Mumbles
leading up to the event and there are
almost five million unique pixels in our
repository. I'm pretty convinced we've
managed to create the smallest Fairydust
of rC3, provided by an actual space
engineer. And the Tree of Solitude is not
the only thing we've managed to create and
contribute to this wonderful experience.
On our next slide, you can see that we
also contributed six panel sessions for
autistic creatures to discuss their
experiences and five Play sessions for
them to socialize. We helped to contribute
a talk, a podcast, and an external panel
to the big streams. And on our own panels,
we've had up to 80 participants that
needed to be split up into five breakout rooms
they could all have a meaningful
discussion. And all their ideas and thoughts
were anonymized and stored in more than 1000
lines of markdown documentation that you can
find on the Internet. But 1000 lines of
markdown wouldn't be enough for me to
express the gratitude I have towards all
the amazing creatures that helped us make
this experience happen and for all the
amazing teams that worked with us. I'll be
so happy to see you all again soon, but now I
think I will need some solitude for
myself.
ysf: Thank you betalars. So, lindworm, are
you ready? The next one is the video, as
far as I know. It's from the C3 Inclusion
Operation Center. I don't know the short
name; C3IOC? And it's counting down three
two one go.
video without audio
So, video is like a very difficult thing
to play these days, because we're only
used to doing stuff live. Live means a lot
of pixels and traffic going from here,
from this glass, through all the wires and
cables, and back to the glass of your
screen. And this is like magic to me,
somehow. Although, I. am only. being. a
robot. to talk. synchronistically. with
all the... It's been enough time, I think,
to switch back to Lindy with the video.
I'll tell you what we are going to…
video without audio
nwng: Hello everyone, I'm nwng from the
new C3 Inclusion Operation Center. This
year, we've been working on accessibility
guides that help the organizing teams and
assemblies improve the event for everyone,
and especially people with disabilities.
We have also worked with other teams
individually to figure out what can still
be improved in their specific range of
functions - but there is still a lot to
catch up on! Additionally, we have
published a completely free and accessible
CSS design template that features dark
mode and an accessible font selection. And
it still looks good without Javascript.
100 Internet points for that! For you
visitors, we have been collecting your
feedback through mail or twitter – and
won't stop after the Congress! If you
stumbled across some barriers, please get
in touch via c3ioc.de or @c3inclusion on
twitter to tell us about your findings!
Thanks a lot for having us.
ysf: Thank you for the video. Finally, the
technology is working! We should… does
someone know computers? Maybe? Kritis is
one of them, and he is waiting in Studio
One to tell us something about C3 Yellow,
or c3gelb as we say here.
Kritis: Yeah, welcome. I'm still looking
at this hard drive. Maybe you remember
this from the very beginning? It has to be
disinfected really thoroughly, and I guess
I can take it out by the end of the event.
And for… the next slide with the words,
please. We found roughly 0777 hand-washing
options and 0x3FF waste disposal
possibilities. We checked the correct date
on almost all of the 175 disinfectant
options you had around here. And because
at a certain point in time people from
CERT were not reachable in the CERT room –
because they were running around
everywhere else in this great 2D world –
we had the chance to bypass that and
channel all the information, because there
were two digital cats in a digital tree.
And so we got the right help to the right
place.
Next slide, please. We have a couple of
things ongoing. A lot of work had been
done before; we had all the studios with
all the corona measures going on. But now
we think we should really look into an
angel disinfectant swimming basin for next
time, to have the maximum option of
cleanliness there. And we will talk with
the BOC about whether we can maybe manage
to use these Globuli maxi-cubes for the
Tschunk in the future. Apart from that, in
order to get more Bachblüten (Bach flower
remedies) and everything else, we need
someone who is able to help us with the
Potenzieren (potentization) of homeopathic
substances. So if that sounds like you,
please just drop us a line at
info@c3gelb.de. Thank you very much and
good luck.
ysf: Thank you Kritis. Finally happy to
hear your voice. I only know you from
Twitter, where we tweet our stuff together
– or rather I tweet yours and you don't
tweet mine. Maybe you're going to change
that… please? And, talking about messages:
Chaos Post
was here too, and trilader, whom we
already heard earlier, has more to say.
trilader: OK, welcome. It's me again. I've
changed outfits a bit. I'm not here for
the Signal Angels anymore, but for Chaos
Post. So, yeah. We had an online office
this year again, as we had with the DiVOCs
before. And I've got some mail numbers for
you that should be on the screen right
now. If it's not, if it's on the title
page, please switch to the first one where
it lists a lot of numbers. We had 576
messages delivered in total. These are
numbers from around half past five. And 12
of them we weren't able to deliver because
of, well, non-existent mailboxes or full
mailboxes mostly. We delivered mail to 43
TLDs, the most going to Germany, to .de
domains, followed by .com, .org, .net, and
to Austria with .at. We had a couple of
motifs you could choose from, the most
popular one was "Fairydust at Sunset", 95
people selected that. Next slide. About
our service quality. We had a minimum
delay – from the message coming in, us
checking it, and it going out – of a bit
more than four seconds. The maximum
delay was about seven hours. That was
overnight, when no agents were ready, or
they were all asleep, or having… being
busy with, I don't know, the Lounge or
something? And on average a message took
you, took us 33 minutes from you putting
it into our mailbox to it getting out.
Some fun facts: We had issues delivering
to T-Online on the first two days, but we
managed to get that fixed. A different
mail provider refused our mail because it
contained the string c3world – the domain
– in the mail text. And apparently new
domains are scary, and you can't trust
them or something. We had created a ticket
with them, they fixed it, and it was super
fast, super nice service. Yeah. Also, some
people tried to send digital postcards to
Mastodon accounts because they looked like
email addresses or something. Another
thing that's not on a slide is we had
another new feature this time. That was
our named recipients. So you could, for
example, send mail to CERT without knowing
their address. And they also have a really
nice postcard wall, where you can see all
the postcards you sent them. The link for
that is on our Twitter. Thank you.
ysf: Thank you Chaos Post. lindworm, are
you there?
lindworm: Yes, yes. I'm here, I'm here.
Hello, can you hear me?
ysf: I hear you.
lindworm: So I have to do some more
switching. It's kind of stressful for me, really.
ysf: You're doing an awesome job. Thank
you for doing it. So, just out of
curiosity, did you have a problem
accepting any cookies or something?
lindworm: No, not really.
ysf: I heard somewhere that some really
smart people had problems using the site
because of cookies.
lindworm: Oh, no, that was not my problem.
I only couldn't use the site because of
overcrowding. That was often one of my
little problems. And please, I hope you
don't see what I'm doing right now in the
background with starting our pets and so
on. And what I wanted to say to all of
you: this was the first Congress where we
had so many women and so many non-cis
people running the show, being in front of
the camera, and making everything happen.
I really want to thank you all. Thank you
for making that possible. And thank you
that we get more and more diverse, year by
year.
ysf: I can only second that. And now we
are switching to C3 Infrastructure.
lindworm: Yeah, we need to.
ysf: I'm sure a lot of questions will be
answered by them.
lindworm: And I'm trying to bring up the
slides for that, but I can't find them
right now.
Patrick: Look mom, I'm on TV.
thies: Yeah. Welcome to the infrastructure
review of the Team Infrastructure. I'm not
quite sure if we have the newest revision
of the slides, cause my version of the
stream isn't loading right now. So maybe,
lindworm, is it possible to press
control-R? If you're then seeing a burning
computer, we have the actual slides.
Patrick: Let's just Powerpoint Karaoke
without the background music.
thies: Yeah, and without the PowerPoint
presentation in realtime. Now I'm seeing
me. Let's wait a few seconds until we see
a slide.
Patrick: We want to wait the entire stream
delay.
thies: It's just about 30 seconds to one minute.
Patrick: Well done.
thies: Yeah, I'm thies and I'm waiting.
And this is Patrick, and he's waiting too.
Yeah, but that's in the middle of the
slides. Can we go… OK. Yeah. I'm now
seeing something in the middle of the
slides, but it seems fine. OK, yeah. We
are the team C3 Infra. rC3 Infra. We are
creating the infrastructure. Next slide.
We had about nine terabytes of RAM and
1,700 CPU cores. During the whole event
there was only one dead SSD, which died
because everything's broken. We had five
dead RAID controllers, and didn't bother
to replace the RAID controllers, we just
replaced them with new servers. And 100
percent uptime. Next slide. We looked at
boot screens of enterprise servers for
about 42 hours; 20 minutes max per boot is
what HP delivered. And we are now
certified enterprise observers. We
had only 27%-ish of visitors using IPv6.
So that's even less than Google publishes.
And even though we had almost full IPv6
coverage – except some really, really shady
out-of-band management networks – we're
still not at the IPv6 coverage that we are
hoping for. I'm not quite sure if that's
the right slides. But I'm not quite sure
where we are in the text. Yeah, Patrick.
Patrick: Yeah, so before the Congress
there was one prediction: there's no way
it's not DNS. And well, it was DNS at
least once, so we checked that box. And
let's go over to the next topic, OS. We
provisioned about 300 nodes, and it was an
Ansible-powered madness. So, yeah, there
was full disk encryption on all nodes. No
IPs logged in the access logs – we took
extra care of that. And we configured
minimal logging wherever possible, so in
the case of some problems we only had
WARNINGs available. There are no INFO
logs, no DEBUG logs; just the minimal
logging configuration. And with some
software, we had to pipe logs to /dev/null
because the software just wouldn't stop
logging IPs, and we didn't want that. So
no personal data in logs, so no GDPR
headache, and your data is safe with us.
The Ansible madness I've talked about was
a magical deployment that bootstrapped
into the live system and assimilated it
into the rC3 infrastructure while it was
still running. So you didn't need to
reboot your machines; they just kept
running. When an OS deployment was broken,
it was almost always due to network or
routing issues. At least the OS team
claims that, and this claim is disputed by
the network team, of course. One time, the
deployment broke because of a
trigger-happy infra angel.
But let's not talk about that. Of course,
at this point, we want to announce our
great cooperation with our gold sponsor
ddos24.net, who provided an excellent
service of handcrafted requests to our
infrastructure. There was great demand, or
great public demand, with a million
requests per second for a while. But even
during the highest or peak demand, we were
able to serve most of these services. We
provided the infrastructure to the VOC,
and they quickly made use of the
infrastructure deployed there. Overall, an
amazing time to market. We had six
locations, and those six locations were
some wildly different, special snowflakes
overall. So we had Düsseldorf, 816 CPU
cores there, two terabytes of RAM, and we
had 10 gigabits per second interconnect.
There was also a 1 terabit per second
Infiniband available, but sadly, we
couldn't use that. It would have been
nice. The machines there had a weird and
ancient IPMI, which made it hard to deploy
there. And the admin on location had never
deployed bare metal hardware to a
datacenter, so there was also some
learning experience there. Fun fact about
Düsseldorf: this was the data center with
the maximum heat. One server, seven units,
over 9000 watts of power – 11.6 kilowatts,
to be exact. Which is why they had to take
some creative heat management measures.
Next was Frankfurt, there we had 620
gigabits of total uplink capacity, and we
actually only used 22 gigabits during peak
demand – again, thanks to our premium
sponsor ddos24.net. There was zero network
congestion and 1.5 gigabits per second
were IPv6. So there was no real traffic
challenge. For the network engineers among
you: it was a full Layer 3 architecture
with MPLS between the WAN routers. And
there was a night shift on the 26th and
27th for more servers, because some
shipments hadn't arrived yet. The fun fact
about this datacenter was the maximum
bandwidth: some servers there had a 50
gigabit uplink configured.
It was the data center with the maximum
manual intervention. Of course, we had the
most infrastructure there and it wasn't
oversubscribed at any point. We had some
hardware in Stuttgart, which was basically
the easiest deployment. There were also
some night shifts, but thanks to neuner
and team this was a really easy
deployment. It was also the most silent
DC, with no incident from Day -5 until
now. So if you're currently watching from
Stuttgart, you can create some issues,
because now we've said it. Wolfsberg was
the smallest DC. We only had three servers
there and we managed to kill one hardware
RAID controller, so we could only use two
servers. So, yeah. And then Hamburg was
the data center with the minimum uptime.
We could never deploy to this data center
because there was a broken netboot and we
couldn't provision anything there. And of
course, the sixth data center was the
Hetzner Cloud, where we deployed to all
locations. Deployment fun facts: we
received a covid warning from the data
center. Luckily, it didn't affect us, it
was at another location. But thanks for
the heads-up and the warning. The team
leader of a sponsor needed to install
Proxmox in a DC with no knowledge, without
any clue what they were doing. We
installed Proxmox in the Hamburg DC, and
no server actually wanted to talk to us,
so we had to give up on that. And a lorry
had to be relocated before we could deploy
other servers. So that's what was standing
in the way there. Now, let's
get to Jitsi. Our peak count was 1,105
users at the same time, on the same
cluster. I don't know if it was at the
same time as the peak user count, but the
peak conference count was 204 conferences.
I hope we can still beat
that today, but this is data from
yesterday. The peak conference size was 94
participants in a single conference. And
let me give condolences to your computer,
because that must have been hard on it.
Our peak outgoing video traffic on the
Jitsi video bridges was 1.3 gigabits per
second. About three quarters of the
participants were streaming video and one
quarter of them had video disabled.
Interesting ratio. Our Jitsi deployment
was completely automated with Ansible, so
it was zero to Jitsi in 15 minutes. We
broke up the Jitsi cluster into four
shards to have better scalability and
resilience. So if one shard went down, it
would only affect part of the conferences
and not all of them. Because there are
some infrastructure components that you
can't really scale or cluster, so we went
with the sharding route. Our Jitsi
video bridges were at about 42% peak usage
– excluding our smallest video bridge,
which was only eight cores and eight
gigabytes, which we added in the beginning
to test some stuff out, and it remained in
there. And yes, we overprovisioned a bit.
There will also be a blog post on our
Jitsi Meet deployment coming in the
future. And for the upcoming days, we will
enable 4K streaming on there. So why not
use that? And we want to say thanks to the
FFMEET project, who contacted us after our
initial load test and gave us some tips on
handling load effectively and so on. We
also tried to make DECT call-out work. We
spent 48 hours trying to get it working,
but there
were some troubles there. So sadly, no
adding DECT participants to your Jitsi
conferences for now. jitsi.rc3.world will
be running over New Year. So you can use
that to get together with your friends and
so on over the New Year. Stay separate,
don't visit each other please. Don't
contribute to covid-19 spread. You've got
the alternative there. Now let's go over
to monitoring. thies.
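For context on the sharding idea mentioned above: splitting a Jitsi deployment into independent shards means each new conference is routed to exactly one shard, so a failing shard only takes down its own conferences. Below is a minimal Python sketch of such a routing decision – an illustration of the concept only, not the team's actual Ansible setup, and the shard names are hypothetical.

```python
import hashlib

# Hypothetical shard list; the talk mentions the cluster was split into four shards.
SHARDS = ["shard-1", "shard-2", "shard-3", "shard-4"]

def shard_for_conference(room_name: str, healthy: set[str]) -> str:
    """Pick a shard for a new conference.

    The room name is hashed so the same room always lands on the same shard,
    and unhealthy shards are skipped so that their outage only affects
    conferences already running on them.
    """
    candidates = [s for s in SHARDS if s in healthy] or SHARDS
    digest = hashlib.sha256(room_name.encode("utf-8")).digest()
    return candidates[digest[0] % len(candidates)]

# Example: with shard-3 down, new rooms are spread over the remaining shards.
healthy_shards = {"shard-1", "shard-2", "shard-4"}
for room in ["infra-review", "herald-news-show", "heaven-office"]:
    print(room, "->", shard_for_conference(room, healthy_shards))
```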
thies: Yeah, thanks. First of all, it's
really funny how you can edit this page,
but reveal.js doesn't work that way until
lindworm reloads the page, which he
hopefully doesn't do right now.
Everything's fine, so you can leave it as
it is. Yeah, monitoring. We had a
Prometheus and Alertmanager setup,
completely driven out of our one and only
source of truth: our Netbox. We received
about 34,858 critical alerts – it's,
looking at my mobile phone, definitely
more right now – and about 13,070
warnings, also definitely more right now.
And we tended to about 100 of them. The
rest was kind of useless. Next slide,
please. As it's
important to have an abuse hotline and an
abuse contact, we received two network
abuse messages, both from Hetzner – one of
our providers – letting us know that
someone doesn't like our infrastructure as
much as we do. Props to ddos24.net. And we
got one call at our abuse hotline, and it
was one person who wanted to buy a ticket
from us – sadly, we were out of tickets.
Next slide, please. Some other stuff. We
got a premium Ansible deployment brought
to you by Turing-complete YAML. That
sounds scary. And we had about 130k DNS
updates thanks to the World team – at this
point they were really stressing our DNS
API with the redeployments. And also, our
DNS, Prometheus, and Grafana are deployed
on and by NixOS thanks to flüpke; head
over to flüpke's interweb thingy, where he
wrote some blog posts about how to deploy
stuff with NixOS. And the next slide,
please. The last slide from the team is
the list of our sponsors. Huge thanks to
all of them. It wouldn't be possible to
create such a huge event and such loads of
infrastructure without them. And that's
everything we have.
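As an illustration of the "Netbox as single source of truth" idea described above: monitoring targets can be generated from NetBox via its API and handed to Prometheus file-based service discovery. Below is a minimal Python sketch using the pynetbox client; the URL, token, exporter port, output path, and label choices are assumptions for the example, not the team's actual configuration.

```python
import json
import pynetbox  # NetBox API client; assumes a reachable NetBox and a valid token

# Hypothetical URL and token – the team's real setup is not shown in the talk.
nb = pynetbox.api("https://netbox.example.org", token="REDACTED")

targets = []
for device in nb.dcim.devices.filter(status="active"):
    if device.primary_ip is None:
        continue  # skip devices without a primary IP
    address = device.primary_ip.address.split("/")[0]  # strip the prefix length
    targets.append({
        "targets": [f"{address}:9100"],  # node_exporter port, assumed
        "labels": {"instance": device.name, "site": str(device.site)},
    })

# Write a Prometheus file_sd target list generated from NetBox (example path).
with open("/etc/prometheus/targets/netbox.json", "w") as fh:
    json.dump(targets, fh, indent=2)
```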
ysf: Amazing. Thank you for all you've
done. Truly incredible, and showing
everything to the public. So I promised
that there will be a kind of behind the
scenes look of this infrastructure talk or
review. And I really have nothing to do
with it. Everything was done by completely
different people. I'm only a Herald,
somehow lost and tumbled into this stream.
And so I'm just going to say switch to
wherever. Show us the magic.
Karlsruhe: Three hours ago, I got the
call… Hello and welcome from the last part
of the infrastructure review and greetings
from Karlsruhe. So three hours ago, I got
a call from lindworm and he asked me how
it's going with this last talk we have –
it may be a bit complicated. And he told
me: OK, we have a speaker, and I'm the
herald. Oh, it's always like that. And
then we realized, yeah, we don't have only
one speaker, we have 24. And so that's why
we called ChaosWest and built up an
infrastructure which dampfkatze will
explain to you now in a short minute, I
think.
dampfkatze: Thank you. Yes. Oh, I lost the
sticker. OK, after we called ChaosWest, we
came up with this monstrosity of the video
cluster. And we start here: the teams
streamed via OBS.Ninja onto three
ChaosWest studios. They were brought
together via RTMP on our Mix1 local
studio, and then we pumped that into Mix2,
which pumped it further to the VOC. The
slides were brought in via another
OBS.Ninja directly onto Mix2. They came
from lindworm. Also, the closing you will
see shortly hopefully will also come from
there. And ysf and lindworm were directly
connected via OBS.Ninja onto our Mix1
computer. And Mix2 also has the studio
camera you're watching right now. And for
the background communication, we had a
Mumble connected with our audio matrix.
And lindworm, ysf, and the teams, and we
in the studio locally could all talk
together. And now back to the closing
with… No, to the Herald News Show, I
think. lindworm will introduce it to you.
lindworm is live.
lindworm: Is ysf still there? Or do you
come with me? So it will take a second or
billions of years. So thank you very much
for this review. It was as chaotic as the
Congress.
postroll music
Subtitles created by c3subtitles.de
in the year 2021. Join, and help us!