I am Nicolas Dandrimont, and I am going to talk to you about a year of fedmsg in Debian.

We had a problem with infrastructure in distributions. Services are a bit like people: there are dozens of services maintained by many people, and each of those services has its own way of communicating with the rest of the world. That means that if you want to spin up a new service that needs to talk to other services in the distribution, which is basically any service you would want to add, you need to implement a bunch of communication systems.

For instance, in the Debian infrastructure, our archive software, dak, mostly uses emails and databases to communicate. The metadata is available in an RFC 822 format with no real API, and the database is not public either. The build queue management software, wanna-build, polls a database every so often to know what needs to get built; there is no API outside of its database, which isn't public either. Our bug tracking system, debbugs, works via email, stores its data in flat files, for now, and exposes a read-only SOAP API. As for our source control management, pushes to the distro-provided repositories on alioth can trigger an IRC bot or some emails, but there is no real central notification mechanism.

We have some kludges to overcome those issues. There is the Ultimate Debian Database, which contains a snapshot of a lot of the databases underlying the Debian infrastructure. Every so often a cron job runs and imports data from a service here, a service there; there is no realtime data. That is useful for distro-wide QA stuff, where you don't need realtime data, but when you want a notification to trigger a build of a new package or something like that, it doesn't work very well, and consistency between the data sources is not guaranteed. We also have a central notification system, the Package Tracking System, which is likewise cron-triggered or email-triggered. You can update the data from the BTS using ?? and you can subscribe to email updates on a given package, but the messages are not uniform, so they can't really be machine-parsed: there are a few headers, but they are not sufficient to know what the messages are about. And it's still not realtime.

The Fedora people invented something that can improve this, called fedmsg. It was actually introduced in 2009. It's a unified message bus that reduces the coupling between the different services of a distribution. Services can subscribe to one or several message topics, register callbacks, and react to events triggered by all the services in the distribution.

There is a bunch of stuff already built on top of fedmsg. You get a stream of data with all the activity in your infrastructure, which allows you to do statistics, for instance. You decouple interdependent services, because you can swap one thing for another, or just listen to the messages and start doing stuff directly without having to poll a database or something.
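To illustrate "subscribe to a topic and register a callback", one way to do this with fedmsg is a hub consumer class; a rough sketch, with an invented topic name and config key, might look like the following (a real consumer runs under the fedmsg-hub daemon with that key enabled in its configuration):

    import fedmsg.consumers

    class UploadConsumer(fedmsg.consumers.FedmsgConsumer):
        # Hypothetical Debian topic; fedmsg topics form a dotted hierarchy.
        topic = "org.debian.dev.dak.upload"
        # The hub only loads this consumer if this key is set to True in its config.
        config_key = "uploadconsumer.enabled"

        def consume(self, message):
            # Called once for every message matching the topic; react to the event here.
            print(message["topic"], message["body"])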
You get a pluggable, unified notification system that can gather all the events in the project and send them by email, by IRC, to your mobile phone, to your desktop, everywhere you want. The Fedora people use fedmsg to implement a badge system, which is a kind of gamification of the development process of the distribution. They implemented a live web dashboard and an IRC feed. And they also got some bots banned on social networks, because they were flooding.

How does it work? Well, the first idea was to use AMQP as implemented by qpid. Basically, you take all your services and have them send their messages to a central broker, and then you have several listeners that can relay messages to clients. There were a few issues with this: you have a single point of failure at the central broker, and the brokers weren't really reliable; when they tested it under load, the brokers were tipping over ??

The actual implementation of fedmsg uses 0mq. What you get is not a single broker but a mesh of interconnected services, and you connect only to the services you want to listen to. The big drawback is that each and every service has to open up a port on the public Internet for people to be able to connect to it. There are some solutions for that, which I will talk about. But the main advantage is that you have no central broker, and they got something like a hundred-fold speedup over the previous implementation.

You also have an issue with service discovery. You can write a broker, which gives you back your single point of failure. You can use DNS, which means you can say "hey, I added a new service, let's use this SRV record to get to it". Or you can distribute a text file. Last year, during the Google Summer of Code, I mentored Simon Chopin, who implemented the DNS solution for the integration of fedmsg in Debian. The Fedora people, since they control their whole infrastructure, just distribute a text file with the list of servers that are sending fedmsg messages.
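To make the mesh idea concrete, here is a minimal sketch of the underlying 0mq pattern using pyzmq; the hostname, port and topic are made up, and real fedmsg layers configuration, message signing and reconnection handling on top of this:

    import json
    import zmq

    ctx = zmq.Context()

    # Publisher side: each service binds its own PUB socket, no central broker.
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:3000")
    pub.send_multipart([b"org.debian.dev.dak.upload",
                        json.dumps({"source": "hello"}).encode()])

    # Subscriber side: connect directly to the services you care about and
    # subscribe to a topic prefix, so only matching messages are delivered.
    # (In reality the two sides run in different processes on different hosts.)
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://dak.debian.org:3000")
    sub.setsockopt(zmq.SUBSCRIBE, b"org.debian.dev.dak.")
    topic, payload = sub.recv_multipart()
    print(topic.decode(), json.loads(payload.decode()))

The subscription prefix is matched against the first message frame, so a listener only ever receives the branches of the topic hierarchy it asked for.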
How do you use it? This is the Fedora topology; I didn't have much time to do the Debian one, which is really simpler, and I'll talk about it later.

The messages are split into topics, with a hierarchy of topics, so it's really easy to filter out the things you want to listen to. For instance, you can filter all the messages that concern package uploads by selecting the dak service, or everything that involves a given package, or something else.

Publishing messages is really trivial. From Python, you only have to import the module and call fedmsg.publish with a dict of the data you want to send, and that's it, your message is published. From the shell, it's really easy too: there is a command called fedmsg-logger that you can pipe some input to, and it goes on the bus, so it's really simple. Receiving messages is trivial too: in Python, you load the configuration and you just have an iterator.

(video problems, resume at 10:10)

...was a replay mechanism with just a sequence number, which has your client query the event senders for the messages it would have missed in case of a network failure ?? That's basically how the system works.
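Based on that description, a minimal sketch of both sides with the fedmsg Python API could look like this; the module name, topic and message contents are invented for the example:

    import fedmsg
    import fedmsg.config

    # Publishing: one call with a dict of the data you want to send.
    fedmsg.publish(topic='package.upload', modname='mentors',
                   msg={'package': 'hello', 'version': '2.9-1'})

    # Receiving: load the configuration, then iterate over incoming messages.
    config = fedmsg.config.load_config([], None)
    for name, endpoint, topic, msg in fedmsg.tail_messages(**config):
        print(topic, msg)

From the shell, piping into fedmsg-logger (for example, echo "hello" | fedmsg-logger) publishes to the bus in the same way.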
Now, what about fedmsg in Debian? During the last Google Summer of Code, a lot happened thanks to Simon Chopin's involvement. He did most of the packaging of fedmsg and its dependencies, which means that you can just apt-get install fedmsg and get it running; it's available in sid, jessie and wheezy-backports. He adapted the code of fedmsg to make it distribution-agnostic, and he had a lot of support from the upstream developers in Fedora to make that happen. They are really excited to have their stuff used by Debian or by other organizations. ?? fedmsg was the right solution for event notification. And finally, we bootstrapped the Debian bus by using mailing-list subscriptions to get bug notifications and package upload notifications, and on mentors.debian.net, which is a service I control, so it's easy to add new stuff to it.

What then? After the Google Summer of Code, there were some packaging adaptations to make it easier to run services based on fedmsg, proper backports, and maintenance of the bus, which mostly means keeping the software up to date, because upstream is really active and responsive to bug reports; it's really nice to work with them. Since July 14th 2013, which is the day we started sending messages on the bus, we have had around 200k messages, split across 155k bug mails and 45k uploads, which proves that Debian is a really active project, I guess. [laughs]

The latest development with fedmsg is the packaging of datanommer, which is a database component that stores the messages that have been sent to the bus. It allows Fedora to run queries on their messages and give people achievements, like "yeah, you got a hundred build failures" or stuff like that. [laughs]

One big issue with fedmsg, as I said earlier, is that Debian services are widely distributed. Some of the time, firewall restrictions are out of Debian's control, which is also the case with the Fedora infrastructure, because some of their servers are hosted within Red Hat, and Red Hat networking sometimes doesn't want to open firewall ports. So we need a way for services to push their messages instead of having clients pull them. There is a component in fedmsg, created by the Fedora people, called fedmsg-relay, which is basically just a tube: you push your message into it using a 0mq socket, and it then pushes it to the subscribers on the other side. It just allows you to bypass firewalls. The issue is that it uses a non-standard port and a non-standard protocol; it's just 0mq, so it basically puts your data on the wire and that's it. So I am pondering a way for services to push their messages using more classic web services: you would take your JSON dictionary, push it by POST over HTTPS, and the relay would then send the message to the bus. I think that would make it easier to integrate with other Debian services. (a sketch of this idea follows after the transcript)

This was a really short talk; I hope there will be some discussion afterwards. In conclusion, ?? I am really glad ?? For the moment, it sits really apart from the Debian infrastructure, so the big challenge will be to try to integrate fedmsg into the Debian infrastructure and use it for real.

If you want to contact me, I am olasd, and I am here for the whole conference. If you want to talk to me about it, or if you want to help me: I am a little bit alone on this project, so I'll be glad if someone joins. I'll be glad to hold a hacking session later this week. Thanks for your attention.

[applause]

Was it this clear?
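The HTTPS push idea mentioned near the end of the talk is only being pondered; nothing like it exists in the Debian infrastructure. A client-side sketch, with an invented relay URL and payload shape, could look like this:

    import requests

    # Hypothetical relay endpoint; the relay itself would be the one
    # calling fedmsg.publish() to put the message on the 0mq bus.
    RELAY_URL = "https://fedmsg-relay.example.org/messages"

    def push_event(topic, data):
        """POST a JSON message over HTTPS instead of opening a 0mq port."""
        resp = requests.post(RELAY_URL,
                             json={"topic": topic, "msg": data},
                             timeout=10)
        resp.raise_for_status()

    push_event("org.debian.dev.mentors.package.upload",
               {"package": "hello", "version": "2.9-1"})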