
  1. It's time to start with the next talk
  2. I welcome Richard Hartmann
  3. He has been involved in Debian for many years
  4. and he recently became a Debian Developer
  5. and he will talk about gitify your life.
  6. ?, blogs, configs, data and backup. gitify everything
  7. Richard Hartmann
  8. Thank you. [applause]
  9. Thank you for coming
  10. especially those who ? years attended all ?
  11. Short thing about myself
  12. As ? said I'm Richard Hartmann
  13. In my day job I am backbone manager at Globalways
  14. I'm involved in freenode and OFTC and...
  15. should I speak louder?
  16. I'm not...
  17. test, test... good back there?
  18. Can you turn up the volume a little bit?
  19. test, test... ok, perfect.
  20. For about a week now I've been a Debian Developer (yay)
  21. [applause] and I'm the author of vcsh.
  22. Raise of hands: who of you know what git is?
  23. perfect
  24. That's just as in ? perfect, we can skip it.
  25. Let's move to the first tool, etckeeper.
  26. Some of maybe most of this audience will have heard of it,
  27. it's a tool to basically store your /etc in pretty much every version control system you can think of
  28. It's implemented in POSIX shell
  29. it autocommits everything in /etc basically at every opportunity
  30. you may need to write excludes, for example
  31. before your network config ?
  32. but else, yeah, that's really cool
  33. the autocommit
  34. it hooks into most of the important or maybe even all of the important package management systems
  35. so when you install your packages, even on SuSE or whatever
  36. you can just have it commit automatically, which is very nice
  37. You can obviously commit manually
  38. if you for example change your X config
  39. it supports as I said various backends
  40. it's quite nice to recover from failures
  41. for example ? used it to recover from Saturday's power outages
  42. because some servers lost stuff and with etckeeper you can just replay all the data which was...
  43. rather nice.
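  A minimal sketch of that workflow (the file name and commit message here are only examples, not taken from the talk):

      sudo etckeeper init                                  # put /etc under version control
      sudo etckeeper commit "before changing the X config" # manual commit before an edit
      sudo etckeeper vcs log --oneline                     # inspect the history of /etc
      sudo etckeeper vcs checkout -- network/interfaces    # restore one file after a bad change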
  44. Then there is bup.
  45. bup is a backup tool based on the git pack file format
  46. it's written in python
  47. it's very very fast
  48. and it's very space efficient.
  49. The author of bup managed to reduce his own personal backup size
  50. from 120 GiB to 45 GiB
  51. just by migrating away from rsnapshot over to bup
  52. which is quite good
  53. I mean, it's almost or a little bit more than a third, so
  54. very good
  55. This happens because it has built-in deduplication
  56. because obviously git pack files also have deduplication
  57. You can restore every single mount point
  58. or every single point in time
  59. every single backup can be mounted as a FUSE filesystem or a ? filesystem
  60. independently of each other
  61. so you can even compare different versions of what you have in your backups
  62. which again is very nice
  63. the one thing which is a real downside for serious deployments
  64. there is no way to remove data from your... archive or from your backups
  65. which again is a direct consequence of using git pack files
  66. there is a branch which supports deleting old data
  67. but this is not in mainline and it hasn't been in mainline for...
  68. I think one or two years
  69. so I'm not sure if it will ever happen but...
  70. yeah
  71. at least in theory it would exist.
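  For reference, a basic bup session along the lines just described (directory names are examples):

      bup init                      # create the default repository in ~/.bup
      bup index ~/Photos            # scan the tree to be backed up
      bup save -n photos ~/Photos   # save a deduplicated snapshot named "photos"
      bup fuse /mnt/bup             # mount every snapshot as a read-only FUSE filesystem
      ls /mnt/bup/photos            # one directory per backup run, plus "latest"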
  72. Then for your websites, for your wikis, for your whatever there is ikiwiki.
  73. ikiwiki is a wiki compiler,
  74. as the name implies,
  75. and it converts various different files into HTML files
  76. it's written in Perl
  77. it supports various backends
  78. again most of the ones you can possibly think of
  79. oh, I can even slow down, good
  80. it's able to parse various markup languages, more on that on the next slide
  81. there are several different ways to actually edit any kind of content within ikiwiki
  82. it has templating support, it has CSS support
  83. these are quite extensive, but they may be improved, but that's for another time
  84. it acts as a wiki, as a CMS, as a blog, as a lot of different things
  85. it automatically generates RSS and Atom feeds for every single page, for every subdirectory
  86. so you can easily subscribe to topical content
  87. if you are for example only interested in one part of a particular page
  88. just subscribe to this part by RSS
  89. and you don't have to check if there are updates for it
  90. which is very convenient to keep track of comments somewhere or something
  91. It supports OpenID, which means you don't have to go through all the trouble of...
  92. having a user database or doing very...
  93. or doing a lot of antispam measures
  94. because it turns out OpenID is relatively well...
  95. suited for just...
  96. stopping spam. For some reason, maybe they just
  97. haven't picked it up yet, I don't know
  98. but it's quite nice, because you don't have to do any actual work
  99. and people can still edit your content, and you can track back changes at least to some extent
  100. it supports various markup languages; the best one, well, that's debatable, but in my opinion it's Markdown
  101. it supports WikiText, reStructuredText, Textile and HTML, and there are ikiwiki-specific extensions
  102. for example normal wikilinks, which are a lot more powerful than the normal linking style in Markdown
  103. which kind of sucks, but... whatever
  104. it also supports directives, which basically tell ikiwiki to do special things with the page
  105. for example you can tag your blog pages
  106. or you can make...
  107. generate pages which automatically pull in content from different other pages and stuff like this.
  108. that's all done by directives.
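  As an illustration (not taken from the talk), a blog page might combine a wikilink, a tag directive and an inline directive like this:

      See also [[another_page]].

      [[!tag debian git]]
      [[!inline pages="blog/* and !blog/*/*" show="10" feeds="yes"]]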
  109. How does it work?
  110. You can edit webpages directly, if you want to, on the web
  111. then you will have a rebuild of the content
  112. but only the parts with changes
  113. so if you... hello?
  114. if you change only one single file it will only rebuild one single file
  115. if you change for example the navigation it will rebuild everything because obviously...
  116. it is used by everything.
  117. If it has to generate pages automatically, for example the index pages or something
  118. if you just create a new subdirectory, or if you have...
  119. if you have comments which have to appear on your site
  120. it will automatically generate those Markdown files and commit them
  121. or you put them in your source directory and you just commit them and...
  122. and have them part of your site, or you can autocommit them if you want.
  123. That's possible as well.
  124. You can obviously change... pull in changes in your local repository if you want to look at them
  125. Common uses would be public wiki...
  126. private notes, for just note keeping of your personal TODO list or whatever
  127. having an actual blog, which a lot of people in this room probably do
  128. that's, yeah, I mean a lot of people on Planet Debian have their blog on ikiwiki, for good reasons
  129. and an actual CMS for company websites or stuff
  130. which also tends to work quite well.
  131. The three main ways to interact with ikiwiki are web-based text editing, which is quite useful for new users, but is quite boring, in my opinion,
  132. there is also a WYSIWYG editor which is even more fancy for non-technical users
  133. and there is just plain old CLI-based editing way:
  134. just edit files, commit them back into the repository, push up, and everything gets rebuilt automatically, which is...
  135. in my opinion the best way to interact with ikiwiki, because
  136. you are able to stay on the command line and simply push out your...
  137. your stuff onto the web and you don't actually have to leave the command line
  138. which is pretty kinda neat.
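  A rough sketch of that command-line workflow, assuming a wiki whose source directory is cloned over ssh (the URL and page name are placeholders):

      git clone ssh://user@example.org/~/wiki.git
      cd wiki
      $EDITOR posts/new_idea.mdwn        # write the page in Markdown
      git add posts/new_idea.mdwn
      git commit -m "add a new post"
      git push                           # the post-update hook rebuilds only what changed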
  139. There are also some more advanced use cases
  140. as I said you can interface with the source files directly
  141. you can maintain...
  142. something is wrong
  143. for example you can maintain your wiki and your docs and your...
  144. source code in one single directory
  145. and it would simply...
  146. and simply have parts of your subdirectory structure rendered.
  147. for example git-annex does this
  148. there is a doc directory, which is rendered to the website
  149. but is also part of the normal source directory
  150. which means that everybody who checks out a copy of the repository
  151. will have the complete forum, bug reports, TODO lists
  152. user comments,
  153. everything on their local filesystem, without having to leave - again - their command line,
  154. which doesn't break media, and so is just very convenient to have one single resource for everything regarding one single program.
  155. And another nice thing is if you create different branches
  156. for preview, staging areas you can have workflows where some people are just allowed to create ...
  157. pages, other people then look over those pages and merge them back into master and then push them on the website
  158. which basically allows you to...
  159. to have content control or real publishing workflow, if you have a need to do this
  160. Next stop: git-annex.
  161. The beef.
  162. It's basically a tool to manage files with git without checking those files into git
  163. ?
  164. Yeah, what is git-annex?
  165. It's based on git,
  166. it maintains the metadata about files,
  167. as in location, and file names and everything, in your git repository
  168. but it doesn't actually maintain the file content within the git repository
  169. more on that later
  170. this saves a lot of time and space.
  171. You're still able to use any git-annex repository as a normal git repository
  172. which ? means you're even able to have a mix of...
  173. for example, say, all your ? files
  174. should be maintained by normal git,
  175. and then you have all the merging which git does for you and everything
  176. and then you have for example your photographs,
  177. or your videos for web publishing
  178. which are maintained in the annex
  179. which means you don't have to have a copy of those files in each and every single location
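  A minimal sketch of that mixed setup (the repository description and file paths are made up):

      git init ~/media && cd ~/media
      git annex init "laptop"            # give this repository a human-readable description
      git add notes.txt                  # small text files stay in plain git
      git annex add videos/talk.ogv      # large files are checksummed and symlinked into the annex
      git commit -m "notes plus one talk video"
      git annex whereis videos/talk.ogv  # list which repositories currently hold the content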
  180. A very nice thing about git-annex is that it's written with very low bandwidth and flaky connections in mind
  181. quite a lot of you will know that Joey lives basically in the middle of nowhere
  182. which is a great thing to be forced to write really efficient code
  183. which doesn't use a lot of data, and that shows:
  184. it's really quick
  185. and even if you had a really really bad connection
  186. in backwaters or whatever...
  187. during holidays or during normal living
  188. it's still able to transfer the data which you need to transfer,
  189. it's very very nice
  190. There are various workflows: we'll see four of them in a few minutes
  191. So. It's written in Haskell, so it's probably strongly typed and nobody can write patches for it
  192. it uses rsync to actually transfer the data,
  193. which means it doesn't try to reinvent any wheels
  194. it's really just based on top of established and well-known and well-debugged programs
  195. In indirect mode, which in my personal opinion is the better mode,
  196. what it does is
  197. it moves the actual files into a different location, namely .git/annex/objects
  198. it then makes those files read-only, so you cannot even accidentally delete those files
  199. even if you rm -f them, it will still tell you no, I can't delete them,
  200. which is very secure
  201. may be inconvenient, but you can work around this
  202. it replaces those files with symlinks of the same name, and those just point at the object
  203. and whether or not there is an object behind this symlink...
  204. basically tells you whether the content is available on this particular machine, or in this particular repository
  205. but you will definitely have the information about the name of the file, the theoretical location of the file...
  206. the hash of the file will be in every single repository
  207. There is also a direct mode
  208. initially mainly written for Windows and Mac OS X
  209. because Windows just doesn't support symlinks properly
  210. and OS X does support symlinks,
  211. apparently has lots of developers who think it is a great idea to follow symlinks...
  212. and display the actual target of the symlink instead of the symlink
  213. so you have cryptic filenames which are very hard to deal with
  214. obviously people who are used to GUI tools which then only display really really cryptic names ?
  215. so there is direct mode which doesn't do the symlink stuff
  216. it basically rewrites the files on the fly
  217. git still thinks it is managing symlinks, but...
  218. git-annex just pulls them out from under git, and pushes in the actual content.
  219. You keep on nodding, so... I'm probably doing good
  220. and if you want you can always delete old data, or you can keep it...
  221. or you can just... for example what I'm doing:
  222. you can have one or two machines which slurp up all your data...
  223. and have an everlasting archive of everything which you've ever put into your annexes...
  224. and other machines, for example laptops with smaller SSDs
  225. those just have the data which you are actually interested in at the moment
  226. How does this work in the background?
  227. Each repository has a UUID
  228. It also has a name, which makes it easier for you to actually interact with the repository...
  229. but in the background it's just the UUID for obvious reasons...
  230. because it just makes ? and synchronization easy, period
  231. It also tracks information in a special branch called git-annex
  232. this branch means that all...
  233. this branch ? every single repository has full and complete information...
  234. about all files, about the locations of all files, about the last status of those files...
  235. if those files have been added to some repository
  236. or they have been deleted,
  237. or if they have been over there forever
  238. so in every single repository you can just look up the status of this file or of all files in all of your repositories
  239. which is, yeah, convenient
  240. The tracking information is very simple
  241. and it's designed to be merged very...
  242. it's a little bit more complicated than applying union merge,
  243. but basically what it does is it adds a timestamp
  244. and tells if the file is there or not and it has the UUID of the repository
  245. and from this information, along with the timestamps, you can simply reproduce...
  246. the whole lifecycle of your files through your whole cloud of git-annex repositories
  247. in this one particular annex.
  248. One really nice thing which you can do is...
  249. if you are on the command line, which again in my opinion is the better mode...
  250. you can simply run git-annex sync
  251. which basically does a commit...
  252. oh, it does a git-annex add, then it does a commit,
  253. then it merges from the other repositories
  254. into your own master, into your own git-annex branch
  255. then it merges the log files
  256. that's where the git-annex branch comes in
  257. and then it pushes to all other known repositories
  258. which is basically a one-shot command to synchronize all the metadata about all the files with all the other repositories
  259. and it takes no time at all
  260. given a network connection
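  So the whole metadata synchronization described above is, in practice, a single command (assuming remotes are already configured):

      git annex add .        # annex any new files
      git annex sync         # commit, merge master and the git-annex branch from all remotes, push back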
  261. Data integrity is something which is very important for...
  262. yeah, for all of the tools, but git-annex was really designed with data integrity in mind
  263. by default it uses SHA-256 with the file extension...
  264. to store the objects, so it renames the file to its own shasum
  265. which allows you to always verify the data even without git-annex
  266. you are able to say by means of globbing...
  267. which files, or which directory, or which types of files should have how many copies in different repositories
  268. so for example what I do:
  269. all my raw files, all the raw photographs are in at least three different locations,
  270. all the JPEGs are only in two, because JPEGs can be regenerated
  271. raws can not.
  272. All remotes and all special remotes can always be verified
  273. with special remotes this may take quite some bandwidth
  274. with actual normal git-annex remotes you run the verification locally
  275. and just report back the results, which obviously saves a lot of bandwidth and transfer time
  276. verification obviously takes the amount of required copies into account
  277. so if you would have to have 3 different copies
  278. and your whole repository cloud only has 2, it will complain
  279. it will tell you "yes, checksum is great, but you don't have enough copies, please do something about it".
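  For example, the per-file-type copy requirements can be expressed with glob patterns in .gitattributes, and verification is one command (the patterns below are illustrative):

      # .gitattributes
      *.NEF  annex.numcopies=3     # raw photographs must exist in at least three repositories
      *.jpg  annex.numcopies=2     # JPEGs can be regenerated, two copies are enough

      git annex fsck                   # verify checksums locally and complain about missing copies
      git annex fsck --from myremote   # verify content that lives on a remote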
  280. and even if you ? right now, delete all copies from git annex
  281. you would still be able to get all your data out of git annex
  282. because what it boils down to, in indirect mode, it's just symlinks to other objects
  283. these objects have their own checksum as their file name
  284. so you'll even be able to verify, without git-annex,
  285. just by means of a little bit of shell scripting,
  286. that all your files are correct,
  287. that you don't have any bit flips or anything on your local disk.
  288. direct mode doesn't really need a recovery ?, because...
  289. the actual file is just in place of the symlink
  290. but on the other hand you won't be...
  291. you still need to look at the git-annex branch to determine the actual checksums
  292. which you wouldn't have to do with the indirect mode.
  293. There are a lot of special remotes. And what are special remotes?
  294. these are able to store data in non git-annex remotes
  295. because, let's face it, on most servers, or most servers where you could store data
  296. you aren't actually able to get a shell and execute commands
  297. you can just push data to it, you can receive data
  298. but you cannot actually execute anything on this computer.
  299. That's what special remotes are for.
  300. All special remotes support encrypted data storage
  301. so you just gpg encrypt your data and then send it off
  302. which means that the remotes can only see the file names
  303. but they cannot see anything else about the contents of your files
  304. obviously you don't want to trust Amazon or anyone to store your plain-text data
  305. that would just be stupid
  306. There is a hook system, which allows you to write a lot of new special remotes
  307. you'll see a list of... quite an extensive list of stuff in a second
  308. Normal, built-in special remotes which are supported
  309. by git-annex out of the box
  310. and actually implemented in Haskell
  311. are Amazon Glacier, Amazon S3, bup, directory — a normal directory on your system
  312. rsync, WebDAV, http or ftp and the hook system
  313. there is a guy who brought most of those
  314. we can support archive.org, IMAP, box.com, Google Drive... you can read them yourself, I mean...
  315. but those are quite a lot of different special remotes, if you...
  316. already have storage on any of those services, just start pushing encrypted data to it if you want, and you're basically done.
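  Setting up an encrypted special remote is roughly this (the remote names, path and key id are placeholders):

      git annex initremote usbdrive type=directory directory=/media/usb encryption=shared
      git annex initremote cloud type=S3 encryption=hybrid keyid=0xDEADBEEF
      git annex copy --to cloud photos/    # data is gpg-encrypted before it is uploaded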
  317. There is an ongoing project called the git-annex assistant
  318. last year, and I think this year it just ended, didn't it?
  319. so, pretty much exactly one year ago Joey started to raise funds
  320. by means of a Kickstarter to just focus on writing the git-annex assistant for a few months
  321. he got so much that he could do it for a whole year
  322. and he's just restarted the whole thing with his own fundraising campaign without kickstarter and he got another full year
  323. yeah... are you still accepting funds?
  324. ok, so, if you use it at least consider donating
  325. because honestly you can't write patches for it anyway, because it's in Haskell, so...
  326. that's... the other means of actually contributing
  327. the assistant boils down to being a daemon, which runs in the background
  328. and keeps track of all of your files, of newly added files
  329. and then starts transferring those files, if configured to do so
  330. it starts transferring files to other people or to other repositories
  331. this is all managed by means of a web GUI
  332. which in turn means that it's really, well, not easy, but easier to port to for example Windows or Android
  333. which both work, to some extent
  334. not fully, but they are useful, or usable, more or less
  335. at least on Android it's really quite good, I couldn't test it on Windows, because...
  336. and it also makes it accessible for non-technical users
  337. so for example if you want to share some of your photographs with your parents
  338. or with friends, or if you want to share, I don't know, videos with other people
  339. you just put them into one of those repositories
  340. and even those non-technical people just magically see stuff appear in their own repository
  341. and can just pull the data if they want to
  342. or if you configured it to do so, it would even transfer all the data automatically
  343. which is... it's ?
  344. It supports content notifications, but not content transfer
  345. by means of XMPP or Jabber
  346. which used to work quite well with Google Talk, I think it's not...
  347. oh, it still works, ok
  348. at least at the moment, we'll see when they just ? Google ? with Google+, but...
  349. at least at the moment it still works, if you have a Google account you can simply transfer all your data
  350. you can transfer the metadata about your data, you cannot actually transfer the files through jabber
  351. but that's probably something which will happen within the next year
  352. there are quite ? rulesets for content distribution
  353. so for example I can show you...
  354. you can say "put all raw files into this archive, and all jpegs on my laptop", or whatever
  355. or "if I still have more than 500 GB free on this please put data in
  356. and as soon as I only have 20 left stop putting data into this one repository"
  357. which obviously is quite convenient
  358. as I said there is a Windows port, and now on to use cases.
  359. First use case: the archivist.
  360. What the archivist does is: basically he just collects data
  361. either to ? or just to collect
  362. and if you have this use case, what you probably want to do is have offline disks
  363. to store at your mom's, or to put into a drawer
  364. or maybe you just don't have enough SATA ports in your computer because you have so much data
  365. so, what you can do is you can just push this data to either connected machines or to disconnected drives...
  366. or to some webservice, and just store data
  367. but normally you would have the problem of keeping track of where your data lives
  368. if it's still ok, if it's still there, everything.
  369. With git-annex you can automate all this administrative side of archiving your stuff.
  370. Even if you only have one of those disks, if they're a proper remote...
  371. you'll have full information about all the data in your annex cloud up to this point
  372. so even if you only pull out one random disk you still have information on all the other disks on this one disk
  373. which obviously is a nice thing.
  374. Media consumption.
  375. Let's say you pull a video of this talk, or you get some slides...
  376. maybe also from this talk, you can get some podcasts...
  377. and git-annex has become a native podcatcher quite recently, I think two or three weeks ago
  378. which means you don't even need a separate podcatcher
  379. you just tell git-annex "this is all of my rss feeds" and it will just pull in all the content,
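  The podcatcher side is a single built-in command, for example (the feed URL is a placeholder):

      git annex importfeed http://example.org/podcast.rss            # download new enclosures into the annex
      git annex importfeed --relaxed http://example.org/podcast.rss  # or just record them without downloading yet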
  380. Then you can synchronize all this data for example to your cellphone, or your tablet, or whatever
  381. consume the data on any of your devices, even if you have 10 copies of this particular podcast
  382. because you didn't get around to listen to it on your computer...
  383. and you didn't get around to listen to it on your cellphone
  384. but then on your tablet you did listen to it
  385. you have three copies of this file which you don't need anymore...
  386. because you have listened to the content and you don't care about the content anymore
  387. what you do is you drop this content on one random repository
  388. and this information that you have dropped the actual content,
  389. not the metadata about the content, but the actual content, you don't need the content anymore...
  390. will slowly propagate to all of the annexes and if they have the data they will also drop the data
  391. so you don't have to really care about keeping track of those things
  392. you can simply have this message propagate
  393. do you want to comment? can someone give Joey a microphone?
  394. Just a minor correction
  395. it doesn't propagate that you've dropped the content
  396. but you can move it around in ways that have exactly the effect you described
  397. ? get the wrong idea that if you accidentally remove one thing it will vanish from everything ?
  398. but if you deliberately drop the content and tell the annex...
  399. no. that's not how it works.
  400. I want to talk about it later, but it's...
  401. you looked at the slides, but...
  402. sorry, ?
  403. He watches for everything which is ?
  404. Next thing, if you are on the road, and one use case which is probably quite common: taking pictures while you are on the road ?
  405. You take your pictures, you save them to your annex
  406. where you are able to store them back to your server or wherever
  407. if you want to, and even if for example one disk gets ?
  408. and you lose part of your content,
  409. you'll still at least be able to have an overview of what content used to be in your annex
  410. and if you then pull out your old SD card and see "oh, that photo is still there" you can simply reimport it and it will magically reappear.
  411. What it also does is:
  412. if you have a very tiny computer with you
  413. you can, as soon as you are at an internet cafe, just sync up with your server or your storage, whatever
  414. and push out the data to your remotes
  415. which then means you'll have two or three or five copies of the data
  416. and git-annex keeps track of what is where for you
  417. so you don't have to worry about copying stuff around.
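  In command terms, that on-the-road part might look like this (remote name and paths are examples):

      git annex add DCIM/                    # annex today's photographs
      git annex sync                         # push the metadata as soon as there is a connection
      git annex copy --to homeserver DCIM/   # push the content itself to a remote over ssh/rsync
      git annex whereis DCIM/IMG_0001.NEF    # check how many copies exist, and where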
  418. And then there is one personal usecase, for photographs
  419. I have a very specific way of organizing my photographs
  420. my wife disagrees violently
  421. she likes to do her photo storage in a completely different way
  422. she doesn't care about the raw files
  423. and she doesn't care about all the documentation pictures of signposts or whatever which I just took to remember which cities we went through
  424. so what she can do is she can simply delete the actual files or ? the symlink of this file
  425. and it will disappear from her own annex
  426. she can then commit all this
  427. normally if she would sync back the data I would also have the same layout, which I don't want
  428. especially since she tends to rename everything a lot
  429. but what I did, I set up a rebasing branch on top of my normal git-annex repository
  430. so what she gets is: she has her own view of the whole data
  431. or the part she cares about
  432. and when I add new content
  433. she will see the new content, she will rearrange the content however she pleases
  434. but as it's a rebasing branch
  435. all her changes will always be replayed on top of master
  436. so she has her own view, and I don't even notice her own view
  437. but even if she uses one of the other computers she would have the same view which she herself has
  438. so basically she has her own view of all of the data
  439. This is very convenient to keep the peace at home.
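  The talk does not spell out the exact commands, but one way to get such a rebasing branch is simply (the branch name is made up):

      git checkout -b her-view master    # her personal arrangement of the photo tree
      # ... she renames and deletes symlinks and commits on her-view ...
      git rebase master her-view         # after new photos land on master, replay her changes on top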
  440. Next topic: vcsh.
  441. Most of you here probably have some sort of system...
  442. where you have one Subversion or CVS or whatever repository
  443. and they have it somewhere in their home directory
  444. you symlink into various places in your home directory, and it kind of keeps working so you don't throw it away, but...
  445. to be honest it sucks. Here is why.
  446. Or, here's why in a second.
  447. vcsh is implemented in POSIX shell, which is very very portable
  448. it's based on git, but it's not directly git
  449. The one thing which git is not able to do is maintain several different working copies in one directory
  450. which is a safety feature, more on that later
  451. but this really sucks if you want to maintain your mplayer, your shell, your whatever configuration
  452. in your home directory, which is the obvious and only real place where it makes sense to put your configuration
  453. you don't want to put it into dot-dot-files and then symlink back
  454. you want to have it in your home directory as actual files.
  455. So, vcsh uses fake bare git repositories
  456. again, more on that on the next slide
  457. and it's basically a wrapper around git
  458. which makes git do stuff which it normally wouldn't do
  459. and it has a quite extensible and useful hook system which ? will care about
  460. With a normal git repository you have two really defining variables within git
  461. you have the work tree
  462. which is where your actual files live
  463. and you have the $GIT_DIR, where the actual data lives
  464. normally in a normal checkout you just have your directory and .git under this
  465. If you have a bare repository you obviously don't have an actual checkout of your data
  466. you have just all the objects and the configuration stuff
  467. so that's what a bare repository boils down to being
  468. A fake bare git repository on the other hand has both
  469. it has a $GIT_WORK_TREE and it has a $GIT_DIR
  470. but those are detached from each other
  471. they don't have to be closely tied together
  472. and also sets core.bare = false, to actually tell git that "yes, this is a real setup, but..."
  473. "yes, you still have a work tree, even thought you don't really expect it"
  474. "to have one, you still have a work tree".
  475. By default vcsh puts your work tree into home
  476. and your git dir into...
  477. it's based on .config/vcsh/repo.d and then the name of the repository
  478. which just puts it away and out the way of you actually seeing stuff
  479. but it follows the cross desktop specifications so if you move stuff around it will also follow
  480. Fake bare repositories are really...
  481. are messy to setup, and it's very easy to get them wrong
  482. that is also the reason why git normally disallows this kind of stuff
  483. because all of a sudden you have a lot of...
  484. context-dependency on when you do what
  485. just imagine you set the git work tree...
  486. $GIT_WORK_TREE, sorry
  487. and run random commands like git add, that's...
  488. kind of ok; if you git reset --hard you'll probably not be too happy
  489. if you check out the current version that's also quite bad
  490. and if you clean -f, yeah, you just threw away your home directory
  491. congratulations
  492. So, it's really risky to run with these variables set
  493. which is why I wrote vcsh to wrap around git
  494. to hide all this complexity and do quite some sanity checks to make sure everything's set up correctly
  495. again it allows you to have several repositories and it also manages really the complete lifecycle of all your repositories
  496. it's very easy to just create a new repository, you just init, just with git
  497. you add stuff, you commit it, and you define a remote and start pushing to this remote
  498. simple
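  For example (the repository name and remote URL are placeholders):

      vcsh init zsh                                            # create a new fake bare repository called "zsh"
      vcsh zsh add ~/.zshrc
      vcsh zsh commit -m "initial zsh config"
      vcsh zsh remote add origin git@example.org:vcsh-zsh.git
      vcsh zsh push -u origin master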
  499. This looks like git because it's very closely tied to git
  500. and it uses a lot of the power or of the syntax of git, for obvious reasons
  501. because... it's closely tied to git
  502. you can simply clone as you would with git
  503. you can simply show your files as you would with git
  504. you can rename the repository, which git can't do, but you don't have to
  505. you can show the status of all your files
  506. or just of one of your repositories
  507. or of all repositories
  508. you can pull in all your repositories at once, you can push all of your repositories at once
  509. with one single command
  510. so, if you are on the road, or you just want to sync up a new machine it's really quick, it's really easy
  511. There are three modes of dealing with your repositories
  512. default mode is the quickest to type
  513. you just say vcsh zsh commit whatever or any random git command
  514. but you cannot really run gitk
  515. you can do this by using the run mode, which is the second mode
  516. we simply ? here run is missing and here git is missing
  517. so you say simply vcsh run zsh git commit whatever
  518. and this is exactly the same command, it's literally the same command once it arrives at the shell level, so to speak
  519. here you can also run gitk, because...
  520. with this, you set up the whole environment for one single command to run with this context
  521. of the changed environment variables
  522. or you could even enter the repository, then you set all the variables
  523. and then you can just use normal git commands as you would normally
  524. this is the most powerful mode,
  525. but it's also the most likely to hurt you if you don't know what you're doing
  526. so I don't recommend working ? down this way.
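  Side by side, the three modes look roughly like this:

      vcsh zsh status            # default mode: shorthand for one git command
      vcsh run zsh git status    # run mode: literally the same command, and it also works for gitk etc.
      vcsh run zsh gitk --all
      vcsh enter zsh             # enter mode: spawns a shell with the environment set up
      git log --oneline          #   ...plain git works here until you exit
      exit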
  527. You should have your shell display prompt information about being in a vcsh repository or not
  528. simply because else you may forget that you entered something
  529. and then if you run those commands, there will be pain
  530. And one use case, which will be possible quite soon:
  531. we can just combine vcsh with git-annex to manage everything which is not configuration files in your own home directory
  532. ? basically two programs to sync everything about all of your home directory
  533. without having to do any extra work
  534. you can also use it to do really weird stuff
  535. for example you can back up the .git of a different repository with the help of vcsh
  536. so you can just go in, change objects or anything, break stuff and just replay whatever you're doing
  537. just to try and see how it breaks in interesting ways.
  538. You can just back up a working copy which is maintained by a different repository or a different system
  539. you can even put a whole repository, including the .git,
  540. into a different git repository
  541. or you can even put other VCSs like subversion or something into git, if you want to.
  542. Then there is mr.
  543. mr ties all those...
  544. hopefully by now you have about twenty new repositories
  545. because you have configuration, you have ikiwiki, you have everything
  546. so now you need something to synchronize all those repositories
  547. because doing it by hand is just a lot of work
  548. mr supports push, pull, commit operations for all the major known version control systems
  549. allowing you to have one single interface to operate on all your systems
  550. It's quite trivial to write support for new systems
  551. I think it took me about two hours to support vcsh natively
  552. so, that's really quick
  553. If you want to try, the stuff which I told you about...
  554. in the links later there will be the possibility to just clone a subrepository for vcsh
  555. which will then put up a suggested mr directory layout
  556. and you can just work from there
  557. This is the... alright, it's my suggested layout
  558. which basically...
  559. you just include everything in config.d you maintain...
  560. your available.d, by means of vcsh, so you simply sync around all your content between all the different computers
  561. and then you simply soft link from available to the actual config
  562. which is basically what Apache does with sites-enabled and sites-available
  563. or modules.available and modules.enabled
  564. which is really really powerful
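  A minimal ~/.mrconfig along those lines (paths and URLs are examples, not the layout shipped in the talk's repository):

      [.config/vcsh/repo.d/zsh.git]
      checkout = vcsh clone git@example.org:vcsh-zsh.git zsh

      [wiki]
      checkout = git clone ssh://user@example.org/~/wiki.git wiki

      # then: mr update / mr commit / mr push act on every repository at once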
  565. The last thing is not git-based, but zsh.
  566. It's a really powerful shell, you should consider using it
  567. it has very good tab completion for all the tools listed here, better than bash
  568. it has a right prompt, which will automatically disappear if it needs to
  569. which is very convenient to display not important but still useful information
  570. and it will automatically, if you tell it to, tell you about you being in a git repository or subversion repository or whatever
  571. by means of vcs_info
  572. which also means you'll be told that at the moment you are in a vcsh repository
  573. and you may kill your stuff if you do things wrong
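  The vcs_info part is a few lines of ~/.zshrc, something like this minimal sketch:

      autoload -Uz vcs_info
      precmd() { vcs_info }
      setopt prompt_subst
      zstyle ':vcs_info:*' enable git svn
      RPROMPT='${vcs_info_msg_0_}'      # shows the VCS and branch when inside a repository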
  574. it can mimic all the major shells
  575. and there's just too many reasons to list
  576. So... final pitch
  577. This is true: I've tried it earlier, I can demo it, I still have five minutes left
  578. it takes me less than five minutes to synchronize my complete, whole, digital life while on the road
  579. so if I'm at the airport and just want to update all my stuff, and push out all my stuff...
  580. it'll take a few minutes, but then I can hop on the airplane...
  581. and I'll know everything is fine, everything is up-to-date on my local machine
  582. on my laptop machine, I can continue working, and have a backup on my remote systems
  583. These are the websites
  584. The slides will be linked from penta, so you are more than welcome to look at these links later
  585. There are previous talks, which you can also look at, if you want to
  586. and that's pretty much it
  587. and if you have any more questions afterwards either catch me...
  588. or there is an IRC channel, and there is a mailing list
  589. ok, we can take a few questions, we have still a few minutes
  590. then if there are more questions ask Ritchie afterwards
  591. And while we are doing this just look here, because that's a complete sync of everything I have
  592. Just to make sure I understood this correctly,
  593. with git-annex the point is that the data is stored dispersed over different local destinations, so to speak
  594. but the metadata ? exists, ? complete git history
  595. so git is able to tell me, "well, this version at that destination was changed at that time and so on and so on"
  596. did I get this right or...
  597. git will be able to tell you about changes...
  598. ok, I don't have internet, sorry
  599. git will be able to tell you about changes in the filenames, or directory structure
  600. git-annex will be able to tell you about changes in the actual file content
  601. or in moving around the files
  602. but as one single unit, more or less, then yes...
  603. the answer is yes, but not quite, but yes
  604. yes, but ? all the things you asked about are in git, you know the previous location, all that stuff
  605. but in a separate branch which you should use git-annex to access, but you can do it by hand if you want to
  606. I'm not familiar with tracking branches,
  607. yet you mention that the workflow for your wife has a different view of the data than you
  608. with that workflow is it possible for your wife to upload photos that you will have in your view as well, or is it a oneway street?
  609. minor correction: tracking branches track a different repository,
  610. what I meant was rebasing branches, which rebase on top of a different branch
  611. which basically just keeps the patches always on top of the branch, no matter where the head moves to
  612. if she wanted to do that she would need to simply git checkout master
  613. do whatever she wanted to do, and then git checkout her own branch, and then she's...
  614. she is able to, but she would need to change into the master branch and then back
  615. microphone
  616. she never pushes her private branch? it always lives on her own machine?
  617. no, she does push it, but I don't display this view of the data
  618. because otherwise she wouldn't be able to synchronize this view between different computers
  619. I seem to have internet now, so let's just let this run in the background
  620. any more questions?
  621. no more questions?
  622. then we...
  623. ? more minutes for questions?
  624. ok, so thanks to Richard Hartmann, we will continue...