So that I can still play some music if I forget the USB drive, each of
the laptops has a subset of albums on it, depending on the size of their
respective hard drives. If I add a new album to the USB drive, then that change won’t
get written to either of the laptops when the drive is plugged in. Not entirely
satisfactory. I had tinkered around with
globbing, or with
having find(1)
scan deeper into the tree, or even a loop to check for the presence of directories in an array…
It just got too hard. My rudimentary scripting skills and the spectre of recursion, I am sorry to admit, conspired to undermine my resolve. So, rather than concede unconditional surrender, I asked for help. As is almost always the case in these situations, this proved to be a particularly wise move; the response I received was neither what I expected, nor was it anything I was even remotely familiar with: so in addition to an excellent solution (one far better suited to what I was trying to achieve), I learned something new.
The first comment on my question proved singularly insightful.
Care to use union mounts, for example via overlayfs?
A union mount, something I was until now blissfully unaware of, is, according to Wikipedia,
a mount that allows several filesystems to be mounted at one time, appearing to be one filesystem.
Union mounting has a long and storied history on Unix, beginning in 1993 with the Inheriting File System (IFS). The genealogy of these mounts has been well covered in this 2010 LWN article by Valerie Aurora. However, it is only in the current kernel, 3.18, that a union mount has been accepted into the kernel tree.
After reading the documentation for overlayfs, it seemed this was exactly what I was looking for. Essentially, an overlay mount would allow me to “merge" the underlying tree (the Music directory on the USB drive) with an “upper” one, $HOME/Music on the laptop, completely seamlessly.
Then whenever a lookup is requested in such a merged directory, the lookup is performed in each actual directory and the combined result is cached in the dentry belonging to the overlay filesystem.
It was just a matter of adapting my script to use overlayfs, which was trivial:
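(The heart of it is a single mount invocation. A minimal sketch of the idea; the paths here, and the workdir that the mainline overlay filesystem requires, are illustrative rather than lifted from my script.)

    # merge the music tree on the USB drive (the read-only lower layer) with
    # the laptop's local collection (the writable upper layer) at ~/Music
    lower=/media/Apollo/Music
    upper="$HOME/Music"
    work="$HOME/.music-workdir"   # must live on the same filesystem as the upper dir

    mkdir -p "$work"
    sudo mount -t overlay overlay \
        -o lowerdir="$lower",upperdir="$upper",workdir="$work" "$upper"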
And now, when I plug in the USB drive, the contents of the drive are merged with my local music directory, and I can access whichever album I feel inclined to listen to. I can also copy files across to the local machines, knowing that if I update the portable drive, I will no longer have to forgo listening to any newer additions by that artist (without manually intervening, anyway).
Overall, this is a lightweight union mount. There is neither a lot of functionality, nor complexity. As the commit note makes clear, this “simplifies the implementation and allows native performance in these cases.” Just note the warning about attempting to write to a mounted underlying filesystem, where the behaviour is described as “undefined”.
Creative Commons image, multilayered jello by Frank Farm.
What became apparent over the last couple of months, as I began to consciously make more regular backups, was that pruning the archives was a relatively tedious business. Even though Tarsnap de-duplicates data, there isn’t much mileage in keeping older archives around because, if you do have to retrieve a file, you don’t want to have to search through a large number of them to find it; so there is a balance between making use of Tarsnap’s efficient functionality, and not creating a rod for your back if your use case is occasionally retrieving single—or small groups of—files, rather than large dumps.
I have settled on keeping five to seven archives, depending on the frequency of my backups, which is somewhere around two to three times a week. Culling these by hand was a chore, so I wrote a simple script to make it less onerous. Essentially, it writes a list of all the archives to a tmpfile, runs sort(1) to order them from oldest to newest, and then deletes the oldest ones, keeping only as many as the retention number is set to.
The bulk of the code is simple enough:
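(A sketch of the pruning logic only; the retention count and the assumption that archive names sort chronologically are mine, not the script's.)

    keep=7
    tmpfile=$(mktemp)

    # one archive name per line; names like "host-2014-12-01" sort oldest to newest
    tarsnap --list-archives | sort > "$tmpfile"

    prune=$(( $(wc -l < "$tmpfile") - keep ))

    if (( prune > 0 )); then
        # review the archives marked for deletion, then blow them away
        head -n "$prune" "$tmpfile"
        while read -r archive; do
            tarsnap -d -f "$archive"
        done < <(head -n "$prune" "$tmpfile")
    fi
    rm -f "$tmpfile"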
You can see the rest of the script in my bitbucket repo. It even comes with colour.
Once every couple of weeks, I run the script, review the archives marked for deletion and then blow them away. Easy. If you aren’t using Tarsnap, you really should check it out; it is an excellent service and—for the almost ridiculously small investment—you get rock solid, encrypted peace of mind. Why would you not do that?
This is the one hundredth post on this blog: a milestone that I never envisaged getting anywhere near. Looking back through the posts, nearly 60,000 words worth, there are a couple there that continue to draw traffic and are obviously seen at some level as helpful. There are also quite a few that qualify as “filler”, but blogging is a discipline like any other and sometimes you just have to push something up to keep the rhythm going. In any event, this is a roundabout way of saying that, for a variety of reasons both personal and professional, I am no longer able to fulfil my own expectations of regularly pushing these posts out.
I will endeavour, from time to time when I find something that I genuinely think is worth sharing, to write about it, but I can’t see that happening all that often. I’d like to thank all the people that have read these posts, especially those of you that have commented. With every post, I always looked forward to people telling me where I got something wrong or how I could have approached a problem differently or more effectively; I learned a lot from these pointers and I am grateful to the people that were generous enough to share them.
Now that I have a Raspberry Pi, I am naturally much more interested in packages that can be built for the ARMv6 architecture, especially those that are available in the AUR. It is worth a brief digression to note that Arch Linux ARM is an entirely separate distribution and, while it shares a great deal with Arch, support for each is restricted to their respective communities. It is with this consideration in mind that I had begun to think about multi-arch support in PKGBUILDs, particularly in the packages that I maintain in the AUR.
I have previously posted about using Syncthing across my network, including on a Pi as one of the nodes. As the Syncthing developer pushes out a release at least weekly, I have been maintaining my own PKGBUILD and, after Syncthing was pulled into [community], I uploaded it to the AUR as syncthing-bin.
Syncthing is a cross platform application so it runs on a wide range of
architectures, including ARM (both v6 and v7). Initially, when I wrote the
PKGBUILD, I would run updpkgsums on my x86_64 machine, build the package and
then, on the Pi, have to regenerate the integrity checks. This was manageable
enough for my own use across two architectures, but wasn’t really going to
work for people using other architectures (especially if they are using
AUR helpers).
Naturally enough, this started me thinking about how I could more effectively manage the process of updating the PKGBUILD for each new release, and have it work across the four architectures—without having to manually copy and paste or anything similarly tedious. Managing multiple architectures in the PKGBUILD itself is not particularly problematic; a case statement is sufficient:
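(Something along these lines; the variable names, the release naming and the placeholder sums are illustrative, not the actual PKGBUILD.)

    # pick the right upstream tarball and checksum for the architecture being built
    case "$CARCH" in
        i686)   _target="linux-386"
                sha1sums=('SKIP') ;;   # the real sums replace these placeholders
        x86_64) _target="linux-amd64"
                sha1sums=('SKIP') ;;
        armv6h) _target="linux-armv6"
                sha1sums=('SKIP') ;;
        armv7h) _target="linux-armv7"
                sha1sums=('SKIP') ;;
    esac
    source=("${url}/releases/download/v${pkgver}/syncthing-${_target}-v${pkgver}.tar.gz")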
The real challenge, for me, was to be able to script the replacement of each of the respective sha1sums, and then to update the PKGBUILD with the new arrays. Each release of Syncthing is accompanied by a text file containing all of the sha1sums, each on its own line in a conveniently ordered format, like so:
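(The layout, with obvious placeholders standing in for the real sums and version:)

    <sha1sum>  syncthing-linux-386-v0.x.y.tar.gz
    <sha1sum>  syncthing-linux-amd64-v0.x.y.tar.gz
    <sha1sum>  syncthing-linux-armv6-v0.x.y.tar.gz
    <sha1sum>  syncthing-linux-armv7-v0.x.y.tar.gz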
This seemed a perfect job for Awk or, more particularly, gawk’s switch statement, and an admittedly rather convoluted printf incantation.
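A compressed sketch of the approach (the field layout is as shown above; the real script's output formatting was rather more involved):

    gawk '{
        # $1 is the checksum, $2 the release file name
        switch ($2) {
            case /linux-386/:   sums["i686"]   = $1; break
            case /linux-amd64/: sums["x86_64"] = $1; break
            case /linux-armv6/: sums["armv6h"] = $1; break
            case /linux-armv7/: sums["armv7h"] = $1; break
        }
    }
    END {
        # print a replacement case statement ready to drop into the PKGBUILD
        printf "case \"$CARCH\" in\n"
        for (arch in sums)
            printf "    %s) sha1sums=(\047%s\047) ;;\n", arch, sums[arch]
        printf "esac\n"
    }' sha1sum.txt.asc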
The remaining step was to update the PKGBUILD with the new sha1sums. Fortunately,
Dave Reisner had already written the code
for this in his updpkgsums
utility; I had only to adapt it slightly:
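(A loose approximation of that splice rather than a copy of the original code: swap the old case block in the PKGBUILD for the newly generated one, held here in an assumed $newsums variable.)

    # replace everything between the existing "case" and "esac" with the new block
    awk -v newsums="$newsums" '
        /^case /        { print newsums; skip = 1 }
        /^esac/ && skip { skip = 0; next }
        !skip           { print }
    ' PKGBUILD > PKGBUILD.new && mv PKGBUILD.new PKGBUILD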
Combining these two tasks means that I have a script that, when run, will download
the current Syncthing release’s sha1sum.txt.asc
file, extract the relevant sums into the replacement case statement and then
write it into the PKGBUILD. I can then run makepkg -ci && mkaurball
, upload
the new tarball to the AUR and the two other people that are using the PKGBUILD
can download it and not have to generate new sums before installing their
shiny, new version of Syncthing. You can see the full version of the script in
my bitbucket repo.
Creative Commons image of the Mosque at Agra, by yours truly.
One that has proved to be of some utility for many years now is a simple wrapper script I wrote to help manage my finances. Like many useful scripts, it was written quickly and has been in constant use ever since, becoming almost transparent, so ingrained is it in my workflow.
The script allows me to manage the lag between when a company emails me an invoice and when the payment is actually due. I find that companies will typically email their invoices to me some weeks in advance, whereupon I will make a mental note and then, unsurprisingly, promptly forget all about it, thereby opening myself up to penalties for late payment. It didn’t take me long (well, in my defence, a lot less time than it took for invoices to become digital) to realise that there was a better way™: a script.
The at command is purpose built for running aperiodic commands at a later time (whereas cron is for periodic tasks). So, using at(1), once I receive an invoice, I can set a reminder closer to the final payment window, thereby avoiding both the late payment penalty—and the loss of interest were I to pay it on receipt. I just needed a script to make it painless to achieve.
The main function of the script is pretty self-explanatory:
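(Stripped to its essentials, the idea looks something like this; the prompts, the mail command and the address are placeholders, and the real script does rather more checking.)

    remind() {
        local desc amount when due
        read -rp "Invoice from:   " desc
        read -rp "Amount due:     " amount
        read -rp "Remind me when: " when

        # let date(1) make sense of whatever format was typed in
        due=$(date -d "$when" +"%Y%m%d%H%M") || return 1

        # at(1) reads the command to run from stdin
        echo "echo 'Invoice due: $desc ($amount)' | mail -s 'Invoice reminder' me@example.com" \
            | at -t "$due"
    }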
Now, when an invoice arrives, I open it, fire up a scratchpad and follow the prompts. A couple of weeks later, the reminder email arrives and I log in to my bank account and dispatch payment. You could, of course, have the script trigger some other form of notification, but an email works well for me.
The rest of the script is similarly basic; just some options for listing and reading any queued jobs and some more rudimentary checking. The full script is in my bitbucket repo2.
Not more than a couple of hours after posting this, Florian Pritz pinged me in #archlinux with some great suggestions for improving the script. I particularly liked relying on date(1) to handle the input format for the time and date values. He also suggested a readline wrapper called (appropriately enough) rlwrap and a tmpfile to better manage input validation. You can see his full diff of changes. In the end, I adopted the date suggestion but passed on rlwrap. Thanks for the great pointers, Florian.
myterm=$(echo $TERM)
which I would hope I copied blindly from somewhere
else, but accept full responsibility for nonetheless.
Creative Commons image by Adelle and Justin on Flickr.
Essentially a tree of all of the PKGBUILDs (and other necessary files) for the packages in the official repositories, the ABS is the means by which you can easily acquire, compile and install any of the packages on your system:
ABS is made up of a directory tree (the ABS tree) residing under /var/abs. This tree contains many subdirectories, each within a category and each named by their respective package. This tree represents (but does not contain) all official Arch software, retrievable through the SVN system.
I have been using ABS since I started running Arch and it has worked well. I wrote a simple script to check for and download updates when required to help simplify the process and have been generally content with that approach. That isn’t to say that elements of this process couldn’t be improved. One of the small niggles is that the ABS only syncs once a day so there is almost always—for me down here in .nz, anyway—at least a full day’s wait between the package hitting the local mirror and the updated ABS version arriving. The other issue is that you download and sync the entire tree…
That all changed when, at the start of this month, one of the Arch developers, Dave Reisner, opened a thread on the Arch boards announcing asp, the Arch Source Package management tool, a git-based alternative to abs.
Basically a 200-line bash script, asp
is an improvement over abs
insofar as
you get the updated PKGBUILDs immediately; you can choose between just pulling
the necessary source files (as per abs
), or checking out the package branch
so that you can create your own development branch and, for example, keep your
patch set in git as well.
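In practice that looks something like the following (the package is just an example):

    asp update              # refresh the local copy of the package metadata
    asp export syncthing    # grab only the files needed to build, as abs would
    asp checkout syncthing  # check out the full git branch for the package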
You can elect to locate the local git repository in a directory of your choosing by exporting ASPROOT; there are Tab completion scripts for bash and zsh and a succinct man page. Overall, for a utility that is only
three weeks old, asp
is already fulfilling the function of a drop-in
replacement; a faster, more flexible tool for building Arch packages from
source.
With thy sharp teeth this knot intrinsicate
Of life at once untie…
Creative Commons image, Red Lego Brick by Brian Dill on Flickr.
So, I was off down that path… Called simply pass, it is a 600-line bash script that uses GPG encryption and some other standard tools and scripts to organize and manage your password files. I had never heard of it but, based on Cayetano and Bigby’s recommendations, I thought it would be worth a look.
One of the reasons that I had not come across it before was that, after using KeePassX for so long, I had assumed that I would need to continue to use that database format; so when I was looking for an alternative, KeePassC was a natural fit (and a fine application). The question of migrating my data hadn’t even occurred to me…
It turns out that the migration process to pass is extraordinarily well catered for: there are 10 migration scripts for a range of different formats, including keepassx2pass.py, which takes the exported XML KeePassX database file and creates your pass files, ordered by the schema you had used in that application. You just need to make sure you amend the shebang to python2 before running the script, otherwise it will fail with an unhelpful error message.
After using KeePassX to dump my database, before I could use the script to
create my pass directories, I had to export the PASSWORD_STORE_DIR
environment variable to place the top level pass directory in an alternate
location. This way, instead of initializing a git repository, I could have the
store synced by
Syncthing.
The git idea is a good one, but I’m not particularly interested in version
controlling these directories, and I have no intention, encrypted or not, of
pushing them to someone else’s server.
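Put together, the migration amounted to a handful of commands; roughly the following, with the store location, key id and export file name all being placeholders:

    # keep the store in a Syncthing-managed directory rather than ~/.password-store
    export PASSWORD_STORE_DIR="$HOME/Sync/pass"

    pass init "GPG-KEY-ID"                        # initialise the store against your key
    python2 keepassx2pass.py keepassx-export.xml  # import the KeePassX XML dump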
That constitutes the basic setup. It took a grand total of five minutes. The real strength of pass, however, is in its integration with two other fantastic tools: keychain and dmenu. Together with pass, these constitute a secure, convenient and effortless workflow for managing your passwords. With your GPG key loaded into keychain, you are only prompted for your master passphrase once and, with Chris Down’s excellent passmenu script, you can use dmenu to sort through your password files, Tab complete the one you are looking for and have it copied to your clipboard with a couple of keystrokes.
After using Chris’ script for a couple of days, I made a few alterations to suit my setup: removed the xdotool stuff (as I don’t need it), included dmenu formatting options to match my dwm statusbar and, most significantly, changed the way that the files are printed in dmenu to remove the visual clutter of the parent directories, i.e., print archwiki as opposed to internet/archwiki:
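(The guts of the change look something like this sketch; it is neither Chris' script nor my full version, and it assumes the basenames are unique and that xsel is on hand.)

    prefix=${PASSWORD_STORE_DIR:-$HOME/.password-store}

    # map each password file's basename back to its full path
    declare -A lookup
    while IFS= read -r entry; do
        lookup[${entry##*/}]=$entry
    done < <(find "$prefix" -name '*.gpg' -printf '%P\n' | sed 's/\.gpg$//')

    # present only the basenames in dmenu (add whatever -fn/-nb/-nf formatting suits)
    choice=$(printf '%s\n' "${!lookup[@]}" | sort | dmenu -i) || exit

    # copy the selected password, ready to paste
    pass show -c "${lookup[$choice]}"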
It does introduce some more complexity into the script, but it makes it a lot easier for me to identify the desired password when reading it in dmenu.
Now, when I need to enter a password, I hit my dmenu hotkey, type dpass, Enter, then the first couple of letters of the desired password filename, Tab, Enter, and the password is loaded and ready to go. There are also completion scripts for the main shells, and even one for fish, for the iconoclasts…
While I have no complaints at all with KeePassC, I have found this pass setup to be a lot less intrusive to use, it seamlessly integrates with my workflow, and the passwords themselves are much simpler to manage. Short of someone else popping up in the comments with another compelling proposition, I’m content with the way this has worked out. Many thanks to Cayetano Santos and Bigby James for the push.
Finally, a command line shell for the 90s… Indeed.
Creative Commons image by Intel Free Press on Flickr.
Fortunately, this is a solved problem. There are a number of password managers available, both as desktop clients and cloud services. Personally, I find the idea of storing my passwords in the cloud has all the fascination of bungee jumping; it’s apparently mostly safe, but that can be cold comfort… The first application that I used, and used happily for quite a long time, was KeePassX.
Around the end of 2012, I started experimenting with KeePassC, a curses-based password manager that is completely compatible with KeePassX and has very little in the way of dependencies. I have been using it solidly on my home and work laptops ever since and, after recently uninstalling Skype on my desktop, have switched over to it completely1. I’m still not entirely clear why I haven’t written about it previously.
Written in Python 3, KeePassC is entirely keyboard driven (naturally enough, you can use Vim keybinds) and integrates seamlessly with your browser and clipboard. My experience of the software over the last eighteen-odd months is that it has been incredibly stable and the developer, Karsten-Kai, has been exceptionally responsive and helpful in the forum thread.
Like most good software, there is not a lot to it. You pull up the login page, switch to a terminal and run keepassc, enter your passphrase (I use a Yubikey for this and it works wonderfully), search for your desired entry with /, then hit c to copy the password to your clipboard before switching back to the browser, and you are in.
KeePassC also has a set of simple command line options; run keepassc -h to see them. Additionally, you can set up KeePassC as a server; I haven’t experimented with this as I sync my database. The only additional functionality that the X application offers, as far as I can tell, is the auto-filling of your username and password fields bound to a keybind; undoubtedly, this is a very handy feature, but I haven’t really missed it at all.
As I said, I store the database in a directory synced between all my machines (using Syncthing), so I have access to an up-to-date version of my credentials everywhere. Well, almost everywhere. I don’t use the Android client because the mobile web is just such a fundamentally insecure environment; I see that as just being sensible, rather than any sort of inconvenience.
Creative Commons image on Flickr by xserv.
Of course, if you look back at the Installation Guide immediately before the
move to the new scripts, for example the version that shipped with the last AIF
in
October, 2011, it is pretty evident that
the current approach
is a lot simpler. Sure, there is no curses GUI to step you through each part of the install but the
introduction of pacstrap
and arch-chroot
meant that you no longer need those
prompts.
There is also the added advantage that these scripts are useful outside the installation process itself; they can be used for system maintenance and, in the rare event that your recent bout of experimentation at 2am after a few drinks doesn’t pan out the way you anticipated, repair.
One of the other responses to the new approach, however, has been the steady proliferation of “helpful” install scripts. These are essentially bash scripts that emulate the behaviour of the AIF and walk people through an automated install of their system. Well, not really their system, more accurately a system. So you run one of these scripts, answer a few prompts and then, when you reboot, you have a brand spanking new Arch Linux install running KDE with the full panoply of software and, in a few special cases, some customized dot files to “enhance” your experience.
From a certain perspective, I can see how these things appeal. “I wonder if I could script an entire install, from the partitioning right through to the desktop environment?” That sounds like a fun project, right? Where it all comes unstuck, unfortunately, is when the corollary thought appears that suggests sharing it with the wider community would be a good idea. It is at this point that a rigorous bout of self-examination about the outcomes that you are seeking and your base motivations for this act of selflessness are called for.
Whatever those motivations are, whether driven by altruism or the naked desire for fame and fortune that have—from time to time—alighted on these projects when they appear on /r/archlinux and the adoring throngs bestow their favours in equal measures of upvotes and bitcoin, they are grotesquely misplaced. No good comes from these things, I tell you; none.
Why not? Because, in the guise of being helpful, you are effectively depriving people of the single most important part of using Arch: building it themselves. It’s like inviting someone to a restaurant for an amazing haute cuisine meal, sitting them down across the table from you and then them watching as the staff bring out a mouth-watering array of dishes, each of which you ostentatiously savour before vomiting it all back into their mouth.
Now, I am sure there is a small minority (well, at least from my own sheltered experience I imagine it is small) who would relish this scenario, but for most it would qualify as a pretty disappointing date.
Then, after the technicolour table d'hôte, there is the matter of the clean up. Recently, we had someone show up on the Arch boards who had “installed Arch” but who did not understand how to edit a text file; literally had no clue how to open a file like /etc/fstab, make some changes and then save it. This is beyond stupid; it is a drain on the goodwill of the community that has to deal with this ineptitude, it is unfair on people to put them in a position where they feel they are at the mercy of their technology, rather than in control of it, and it does nothing to advance the interests of Arch.
If you want to write something like this to improve your scripting skills, by all means proceed. If you want to contribute to Arch, then pick a project to contribute to, some bugs to squash, wiki edits, whatever; just don’t publish another one of these idiotic scripts, because you aren’t doing anyone any favours, quite the contrary.
Flickr Creative Commons image, Measuring spoons by Theen Moy.
Setting it up was straightforward enough (the wiki is typically clear and thorough); the only bottleneck in the process was waiting for the Pi’s tiny chip to chug through the creation of public keys. Once it was done, and I had tested that it was indeed working as intended, a more vexing issue presented itself: the service wouldn’t come up after rebooting. Not a deal breaker, as I could always just SSH in and manually bring the server up, but that sort of defeats the purpose of having the thing running reliably.
The reason that it fails on boot is that, without a hardware clock, the Pi resets its clock to the beginning of the UNIX epoch until the NTP daemon can start, which in turn depends upon the network being up. So, after rebooting, the journal would show the VPN server as having failed because the date was nearly half a century ago.
There are a variety of fixes floating around, the most amusing being a wrapper for init. Suffice to say, this wasn’t an option for me. Looking at the problem logically, it occurred to me that the issue was actually a trivial one: the correct sequencing of different services post boot. Isn’t this, I asked myself, one of the issues systemd was supposed to address?
I just had to ensure that the network came up as quickly as possible after boot, that the time
was reset correctly once there was a viable connection, and that the openvpn.service
waited for
those things to happen before launching.
I have fitted the Pi with a wireless dongle, so the first step was to ditch netctl (the default network manager on the ARM image) and replace it with systemd-networkd. This is the point at which all the wingnuts that think that systemd is some sort of conspiracy to overthrow the old UNIX gods and replace them with false idols in chapeau rouge start foaming at their retrognathic mouths about “viral takeovers” and—seriously what fucking planet do these imbeciles hail from?—“an abhorrent and violent slap in the face to the Unix philosophy.”1
For those of us that accept the theory of evolution, this technology is both effective and non-threatening; in fact, it is quite an improvement over its by now ageing predecessor. So, a few minutes later, /etc/systemd/network/wlan0.network and /etc/wpa_supplicant/wpa_supplicant-wlan0.conf had pretty much written themselves, and then it was just a matter of enabling the eponymous services as well as systemd-resolved.service. Reboot and the network is up seemingly instantly.
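For the curious, the end state is only a few lines of configuration; a sketch (the SSID, passphrase and exact service names are illustrative):

    # /etc/systemd/network/wlan0.network: just ask for DHCP on the wireless interface
    cat > /etc/systemd/network/wlan0.network <<'EOF'
    [Match]
    Name=wlan0

    [Network]
    DHCP=yes
    EOF

    # credentials for wpa_supplicant, then enable the lot and reboot
    wpa_passphrase "MySSID" "not-my-passphrase" > /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
    systemctl enable systemd-networkd wpa_supplicant@wlan0 systemd-resolved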
Compounding my heresy, I then switched out NTP for systemd-timesyncd and the Pi’s clock was now reset with the same alacrity. The final piece, ensuring that the openvpn service waited for this to happen, was to add two lines to the service file:
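(A sketch of the idea: order the unit after time synchronisation. The exact directives in my service file may have differed slightly from these.)

    [Unit]
    Wants=time-sync.target
    After=time-sync.target network-online.target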
That is all there is to it. Reboot and the network comes up, the clock is reset and then the OpenVPN server starts. Like magic. The sort of heathen magic that threatens to sap and impurify all of our precious bodily fluids.
To that end, I have been spending more time working with Vim and trying to improve my understanding of its capabilities, the environment in which I use it, and how I can optimise both of those to not necessarily just be more productive, but to minimise friction in my workflows.
A large part of my job involves writing and, happily, so does a good deal of what constitutes my leisure activity. Whether it is emails written in Mutt, these blog posts, longer documents written in LaTeX, or just roboposting to forums; it is Vim all the way down. So I have spent some effort setting up Vim to make that experience as comfortable as possible.
The first step, and one I took several years ago now, was to write custom colour schemes, depending on whether I am in the console or X. Several weeks ago, I came across a Vim plugin called DistractFree, which is described as “enabling a distraction free writing mode.” I had always been slightly (well, perhaps scathingly would be more accurate) cynical when reading about these sorts of things when they first started to appear, but—after playing with two or three of them—this one has really grown on me (see the screenshot on the right).
I adapted my colour scheme for it, added a line to my ~/.vimrc to enable it for specific document types, and have not looked back.
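(It was a one-liner along these lines; the filetypes, and the plugin's toggle command name, are my best recollection of its documentation rather than the actual entry.)

    " turn on distraction-free mode for prose-like filetypes
    autocmd FileType markdown,mail,tex DistractFreeToggle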
The final piece was to set up spell checking to work properly. As well as using English, I wanted to share my personal whitelist of words between all of my machines, so I define a location in ~/.vimrc as well:
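(Give or take, these are the standard spell settings; the language and the synced path are illustrative.)

    set spell spelllang=en_nz
    set spellfile=$HOME/Sync/vim/spell/en.utf-8.add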
Then it is just a matter of ensuring that misspelled or other flagged words (like those that should be capitalised) are highlighted correctly. This required a couple of lines in my colour schemes:
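(In essence, clearing and redefining the relevant highlight groups; the colours here are illustrative rather than my scheme's.)

    " misspelled words
    hi clear SpellBad
    hi SpellBad   cterm=underline ctermfg=167
    " words that should be capitalised
    hi clear SpellCap
    hi SpellCap   cterm=underline ctermfg=110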
The most significant change, however, was that I recently purchased a hard copy of Drew Neil’s excellent Practical Vim. I have long been a fan of Drew’s excellent podcast, Vimcasts, and the book is every bit as impressive as the collection of those episodes, and more. Reading it on my commute over the last week, I have been constantly amazed at the power and flexibility of this editor and Drew’s ability to clearly articulate how to work economically and efficiently with it. I’m convinced this is an invaluable resource for anyone that currently uses Vim or wants to learn.
I expect that, over time, as I become more proficient with Vim, that I will adapt some of these settings, but for the time being they feel very comfortable and allow me to focus on hacking words together (and to do the occasional bit of scripting on the side)…
Image is from the cover of Drew Neil’s Practical Vim, by Ben Cormack.
By way of a digression, it occurred to me at some point while I was wrestling with setting this up that, over the last seven or so years, much of the “entertainment” provided by corporate content distributors has been in the form of encouraging me to spend hundreds? thousands? of hours researching and implementing ways to circumvent their litany of failed and defective technological restrictions: region codes, DRM and the like. It is worth noting that, in the vast majority of cases, I was just seeking access to content that I already owned (in another format), or was prepared to pay for.
My move to GNU/Linux in 2007 was in large part motivated by the awful realisation that the music I had bought in iTunes was stuck in there. The combined intellectual effort globally expended trying to legitimately route around broken copyright law would have comfortably powered another golden age of the sciences; it’s not entirely implausible to think that the only reason we still have to deal with cancer is the malignant legacy of Sonny Bono2.
Now, back to our regular programming… One of my approaches to get around this sort of economic and policy rigor mortis has been to use a basic script to create a proxy tunnel to my home server. It assumes that you have public key authentication set up and your passphrase loaded in keychain, or something similar.
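(The original was essentially this idea; the host, port and browser flag here are illustrative.)

    # open a dynamic (SOCKS) forward to the home server and point a browser at it
    host="home.example.org"
    port=8123

    # -f: go to the background, -N: don't run a remote command
    ssh -fND "$port" "$host"

    chromium --proxy-server="socks5://127.0.0.1:${port}" &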
Over the last couple of weeks, though, while I have been setting up and playing with Syncthing, I found this script wanting. With six nodes and, depending on whether I was on the LAN or not, as many as four of those hosts only accessible via SSH, having the ability to quickly and painlessly open a browser on any one of the nodes without having to edit the script suddenly seemed like quite a good idea.
Accordingly, I went to work on the script, including a test to determine if I was on my home network, and passing the name of the desired host as an argument. With this approach, I simply type tunnel $host and chromium opens tunneled to that host, where I can then happily open Hulu… er, the Syncthing GUI.
The updated script is posted as
a gist,
and as you can see, still needs some work to make it a little more generic.
You will need, for example, to hand edit in the hosts and ports in
get_host()
. It is also the first time I have played with
named pipes
and I am not convinced that my use of mkfifo
here is either the correct
approach or implementation; but it works. Comments enlightening me would
be gratefully received.
Flickr Creative Commons image, The Tunnel, by Lawrence Whitmore.
Due to a combination of inertia (it was installed on all my devices and working well) and the lack of an alternative that didn’t involve installing and running a LAMP stack just to sync my files, I have persisted with it; even writing a simple Awk script to interact with the API. For the last twelve months I have been pretty happy with it.
In addition to the licensing choice, there are some other small issues that I have discovered: the Android App chews through your battery if it is left on (which sort of defeats the purpose, but I learned to live with it), the syncing can take a little while to pick up changes, and nodes that don’t see much action may flake out completely after a while.
Then, a few weeks ago I read about clearskies, an open source implementation of this concept. Sadly, it is still a proof of concept. This discovery, however, led to an extremely fortuitous exchange on Twitter with cayetano santos who pointed me at Syncthing. Described thus (cue background music):
Syncthing replaces Dropbox and BitTorrent Sync with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, if it is shared with some third party and how it's transmitted over the Internet.
Written in Go, Syncthing is still in early development, but judging by the release history, it is moving ahead quickly. The developer, Jakob Borg, is obviously both talented and productive. After using it for the last couple of weeks, I have uninstalled BitTorrent Sync from all my machines and have Syncthing set up and running nicely. It is simple, elegant and works wonderfully; for what it does.
Which is to say that it is not all of the things that either of its predecessors are: it isn’t intended to distribute files, or to host images or simple websites. It doesn’t include versioning or have a built-in editor. It is for securely syncing your files independent of any third parties. And there is no Android app, so your battery is safe1.
Setup is through a tidy, Bootstrap-built web interface, by default running on 127.0.0.1:8080. I subsequently found that, once I understood how the configuration worked, editing the config file at $XDG_CONFIG_HOME/syncthing/config.xml was much simpler. When setting up, you will also do a lot of stopping and starting of syncthing; if you use the service file that comes with the AUR package, that won’t work so well from the web interface… Run it from the command line with one of the STTRACE variables to debug and then, once it is running to your liking, hand it off to systemd.
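That debugging loop is roughly the following (the trace facet and the exact service name shipped by the package are assumptions, so check the current documentation):

    # run in the foreground with one debug facet enabled, and watch the output...
    STTRACE=net syncthing

    # ...then, once it behaves, let systemd manage it
    systemctl --user enable syncthing.service
    systemctl --user start syncthing.service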
With a reasonably tricky cluster setup (six machines; two outside the LAN sharing half a dozen repositories with three different topologies), there is a fair bit of trial and error involved getting everything set up and working correctly. But when you do, it is much faster to pick up changes than either of the other two and there is none of the risk around privacy and security presented by Dropbox and—to a (much, I would hazard to guess) lesser extent—BitTorrent Sync.
I have also found that it is much quicker for all of the nodes to connect if, for the LAN connections in your cluster, you specify the IP addresses and port and disable global discovery for those nodes: this speeds up the discovery time when the nodes are started. Nodes still, from time to time, flake out, particularly if one of them is rebooted. Restarting the offending node is usually sufficient to bring the cluster back into sync.
That isn’t to say that there aren’t any issues with it. The documentation is in
need of some work; your best bet at the moment is a combination of the rather
terse --help
message and reading the source. There are also some hidden gems
in the Github commits; for example, details on using
.stignore files, a very handy feature, are buried in
an earlier
commit to README.md that didn’t survive the move of the documentation to the
forum.
The only other significant issue I have encountered is that getting all of the nodes across the cluster to connect to each other at the same time seems to be inexplicably difficult. It may just be my setup but, more often than not, nodes (on the LAN) will time out while trying to connect to at least one of their peers. This isn’t a huge issue as you only really need one chained connection to bring all the data up to date, but it does cause something of a minor OCD flare-up.
Given the early stages of the project, though, and the fact that it is open source, these anomalies are trivial. Overall, Syncthing is a delight. It does exactly what I need it to do, and it does it securely. Given the promise it is showing this early, I can only imagine it getting a lot better over time as more bugs are squashed and features rolled out.
I don’t spend much time on SO, mostly because I am not a programmer and so I don’t have much to contribute there, but also because I am active on Unix & Linux, one of the clones that has emerged out of the metastasizing StackExchange empire. I do, however, subscribe to quite a few of the RSS feeds for question tags on SO: things like #awk, #bash, and other topics that are of interest to me. Given the amount of noise in these feeds, in the form of redundancy (i.e., questions that have been asked in one form or another many times before) or just a signal failure to read any documentation, it is not surprising that the people responsible for this degradation see the site as negative, or unfriendly, or elitist. This is perfectly natural: it is the community attempting to defend itself from help vampires.
What was also quite predictable, given StackExchange’s response to this “problem” which was the 2012 Summer of Love campaign where they wanted to make SO a more friendly place, was that this would only exacerbate the issue. My experience of the site over that time is that the signal to noise has not improved at all, quite the contrary. I think that setting an expectation that the site would be more welcoming has two perverse consequences: first, it validates the perceptions of the newcomers about purported hostility, and in doing so signals to the existing community that built the value of the site that their culture needs remediation. Secondly, it signals more tolerance for behaviours that add no value to the community; so it is hardly surprising to see those behaviours proliferate.
From time to time the Arch community faces exactly the same criticisms: we are unfriendly, or elitist. I neither agree with these types of comments, nor am I particularly perturbed by them. Arch is a small community of volunteers; in order to create an environment for people who are willing and able to actively contribute, I think it is important to ensure that there are clearly articulated standards and that they are maintained by the community at large.
I don’t see any scope for increasing the community’s tolerance for vampirism or the sort of self-entitlement that has a sad habit of appearing from time-to-time:
As for the Arch team and their lacking resources, let's take your point at face value. Now, we've known what the problem is for almost four hours and no fix has been issued.
There is some sort of “problem” that “we” have known about for four hours and yet, inconceivably4, nothing has been done about it?
There is no bug report on the Arch tracker so I am not sure why the collective pronoun is warranted here; that quibble aside, in what universe is this sort of petulance considered acceptable? The only reasonable response to this sort of whining is to close the thread and ban the infractor and their progeny, who will no doubt suffer from the same sort of genetic deficiencies, in perpetuam…
Clearly articulating, and enforcing, minimum standards of behaviour doesn’t make your community hostile or unfriendly; it establishes boundaries for people that supports respectful collaboration and the effective ability of that community to self-moderate. People new to the community who take the time to read the guidelines and observe how things work will have no problem adapting5. It may not make you popular, but it will make a significant contribution to the sustainability of the community and the levels of engagement of those that do contribute and wish to continue doing so.
Creative Commons photo on Flickr by Chris Lester
As Christian intimated that this change would happen in the very near future, and “js” had commented on my last post to the effect that it was working well, I thought I should take a look for myself.
There are packages in the AUR for vdirsyncer-git and python2-argvard, so you will just need to grab the khal branch that uses vdirsyncer as its syncing engine. I have thrown up a PKGBUILD gist, or—as we are only talking about a couple of simple scripts—you could install the complete set using pip, the Python package manager; in which case it would be straightforward:
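(Something like the following; the exact package names and branches as they stood at the time may differ.)

    pip install --user vdirsyncer
    pip install --user khal
    # (or pip2, depending on which Python the packages targeted at the time)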
…and then remember to make sure that $HOME/.local/bin is included in your $PATH.
I wanted to have vdirsyncer manage two of my calendars, my CalDav work calendar and a simple iCalendar with all of the New Zealand public holidays. Configuring vdirsyncer to successfully do this took me a lot longer than I would like to admit: a combination of my ineptitude, a bug and a broken schema in the original holidays.ics that I wanted to use.
The collections variable in the config file merits a mention in this regard. If you choose to use it, be aware that your URL will have this value appended to it, which may throw a 404. Using the DEBUG verbosity level will identify this issue if you are struck by it.
Eventually, with the help of both Christian and Markus, the vdirsyncer developer, I got it set up and running smoothly. I then just had to create a cron job to sync my work calendar every two hours and it was done.
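That cron job is a one-liner; something along these lines (the schedule and binary path are illustrative):

    # sync the work calendar every two hours
    0 */2 * * * /usr/bin/vdirsyncer sync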
As you would expect with such a simple tool, there is not a lot to say, or do, with vdirsyncer. It runs on a similar model to OfflineIMAP, in that it synchronises a remote and local repository. There are a couple of nice touches: it will read your credentials from $HOME/.netrc so you don’t have to worry about sensitive information in plain text in yet another file, and there is a VDIRSYNCER_CONFIG variable, so you can place the config file wherever it suits.
Elevator, a Creative Commons image on Flickr by Mykl Roventine.
Day after day, I receive a lot of meeting invitations and, when these show up in my inbox, they are, for all intents and purposes, unintelligible. Yes, with careful scrutiny you can decipher the iCalendar files, but doing so is more likely to induce a seizure than a punctual appearance at an important meeting. To get around this, I had been using a basic Awk script that would parse the most important parts of the message and print them out. This was working well enough until I started to receive invitations from people using OSX. For some god-unknown reason, Apple’s “interpretation” of the standard is different enough from those sent by Evolution and Thunderbird that my script wouldn’t successfully print some of the data (just the start and end times of the meeting, nothing too important).
I started to try and expand the capability of the script and then realized that I would be much better off seeing if someone else had solved this problem; satisfactorily, that is. And they had. In a further delightful coincidence, the original author of the script, Martyn Smith is an ex-colleague who, in 2004, first got me interested in Linux (thank you, Martyn). Armed with this script and entries in $HOME/.mutt/{mailcap,muttrc} now, whenever I open a calendar invitation, the pertinent details are printed out perfectly legibly. It’s a small step, but an important one.
Next I started playing around with khal, a command line calendaring application that uses CalDav to sync to calendar servers. It is described as being in “the early stages of development” and that certainly is the case. Nonetheless, it is incredibly promising as—even in this rudimentary form—it performs well and offers most of the basics that I require. khal is simple to set up, does not have too many (python2) dependencies and handles multiple calendars. Yes, there are bugs, but nothing grievous and the developer, Christian Geier, is very responsive and helpful.
The khal documentation gives you a pretty good idea of the current feature set. Set up your khal.conf with the calendars you want synched and then you have two modes of interaction: directly via the command line, or an interactive mode invoked with ikhal. Both modes allow you to perform the basic functions of adding, editing or deleting events.
While the interactive mode is very simple and straightforward, what I am most excited about is the ability to add events from the command line, as per the example in the documentation:
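(It has the following shape; the times and description here are placeholders rather than the documentation's exact example.)

    khal new 18:00 19:00 Dinner at the pub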
I just needed to figure out a way to extract the relevant fields from the iCal file and pass them to khal. My first attempt is unashamedly ugly, both in conception and execution. However, I don’t know Perl (and at this stage of my life I have run out of time to learn it), and it actually works. I modified Martyn’s script to write to a temp file and then, for iCal events I want to import to khal, I bound a key sequence in Mutt to a simple Awk script5:
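(In outline, the Awk step is along these lines; the temp file's name and field layout are assumptions, not the actual script.)

    # read the parsed invitation and hand the pieces to khal
    awk '
        /^Start:/   { date = $2; start = $3 }
        /^End:/     { end = $3 }
        /^Summary:/ { sub(/^Summary: */, ""); summary = $0 }
        END { system("khal new " date " " start " " end " " summary) }
    ' /tmp/invitation.txt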
In $HOME/.mutt/muttrc, I have Ctrl+k in pager view trigger the script like so:
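(The macro itself is nothing more than the following; the script path is a placeholder for wherever the Awk wrapper lives.)

    # Ctrl+k in the pager pipes the open message to the import script
    macro pager \ck "<pipe-message>~/bin/ical2khal<enter>" "add invitation to khal"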
Neither elegant nor imaginative, I know; but for a first attempt, it gets the job done. If I did know any Perl, I am sure I would be able to avoid the additional temp file and the need to reread the information before handing it off to khal, but you work with the skills (or lack thereof) that you have. Needless to say, patches are welcomed.
Creative Commons image, Calendar by Angela Mabray.
Once you have your environment set up (your window manager patched exactly the way you want, the same for your editor, and even your kernel builds automated), you either start from scratch and learn a whole lot more, or you start to focus on the really small details: the endless polishing that is bred of a mania for automation and customisation, and the liberating freedom of using software that allows, and even encourages, this approach.
Since I started using a couple of basic functions for managing my note taking, I have been conscious of the way I can use this tool to make my workflow a little less onerous.
One of the things I find myself doing a lot is reusing the same snippets of text; either prose in work documents, or links to relevant articles on the Arch Wiki and Forums. It is simple enough to add this material to my ~/.notes, but retrieval has always been—for the text I reuse frequently—unwieldy.
How many times do you really want to open the file, search for the relevant excerpt, highlight it and then copy it to the system clipboard before closing the file and pasting it into your email or a web form? I must have logged several thousand before I finally decided to do something about it.
I now have a couple of different files in ~/.notes/ depending upon the context; the example I’ll use is the one for the Arch Forums kept, naturally enough, at $HOME/Sync/notes/arch (I symlink to ~/.notes so that the directory is synched using BitTorrent Sync).
This is just a simple text file with all of the links, guidance and wisdom that I generously share with those people, mostly new to the community, who have yet to embrace the opportunity to commit the Forum Etiquette to memory. The format of each file is the same and is pretty basic:
# rules sticky
https://bbs.archlinux.org/viewtopic.php?id=130309

# smart questions
[url=http://www.catb.org/esr/faqs/smart-questions.html]How To Ask Questions The Smart Way[/url]

# arch only
https://wiki.archlinux.org/index.php/Forum_Etiquette#Arch_Linux_Distribution_Support_ONLY
I use the commented title to identify the desired piece of text and then just copy it to the clipboard, ready to be pasted into a post that will undoubtedly be gratefully received by the infractor:
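(A sketch of the sort of function involved; the file layout is as shown above, while the exact file path and the use of xsel for the primary selection are assumptions. The real script also comes with colour and more checking.)

    bbs() {
        local file="$HOME/.notes/arch"

        # with no argument, just list the available titles
        if (( $# == 0 )); then
            grep '^#' "$file"
            return
        fi

        # print the block under the first "# title" matching the argument,
        # and load it into the primary selection for Shift+Insert
        awk -v title="$*" '
            /^#/ && index($0, title) { found = 1; next }
            /^#/                     { found = 0 }
            found && NF              { print }
        ' "$file" | xsel
    }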
So, on the rare occasion that I need to remind someone that on the Arch boards we only support Arch Linux (I know, quite the revelation…), I just open a scratchpad and enter bbs only and then Shift+Insert the text into the post and I am done. Not passing an argument just prints the commented titles in the file in the event that I forget what the damn thing is called.
I have a similar setup for work, with a couple of files that feature longer pieces of text that I find myself reusing for proposals, email responses and other administrivia. It’s a simple enough approach, but it works well and does lend a certain satisfaction to the otherwise tedious business of writing boilerplate.
Flickr Creative Commons image how many cans by shrapnel1
Over the last couple of days, I have been playing with udev, the kernel device manager, as I was attempting to run a script once a specific USB drive was plugged in. It turns out, as is so often the case, that udev is only part of the picture…
As both my work and personal laptops have relatively small SSDs, I carry around my music on a 1TB external drive. As the drive only contains .flac files, I wanted to automate the process of rsync’ing music from my desktop to the drive and, for the laptops, repopulating the symlinks to ~/Music/ when the drive was plugged in.
My first thought was a rule in /etc/udev/rules.d/, using RUN+=. There are any number of blog posts espousing this approach and, as I quickly discovered, they are all wrong. The problem with using this key is that, as the man page makes clear, it is not designed for long running programs:
This can only be used for very short-running foreground tasks. Running an event process for a long period of time may block all further events for this or a dependent device.
Starting daemons or other long running processes is not appropriate for udev; the forked processes, detached or not, will be unconditionally killed after the event handling has finished.
The problem, as it manifested for me, was that the drive would be blocked from mounting until after the script had run, meaning rsync or my symlinks would have no target. There are various “workarounds” on the web for this, including using two scripts, one to trigger the other. Even for me, this seemed like a Pyrrhic victory.
The correct way to do this, as I found once I uncovered
this thread
on the Arch boards where WonderWoofy and 65kid helpfully pieced it together, is
to use SYSTEMD_WANTS
. As it is described in the manual:
THE UDEV DATABASE
The settings of device units may either be configured via unit files, or directly from the udev database (which is recommended). The following udev properties are understood by systemd:
SYSTEMD_WANTS=
Adds dependencies of type Wants from this unit to all listed units. This may be used to activate arbitrary units, when a specific device becomes available. Note that this and the other tags are not taken into account unless the device is tagged with the "systemd" string in the udev database, because otherwise the device is not exposed as systemd unit.
So, I edited /etc/udev/rules.d/90-usb-music.rules to remove the RUN+= key, using a systemd service file instead, like so:
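(An approximation of the rule: matching on the filesystem label is one way to single out the drive, though the original may have matched on different attributes.)

    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="Apollo", TAG+="systemd", ENV{SYSTEMD_WANTS}="upmusic.service"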
And then wrote the corresponding service file to have systemd hand off to the bash script:
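(A sketch of the unit; the description and script path are placeholders, and the mount unit name is explained below.)

    [Unit]
    Description=Sync music and symlinks when Apollo is plugged in
    Requires=media-Apollo.mount
    After=media-Apollo.mount

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/upmusic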
As I mentioned in my post on my simple unmounting script, I use a naming convention for all my USB drives. In this case, my music is stored on Apollo, and it is auto-mounted with udiskie. In the service file, systemd uses a hyphen instead of a forward slash, so the correct designation is media-Apollo.mount.
Then it is just a matter of enabling the service with systemctl enable upmusic and, whenever Apollo is plugged in to either my desktop or laptop, the appropriate script will run and either update the files on Apollo or the symlinks on one of the laptops.
Creative Commons image on Flickr by Jacob Garcia.
Concomitant with this is the increasing corporatisation of the Internet; the deliberate and seemingly ineluctable effort by a relatively small number of global interests to turn most of the Internet into something more like cable television. As someone who has not lived with a television for decades precisely because I don’t want to live in a Skinner box that is solely designed to condition me to compliantly purchase more product, I find this rankly offensive (in the sense of morally repugnant as well as a coordinated and remorseless assault).
With the escalation of both the level and nauseating intensity of advertising around the “holiday season” and the almost hysterical exhortations to purchase more happiness, I decided that the one thing that I would be really thankful for this Christmas was better ad blocking.
I had been using Privoxy and an AUR script, blocklist-to-privoxy, but I was still experiencing a lot of ads sneaking through (particularly local ones) and some unintended side effects of using Privoxy for this job, so I decided to give Jake VanderKolk’s Hostsblock a shot.
I went with the basic, entry level setup: kwakd for serving blank HTML in place of ads and hostsblock to write to my /etc/hosts. I didn’t see the need for dnsmasq and, after a week or so of using it, haven’t seen the need to revisit that decision.
Suffice to say, it is working brilliantly. Where once web pages were festooned with garish advertisements promising me ripped abdominals, tropical holidays and the lasting serenity that only Apple products can really truly deliver, I now have glorious whitespace.
There are a couple of other factors to consider with Hostsblock. There is a very simple command line interface for managing black and white listing of websites, and the various “content distribution networks” that infest most commercial sites like fleas. You can easily allow advertisements on or from sites and organizations that you want to support, while forever muting the inane drivel from the likes of Failbook et al.
Installation and setup are very straightforward, with simple instructions on the Hostsblock site. There is also an active thread on the Arch boards where Jake and a couple of others are extremely helpful.
If you are feeling listless, run-down and lacking in energy, why not try Hostsblock? It will make your web pages brighter, speed up your page loads, protect your privacy and make you insanely popular. Try Hostsblock today!
ad free, a Creative Commons image by Louisa Billeter on Flickr.
At the beginning of this month, BitTorrent unveiled the Beta API for developers (meaning you have to tell them what you plan to do in order to be issued a key). After some equivocation, I signed up with the rather flimsy excuse of “writing a shell wrapper for the command line” and found, to my chagrin, a key in my inbox the next morning.
This proved to be something of an unwelcome arrival. In theory, I was excited about having access to a tool to query the Sync application. One of the nodes is on my headless Raspberry Pi and the idea of being able to issue a command from my laptop to ascertain what was going on in the Pi’s synced folders was (and is) a tremendously attractive one.
However, now that I was in possession of the key, I felt morally obliged to do something with it. The problem with this realization was that I had no idea how to work with an API, let alone writing a script to accomplish my purported goal.
After spending some time looking at the documentation, and buoyed by the optimism of ignorance, I decided to make good on my promise. My first attempt sort of worked, but was hampered as much by a serious conceptual flaw as it was by poor implementation. I decided, in spite of any number of Stack Overflow posts warning expressly not to do this, to use awk to parse the JSON data. This was what I would euphemistically describe as a “learning opportunity.” The result is preserved for posterity in my bitbucket repo (for the completists)…
At this point, I was fortunate that Earnestly in #archlinux introduced me to keenerd’s purpose built tool, Jshon. And by “introduced” I mean generously (and patiently) talked me through how it worked and how I could use it with the Sync API to achieve what I was after. After a while playing with it, there was the—inevitably belated—moment of realization: this thing is genius! It allows you to intelligently interrogate the data. Not blindly chopping at it with an increasingly complex series of actions in awk, but quite directly traversing the structure and extracting the desired elements.
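As a taste of what that looks like (the endpoint and field name here are illustrative rather than lifted from the API):

    # ask the local Sync API for its transfer speeds and pull a single field
    # out of the JSON reply; curl -n reads the credentials from ~/.netrc
    curl -ns "http://localhost:8888/api?method=get_speed" | jshon -e download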
Now I was cooking. Well, more to the point, I was flailing about in a smoke filled kitchen convinced that the feeling of euphoria was inspiration—not imminent asphyxiation. Once again, Earnestly’s patience and bash skills were put to good effect. The resulting script, btstat, is undeniably a triumph of his good ideas over my own poor execution. In other words, the fact that it works so well is testament to his ability, but any and all faults are mine alone4.
And it does work. It only requires Jshon (which you will already have installed because you use aurphan, right?) and a file with your synced folders and their respective secrets on each line5, like so:
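(With obviously fake values standing in for the paths and secrets, and the column order a guess:)

    /home/user/Sync       AXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    /home/user/Documents  AYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY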
The functionality in the script is limited at this stage to just querying the application; I wasn’t interested in pushing changes at this point. So you can access the version of the currently installed Syncapp, upload and download speeds, the size of synced folders and list all of their contents.
It is a beta release, so the odd bug is to be expected. Overall, though, the API is a welcome addition to what is a great application. If you have an API key, add it to your sync.conf, grab the script and give it a whirl. Undoubtedly, over the coming weeks more polished versions will emerge in “proper” languages, but for the time being this does exactly what I need.
awk
; which I am undoubtedly
fond of, but rather the
limits of my ability with that language.$json
variable, I pass curl
the -n
option,
which means it reads my credentials from $HOME/.netrc.Where I have found a small gap, or more a minor irritation really, is with
unmounting devices. udiskie-umount /media/MY_USB_DRIVE
works just fine, but
I often have a variety of media mounted, and not just standard
USB drives. At work, for
example, it is not uncommon for me to have mounted under
/media any or all of the following:
In a couple of these cases, I don’t want to—or can't—just umount
them with
udiskie
; they require a different approach. To facilitate this, I have adopted a simple convention: I use a standard naming scheme for all external media
or their mountpoints. When I first buy a USB drive, I give it a meaningful name,
beginning with an uppercase letter, as this won’t clash with any of my internal
drives and it happily accommodates drives from other operating systems. So:
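(The filesystem and device below are illustrative; it is the label that matters.)

    mkfs.vfat -n Ganymede /dev/sdf1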
sets up a new 8GB drive I found in the schwag bag at the conference I attended last week.
Now that all external media are predictably named, it is just a matter of writing a script that checks how many are mounted, presents a menu if there is more than one, and unmounts the respective device correctly:
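(Condensed to its core; the real thing runs to nearly sixty lines and also handles the devices that need something other than udiskie.)

    # anything mounted directly under /media with a capitalised name is external
    mapfile -t drives < <(find /media -maxdepth 1 -mindepth 1 -name '[A-Z]*')

    case ${#drives[@]} in
        0)  printf 'Nothing to unmount.\n'; exit 0 ;;
        1)  choice=${drives[0]} ;;
        *)  select choice in "${drives[@]}"; do
                [[ -n $choice ]] && break
            done ;;
    esac

    udiskie-umount "$choice"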
Now, no matter the number or type of devices I currently have mounted, I can
unmount them by typing dismount
and choosing the appropriate drive via the
select
menu. Simple, but satisfying. The script is in my
bitbucket repo.
Creative Commons image, Dismount, by Chris.