
Using Mutt, LDAP and SSL

One of the great things about starting a new job at an open source company is having the freedom to use the tools that suit your workflow, rather than having to suffer the indignity of whatever the IT department considers to be the lowest common denominator. Suffice to say, I have had a lot of fun this week setting up my working environment—and the occasional hiccough as I was forced to learn something new…

One of those “learning opportunities” consisted of trying to get my mail client, Mutt, to talk to the LDAP directory over SSL so that I could query the shared address book. There are a number of helpful blog posts that describe using lbdb with mutt1. Unfortunately, after a lot of searching, I was unable to find any documentation on achieving this integration over a secure connection. I kept seeing this error:

Error: Search failed. LDAP server returned an error : 13, description: TLS
confidentiality required at /usr/lib/mutt_ldap_query line 198, <DATA> line 558.

Several hours later, and with some help from @ibeardslee, I managed to set it up, and it was definitely worth the effort.

You will need to install lbdb from the AUR:

cowerd lbdb     # see note 2

…and a couple of packages from the repos to make it all work:

pacman -S perl-ldap perl-io-socket-ssl netkit-bsd-finger

Then it is a matter of configuring lbdb so that it can both query the LDAP directory and be called from Mutt. First, copy the config files into your $HOME:

mkdir .lbdb
cp /etc/lbdb.rc .lbdb/lbdbrc
cp /etc/lbdb_ldap.rc .lbdb/ldap.rc

Then modify the two configuration files to suit your setup. The first, $HOME/.lbdb/lbdbrc, is well commented and self-explanatory; add ldap to the methods and the nickname of your server:

METHODS="m_abook m_ldap"
LDAP_NICKS="catalyst"

The second config file, $HOME/.lbdb/ldap.rc, is written in Perl and is a bit of a shocker:

%ldap_server_db = (
    'catalyst' => ['ldaps://ldap.catalyst.net.nz:636',
                    'ou=Staff,ou=People,dc=catalyst,dc=net,dc=nz',
                    'givenname sn cn mail', 'givenname sn cn mail',
                    '${mail}', '${givenname} ${sn}']
);

# hostname of your ldap server
$ldap_server = 'ldaps://ldap.catalyst.net.nz:636';
$search_base = 'ou=Staff,ou=People,dc=catalyst,dc=net,dc=nz';
$ldap_search_fields    = 'givenname sn cn mail';
$ldap_expected_answers = 'givenname sn cn mail';
$ldap_result_email     = '${mail}';
$ldap_result_realname  = '${givenname} ${sn}';
$ignorant = 0;
$ldap_bind_dn = '';
$ldap_bind_password = '';
1;

The key is to ensure that you use both the ldaps prefix and explicitly specify the SSL port, 636. Without both of these, you will get the TLS confidentiality error.
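Before pointing lbdb at the directory, it can also be worth confirming that the secure connection itself works. A quick sanity check with OpenLDAP’s ldapsearch (not part of the lbdb setup above, and assuming the openldap client tools are installed) against the same server and search base would look something like this:

# sanity check: anonymous search over ldaps, using the server and base DN from ldap.rc
ldapsearch -x -H 'ldaps://ldap.catalyst.net.nz:636' \
    -b 'ou=Staff,ou=People,dc=catalyst,dc=net,dc=nz' \
    '(cn=jemima*)' cn mail

If that returns entries (or at least connects without the TLS error), the lbdb side should follow.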

You can then test that it is working correctly by running a query:

lbdbq jemima

All going well, if there is indeed a Jemima in the shared address book, you will see her contact details miraculously appear before you. If there is more than one, you will have a list to choose from.

Finally, you just need to set up mutt to query lbdb. In your muttrc, add the following:

set query_command = "lbdbq %s 2>/dev/null"

I found that suppressing the errors made the whole experience a little smoother. You may not require it… Now, hitting Shift+Q in mutt brings up a prompt to query the LDAP directory (and my abook address book that I share via Dropbox). You can also access the directory by starting to type an email address and then hitting Ctrl+T to see a list of possible completions.
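If reaching for Ctrl+T feels awkward, mutt also lets you rebind the completion function; a common optional tweak (not part of the setup above) is to put query completion on Tab in the address fields:

# optional: trigger query completion with Tab instead of Ctrl+T
bind editor <Tab> complete-query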

Notes

  1. Christian Schenk’s post got me started.
  2. A wrapper script for cower

Creative Commons image by bertop on Flickr

The Next Thing

After eight years as a bureaucrat1 working in a central agency in New Zealand’s public service, I have hung up the grey cardigan.

While I am sure that I will never shed some of the qualities of being an official (think discretion and a belief in the importance of sound governance—rather than a keen ear for the rattle of the tea trolley), I am looking forward to the opportunity to stretch out a little2.

When I first joined the Commission, it was to work in the E-government Unit. I was strongly attracted to a role that combined communications and technology and, at the time, the EGU was the epicentre of a huge number of really interesting initiatives. The Open Source Policy, the Authentication Framework, work on Digital Rights Management and security, to name just a few; it was a geek’s paradise (certainly from a policy perspective).

I have no doubt that the foundation of much of what I was later able to achieve was fostered in this environment; whether it was originally making the code available for the e-government website, blogging about social media in the public sector, working with the other public service communications managers on the Review of the Function, or issuing the RFP for the rebuild of the SSC website with the stipulation that the solution had to be open source so that we could reuse it across government.

You can probably see a theme emerging here. If you have read this blog at all, you will definitely see where this is heading…

Anyway, I am about to start focussing on technology—and open source technology in particular—on a full time basis. From the middle of this month, I join the team at Catalyst, New Zealand’s leading open source company, where I will be the newly minted Strategy and Relations Manager.

I’m thrilled to be able to work for a company that is dedicated to driving the uptake of open source by businesses and government. Both the private and public sectors around the world are really starting to grasp the strategic and economic benefits of using open source; we are going to be doing our best to accelerate that adoption.

Notes
  1. I have used the term proudly for the last eight years, and will continue to do so…
  2. Yes, my twitter stream will likely become a little more colourful.

Creative Commons image by Swamibu on Flickr

Moving to Octopress

Over the last week I have been moving my blog over to Octopress, a lightweight blogging framework for Jekyll, the static site generator powering Github Pages. I had previously been posting to a tumblr page and, over the nearly four years that I had been doing that, I had somehow racked up just over 4000 posts. I was not looking forward to migrating across.

However, the fact that the Jekyll project has a number of scripts for migrating from other platforms assuaged my concerns about the difficulty of this task. That sense of relief was short-lived. Neither of the two tumblr migration scripts was of any assistance: both would die during their initial runs, probably due to some funky characters in the post titles, or perhaps the posts themselves.

I certainly had no intention of trying to wade through the entire back catalogue identifying the rogue posts. Rather than admit defeat, and probably more due to a sense of misguided optimism about the “straightforward” nature of the task, I saw this setback as an opportunity to cull all of the cruft1 from the blog and decided to manually import the fifty posts that I thought were of some interest.

Being an assiduous record keeper, all of the posts were helpfully bookmarked on Pinboard under one tag, and therefore it was simple enough to create a list of the required URLs. Armed with this list, it was just a matter of cobbling together a script to do the bulk of the work for me.

The first task was to retrieve the posts from the list:

# grab files
while read url; do 
    wget --adjust-extension "${url}"
done < /home/jason/Scripts/list

Then I needed to remove all of the HTML surrounding the actual posts: an awk one-liner took care of that.

# strip HTML cruft
for file in *.html; do
  awk '/<h3>/ {flag=1;next} /<\/div>/{flag=0} flag {print}' "$file" > "${file%%.*}"
done
mkdir html && mv *.html html/

The final task of this part of the migration was to convert the HTML into markdown, the format that Octopress uses. Pandoc, the “universal document converter”, handled that job:

# convert to markdown format
for file in *; do 
    pandoc -f html -t markdown "$file" > "$file".md
done

The final result was fifty markdown files holding all of my posts, almost ready to be committed to github. I say “almost” because the files still required what turned out to be a reasonable amount of cleaning up. Pandoc did a great job, for example, but would inexplicably break multi-word hyperlinks over two lines. Similarly, all of the internal links to my other posts pointed to the (meaningless) tumblr URLs2.
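That clean-up was mostly a matter of search and replace. As a rough sketch (illustrative only; the exact substitution depends on how pandoc split the links), rejoining link text that has been broken across two lines can be done with a small in-place Perl pass over the markdown files:

# illustrative only: rejoin markdown link text that was split across two lines
for file in *.md; do
    perl -0777 -pi -e 's/\[([^\]\n]*)\n\s*([^\]]*)\]/[$1 $2]/g' "$file"
done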

Setting up Octopress was extremely simple and quick by comparison: the documentation is very helpful. There was one slight hitch, a known issue on Arch x86_64, which was simple enough to deal with.

While the migration was not entirely pain-free, I am pleased that I have done it. Tumblr’s service increasingly left a lot to be desired, but as it was a free service, I couldn’t complain too much. Or, more accurately, when I did complain, no-one actually listened…

Indeed, moving to a paid service like Github (yes, it’s free at first, but once you have enough data there you need to pay a small amount every month) makes a lot of sense. The paid services I do use, like Pinboard and Tarsnap, are both inexpensive and much more reliable than their free counterparts3; and you get to invest in great software that is a pleasure to use.

Notes
  1. Initially, I had set up the site as a simple holding page and dumped a whole lot of feeds into it: twitter, bookmarks, scrobbled music, etc. Those 4000 posts were mostly just that sort of internet detritus…
  2. For creating redirections (Github pages do not support .htaccess) I can’t recommend enough the Jekyll Alias Generator. Just. Brilliant.
  3. And much more scrupulous about how they use your personal data.