Saturday, 30 November 2013

LibCampUK 13 - the highlights for me

I've just spent the day in the brilliant new Library of Birmingham building at the LibCampUK13 "unconference". This is a meeting of librarians, library staff, library consultants, library suppliers and even a few of us lowly techies allowed out into the light from our basements. It being an "unconference", the agenda is created on the fly on the day from things that the attendees want to talk about. It's not about presenting papers or holding workshops - it's about sitting around in huddles sharing problems, telling stories and swapping ideas. It's manic, chaotic and absolutely fantastic. It being full of librarians, there was also plenty of cake on hand (including a competition won by a lady who had made a superb Moomin cake).

This blog post serves as a memory jogger for some of the useful things I came across during the day. There was so, so, so much more than I can record here, as there were five or so parallel threads going on. You were encouraged to "vote with your feet" and move between threads during each session if you got bored or wanted more variety, but I found all the groups I attended engaging and so stayed put!

My first session was on social media use in libraries. This was a really popular topic - so much so that there were two separate groups discussing it simultaneously. In our group I learnt some really useful tips from the social media aware folk (including a lady who works for a pub chain rather than a library service!). For example, tweeting events in your local area that you don't run but which might be of interest to your followers both gets your tweets retweeted more widely (and thus attracts new followers) and encourages the event organisers to follow you and retweet some of your messages by way of return. Location based searches can be useful as well - see who is talking about topics you are interested in within your geographic area and then target their conversations with your own replies. Social media analytics are of interest to many: can they be sure that the effort they put into social media interactions is actually reaching the desired target markets? Keeping up with the various social media services is also an issue: although Twitter and Facebook are the biggies, Pinterest, LibraryThing and Tumblr are also used (hardly anyone seems to care about Google Plus though!). Different social media are used by different groups, and even age isn't a sure-fire targeting mechanism (some schools say students are into Tumblr as the latest thing, but another school librarian said her students viewed it as a bit last year and were all over Instagram now). Some sites have issues with some (or all!) social media being blocked - useful to bear in mind if you're trying to reach certain groups (schools and the NHS especially).

After grabbing a quick coffee and trying to grab a charge for my laptop and phone, I headed upstairs to the Open Archives session. It was already in full swing when I crept in, and I was surprised to hear that some educational institutions still seem to undervalue open archives and data repositories as a way of spreading the message about their research (despite being happy to send data to folk who do manage to seek them out). RCUK mandates on archiving the data supporting publications and on Gold/Green journal article publishing with institutional repositories are helping, as are some JISC initiatives. There's lots of active work in this area, so it's a "hot topic" at the moment. One chap said he was from FE and that there was a large and mostly untapped market for repositories in FE colleges. There was also some discussion of open data and the benefit it provides for "mash ups" (especially if library folk are hacking on open source code).

Next was lunch, followed by three more sessions. The first afternoon session I attended was on digitisation. Some interesting work is being done on private digitisation initiatives, especially for things like maps (which the chap who pitched and initiated the session was really into). There was some discussion on the pros and cons of things like Google Books: I was on the positive side as we've used it in LORLS reading lists and it's been really useful and popular, but there were some complaints over the lack of transparency on the quality of the OCR behind the page scans (which I can understand, as that's an expensive thing to do and probably isn't Google's primary aim at the moment). I heard about a group of heritage conservation volunteers called NADFAS who had helped digitise and preserve works in some special libraries (though it needs librarian input to ensure metadata about the digital objects is captured).

Back in the main theatre, the middle session of the afternoon I dropped into was about gadgets, a topic close to my techie heart. Most folk in the group held up smartphones or tablets and said lots of their users had them. There was some talk about managers buying "iPads" without any real idea of how they would be loaned out or to whom. Some sites have issues with setting up shared tablets as the software ecosystems on them don't really encourage it, whereas others (ones using "bump-in-the-wire" wireless portals for network authentication) have fewer issues. I asked if other sites had any great solutions to students trailing power leads everywhere (as policies and telling them off don't really work): one chap said that at his site even the new sofas had power sockets in the arms, and all the tables had them too. I also managed to slip in a mention that NFC capable phones/tablets can pick up RFID tags, which seemed to interest several folk!

I then stayed put for the last session of the day, which in my group was on open source. The immediately useful take-home for me was LibraryBox, a content hosting wifi hotspot that I'd not come across before. It's great for providing educational/library resources in "pop up" environments, especially where there isn't decent wifi or 3G coverage (e.g. book events in public parks). There was some discussion about appropriate open source software for managing small library catalogues: I suggested Koha, but one of the facilitators felt it was too complex for really small libraries and said she'd made good use of Drupal with cataloguing extensions. Another chap was looking for suggestions for school data repositories - DSpace and EPrints were mentioned but again may be too complex to set up and maintain, whereas a CMS like WordPress might work fine and be more familiar to school teachers.

So some great stuff there, and so much more in the other sessions I didn't get to (and probably in the bar after the meeting which I didn't go to either). LibCamp is definitely on my list to attend again next year... assuming they let Shambrarians like me slip in again!

Wednesday, 27 November 2013

Goodreads, Perl and Net::OAuth::Simple

Part of my day job is developing and gluing together library systems.  This week I've been making a start on doing some of this "gluing" by prototyping some code that will hopefully link our LORLS reading list management system with the Goodreads social book reading site.  Now most of our LORLS code is written in either Perl or JavaScript; I tend to write the back end Perl stuff that talks to our databases and my partner in crime Jason Cooper writes the delightful, user friendly front ends in JavaScript.  This means that I needed to get a way for a Perl CGI script to take some ISBNs and then use them to populate a shelf in Goodreads. The first prototype doesn't have to look pretty - indeed my code may well end up being a LORLS API call that does the heavy lifting for some nice pretty JavaScript that Jason is far better at producing than I am!

Luckily, Goodreads has a really well thought out API, so I plunged straight in. They use OAuth 1.0 to authenticate requests to some of the API calls (mostly the ones concerned with updating data, which is exactly what I was up to), so I started looking for a Perl OAuth 1.0 module on CPAN. There's some choice out there! OAuth 1.0 has been around the block for a while, so it appears that multiple authors have had a go at making supporting libraries with varying amounts of success and complexity.
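As an aside, it's worth knowing roughly what all these CPAN modules are doing under the hood. OAuth 1.0 request signing boils down to building a "signature base string" from the HTTP method, URL and sorted request parameters, then HMAC-SHA1 signing it with the consumer and token secrets. Here's a rough sketch of that process (in Python rather than Perl, purely for brevity - the function name and arguments are my own invention, not part of any of the CPAN modules):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def oauth_hmac_sha1_signature(method, url, params, consumer_secret,
                              token_secret=""):
    """Build an OAuth 1.0 (RFC 5849) HMAC-SHA1 signature."""
    enc = lambda s: quote(str(s), safe="")
    # 1. Percent-encode and sort all parameters (oauth_* and query
    #    parameters are normalised together).
    pairs = sorted((enc(k), enc(v)) for k, v in params.items())
    normalised = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. The signature base string is METHOD&encoded-URL&encoded-params.
    base_string = "&".join([method.upper(), enc(url), enc(normalised)])
    # 3. The signing key is consumer_secret&token_secret (the token part
    #    may be empty for the initial request token call).
    key = enc(consumer_secret) + "&" + enc(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The resulting signature goes into the Authorization header alongside the other oauth_* parameters - which is exactly the fiddly bookkeeping the Perl modules take care of for you.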

So in the spirit of being super helpful, I thought I'd share with you the prototype code that I knocked up today. It's far, far, far from production ready and there are probably loads of security holes that you'll need to plug. However, it does demonstrate how to do OAuth 1.0 using the Net::OAuth::Simple Perl module and how to do both GET and POST style (view and update) Goodreads API calls. It's also a great way for me to remember what the heck I did when I next need to use OAuth calls!

First off we have a new Perl module I called Goodreads.pm. It's a subclass of the Net::OAuth::Simple module that sets things up to talk to Goodreads and provides a few convenience functions. It's obviously massively stolen from the example in the Net::OAuth::Simple perldoc that comes with the module.


package Goodreads;

use strict;
use Carp;
use URI;
use HTTP::Request;
use Net::OAuth;
use base qw(Net::OAuth::Simple);

sub new {
    my $class  = shift;
    my %tokens = @_;

    return $class->SUPER::new( tokens => \%tokens,
                               protocol_version => '1.0',
                               return_undef_on_error => 1,
                               urls   => {
                                   authorization_url => 'https://www.goodreads.com/oauth/authorize',
                                   request_token_url => 'https://www.goodreads.com/oauth/request_token',
                                   access_token_url  => 'https://www.goodreads.com/oauth/access_token',
                               });
}

sub view_restricted_resource {
    my $self = shift;
    my $url  = shift;
    return $self->make_restricted_request($url, 'GET');
}

sub update_restricted_resource {
    my $self         = shift;
    my $url          = shift;
    my %extra_params = @_;
    return $self->make_restricted_request($url, 'POST', %extra_params);
}

sub make_restricted_request {
    my $self = shift;
    croak $Net::OAuth::Simple::UNAUTHORIZED unless $self->authorized;

    my( $url, $method, %extras ) = @_;

    my $uri = URI->new( $url );
    my %query = $uri->query_form;
    $uri->query_form( {} );

    $method = lc $method;

    my $content_body = delete $extras{ContentBody};
    my $content_type = delete $extras{ContentType};

    my $request = Net::OAuth::ProtectedResourceRequest->new(
        consumer_key     => $self->consumer_key,
        consumer_secret  => $self->consumer_secret,
        request_url      => $uri,
        request_method   => uc( $method ),
        signature_method => $self->signature_method,
        protocol_version => $self->oauth_1_0a ?
                                   Net::OAuth::PROTOCOL_VERSION_1_0A :
                                   Net::OAuth::PROTOCOL_VERSION_1_0,
        timestamp        => time,
        nonce            => $self->_nonce,
        token            => $self->access_token,
        token_secret     => $self->access_token_secret,
        extra_params     => { %query, %extras },
    );
    $request->sign;
    die "COULDN'T VERIFY! Check OAuth parameters.\n"
        unless $request->verify;

    my $request_url = URI->new( $url );

    my $req = HTTP::Request->new(uc($method) => $request_url);
    $req->header('Authorization' => $request->to_authorization_header);
    if ($content_body) {
        $req->content_type($content_type);
        $req->content_length(length $content_body);
        $req->content($content_body);
    }

    my $response = $self->{browser}->request($req);
    return $response;
}

1;

Next we have the actual CGI script that makes use of this module. This shows how to call Goodreads.pm (and thus Net::OAuth::Simple) and then do the Goodreads API calls:


use strict;
use CGI;
use CGI::Cookie;
use Goodreads;
use XML::Mini::Document;
use Data::Dumper;

my %tokens;
$tokens{'consumer_key'}    = 'YOUR_CONSUMER_KEY_GOES_IN_HERE';
$tokens{'consumer_secret'} = 'YOUR_CONSUMER_SECRET_GOES_IN_HERE';

my $q = new CGI;
my %cookies = fetch CGI::Cookie;

if($cookies{'at'}) {
    $tokens{'access_token'} = $cookies{'at'}->value;
}
if($cookies{'ats'}) {
    $tokens{'access_token_secret'} = $cookies{'ats'}->value;
}

if($q->param('isbns')) {
    $cookies{'isbns'} = $q->param('isbns');
}

my $oauth_token = undef;
if($q->param('authorize') == 1 && $q->param('oauth_token')) {
    $oauth_token = $q->param('oauth_token');
} elsif(defined $q->param('authorize') && !$q->param('authorize')) {
    print $q->header,
          $q->h1('Not authorized to use Goodreads'),
          $q->p('This user does not allow us to use Goodreads');
    exit;
}

my $app = Goodreads->new(%tokens);

unless ($app->consumer_key && $app->consumer_secret) {
    die "You must go get a consumer key and secret from Goodreads\n";
}

if ($oauth_token) {
    if(!$app->authorized) {
        GetOAuthAccessTokens();
    }
    StartInjection();
} else {
    # Not yet authorized: bounce the user off to Goodreads to OK us,
    # stashing the request tokens (and the ISBN list) in cookies for
    # when they come back.
    my $url = $app->get_authorization_url(callback => '');
    my @cookies;
    foreach my $name (qw(request_token request_token_secret)) {
        my $cookie = $q->cookie(-name => $name, -value => $app->$name);
        push @cookies, $cookie;
    }
    push @cookies, $q->cookie(-name => 'isbns',
                              -value => $cookies{'isbns'} || '');
#    print $q->redirect($url);
    print $q->header(-cookie   => \@cookies,
                     -status   => '302 Moved',
                     -location => $url);
}

exit;


sub GetOAuthAccessTokens {
    foreach my $name (qw(request_token request_token_secret)) {
        my $value = $q->cookie($name);
        $app->$name($value);
    }
    ($tokens{'access_token'},
     $tokens{'access_token_secret'}) =
        $app->request_access_token(
                                   callback => '',
                                  );
}

sub StartInjection {
    my $at_cookie = new CGI::Cookie(-name  => 'at',
                                    -value => $tokens{'access_token'});
    my $ats_cookie = new CGI::Cookie(-name  => 'ats',
                                     -value => $tokens{'access_token_secret'});
    my $isbns_cookie = new CGI::Cookie(-name  => 'isbns',
                                       -value => '');
    print $q->header(-cookie=>[$at_cookie,$ats_cookie,$isbns_cookie]);
    print $q->start_html;

    my $user_id = GetUserId();
    if($user_id) {
        my $shelf_id = LoughboroughShelf(user_id => $user_id);
        if($shelf_id) {
            my $isbns = $cookies{'isbns'}->value;
            print $q->p("Got ISBNs list of $isbns");
            AddBooksToShelf(shelf_id => $shelf_id,
                            isbns    => $isbns,
                           );
        }
    }
    print $q->end_html;
}

sub GetUserId {
    my $user_id = 0;
    my $response = $app->view_restricted_resource(
        'https://www.goodreads.com/api/auth_user'
    );
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $user_xml = $xml->toHash();
        $user_id = $user_xml->{'GoodreadsResponse'}->{'user'}->{'id'};
    }
    return $user_id;
}

sub LoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;
    my $response = $app->view_restricted_resource(
        'https://www.goodreads.com/shelf/list.xml?key=' .
        $tokens{'consumer_key'} . '&user_id=' . $user_id);
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        foreach my $this_shelf (@{$shelf_xml->{'GoodreadsResponse'}->{'shelves'}->{'user_shelf'}}) {
            if($this_shelf->{'name'} eq 'loughborough-wishlist') {
                $shelf_id = $this_shelf->{'id'}->{'-content'};
                last;
            }
        }
        if(!$shelf_id) {
            $shelf_id = MakeLoughboroughShelf(user_id => $user_id);
        }
    }
    print $q->p("Returning shelf id of $shelf_id");
    return $shelf_id;
}

sub MakeLoughboroughShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = 0;
    my $user_id = $params->{'user_id'} || return $shelf_id;

    my $response = $app->update_restricted_resource(
        'https://www.goodreads.com/user_shelves.xml?user_shelf[name]=loughborough-wishlist',
    );
    if($response->content) {
        my $xml = XML::Mini::Document->new();
        $xml->parse($response->content);
        my $shelf_xml = $xml->toHash();
        $shelf_id = $shelf_xml->{'user_shelf'}->{'id'}->{'-content'};
        print $q->p("Shelf hash: ".Dumper($shelf_xml));
    }
    return $shelf_id;
}

sub AddBooksToShelf {
    my $params;
    %{$params} = @_;

    my $shelf_id = $params->{'shelf_id'} || return;
    my $isbns = $params->{'isbns'} || return;
    foreach my $isbn (split(',',$isbns)) {
        my $response = $app->view_restricted_resource(
            'https://www.goodreads.com/book/isbn_to_id?key=' .
            $tokens{'consumer_key'} . '&isbn=' . $isbn);
        if($response->content) {
            my $book_id = $response->content;
            print $q->p("Adding book ID for ISBN $isbn is $book_id");
            $response = $app->update_restricted_resource(
                'https://www.goodreads.com/shelf/add_to_shelf.xml?name=loughborough-wishlist&book_id='
                . $book_id);
        }
    }
}

You'll obviously need to get a developer consumer key and secret from the Goodreads site and pop them into the variables at the start of the script (no, I'm not sharing mine with you!). The real work is done by the StartInjection() subroutine and the subordinate subroutines that it calls once the OAuth process has been completed. By this point we've got an access token and its associated secret, so we can act as whichever user has allowed us to connect to Goodreads as them. The code will find this user's Goodreads ID, see if they have a bookshelf called "loughborough-wishlist" (creating it if they don't) and then add any books that Goodreads knows about with the given ISBN(s). You'd call this CGI script with a URL that includes an isbns parameter containing a comma separated list of the ISBNs to be added.

Anyway, there's a "works for me" simple example of talking to Goodreads from Perl using OAuth 1.0. There's plenty of development work left in turning this into production level code (it needs to be made more secure for a start, and the access token and secret could be cached in a file or database for reuse in subsequent sessions) but I hope some folk find this useful.
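On that caching point: once the OAuth dance has completed, the access token and secret are just two strings, so persisting them can be as simple as serialising them to disk and reloading them at the start of later sessions. A minimal sketch of the idea (in Python for brevity - the file name and key names are my own invention, and a real version would want proper file permissions and per-user storage):

```python
import json
import os
import tempfile

# Hypothetical cache location - a real deployment would use a proper
# per-user path with restrictive permissions, or a database row.
CACHE = os.path.join(tempfile.gettempdir(), "goodreads_tokens.json")


def save_tokens(access_token, access_token_secret, path=CACHE):
    """Persist the OAuth access token pair for reuse in later sessions."""
    with open(path, "w") as f:
        json.dump({"access_token": access_token,
                   "access_token_secret": access_token_secret}, f)


def load_tokens(path=CACHE):
    """Return the cached token pair as a dict, or None if not cached yet."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

With something like this in place, the CGI script could skip the whole authorization redirect whenever a still-valid token pair is found in the cache.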

Friday, 22 November 2013

Post nuclear differences

In the wake of the Fukushima Daiichi nuclear power station disaster in early 2011, both Japan and Germany changed their stance on nuclear power generation. Some nuclear plants were shut down immediately and the rest will most likely be decommissioned within the next decade.  Both countries appeared to have widespread public support for this radical change in their national energy policy.
Solar city, science park, Gelsenkirchen. Source: Green Baroque Ins. on Flickr, under Creative Commons CC BY-NC 2.0 Licence.

The effect these decisions have had on the two nations' responses to climate change is interesting.  Germany was already well into building out its solar & wind based renewable generation capacity before the earthquake and tidal wave wrecked the reactors in Japan. With widespread community involvement in the investment in renewables, and a helpful financial and regulatory environment provided by the German authorities, they've been able to carry this forward.  They've still got fossil fuels in their mix, but they do still seem to be on track for their carbon emissions targets.  Germany already had a strong anti-nuclear movement and was planning on phasing out nuclear by 2036 anyway, so this event really just accelerated that timetable.

Japan, on the other hand, has just announced during the UN COP19 climate change talks that it is going to have to substantially reduce its existing emissions reduction target.  They are building renewable generating capacity of course - practically every developed nation is.  However, it will not be enough to cope with losing all the nuclear generating capacity that they are removing, which prior to March 2011 contributed around 30% of their total generating capacity.  They are having to turn to increasing use of imported fossil fuels such as oil, coal and gas to supply their electricity.

I was wondering what we could learn from the different outcomes arising from what, at first, appear to be very similar decisions.  Germany had something of a head start as it was already aggressively building solar PV & wind generators. But it also has a geographic advantage over Japan: Germany has more land available relative to its population.  Solar & wind both have low "energy density", so to get a decent amount generated you need a lot of them covering lots of roofs & land. Japan is a relatively crowded country, so space is at a much higher premium.  As a comparison, Japan is 39th in the population density league table whereas Germany is 58th.  Japan does have potential for many gigawatts of renewable power generation though, as it has ample space in the seas around it for large scale off-shore wind farms.  There does appear to be a need to encourage more community involvement and investment in on-shore renewables though.
Fukushima Unit 4 with cranes working on stabilizing the site. Source: IAEA Imagebank, under Creative Commons CC BY-NC-ND 2.0 Licence.

Both nations have to bear the costs of decommissioning their nuclear infrastructure, which will most likely be a long and expensive task.  Again Germany has an advantage - it only had 17 nuclear power stations operating prior to March 2011 whereas Japan had over 50.  Germany was also already well into decommissioning quite a few reactors, especially those from the former East Germany.

Japan also has the expense and difficulty of cleaning up Fukushima itself to deal with. That's going to be a big drain on the resources of both its owner Tepco and the Japanese government. The clean up may well be competing for the funds, people and time required to ramp up construction of renewables, even though those renewables are part of the solution to the overall problem. Indeed one wonders if the exclusion zone around Fukushima might well end up being a good place to site renewables, given their relatively low maintenance requirements (meaning fewer people would need to spend time in the potentially more radioactive areas).  At least they might provide some economic payback to the people whose land is otherwise now worthless.

Japan's economy has taken some serious blows over the last few years, which also puts it at a disadvantage against Germany. Germany is the economic powerhouse of the EU, and so it can afford to invest in the capital cost of renewables. Indeed it's a positive cycle for the Germans: the more renewables they invest in, the more insulated they are from fossil fuel price rises, which improves their competitiveness and increases their income, part of which they can invest in more renewables.  Japan has the opposite problem - its emergency switch to large scale fossil fuel use to replace the nuclear power stations is costing the Japanese power companies an extra 3.6 trillion yen in 2013 over their costs in 2010, before the disaster.  Things are likely to get worse for Japan before they start to get better.

There is one overarching "take home message" I pick up from the different reactions to the change in energy policy in the two countries.  The sooner a nation starts to make a large scale switch to distributed renewable power generation, the better placed it is likely to be to deal with sudden, external changes in traditional centralized power generation.  In this case it was the rapid removal of all nuclear capacity, but in the future who knows what it will be?  Gas pipelines cut off as part of national sabre rattling? Wars leading to rapid rises in global oil prices? Coal shipments being disrupted by industrial unrest?  All of these could affect national grids that rely too heavily on one particular fuel source, especially if that fuel is controlled by others.  We all need to be investing in clean, distributed energy generation to make our nations, towns, cities and communities more resilient in the face of these unexpected changes.

Saturday, 9 November 2013

Replacing Green Levies with Brown Ones

There's been much talk in the UK media and within political circles recently about the costs associated with so called "green levies".  These are additional costs added to energy bills to help fund climate change mitigations such as increased power production from renewable sources, carbon reduction strategies and the all important energy efficiency measures for low income and vulnerable groups in society.  The principle is that the more energy you use, the more you end up paying to help turn that energy generation into a cleaner, greener form and help poorer folk save energy.

Now some people are campaigning for these levies to be reduced or removed completely, and/or moved into general taxation.  The rising energy prices from the Big Six energy companies are hitting the "hard working" people of the UK, and some politicians sense a quick, popular vote winner in appearing to do something to cut these bills.  Moving some of the green funding measures to general taxation is probably the most progressive option, as it moves the cost towards those who can afford to pay more, even if they themselves have already reduced their energy demands. Of course that might be rather unpopular with people in power who tend to pay more of such taxes, and there is a bit of recent history of socially responsible tax payer funded schemes facing the axe.

But what if these green levies are removed completely and we succeed in stalling UK plc's green economy?  Not only will that affect quite a lot of jobs (many in the private sector - that bit of the marketplace that is supposed to be pulling us out of the economic doldrums) but it will also mean there will be less investment in climate change mitigation technologies, and we'll end up putting a lot more CO2 into the atmosphere as a result.  We may well find that it becomes impossible to meet our legally binding targets on carbon emissions, which may have some direct economic costs if we're fined or foreign competitors manage to lock the "dirty man of Europe" out of future deals.

One argument put forward by those wishing to remove the green levies is that they don't think that climate change, global warming, call it what you will, is a man-made or even man-influenced effect.  To them it doesn't really matter how much CO2 or other greenhouse gases we emit, the climate will just do its own thing, and actually it will just fluctuate a bit and really we can all just carry on with business as usual.  Ignore the "ecomentalists" and get on with a continuation of 20th Century life into a bright, energy guzzling future.

Unfortunately people with such views currently seem to hold some of the reins of power in the UK, so there's a distinct possibility that at least some of the climate change mitigation funding may be lost.  If that does come to pass, I'd like to propose something to replace them: "brown levies".  Such levies will not be used to fund climate change mitigation strategies but instead fund climate change adaptation strategies. For example such things as building better sea defences, increasing the use of permeable paving systems in urban areas to reduce flooding, covering more of the UK countryside with polytunnels or glasshouses to reduce weather event effects on agriculture, etc, etc.

Some of that is happening now, but at a relatively low level, so the brown levy wouldn't need to be large to start with - but we do need to fund it.  At the moment what funding there is comes from disparate sources such as water bills, council taxes and general taxation, but it is hidden away rather than splashed all over the front pages.  Let's bring it out into the light as a nice, visible set of costs, in the same way that the green levies have been brought centre stage by having them bundled together in energy bills.  That way people can see what they have to pay to adapt to climate change.  What goes into the levies could be given to one of Parliament's climate change committees to look after, or be debated every year in the House.

If the climate change deniers are right, the brown levies will stay small, and possibly even reduce as the climate swings naturally back to late 19th/early 20th century norms.  Nothing will need to be decided on by Parliament and everyone gets to laugh and point at members of the Green Party, Friends of the Earth and Transition Towns.  The worst that will happen is that we'll have funded some useful short term environmental protections in coastal towns and flood plains, which will have reduced their insurance costs and protected some local industries.  The sort of thing we've been doing for years.

Of course if those folk from the Green Party, Friends of the Earth and Transition Towns are right about man made climate change, and the climate deniers in power right now do manage to wreck the current green levies funding climate change mitigation strategies, then those brown levies will have to go up over time.  And up.  And up.  Adapting to global climate change, even with the moderate changes we're likely to see in the UK, is likely to be very expensive.  Possibly more expensive than the cost of mitigating climate change in the first place.  And of course you'll also still be paying for the higher priced fossil fuels themselves, as UK plc won't have been made more energy efficient or built out its low carbon power generation sufficiently.  We may well be less competitive with some of our neighbours, who will be more self-reliant on locally sourced power, so those increased costs will come at the same time as reduced trading incomes.  Oh, and there's those targets we won't have met to deal with as well.

So there's the glove slapped down to the climate change deniers in power trying to reduce green levies: put your (and everyone else's) money where your mouths are and agree to introduce legally binding climate adaptation brown levies if you remove climate mitigation green levies.  What do you, in your world view, have to lose after all?