Updating and Upgrading Technology

2011 October 21
by Karen

Performing updates and upgrades is something that all technologists have to do. Since switching jobs I’ve had to do a lot less of it because it really isn’t part of my job anymore.

But that doesn’t mean I can escape it completely, particularly when it comes to my personal technology. Back in August I got both a new desktop computer (iMac) and a new wireless router (Cisco/Linksys). Getting both working with all the existing technology in the house proved to be a multi-weekend project.

Ironically, setting up and integrating the new desktop was the easier of the two tasks, in part because my hard drive is partitioned so that all my files live on a separate partition, and I have them all backed up to the cloud. This meant I could literally just hook up my 1TB FireWire drive and use it to transfer the files from the old computer to the new one. I installed the software I needed and was pretty much good to go.

The biggest problem I encountered in the whole endeavor was migrating my iTunes library. The music transferred no problem, but my playlists, ratings, and play counts disappeared into the wind. No amount of Googling turned up a solution that worked, and after messing around with it for 90 minutes I decided that frankly I didn’t care enough to spend any more time.

As for the router, well, getting it set up was a major pain. First, Cisco’s setup software doesn’t work with Lion. Fortunately the very nice chat support person showed me how to do all the configuration via a web interface instead. Once I’d done that, though, my problems were only beginning. You see, a new secure router meant all the devices that use the Wi-Fi network needed to be reconnected and authorized. This was fairly easy with everything except my wireless printers. So I went digging for install CDs and manuals. The multi-function color Canon printer’s manual showed me that I could easily reconnect to the network using the lovely panel and buttons on the printer. My HP black-and-white LaserJet was a completely different story. The only button it has is the one that turns it on and off. The upshot of the installation instructions was that I had to connect the printer to a computer via USB and then use the printer settings to reconnect it to the network. Getting this to work successfully took 2 hours.

The growing pains of upgrading technology are never fun. However, the new hardware is a significant improvement. The router has allowed me to set QoS priorities for Netflix streaming and Skype, improving my experience with both. The new iMac is much faster than my old one. But as usual, no gain without some pain.

My next upgrade project is to help my spouse migrate all of his data from a variety of media (hard drives, disks, etc.) onto his desktop, an external hard drive, and the cloud. This way he’ll have easy access to it, and it’ll be on more modern media, which will hopefully help ensure its future accessibility.

Ensuring future access to born-digital content is an issue near and dear to me because I’ve lost content in the past. 10+ years ago, when I was a penniless graduate student, I made a cookbook for various members of my family using (horrors) Microsoft Publisher. There were originally 6-8 print copies made by photocopying and hand binding. Unfortunately, when we got rid of that computer, the file which generated the cookbook was lost in the electronic ether. This really came back to bite me in the &#* when my brother got married and my new sister-in-law wanted a copy of the cookbook. I ended up using a paper copy to recreate the book in Pages, then print and bind it. Having learned my lesson from the first edition of the cookbook, I made the second edition into a PDF/A and, like the rest of my stuff, backed it up to both the cloud and an external drive. This proved to be a smart choice when another relative emailed me last week asking if “there are any copies of that family cookbook around”. 5 minutes later the PDF was on its way over the internet to the relative.

None of this would have been possible if I didn’t try to stay on top of what intellectual works I’ve created and their formats, and migrate them in a timely fashion. It’s a daunting task for just my personal stuff, which is why I think the preservation of born-digital content is one of the biggest challenges facing libraries today.

Theming an RSS feed in Drupal

2011 October 20
by Karen

I still do a bit of work with Drupal as part of my job at OCLC. Mostly that involves making tweaks to the Developer Network website, which runs on Drupal. In the last couple of weeks I’ve wanted to create a custom RSS feed that would let me move bits of content from the site into other webpages OCLC offers. The problem is that the standard Drupal RSS feeds aren’t exactly what I want. The content I’m creating an RSS feed for uses CCK heavily and has lots of fields that I want to expose in the feed. The standard Drupal settings let me control which of these fields show when the node is rendered as RSS. The problem is that this only controls whether or not those fields appear in the description element of the feed; it doesn’t allow you to map them to other elements, which is what I wanted to do. Luckily I found a nice blog post by someone who wanted to do something similar. The upshot is that I’m theming the RSS feed so that I have two new fields: dc:creator (which I’ve remapped from the username of the user who created the node to another field in my content) and media:content, which I’m using for the images that I need to incorporate into the other system.

After reading and trying to incorporate the code shown in the aforementioned blog post, I encountered two issues:

  1. dc:creator already exists as a field and I need to put different data in it
  2. the image associated with my node is very large, too large in fact for the application that is going to consume the RSS feed. So I needed to send a smaller version of it

Solving the first problem is a bit challenging because the standard dc:creator element is output via an array variable, $item->elements. This array is generated by the node API part of Drupal, and if you want to override its behavior, Drupal expects you to write a module. The result is that if you just want to deal with theming, not write a module, you have two options:

  1. Manipulate the $item->elements array when it comes to the view, using a preprocess function (see the sketch just after this list)
  2. Perform an ereg_replace on the $item_elements variable (which is a string generated from the $item->elements array). This can be done in a customized views-view-row-rss.tpl.php file, something I had to do already to customize the RSS feed. Basically, ereg_replace finds a particular pattern in the string and replaces it with whatever you want; if you replace it with nothing, the pattern is eliminated from the string.
    $item_elements = ereg_replace('<dc:creator>.*</dc:creator>', '', $item_elements);  // removes author
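
For completeness, the first option might look something like this. This is only a sketch, assuming Drupal 6 Views, where each entry in $item->elements is an array with 'key' and 'value' entries (the "mytheme" prefix is a placeholder for your theme's name):

function mytheme_preprocess_views_view_row_rss(&$vars) {
  $item = &$vars['row'];
  // Drop the stock dc:creator element before it gets flattened into
  // the $item_elements string.
  foreach ($item->elements as $delta => $element) {
    if ($element['key'] == 'dc:creator') {
      unset($item->elements[$delta]);
    }
  }
}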

Solving the second problem involved leveraging the ImageCache module and its functions. Some of my readers may have heard of or used ImageCache before, but for those of you who aren’t familiar with the module, its basic purpose is to let a site maintainer upload a single copy of an image and then create presets for other versions of that image. This comes in really handy if you want a very large image on a node’s display page but smaller versions on a page that lists a bunch of nodes. ImageCache is really easy to use, and you can access different presets via the GUI parts of Drupal. In addition, though, ImageCache has functions that let you access the presets you’ve built from your theme. The theme function applies a particular preset to an image and outputs said image with the appropriate HTML. In my case I didn’t need to output the image but rather the path to a particular preset of an image, so I could build a media:content element in my feed. This required a slightly different function: imagecache_create_path.
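
As a quick illustration of the difference (a minimal sketch; the 'thumbnail' preset and field_image field names are made up):

<?php
// Output a complete <img> tag for the 'thumbnail' preset of an image:
print theme('imagecache', 'thumbnail', $node->field_image[0]['filepath']);

// Or just get the path to the derivative image, e.g. for a feed element:
$path = imagecache_create_path('thumbnail', $node->field_image[0]['filepath']);
?>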

My final code looked like this:

New preprocess function for my template.php file

function devnet_preprocess_views_view_row_rss(&$vars) {
  $view = &$vars['view'];
  $options = &$vars['options'];
  $item = &$vars['row'];

  // Use the [id] of the returned results to determine the nid in [results]
  $result = &$vars['view']->result;
  $id = &$vars['id'];
  $node = node_load($result[$id - 1]->nid);

  $vars['title'] = check_plain($item->title);
  $vars['link'] = check_url($item->link);
  $vars['description'] = check_plain($item->description);
  //$vars['description'] = check_plain($node->teaser);
  $vars['node'] = $node;
  $vars['item_elements'] = empty($item->elements) ? '' : format_xml_elements($item->elements);
}

New views-view-row-rss.tpl.php file

<item>
  <title><?php print $title; ?></title>
  <link><?php print $link; ?></link>
  <description>
  <![CDATA[
  <?php print $node->body; ?>
  ]]>
  </description>
  <?php
  // Build dc:creator from the contact name and institution fields.
  $creator = '';
  if (!empty($node->field_contact_name[0]['value'])):
    $creator = $node->field_contact_name[0]['value'];
  endif;
  if (!empty($node->field_contact_name[0]['value']) and !empty($node->field_institutions_organizations[0]['value'])):
    $creator = $creator . ', ';
  endif;
  if (!empty($node->field_institutions_organizations[0]['value'])):
    $creator = $creator . $node->field_institutions_organizations[0]['value'];
  endif;

  // Build the URL for the resized screenshot via the ImageCache preset.
  $screenshot_url = '';
  if (!empty($node->field_application_screenshot[0]['fid'])):
    $screenshot = field_file_load($node->field_application_screenshot[0]['fid']);
    $screenshot_filepath = $screenshot['filepath'];
    $screenshot_url = $GLOBALS['base_url'] . '/' . imagecache_create_path('application_gallery_export', $screenshot_filepath);
  endif;

  // Strip the stock dc:creator element so ours is the only one.
  $item_elements = ereg_replace('<dc:creator>.*</dc:creator>', '', $item_elements);  // removes author
  ?>
  <dc:creator><?php print $creator; ?></dc:creator>
  <media:content url="<?php print $screenshot_url; ?>" medium="image"/>
  <?php print $item_elements; ?>
</item>

The whole project took me a couple of hours, but it has allowed me to move my Drupal data around and incorporate it into other systems much more effectively. Additionally, you can see from this project why I often say that feeds are the poor man’s APIs. Using this feed I can share data between systems in real time, and I didn’t need to build a fancy API from scratch. All I needed to do was understand the extensibility of RSS and Drupal. If you want, you can check out the feed on the Developer Network site.
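
To make the “poor man’s API” point concrete, consuming the themed feed from another PHP page might look something like this (a sketch; the feed URL below is a placeholder, not the real Developer Network address):

<?php
// Read the custom feed and pull out the extra elements we themed in.
$feed = simplexml_load_file('http://example.org/gallery/feed');
foreach ($feed->channel->item as $item) {
  $creator = (string) $item->children('http://purl.org/dc/elements/1.1/')->creator;
  $image = (string) $item->children('http://search.yahoo.com/mrss/')->content->attributes()->url;
  print htmlspecialchars((string) $item->title) . ' by ' . htmlspecialchars($creator) . "\n";
  print 'Image: ' . htmlspecialchars($image) . "\n";
}
?>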

 

Why I moved my book collection list to Goodreads

2011 August 29
by Karen

I’ve been using LibraryThing for a long time to keep track of what I’m reading, since 2006 in fact. But recently I decided to make the move to Goodreads instead. I like LibraryThing a lot, but I’ve been wanting to do more with my list of stuff, and there are little features that LibraryThing is missing.

I kind of feel bad that I moved especially since all my data didn’t come across just right. So I’m going back and making corrections using the nice spreadsheet I exported from LibraryThing.

LibraryThing and Goodreads are quite similar, but for me Goodreads has a couple of advantages: RSS feeds for everything and a more robust read and write API. Why does that matter to me? Well, mostly because I like being self-reliant, and having feeds and APIs allows me to build neat things off of my book data. This isn’t something I can do as easily with LibraryThing. Yes, LibraryThing has some nice read APIs, but it doesn’t have any APIs for writing data, which makes me sad. I was hoping to be able to keep both tools, but I’m not sure I can, because while I can get feeds of the latest items I’ve added or read in Goodreads, I have no programmatic way to push this data over to LibraryThing. By the same token, I could push new things I’ve added to LibraryThing over to Goodreads using the Goodreads API, because LibraryThing has a feed of items added. But I can’t seem to get a feed of a particular collection of things from LibraryThing, which would allow me to update both LibraryThing and Goodreads when I finish reading something.

So I’ve given up maintaining both for now, until perhaps I find a better way to keep them in sync. In the meantime, I have the mindless task of making sure all my tags, ratings, and read dates are correct ahead of me. On the bright side, I’m working in my spare time on a nifty little app that uses my Goodreads data. I hope I’ll have something cool to share in January, that is, if I can eke out a little bit more coding time.

Building a Presentation that Works

2011 July 28
by Karen

There are lots of things I take for granted knowing how to do, and do reasonably effectively. However, as I’ve progressed in my career I’ve learned that some of these things (coding, writing, presenting) others find more challenging. People often come up and ask me how I do what I do. At technology conferences this question usually gets asked about coding, and I’ve posted about my process for that before. However, I also often get asked about writing and presenting, so I thought it might be useful to share a bit about my process for both of these endeavors. I want to start with how I go about putting together a “good” presentation and focus this post on that. I’ll put up a post about my writing process a little later.

Before I start building any presentation I always ask myself some basic questions:

  • How long do I have to talk?
  • Who is the audience?
  • What’s the story I want to tell?

These three questions are critical when putting together a presentation because they greatly change the nature of the talk. Giving a lightning talk at code4lib is nothing like giving a 20-minute or a 50-minute presentation there. Length seriously matters because it governs not only how much material you can cover but also the length of your story arc. My personal rule of thumb is no more than one slide per minute, because you just can’t cover more material than that. So for a 20-minute talk, that means 15 slides with 5 minutes for questions. Like all rules, though, this one isn’t immutable: if you’re stepping through examples with slides, you’re probably going to go faster than a slide per minute. The key is to run through your stuff and make sure the content is going to fit in the time. Nothing is worse than having to hurry through or skip over the “AH-HA” piece of your presentation.

Audience is also a critical factor when building presentations. Currently I spend a good chunk of my time interacting with a pretty technical audience, so many of my presentations make assumptions about people’s level of technical expertise. When I’m dealing with a non-technical audience, I have to set all these assumptions aside and make sure I lay a strong foundation in my presentation. Otherwise folks get lost along the way, and once again you don’t have a successful “AH-HA” moment. When I’m working on a presentation for an audience that isn’t my typical focus, I also run it by others to make sure the story arc of the presentation works, because I sometimes can’t get enough distance from my ingrained knowledge to know if I might have lost people.

The story you’re telling in your presentation is incredibly important. Make sure you re-read the abstract for your presentation in the conference program; this is the story you said you were going to tell, and your audience is going to feel weird and upset if you tell a different one. Second, if you were asked to give a talk, run your story back by the people who invited you, particularly if they gave you something fairly open-ended as a starting topic. You want to make sure that expectations are met.

Once I’m rock solid on the story I’m telling in my presentation, I almost always begin by creating a presentation with the number of slides I need and then blocking out my story arc very roughly across the slides. I’m really minimalist about what I put on the slides at this point because I’m just trying to represent the gist of where I’m starting and where I need to end up. Then I go back and start fleshing out the presentation slide by slide. When I’m working on a slide I try to focus on what it is I need to get across and what I think is the best way to accomplish that. Photos, graphics, and charts are all useful tools for this. As I work I’m always checking to make sure the flow works. It isn’t uncommon for me to reorder things, or sometimes completely pitch a slide if it doesn’t fit in the arc.

If I get stuck on a particular slide, or I’m hung up on the fact that the arc isn’t quite the way I like, I take a break. This is ultimately why I usually carve out a week for this type of work: I need to spread the work over time so that I have time to decompress and reflect while I’m putting things together. So if a 50-minute presentation is going to take 6 hours of prep, I spread those hours across a week. When I’m done I often share the final product with others so they can perform a sanity check and provide feedback. I’ve found this invaluable on a number of occasions for catching little mistakes that make you as a presenter look unprofessional. Like anything, this is all about experience and practice.

Setting a Custom Header in a View Using Arguments

2011 June 14
by Karen

In Drupal, Views can have a header or footer that appears before or after the View. Often one will use this to put in information related to what the View is displaying. However, sometimes you need this information to be context specific. For example, you have a View covering different terms, and you want the header to change based on the term the View is being limited to by argument. This is fairly easy to accomplish by adding some PHP code to the header field. Below is an example:

<?php
$view = views_get_current_view();
$term = $view->args[0];

if ($term == 'Some Term') {
  echo '<p>This is the message you want to show for that term</p>';
}
?>

If you have lots of terms and you want a specific message to show for each of them, then you may want to use switch/case syntax, as in the sketch below. All in all this is a pretty simple trick which allows you to customize the content of your View relatively easily.
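
A minimal sketch of the switch/case variant (the term names and messages here are made up):

<?php
$view = views_get_current_view();
$term = $view->args[0];

// One message per term; the default catches any term without its own message.
switch ($term) {
  case 'Some Term':
    echo '<p>This is the message for Some Term</p>';
    break;
  case 'Another Term':
    echo '<p>This is the message for Another Term</p>';
    break;
  default:
    echo '<p>This is the fallback message</p>';
}
?>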

On a coding streak

2011 June 3
by Karen

The last few months I’ve been working on a couple of projects that have required me to write a lot of code. I’m coming to the end of the projects and have come to the realization that I haven’t written this much code since I left my job at SUNY Cortland more than 6 years ago.

Much of that is because my job as the Head of Web Services at the University of Houston required a completely different skill set. While I did write some code during that time, it wasn’t anything like the years I spent within SUNY. One of the reasons I left UH was that I wanted to get back to coding and coders. My first year at OCLC I wrote quite a bit of demo code, which was extremely gratifying, but none of it was quite like the projects I’m just finishing up.

The irony of it all is that even without doing lots of coding for the last 6 years, I feel like I’m a better coder now than I was then. Some of it has to do with maturity and how I think about and plan out projects. A lot of it has to do with confidence. I’ve worked on lots of little projects during the last 6 years and been part of planning much larger projects. I’ve worked with people I consider more skilled developers and been able to hold my own. I’ve also taught workshops on coding and development to others. That has given me a confidence that I didn’t have early on. In the early days, I’d spend a lot of time questioning myself and trying to find the “right” way to write the code.

Now I realize that there is just the “right” for right now. Not every project needs to be perfect, and striving to get it just right sometimes means that it just doesn’t ever get done. I’d love to go back and spend another day pulling apart all the code I’ve written for these projects and optimizing it a bit more. Do I need to do that? Not really. It sure would be advantageous if I want to share it with others, but for the original purpose of the project it’s not a requirement, and I have to prioritize where optimization and cleanup fall on my larger list of To Dos.

Another trap I’d often find myself paralyzed by early on was hitting a problem that I couldn’t solve and getting swallowed by it. I have the confidence now to know that everyone has to Google or ask for help sometimes, and the result is that I’m much more willing to ask. I’ve also learned when “enough is enough”. Sometimes you get to the point where you need to take a break, because you’re never going to get enough perspective on the problem to solve it unless you back away from the wall you’re banging your head against.

Probably the biggest thing I’ve taken away from working on these projects, though, is that I truly enjoy coding. It’s probably because the idea of making something really appeals to me, and I like to see tangible results. It certainly makes the days go quickly.

Development Methodologies

2011 April 28
by Karen

One of the most frequent questions I get from people when I’m teaching them about web services, mashups, and coding is “how do you get started?” Typically what they mean by this is: how do I go about building something from start to finish? I have my own methodology for building things, and it goes something like this:

  1. Outline what the thing I’m building is supposed to do. Basically this is what I’ve learned many development shops call “creating user stories”. User stories are basically stories about how the user would use the tool being built. For most complex systems there are lots of them. For a simpler system there might be only one user story with sub-stories. Most of my projects have a single user story with sub-stories.
  2. Once I have the user stories I map out (on paper) how the application is going to work including anything about what other systems it needs to interact with. I try to be very detailed with this including information like what information I need to send from my application to an external web service to get what I want back. If I’m dealing with a new web service this is where I do my research about the service and also if I need to find a new service to suit my needs then I do the research for that here. I’ll pinboard documentation for any new service I’m using and any examples I can find.
  3. Once I’ve mapped everything out on paper, I transfer this to what I refer to as pseudo-code, which is basically the skeleton of my real code. In my case it is the real PHP code file that is going to be my application. It consists mostly of lots of comments about what’s supposed to be going on where in the code. I also pull out pieces of code that repeat as part of this process and make them functions. The functions are typically empty except for comments about what they do (see the sketch after this list). It isn’t uncommon for me to have to do more research in the pseudo-code phase: if I realize while writing my pseudo-code that I’m going to need to use cases in my code and I haven’t done that in a while, I’ll look that up and pinboard examples for later.
  4. Once I have my pseudo-code I usually start the real coding on the project filling in sections as I go. I use the stuff I’ve pinboarded to help me work faster.
    I try very hard to work on the code in testable sections. This allows me to test as I go and make sure things are working the way they should before the whole thing is written, so I don’t get caught in a tangle of “where the heck is my error”. It also gives me stopping points for when I need to take a break or a meal, or to end the day. I like to make notes as I go, both within the code and in my task list, which I usually use to document the project. I almost always have a task list with subtasks so I can keep track of the fact that I wrote code to accommodate all the user stories, and I like to check off functionality in my task list as I go.
    If I’m working with a new service or function that I’m really not familiar with, I’ll open up a separate coding document to do some testing outside of my application. This allows me to understand how the new thing I’m learning works separately from the complexity of the application.
  5. The last thing I do is test everything to make sure it works properly. I do this based on the user stories, pretending I’m a user working with the application. If something is broken I fix it.
  6. If the application is for learning/demonstration purposes, a new thing I’ve been doing is writing up a document about how the code works for others to learn from. This can be the hardest part of the process if I’ve been lazy about documenting the code. It’s also much worse to create this type of thing later, because you have to go back and relearn your own code.
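
To make the pseudo-code step (item 3) concrete, here is the kind of skeleton I mean. Everything in it is hypothetical: the file, the function names, and the web service are placeholders, not real code from one of my projects.

<?php
// lookup.php - skeleton for a hypothetical ISBN lookup mashup.
// At this stage the file is mostly comments; real code gets filled
// in section by section later.

// 1. Read the list of ISBNs the user wants information about.

// 2. For each ISBN, call the external web service and parse the response.
function fetch_book_data($isbn) {
  // Build the request URL, fetch it, and return the parsed record.
}

// 3. Merge the remote record with any local notes we keep.
function merge_records($remote, $local) {
  // Combine the two records, preferring local values on conflict.
}

// 4. Output the merged records as an HTML list.
?>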

My way of doing things is just one possible method a developer might use to build things. If you work as part of a team you have lots of tools at your disposal, and the process is stricter because people can be working on the same code at the same time. In terms of tools, I use version control now; I don’t know what the heck I did without it and would go insane if I didn’t have it. I’ve also mentioned my testing setup before. All of these make the development process go more smoothly for me. Every developer and development shop has to work out their own practices, but I thought it would be helpful to share mine since I frequently get asked about them.

Amazon Cloud Player and Drive

2011 April 1
by Karen

This week Amazon debuted its new Cloud Drive and Cloud Player service. It generated a ton of buzz in the library Twitterverse, and Jason Griffey has a nice post on the ALA TechSource blog about how it changes, once again, the way in which we think about content. For me the debut generated interest mostly because I am a Mac user with an Android phone, which makes moving media from Mac to Android a major pain. So much so that I often just listen to music streamed off of Last.fm rather than fool with syncing songs and playlists. I’ve tried several programs and nothing works in a fashion that makes me happy. Easy syncing between my computer and phone would be a dream, particularly if it was automatic.

However, the announcement was slightly ironic for me. First of all, I happened to buy an album from Amazon the very day the service went live, meaning I instantly had content to play with. Being curious, test I did, and there is no doubt that being able to easily play stuff from my phone is VERY cool. The greater irony, though, is that my entire media collection already lives in Amazonland (S3, specifically). Amazon already has my stuff, but I can’t get to it via Cloud Player, not unless I re-upload it and give them more money. I find this disconnect silly and more than mildly annoying. Partly because it is clear that the program I’m using now (JungleDisk) to keep my computer and my S3 space synced is superior to anything Amazon yet offers to do this automatically with Cloud Drive. Partly because it seems irrational to re-upload stuff.

I’d be a happy camper if Amazon would let me play the music in my S3 storage with Cloud Player. Here’s hoping that Amazon allows users to use these services in conjunction in the near future.

Library Ebooks Showdown

2011 March 1
by Karen

There has been a tremendous amount of discussion about the upcoming changes to Overdrive as a result of publishers changing their policies (specifically HarperCollins). I’ve been using Overdrive since getting my ereader almost a year ago, and I was pretty impressed when their iPhone/iPod application was released earlier this spring. While getting Overdrive ebooks onto a reader isn’t easy, it is a workable process, and I think that Overdrive wants to improve the user experience when they can. However, Overdrive is a middleman between publishers, libraries, and library users. Being a middleman means that the level of control they have is limited, a position which libraries themselves can potentially understand. I won’t enumerate the changes; many others have done that. What I will say is that I feel this is not a win for Overdrive, libraries, or readers of ebooks. These changes will make an already bad user experience even worse for library users. Furthermore, it will make the management of ebooks and collections more difficult for libraries and Overdrive. Overdrive will have to play content cop, and that means they’ll likely cut off access whenever they have any doubt.

At the crux, it would seem that some publishers think libraries aren’t paying enough for ebooks, because ebooks can be lent indefinitely and never need replacing when they wear out. There is a huge fallacy in this argument: that different rules should apply to print books and ebooks, and that those rules should differ between the two mediums in ways that purely benefit the seller, not the consumer. Ebooks cost less to distribute, which benefits sellers, and there are no printing costs, eliminating the need to sell a certain number of copies because a set number were printed. Yet many ebooks cost as much as their print counterparts and come with fewer rights for readers.

From a consumer’s perspective, is it reasonable to pay $8-$14 for an ebook, read it once, and then not be able to do anything with it? In my opinion, heck no. But these are the rules that all ebook purchases are subject to now. Such rules actually drive me to buy FEWER ebooks, because it seems wasteful to spend that much on what ends up being a one-time rental. I’m a voracious reader and many things I read just once. If I buy those things electronically, I can’t resell them or donate them; they basically become throwaways. So these are the things that I borrow from the library. Since I purchased my ereader in May 2010 I’ve read 70 library ebooks. Furthermore, borrowing books has often introduced me to new authors whom I wouldn’t have risked buying otherwise. The ability to borrow and trade content helps keep the ecosystem of written content working and healthy. When you make it more difficult to do these things, you hurt the ecosystem.

Therefore, if we want to continue to have a vibrant community where written content is created and consumed, something has got to change. A good starting place is the eBook User’s Bill of Rights. I hope that it will get everyone, from readers and libraries to publishers and authors, to consider a path to a different model which is fair to all parties and helps keep the ecosystem of the written word thriving.

Login Usability Tweak for Drupal

2011 February 28
by Karen

One of the things that has been annoying me about one of the Drupal sites I work on is that I want people to be able to log in and end up back on the page they clicked the login link from. I know how to do this when I create a login link from scratch: you just tack ?destination= plus the node URL onto the user/login URL (e.g. user/login?destination=node/42). The problem is that my login link lives in a Drupal menu so that I can easily update it, and because of that I can’t code the destination parameter into it.
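
When you are generating the link yourself in code, Drupal will even build the destination string for you. A minimal sketch (Drupal 6):

<?php
// A hand-built login link that returns the user to the current page;
// drupal_get_destination() yields something like "destination=node/42".
print l(t('Login'), 'user/login', array('query' => drupal_get_destination()));
?>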

The problem has been irritating me for a while, and I just hadn’t had time to go looking through the Drupal docs and groups for a solution until today. The irony is that the solution is a very simple one. Basically it all boils down to theming and the all-powerful template.php file. If you override the menu_item_link theme function in your template.php file so that it includes

// add destination for login link
if ($link['href'] == 'user/login') {
  $link['localized_options']['query'] = drupal_get_destination();
}

you’ll get any menu login links to auto-redirect to the page the user was on when they clicked the “Login” menu item. A little code for a huge usability improvement. The only way I could make things better would be to make the login menu item AJAXy, bringing up an overlaid form rather than taking the user to a different page. The Ajax Login/Register module might do the trick, but that’s a project to think about for another day.
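
For reference, the full override would look something like this (a sketch built on Drupal 6’s stock theme_menu_item_link(); “devnet” stands in for your theme’s name):

// In template.php: a copy of theme_menu_item_link() with the
// destination handling added for the login link.
function devnet_menu_item_link($link) {
  if (empty($link['localized_options'])) {
    $link['localized_options'] = array();
  }
  // Add destination for login link.
  if ($link['href'] == 'user/login') {
    $link['localized_options']['query'] = drupal_get_destination();
  }
  return l($link['title'], $link['href'], $link['localized_options']);
}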