Category Archives: Computing

On triviality

Just now I read a blog post by a programmer whose premise was that it would be “almost trivial” to do something — and I proceeded to roll my eyes.

However, it then occurred to me to interrogate my reaction a little. Why u so cranky, Galen?

On the one hand, the technical task in question, while certainly not trivial in the sense that an inexperienced programmer could come up with a solution in a couple of minutes, is in fact straightforward enough. Writing new software to do the task would require no complex math — or even any math beyond arithmetic. It could reasonably be done in a variety of commonly known languages, and there are several open source projects in the problem space that could be built on or cribbed from. There are quite a few potential users of the new software, many of whom could contribute code and testing, and the use cases are generally well understood.

On the other hand (and one of the reasons why I rolled my eyes), the relative ease of writing the software masks, if not the complexity of implementing it, the effort that would be required to do so. The problem domain would not be well served by a thrown-over-the-wall solution; it would take continual work to ensure that configurations would continue to work and that (more importantly) the software would be as invisible as possible to end users. Sure, the problem domain is in crying need of a competitor to the current bad-but-good-enough tool, but new software is only the beginning.

Why? Some things that are not trivial, even if the coding is:

  • Documentation, particularly on how to switch from BadButGoodEnough.
  • Community-building, with all the emotional labor entailed therein.

On the gripping hand: I nonetheless can’t completely dismiss appeals to triviality. Yes, calling something trivial can overlook the non-coding work required to make good software actually succeed. It can sometimes hide a lack of understanding of the problem domain; it can also set the coder against the user when the user points out complications that would interfere with ease of coding. The phrase “trivial problem” can also be a great way to ratchet up folks’ impostor syndrome.

But, perhaps, it can also encourage somebody to take up the work: if a problem is trivial, maybe I can tackle it. Maybe you can too. Maybe coming up with an alternative to BadButGoodEnoughProgram is within reach.

How can we better talk about such problems — encouraging folks to acknowledge that the code is often only the beginning, while not loading them down with so many caveats and considerations that only the more privileged among us feel empowered to attempt the problem?

Ada Lovelace Day, during which I call out some folk for awesomeness

Today is Ada Lovelace Day, a celebration of the work and achievements of women in science, technology, engineering, and math.

And library technology, whose place in STEM is not to be denied.

Here are a few (and I should emphasize that this is a very incomplete list) of the women I have had the privilege to collaborate with and learn from:

  • Ruth Bavousett: Ruth is a Perl monger, contributor of many patches to Koha, has served as Koha’s translation manager, and is an author for
  • Katrin Fischer: Katrin has contributed over 500 patches to Koha and has served many terms as Koha’s quality assurance manager. QA Manager is not an easy position to occupy, and never comes with enough thanks, but Katrin has succeeded at it. Thanks, Katrin!
  • Christina Harlow (@cm_harlow): Christina walks the boundary between library metadata and library software and bridges it. In her blog’s title, she gives herself the sobriquet of “metadata lackey” — but to me that seems far too modest. She’s been instrumental in the revival of Mashcat this year.
  • Kathy Lussier: Kathy has contributed both code and documentation to the Evergreen project and has served in many roles on the project, including on its oversight board and its web team. She has spearheaded various initiatives to make the Evergreen project more inclusive and is a strong advocate for universal, accessible design.

Henriette Avram [image via Wikipedia]

Although she is no longer with us, Henriette Avram, the creator of the MARC format, deserves a callout today as well: it is not every programmer who ships, and moreover, ships something that remains in use 50 years later. I am sure that Avram, were she still alive and working, would be heavily involved in libraries’ efforts to adopt Linked Open Data.

Henriette Avram versus the world: Is COBOL capable of processing MARC?

Is the COBOL programming language capable of processing MARC records?

A computer programmer in 2015 could be excused for thinking to herself, what kind of question is that!?! Surely it’s obvious that any programming language capable of receiving input can parse a simple, antique record format?
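And indeed, the format bears that out: a binary MARC 21 record is just a 24-byte leader, a directory of fixed-width 12-byte entries, and delimiter-terminated fields, so reading one takes nothing beyond string slicing and arithmetic. As a rough illustration — a minimal sketch in Python rather than COBOL, with a hand-built example record; real code would use a library such as pymarc and handle character encodings properly:

```python
# Minimal sketch of parsing a binary MARC 21 record: a 24-byte leader,
# then a directory of fixed-width 12-byte entries, then the variable
# fields. Nothing beyond slicing and arithmetic is required.

def parse_marc(record: bytes):
    """Return a list of (tag, field data) pairs from one MARC record."""
    leader = record[:24]
    base = int(leader[12:17])               # base address of the data portion
    directory = record[24:base - 1]         # ends with a field terminator (0x1E)
    fields = []
    for i in range(0, len(directory), 12):  # tag (3) + length (4) + start (5)
        tag = directory[i:i + 3].decode()
        length = int(directory[i + 3:i + 7])
        start = int(directory[i + 7:i + 12])
        # field data is `length` bytes, the last being the 0x1E terminator
        data = record[base + start : base + start + length - 1]
        fields.append((tag, data.decode()))
    return fields

# A hand-built one-field record (tag 245, subfield $a "A title."):
leader = b"00049nam a2200037 a 4500"        # record length 49, base address 37
directory = b"245" + b"0011" + b"00000"     # tag, field length, starting offset
field_245 = b"\x1faA title.\x1e"            # subfield delimiter, data, terminator
record = leader + directory + b"\x1e" + field_245 + b"\x1d"

print(parse_marc(record))                   # [('245', '\x1faA title.')]
```

Which is, of course, exactly the sort of fixed-width, delimiter-driven processing that COBOL was built for.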

In 1968, it apparently wasn’t so obvious. I turned up an article by Henriette Avram and a colleague, “MARC II and COBOL,” that was evidently written in response to a review article by Hillis Griffin in which he stated:

Users will require programmers skilled in languages other than FORTRAN or COBOL to take advantage of MARC records.

Avram responded to Griffin’s concern in the most direct way possible: by describing COBOL programs developed by the Library of Congress to process MARC records and generate printed catalogs. Her article even includes source code, in case there were any remaining doubts!

I haven’t yet turned up any evidence that Henriette Avram and Grace Hopper ever met, but it was nice to find a close, albeit indirect connection between the two of them via COBOL.

Is the debate between Avram and Griffin in 1968 regarding COBOL and MARC anything more than a curiosity? I think it is — many of the discussions she participated in are reminiscent of debates that are taking place now. To be fair to Griffin, I don’t know enough about the computing environment of the late sixties to say definitively that his statement was patently ill-informed at the time — but given that IBM had announced by 1962 that it was standardizing on COBOL, it is hardly surprising that Avram and her group would be writing MARC processing code in COBOL on an IBM/360 by 1968. To me, the concerns that Griffin raised seem on par with objections to Library Linked Data that assume that each library catalog request would necessarily mean firing off a dozen requests to RDF providers — objections that have rejoinders that are obvious to programmers, but perhaps not so obvious to others.

Plus ça change, plus c’est la même chose?


I was a math major in college, but it was my student job at the college library that ended up setting the current direction for my career and life. I started out filing update pages for the Standard Federal Tax Reporter (and sometimes reading it!). I worked for most of the departments at the library at various points in time, and ended up as a junior systems administrator.

Towards the end of my undergraduate career, the library changed its ILS, and I wrote most of the code to extract and migrate the library’s data from the old system to the new one. After college, I started working for the vendor of the new ILS. I have spent the last nine years migrating data, programming, moving from Chicago to Anchorage to Tallahassee to Chicago, finding love and kitty cats, and watching and experiencing the trials and travails of the library automation industry.

I changed jobs recently, and am now coding for and supporting an open source integrated library system, Koha.

I plan to blog about library automation, open source software, metadata and the many headaches inspired by it, and anything else I happen to think of.