Let’s search a Koha catalog for something that isn’t at all controversial:

Screenshot of results from a catalog search of a Koha system for "anarchist"

What you search for in a library catalog ought to be only between you and the library — and that, only briefly, as the library should quickly forget. Of course, between “ought” and “is” lies the Devil and his details. Let’s poke around with Chrome’s DevTools:

  1. Hit Control-Shift-I (on Windows)
  2. Switch to the Network tab.
  3. Hit Control-R to reload the page and get a list of the HTTP requests that the browser makes.

We get something like this:

Screenshot of Chrome DevTools' Network tab showing requests made when doing the "anarchist" Koha catalog search.

There’s a lot to like here: every request was made using HTTPS rather than HTTP, and almost all of the requests were made to the Koha server. (If you can’t trust the library catalog, who can you trust? Well… that doesn’t have an answer as clear as we would like, but I won’t tackle that question here.)

However, the two cover images on the results page come from Amazon:


What did I trade in exchange for those two cover images? Let’s click on the request and see:

:authority: images-na.ssl-images-amazon.com
:method: GET
:path: /images/P/0974458902.01.TZZZZZZZ.jpg
:scheme: https
accept: image/webp,image/apng,image/*,*/*;q=0.8
accept-encoding: gzip, deflate, br
accept-language: en-US,en;q=0.9
cache-control: no-cache
dnt: 1
pragma: no-cache
referer: https://catalog.libraryguardians.com/cgi-bin/koha/opac-search.pl?q=anarchist
sec-fetch-dest: image
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36

Here’s what was sent when I used Firefox:

Host: images-na.ssl-images-amazon.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0
Accept: image/webp,*/*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://catalog.libraryguardians.com/cgi-bin/koha/opac-search.pl?q=anarchist
DNT: 1
Pragma: no-cache

Amazon also knows what my IP address is. With that, it doesn’t take much to figure out that I am in Georgia and am clearly up to no good; after all, one look at the Referer header tells all.

Let’s switch over to using Google Books’ cover images:


This time, here are the request headers in Chrome:

:authority: books.google.com
:method: GET
:path: /books/content?id=phzFwAEACAAJ&printsec=frontcover&img=1&zoom=5
:scheme: https
accept: image/webp,image/apng,image/*,*/*;q=0.8
accept-encoding: gzip, deflate, br
accept-language: en-US,en;q=0.9
cache-control: no-cache
dnt: 1
pragma: no-cache
referer: https://catalog.libraryguardians.com/
sec-fetch-dest: image
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36
x-client-data: CKO1yQEIiLbJAQimtskBCMG2yQEIqZ3KAQi3qsoBCMuuygEIz6/KAQi8sMoBCJe1ygEI7bXKAQiNusoBGKukygEYvrrKAQ==

and in Firefox:

Host: books.google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0
Accept: image/webp,*/*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://catalog.libraryguardians.com/
DNT: 1
Pragma: no-cache
Cache-Control: no-cache

On the one hand… the Referer now contains only the base URL of the catalog. I believe this is due to a difference in how Koha figures out the correct image URL. When using Amazon for cover images, the ISBN of the title is normalized and used to construct the URL for an <img> tag; Koha doesn’t currently set a Referrer-Policy, so the default of no-referrer-when-downgrade applies and the full referring URL is sent. Google Books cover image URLs cannot be constructed directly like that, so a bit of JavaScript queries a web service to get back the image URLs and, for reasons that are unclear to me at the moment, doesn’t send the full URL as the referrer. (Cover images from OpenLibrary are fetched in a similar way, but the full Referer header is sent.)
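The Amazon case can be sketched in a few lines of Python. This is a hypothetical illustration of the URL pattern visible in the request above, not Koha’s actual normalization routine (which, among other things, may also convert ISBN-13s to ISBN-10s):

```python
def normalize_isbn(isbn: str) -> str:
    """Strip hyphens and spaces from an ISBN. A simplification: real
    normalization does more, but this is enough for the illustration."""
    return "".join(ch for ch in isbn if ch.isalnum())

def amazon_cover_url(isbn: str) -> str:
    """Build the cover image URL following the pattern seen in the
    request headers above; the catalog drops it into an <img> tag."""
    return ("https://images-na.ssl-images-amazon.com/images/P/"
            f"{normalize_isbn(isbn)}.01.TZZZZZZZ.jpg")
```

Because the resulting <img> tag points straight at Amazon’s servers, it’s the patron’s browser that makes the request, carrying the Referer and every other header shown above.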

As a side note, the x-client-data header sent by Chrome to books.google.com is… concerning.

There are some relatively simple things that can be done to limit leaking the full referring URL to the likes of Google and Amazon, including

  • Setting the Referrer-Policy header via web server configuration or meta tag to something like origin or origin-when-cross-origin.
  • Setting referrerpolicy for <script> and <img> tags involved in fetching book jackets.
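As a rough sketch of what those two options might look like in the catalog’s markup (the ISBN here is just the one from the example above; the exact policy values chosen would be a local decision):

```html
<!-- Page-wide default via a meta tag: send only the origin, never the full URL -->
<meta name="referrer" content="origin">

<!-- Or per element, on the tags that fetch book jackets -->
<img src="https://images-na.ssl-images-amazon.com/images/P/0974458902.01.TZZZZZZZ.jpg"
     referrerpolicy="no-referrer" alt="Book cover">
```

The same policy can instead be set as a Referrer-Policy response header in the web server configuration.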

This would help, but only up to a point: fetching https://books.google.com/books/content?id=wdgrJQAACAAJ&printsec=frontcover&img=1&zoom=5 still tells Google that a web browser at your IP address has done something to fetch the book jacket image for The Anarchist Cookbook. Suspicious!

What to do? Ultimately, if we’re going to use free third-party services to provide cover images for library catalogs, our options to do so in a way that preserves patron privacy boil down to:

  • Only use sources that we trust to not broadcast or misuse the information that gets sent in the course of requesting the images. The Open Library might qualify, but ultimately isn’t beholden to any particular library that uses its data.
  • Proxy image requests through the library catalog server. Evergreen does this in some cases, and it wouldn’t be much work to have Koha do something similar. It should be noted that Coce does not help in the case of Koha, as all it does is proxy image URLs, meaning that it’s still the user’s web browser fetching the actual images.
  • Figure out a way to obtain local copies of the cover images and serve them from the library’s web server. Sometimes this is necessary anyway for libraries that collect stuff that wasn’t commercially sold in the past couple decades, but otherwise this is a lot of work.
  • Do nothing and figure that Amazon and Google aren’t trawling through their logs to correlate cover image retrievals with patrons’ potential reading interests. I actually have a tiny bit of sympathy for that approach — it’s not beyond the realm of possibility that cover image access logs are simply getting ignored, unlike, say, direct usage data from Kindle or Google Books — but ostriches sticking their heads in the sand are not known as a good model for due diligence.
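The proxying option can be sketched with nothing but the Python standard library. To be clear, this is a hypothetical illustration, not Evergreen’s or Koha’s actual code; the upstream URL pattern is the Amazon one from the example above:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def upstream_url(isbn):
    """Cover URL at Amazon for an already-normalized ISBN."""
    return ("https://images-na.ssl-images-amazon.com/images/P/"
            f"{isbn}.01.TZZZZZZZ.jpg")

class CoverProxy(BaseHTTPRequestHandler):
    """Answer /cover/<isbn> by fetching the image server-side, so the
    patron's browser (with its Referer, User-Agent, and IP address)
    never talks to Amazon directly."""

    def do_GET(self):
        if not self.path.startswith("/cover/"):
            self.send_error(404)
            return
        isbn = self.path[len("/cover/"):]
        try:
            with urllib.request.urlopen(upstream_url(isbn)) as resp:
                body = resp.read()
        except OSError:
            self.send_error(502)  # upstream unreachable
            return
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it locally:
# HTTPServer(("127.0.0.1", 8000), CoverProxy).serve_forever()
```

With something like this, only the library server’s IP address shows up in Amazon’s logs, and no Referer tied to a patron’s search; a real implementation would also want caching and rate limiting.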

Non-free book jacket and added content services are also an option, of course — and at least unlike Google and Amazon, it’s plausible that libraries could insist on contracts (with teeth) that forbid misuse of patron information.

My thanks to Callan Bignoli for the tweet that inspired this ramble.

A very brief post to start the new year. I’m not inclined to make elaborate resolutions for the new year other than being very firm that I will stop writing “2018” in dates by the end of March… or maybe April.

But seriously, I do want to write and engage more this year and more actively try new things. As I’m doing, right now, by trying WordPress’s new Gutenberg editor. Beyond that? We’ll see.

A brief digression on Gutenberg: I will bet a bag of coffee that the rollout of Gutenberg will become a standard case study in software management course syllabi. It encapsulates so many points of conflict: open source governance and the role of commercial entities in open source communities; accessibility and the politics of serving (or not) all potential users; technical change management and the balance between backwards compatibility and keeping up to date with modern technology (or, more cynically, modern fashions in technology); and managing major changes to the conceptual model required to use a piece of software. (And an idea for a future post, either by me or anybody who wants to run with it: can the transition of WordPress’s editor from a document-based model to a block-based model be usefully compared with the transition from AACR2/ISBD to RDA/LRM/LOD/etc.?) Of course, the situation with Gutenberg is evolving, so while initial analyses exist, obviously no definitive post mortems have been written.

But before I let this digression run away from me… onwards to 2019. May everybody reading this have a happy new year, or at least better one than 2018.

Hecate the tortoiseshell asleep, facing the camera, and curled up behind me in my chair.
Hecate sleeping behind me in my chair. Her New Year’s resolutions are pretty clear: play, sleep, and eat. Also, torment her humans and her brother/uncle cats.

A quotidian concern of anybody responsible for a database is the messy data it contains. See a record about a Pedro GonzÃ¡lez? Bah, the assumption of Latin-1 strikes again! Better correct it to González. Looking at his record in the first place because you’re reading his obituary? Oh dear, better mark him as deceased. 12,741 people living in the bungalow at 123 Main St.? Let us now ponder the wisdom of the null and the foolishness of the dummy value.
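That González mix-up is classic mojibake: UTF-8 bytes read as Latin-1. A quick illustration in Python (the reverse round trip is the standard fix for this particular double-encoding):

```python
name = "González"

# Write the name's UTF-8 bytes, then read them back assuming Latin-1:
# the two bytes of "á" come apart into two characters.
garbled = name.encode("utf-8").decode("latin-1")
print(garbled)  # GonzÃ¡lez

# The cleanup is the reverse round trip.
fixed = garbled.encode("latin-1").decode("utf-8")
print(fixed)  # González
```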

Library name authority control could be viewed as a grand collaborative data cleanup project without having to squint too hard.

What of the morality of data cleanup? Let’s assume that the data should be gathered in the first place; then as Patricia Hayes noted back in 2004, there is of course an ethical expectation that efforts such as medical research will be based on clean data: data that has been carefully collected under systematic supervision.

Let’s consider another context: whether to engage in batch authority cleanup of a library catalog. The decision of whether it is worth the cost, like most decisions on allocating resources, has an ethical dimension: does the improvement in the usefulness of the catalog outweigh the benefits of other potential uses of the money? Sometimes yes, sometimes no, and the decision often depends on local factors, but generally there’s not much examination of the ethics of the data cleanup per se. After all, if you should have the database in the first place, it should be as accurate and precise as you can manage consistent with its raison d’être.

Now let’s consider a particular sort of database. One full of records about people. Specifically, a voter registration database.  There are many like it; after all, at its heart it’s just a slightly overgrown list of names and addresses.

An overgrown list of names and addresses around which much mischief has been done in the name of accuracy.

This is on my mind because the state I live in, Georgia, is conducting a gubernatorial election that just about doubles as a referendum on how to properly maintain a voter registration list.

On the one hand, you have Brian Kemp, the current Georgia secretary of state, whose portfolio includes the office that maintains the statewide voter database and oversees all elections. On the other hand, Stacey Abrams, who among other things founded the New Georgia Project aimed at registering tens of thousands of new voters, albeit with mixed results.

Is it odd for somebody to oversee the department that would certify the winner of the governor’s race? The NAACP and others think so, having filed a lawsuit to try to force Kemp to step down as secretary of state. Moreover, Kemp has a history of efforts to “clean” the voter rolls; efforts that tend to depress votes by minorities—in a state that is becoming increasingly purple.  (And consider the county I live in, Gwinnett County. It is the most demographically diverse county in the southeast… and happens to have the highest rate of rejection of absentee ballots so far this year.) Most recently, the journalist Greg Palast published a database of voters purged from Georgia’s list. This database contains 591,000 names removed from the rolls in 2017… one tenth of the list!

A heck of a data cleanup project, eh?

Every record removal that prevents a voter from casting their ballot on election day is an injustice. Every one of the 53,000 voters whose registration is left pending due to the exact match law is suffering an injustice. Hopefully they won’t be put off and will vote… if they can produce ID… if the local registrar’s discretion leans towards expanding and not collapsing the franchise.

Dare I say it? Data cleanup is not an inherently neutral endeavor.

Sure, much of the time data cleanup work is just improving the accuracy of a database—but not always. If you work with data about people, be wary.

Yesterday I gave a lightning talk at the Evergreen conference on being wrong. Appropriately, I started out the talk on the wrong foot. I intended to give the talk today, but when I signed up for a slot, I failed to notice that the signup sheet I used was for yesterday. It was a good thing that I had decided to listen to the other lightning talks yesterday, as that way the facilitator was able to find me to tell me that I was up next.


When she did that, I initially asked to do it today as I had intended… but changed my mind and decided to charge ahead. Lightning talks are all about serendipity, right?

The talk went something like this: after mentioning my scheduling mix-up, I spoke about how I have been active in the Evergreen project for almost nine years. I’ve worn a variety of project hats over that time, including those of developer, core committer, release manager, member of the Evergreen Oversight Board, chair of the EOB, and so forth. While I am of course proud of the contributions I’ve made, my history with the project also includes being wrong about many things and failing a lot.

I’ve been wrong about coding issues. I’ve been responsible for regressions. I’ve had my share of brown-bag releases. I’ve misunderstood what library staff and patrons were trying to accomplish. I’ve made assumptions about the working conditions and circumstances of users that were very wrong indeed. Some of my bug reports and test plans have not been particularly clear.

Why bring up my wrongness? Prior to the talk, I had been part of a couple conversations about how some folks feel intimidated about writing bug reports or posting to the mailing lists for fear of being judged if their submission was not perfect. Of course, I don’t want people to feel intimidated; the project needs bug reports and contributions from anybody who cares enough about the software to make the effort. By mentioning how I — as somebody who is unquestionably a senior contributor to the project — have been repeatedly wrong, I hoped to humanize people like me: we’re not perfect. Perfection is not a requirement for gaining status in the community as a respected contributor — and that’s a good thing.

I also wanted to give permission for folks to be wrong, in the hopes that doing so might help lower a barrier to participating.

So much for the gist of the lightning talk. People in the audience seemed to enjoy it, and I got a couple nice comments about it, including somebody mentioning how they wished they had heard something like that as they were making their first contributions to the project.

However, I would also like to expand a bit on a couple points.

Permission to be wrong is not something I can grant all by myself. While I can try to model good ways of providing feedback (and get better myself at it; I’ve certainly been wrong many a time about how to do so), it sometimes doesn’t take much for an interaction with a new contributor (or an experienced one!) to become unwelcoming to the point where we lose the contributor forever. This is not a theoretical concern; while I think we have gotten much better over the years, there were certainly times and circumstances where it was very rational to feel intimidated about participating in the project in certain ways for fear of getting dismissive feedback.

Giving ourselves permission to be wrong is a community responsibility; by doing so we can give ourselves permission to improve. However, this can’t be treated as a platitude: it takes effort and thoughtfulness both to ensure that the community is welcoming at all levels, and to ensure that permission to be wrong isn’t accorded only to people who look like me.

One of the things that the conference keynote speaker Crystal Martin asked the community to consider was this: “Lift as you climb.” I challenge senior contributors to the Evergreen project — including myself — to take this to heart. I have benefited a lot by being able to be wrong; we should act to ensure that everybody else in the community can be allowed to be wrong as well.

It almost doesn’t need to be said that old-fashioned library checkout cards were terrible for patron privacy. Want to know who had checked out a book? Just take the card out of its pocket and read.

It’s also a trivial observation that there’s a mini-genre of news articles and social media posts telling the tales of prodigal books, returning to their library after years or decades away, usually having gathered nothing but dust.

Put these two together on a slow news day? Without care, you can end up not protecting a library user’s right to privacy and confidentiality with respect to resources borrowed, to borrow some words from the ALA Code of Ethics.

Faced with this, one’s sense of proportion may ask, “so what?” The borrower of a book returned sixty years late is quite likely dead, and if alive, not likely to suffer any social opprobrium or even sixty years of accumulated overdue fines.  Even if the book in question was a copy of The Anarchist Cookbook, due back on Tuesday, 11 May 1976, the FBI no doubt has lost interest in the matter.

Of course, an immediate objection to that attitude is that personal harm to the patron remains possible, even if not probable. Sometimes the borrower wants to keep a secret to the grave. They may simply not care to be the subject of a local news story.

The potential for personal harm to the borrower is of course clearer if we consider more recent loans. It’s not the job of a librarian to out somebody who wishes to remain in the closet; it remains the case that somebody who does not care to have another snoop on their reading should be entitled to read, and think, in peace.

At this point, the sense of proportion that has somehow embodied itself in this post may rejoin, “you’re catastrophizing here, Charlton,” and not be entirely wrong. Inadvertent disclosure of patron information at the “retail” level does risk causing harm, but is not guaranteed to. After all, lots of people have no problem sharing (some) of their reading history. Otherwise, LibraryThing and Goodreads would just sit there gathering tumbleweeds.

I’d still bid that sense of proportion to shuffle off with this: it’s mostly not the librarians bearing the risk of harm.

However, there’s a larger point: libraries nowadays run much higher risks of violating patron privacy at the “wholesale” level than they used to.

Remember those old checkout cards? Back in the day, an outsider trying to get a borrower’s complete reading history might have to turn out every book in the library to do so. Today, it can be much easier: find a way in, and you can have everything (including driver’s license numbers, addresses, and, if the patrons are really ill-served by their library, SSNs).

That brings me to my point: we should care about nondisclosure (and better yet, non-collection of data we don’t need) at the retail level to help bolster a habit of caring about it at the wholesale level.

Imagine a library where people at every level can feel free to point out and correct patron privacy violations — and know that they should. Where the social media manager — whose degree may not be an MLS — redacts patron names and/or asks for permission every time.  Where, and more to my point, the director and the head of IT make technology choices that protect patron privacy — because they are in the habit of thinking about patron privacy in the first place.

This is why it’s worth it to sweat the small disclosures, to be better prepared against large ones.

One of the beautiful things about Code4Lib qua banner is that it can be easily taken up by anyone without asking permission.

If I wanted to, I could have lunch with a colleague, talk about Evergreen, and call it a Code4Lib meetup, and nobody could gainsay me — particularly if I wrote up a summary of what we talked about.

Three folks in a coffeehouse spending an afternoon hacking together a connection between digital repository Foo and automatic image metadata extractor Bar, then tossing something up on the Code4Lib Wiki? Easy-peasy.

Ten people for dinner and plotting to take over the world, er, replace MARC once and for all? Probably should make a reservation at the restaurant.

Afternoon workshop for 20 in your metro area? Well, most libraries have meeting rooms, integral classrooms, or computer labs — and directors willing to let them be used for the occasional professional development activity.

Day and a half conference for 60 from your state, province, or region? That’s probably a bit more than you can pull off single-handedly, and you may well simply not have the space for it if you work for a small public library. You at least need to think about how folks will get meals and find overnight accommodations.

The big one? The one that nowadays attracts over four hundred people from across the U.S. and Canada, with a good sprinkling of folks from outside North America — and expects that for a good chunk of the time, they’ll all be sitting in the same room? And that also expects that at least half of them will spend a day scattered across ten or twenty rooms for pre-conference workshops? That folks unable to be there in person expect to live-stream? That tries in more and more ways to lower barriers to attending it?

Different kettle of fish entirely.

The 2017 conference incurred a tick under $240,000 in expenses. The 2016 conference: a bit over $207,000. This year? At the moment, projected expenses are in the neighborhood of $260,000.

What is this going towards? Convention center or hotel conference space rental and catering (which typically need to be negotiated together, as guaranteeing enough catering revenue and/or hotel nights often translates into “free” room rental). A/V services, including projectors, sound systems, and microphones. Catering and space rental for the reception. For the past few years, the services of a professional event management firm — even with 50+ people volunteering for Code4Lib conference committees, we need the professionals as well. Diversity scholarships, including travel expenses, forgone registration fees, and hotel nights. T-shirts. Gratuities. Live transcription services.

How is this all getting paid for? Last year, 49% of the income came from conference and pre-conference registrations, 31% from sponsorships and exhibitor tables, 5% from donations and sponsorships for scholarships, and 3% from hotel rebates and room credits.

The other 12%? That came from the organizers of the 2016 conference in Philadelphia, who passed along a bit under $33,000 to the 2017 LPC. The 2017 conference in turn was able to pass along a bit over $25,000 to the organizers of the forthcoming 2018 conference.

In other words, the 2017 conference effectively operated at a loss of a bit under $8,000, although fortunately there was enough of a cushion that from UCLA’s perspective, the whole thing was a wash — if you ignore some things. Things like the time that UCLA staff who were members of the 2017 local planning committee spent on the whole effort — and time spent by administrative staff in UCLA’s business office.

What are their names? I have no clue.

But something I can say much more confidently: every member of the 2017 LPC and budget committees lost sleep pondering what might happen if things went wrong. If we didn’t get enough sponsorships. If members of the community would balk at the registration fee — or simply be unable to afford it — and we couldn’t meet our hotel room night commitments.

I can also say, without direct knowledge this time, but equally confidently, that members of the 2016 organizers lost sleep. And 2015. And so on down the line.

While to my knowledge no Code4Lib member has ever been personally liable for the hotel contracts, I leave it to folks to consider the reputational consequence of telling their employer, were a conference to fail, that that institution is on the hook for potentially tens of thousands of dollars.

Of course, somebody could justly respond by citing an ancient joke. You know, the one that begins like this: “Doctor, it hurts when I do this!”.

And that’s a fair point. It is both a strength and weakness of Code4Lib that it imposes no requirement that anybody do anything in particular. We don’t have to have a big annual conference; a lot of good can be done under the Code4Lib banner via electronic communications and in-person meetups small enough that it’s of little consequence if nobody happens to show up.

But I also remember the days when the Code4Lib conference would open registration, then close it a couple hours later because capacity had been reached. Based on the attendance trends, we know that we can reasonably count on at least 400 people being willing to travel to attend the annual conference. If a future LPC manages to make the cost of attending the conference significantly lower, I could easily see 500 or 600 people showing up (although I would then wonder if we might hit some limits on how large a single-track conference can be and still remain relevant for all of the attendees).

I think there is value in trying to put on a conference that brings in as many practitioners (and yes, managers) in the GLAM technology space together in person as can come while also supporting online participation — but puts control of the program in the hands of the attendees via a process that both honors democracy and invites diversity of background and viewpoint.

Maybe you agree with that—and maybe you don’t. But even if you don’t agree, please do acknowledge the astonishing generosity of the people and institutions that have put their money and reputation on the line to host the annual conference over the years.

Regardless, if Code4Lib is to continue to hold a large annual conference while not being completely dependent on the good graces of a small set of libraries that are in a position to assume $250,000+ liabilities, the status quo is not sustainable.

That brings me to the Fiscal Continuity Interest Group, which I have helped lead. If you care about the big annual conference, please read the report (and if you’re pressed for time, start with the summary of options), then vote. You have until 23:59 ET on Friday, November 3 to respond to the survey.

The survey offers the following options:

  • maintain the status quo, meaning that each potential conference host is ultimately responsible for deciding how the liability of holding the conference should be managed
  • set up a non-profit organization
  • pick among four institutions that have generously offered to consider acting as ongoing fiscal sponsors for the annual conference

I believe that moving away from the status quo will help ensure that the big annual Code4Lib conference can keep happening while broadening the number of institutions that would be able to physically host it. Setting up some kind of ongoing fiscal existence for Code4Lib may also solve some problems for the folks who have been running the Code4Lib Journal.

I also believe that continuing with the status quo necessarily means that the Code4Lib community must rethink the annual conference: whether to keep having it at all; to accept the fact that only a few institutions are nowadays capable of hosting it at the scale we’re accustomed to; and to accept that if an institution is nonetheless willing to host it, that we should scale back expectations that the community is entitled to direct the shape of the conference once a host has been selected.

In other words, it boils down to deciding how we wish to govern ourselves. This doesn’t mean that Code4Lib needs to embrace bureaucracy… but we must either accept some ongoing structure or scale back.

Choose wisely.

Mellie-cat on a blue blanket
Mellie on a blue blanket

Last year, I wrote about the blossoming of the Mellie-cat, and closed with this line: “Sixteen years is not long enough to get to know a cat.”

It turns out that neither is seventeen and a half years.

Mellie passed away today after a brief illness. She is the last of my first set of cats, daughter of Erasmus and LaZorra, sister of Sophia. In the last year of her life, she trained Freddie how to cat; while she perhaps did not have the most apt of pupils, I know that he will miss her too.

She was the bravest cat I have ever known. She was not inclined to pounce on the world and take it in full; she was reserved and cautious… and yet she always showed up to observe, no matter how unfamiliar the strangers or unusual the circumstances.

Amelia is a grand name for a cat, but perhaps too grand for daily use. She was Mellie most days, but like many cats had accumulated a number of names and sobriquets throughout her life. The Clown Princess. Senior Member of the Treat Committee. Inspector of the Feets. Her mother’s special daughter. The softest and fluffiest.

And so another cat joins the realm of story.

It never gets any easier to mark that transition.

I’m leaving for Chicago tomorrow to attend ALA Annual 2017 (and to eat some real pizza), and while going over the schedule I found some programs that may be of interest to Mashcat folk:

As a little experiment, I’ve started a Google Doc for shared notes about events and other goings-on at the conference. There will of course be a lot of coverage on social media about the conference, but the shared notes doc might be a way for Mashcatters to identify common themes.

Assuming the order gets made and shipped in time (update 2017-06-22: it did), I’ll be arriving in Chicago for ALA Annual carrying a few tens of badge ribbons like this one:

Am I hoping that the librarians made of anti-matter will wear these ribbons to identify themselves, thereby avoiding unpleasant explosions and gamma ray bursts? Not really. Besides, there’s an obvious problem with this strategy, were anti-matter librarians a real constituency at conferences.

No, in a roundabout way, I’m mocking this behavior by Jeffrey Beall: “This is fake news from an anti-librarian. Budget cuts affect library journal licensing much more than price hikes. #OA #FakeNews”

Seriously, dude?

I suggest reading Rachel Walden’s tweets for more background, but suffice it to say that even if you were to discount Walden’s experience as a medical library director (which I do not), Beall’s response to her is extreme. (And for even more background, John Dupuis has an excellent compilation of links on recent discussions about Open Access and “predatory” journals.)

But I’d like to unpack Beall’s choice of the expression “anti-librarian.” What exactly makes for an anti-librarian?

We already have plenty of names for folks who oppose libraries and librarians. Book-burners. Censors. Austeritarians. The closed-minded. The tax-cutters-above-all-else. The drowners of governments in bathtubs. The fearful. We could have a whole taxonomy, in fact, were the catalogers to find a few spare moments.

“Anti-librarian” as an epithet doesn’t fit most of these folks. Instead, as applied to a librarian, it has some nasty connotations: a traitor. Somebody who wears the mantle of the profession but opposes its very existence. Alternatively: a faker. A purveyor of fake news. One who is unfit to participate in the professional discourse.

There may be some librarians who deserve to have that title — but it would take a lot more than being mistaken, or even woefully misguided to earn that.

So let me also protest Beall’s response to Walden explicitly:

It is not OK.

It is not cool.

It is not acceptable.

The Institute of Museum and Library Services is the U.S. government’s primary vehicle for direct federal support of libraries, museums, and archives across the entire country. It should come as no surprise that the Trump administration’s “budget blueprint” proposes to wipe it out, along with the NEA, NEH, Meals on Wheels, and dozens of other programs.

While there is reason for hope that Congress will ignore at least some of the cuts that Trump proposes, the IMLS in particular has been in the sights of House Speaker Paul Ryan before. We cannot afford to be complacent.

Loss of the IMLS and the funding it delivers would be a disaster for many reasons, but I’ll focus on just one: the IMLS has played a significant role in funding the creation and use of free and open source software for libraries, museums, and archives. Besides the direct benefit to the institutions that were awarded grants to build or use F/LOSS, such grants are a smart investment on the part of the IMLS: a dollar spent on producing software that anybody can freely use can redound to the benefit of many more libraries.

For example, here is a list of some of the software projects whose creation or enhancement was funded by an IMLS grant:

This is only a partial list; it does not include LSTA funding that libraries may have used to either implement or enhance F/LOSS systems or money that libraries contributed to F/LOSS development as part of a broader grant project.

IMLS has also funded some open source projects that ultimately… went nowhere. But that’s OK; IMLS funding is one way that libraries can afford to experiment.

Do you or your institution use any of this software? Would you miss it if it were gone — or never existed — or was only available in some proprietary form? If so… write your congressional legislators today.