TL;DR

  • Librarians don't get clean hands. We get leverage. Use AI as a tool you can shut off and document the debt you owe to the people whose work trained it.
  • Vendor capture is a pattern, not an accident. Export clauses, termination penalties, and pricing tied to dependency are designed to trap you.
  • Micro-apps and practitioner judgment give you exit capacity. Build the smallest owned tool that buys you leverage, document it, and delete it when it has served its purpose.

Why I wrote this: I am tired of practitioners being told to wait for theory while vendors turn their budgets into captive rents.

If you can't turn the tool off without breaking service, you don't own it. That's capture, not innovation.

[Figure: the original curve I sketched, vendor stage on one axis and your leverage on the other. Every vendor arc starts helpful and ends extractive. Practitioners plan for the turn long before it arrives.]

For librarians who don't have time to read this or who are offended by the truth, close the tab. I'm not the person you're trying to impress. For everyone else: buckle in. This is an honest ride through the part of library technology we usually keep off the record.

The AI discourse gives you two options: engage or refuse. Neither one helps when Monday morning shows up and the board wants answers and you're stuck between vendors who want to enshittify you and idealists who won't help you deliver. The practitioner is the third option. Build enough understanding to know when a tool serves you and when it captures you. Build your own small tools when it matters. Refuse when refusal makes sense and engage when engagement gives you leverage. Know your exit before you enter.

The goal is agency, not ideology. Sometimes the AI everyone is screaming about is the same tool that breaks a vendor's lock-in. Sometimes it is exactly the thing that will enshittify your workflows. The practitioner knows which is which because they've seen both happen.


What Twenty Years Taught Me About Library Technology

I started as employee sixty-something at OverDrive in 2008, first woman on the support team. Back when patrons had to log into the library, get kicked to OverDrive, get redirected to Adobe Reader, then sideload to a Nook from a third program that crashed every third transfer. The boys were literally ripping CDs in the back to build catalog supply. We were young, naive, and still believed the tech wasn't evil yet.

Saturdays I sat in Sandusky Public Library for my Kent State practicum trying to help a patron the staff called the "crusty old dude." He hated how few eBooks they had. I hated that I was the vendor who built the scarcity. Two hours later he wrote a $50,000 check because I listened. That's when I realized I was sitting on both sides of the same rigged table.

OverDrive's "Advantage" credits and platform fees sounded generous until the year-two invoices showed up. Fifty titles for five grand if you ignored the embargo and the 26-checkout caps. Embargo windows meant libraries prepaid for books they couldn't lend for six months. Staff thought they were learning digital collections. In reality they were learning rationing.

The Pattern Has a Name

Cory Doctorow calls it enshittification. Platforms launch useful, then squeeze users, then extract everything before collapsing. Different words, same diagnosis I lived through at OverDrive, Baker & Taylor, Trellis, and every "platform" that entered libraries. The people with power protect position, the people doing the work get squeezed, and patrons have no exit.

Librarianship is 80% women and has been systematically devalued for a century. Tool dependency is part of how that happens. You don't give the librarians control over the ILS. You make them tenants and you bill rent forever.

Where I've Been Since Walking Out

After burning out from startups, I built AI tools for small businesses, a book banning tracker, and eventually my own ILS because Baker & Taylor told me my ideas weren't worth funding. When they collapsed after OCLC sued them for allegedly stealing MARC data, I rebuilt the system I asked them to build, this time with stubbornness and open-source code. You can see it at littleschitt.com. It isn't pretty. It works.

Prior to large language models, I never would have pulled it off. Stubbornness isn't a business model, but it is leverage when the tooling is finally accessible. That's the window we're in right now.

Why Slater's Article Lit Me Up

Kaitlin Slater's "Against AI" is a clean refusal. Roman Yampolskiy's AI safety research is a sober warning. R. David Lankes is all about engagement. None of them has to sit on a Zoom call with a board demanding to know why the ILS costs 40% more while doing less. Practitioners don't get that luxury. We get budgets, outages, and burned-out staff.

Slater is right about exploited labor and environmental costs. Lankes is right that if librarians aren't at the table, we're on the menu. Yampolskiy is right that we can't truly monitor advanced AI. None of those positions help you decide whether to renew a vendor contract next week. Practitioners need a framework that works under load.

The Discourse Won't Save You Monday Morning

Engagement without exit is capture. Refusal without alternatives is abandonment. The practitioner builds the third option: leverage. When the table is rigged, you stop treating invitations as influence and instead build something that gives you the ability to leave the table.

Lankes, Slater, and Yampolskiy give us a why. We still need a how. That's what the rest of this article covers.

I'm Not Against AI. I'm Against Enshittification.

Give people what they need and get out of the way. That's the whole philosophy. The same AI tools that scraped us without consent are also the ones that finally cracked OCLC's metadata monopoly. That isn't absolution. It's leverage. The harm happened. You cannot un-scrape the internet. You can use the resulting tool to build systems that serve your community without locking you into $50 pours of vendor whiskey.

It's not tu quoque. It's judo. Use the momentum against the captor, document the harm, repay the ecosystem, and don't pretend it's clean.

What Makes a Practitioner

A practitioner is the person who actually knows how the system works. Not the person with the title. Not the person who reads the manual. The person who knows what happens at 2 AM when something breaks, what the workarounds are, where the data really lives, and what will fall apart if they walk out the door.

I watched a library lose their systems librarian. She'd been there for twelve years. Built their MARC workflows from nothing. Documented all the custom scripts in a shared folder nobody else had read access to. The day she retired, the vendor's MARC processing started failing silently. The library kept ordering books. For three months they had no idea those records weren't loading because only she could read the error logs. When they found out, they had 40,000 uncataloged items waiting in receiving. The replacement cost was six months of work from two contractors plus expedited fees. That's what losing a practitioner looks like.

Three Components

  • Domain knowledge: know your field well enough to notice when the tool is lying.
  • Tool fluency: understand what the tool can and cannot do. Neither worship nor fear it.
  • Judgment: be ready to stop, override, and redirect when the output is technically correct but contextually wrong.

Patron asks for books about death for a seven-year-old. Domain knowledge knows that's grief counseling plus readers' advisory. Tool fluency knows to prompt for "gentle middle grade loss" and cross-reference holdings. Judgment knows to skip "Bridge to Terabithia" because sudden traumatic death is not the situation described. The AI is headlights. You still steer.
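That vetting step is small enough to automate the boring half of: take whatever titles the model suggests, keep only what you actually hold, and apply a human exclusion list. A hedged sketch — the function name and the holdings data are invented for illustration; the exclusion reasoning stays with the librarian:

```python
def vet_suggestions(ai_titles: list[str], holdings: set[str],
                    excluded: set[str]) -> list[str]:
    """Keep AI-suggested titles that are in local holdings and not
    on the librarian's judgment-based exclusion list."""
    return [t for t in ai_titles if t in holdings and t not in excluded]

suggested = ["The Invisible String", "Bridge to Terabithia",
             "Cry, Heart, But Never Break"]
holdings = {"The Invisible String", "Bridge to Terabithia"}
# Judgment call, not a model output: sudden traumatic death
# doesn't fit the situation described.
excluded = {"Bridge to Terabithia"}
print(vet_suggestions(suggested, holdings, excluded))  # → ['The Invisible String']
```

The machine narrows the list; the human decides what's on the exclusion set. That ordering is the whole point.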

The practitioner is who uses those three things to keep the system honest. They're the librarian who can look at a vendor contract and say "this export clause is fake." They're the person who knows what data you actually control and what's trapped in a proprietary database. They're the one who can tell you what it would really cost to leave, measured in staff time and lost institutional knowledge.

How I Make Decisions Now

  1. Name the problem. "We need AI" is not a problem. "Staff spend six hours weekly cleaning MARC records" is.
  2. Assess leverage. Can we turn it off? Can we export data in a usable format? What happens if the vendor disappears tomorrow?
  3. Choose a posture. Engage when exit is easy, refuse when risk outweighs benefit, build when neither serves you.
  4. Constrain scope. Minimum data, minimum users, minimum time.
  5. Test in the wild. If it only works in demos or staff hate it, it doesn't work.
  6. Decide. Keep it, kill it, or formalize it. Killing a tool before it captures you is a success.

What Practitioners Actually Do That Nobody Else Can

Practitioners read vendor contracts and notice the real cost hiding in line 47. They know the difference between a 95% uptime SLA on paper and 95% uptime during the hours you actually need the system. They can tell you that the vendor's "cloud-native" platform will lock you in because of how the API tokens work, not because of what the marketing says.

They debug the thing that isn't broken but isn't working right. They notice that patron holds aren't expiring on time and that the system is doing what the configuration told it to do, even though the configuration was built on assumptions that stopped being true three years ago. They can trace a data problem back to a decision someone made in 2019 and explain why changing it now requires testing in six different places.

They document the workarounds that keep the system running. They know that the circulation desk has a three-step manual override for holds when the system drifts, that acquisitions has been tracking patron requests in a spreadsheet for two years because the vendor never built the feature, and that the ILS has a custom report that took six weeks to build because the vendor's reporting tool doesn't actually do what it claims.

When a new person joins the team, practitioners onboard them. Not just "here's how to push the button." They explain what the button actually does, what you need to watch for, what's gonna break if you do it wrong, what to do when the system lies to you.

They build the small tools that keep you alive between paying the vendor. The script that exports data in a format you can actually use. The spreadsheet that reconciles what the system says you have with what's actually on the shelf. The documentation that says "if this happens, here's the fix" so the next person doesn't have to learn it all over again.
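The export script really can be that small. A hedged sketch — the pipe-delimited input format and the column names are invented here; swap in whatever your vendor actually emits:

```python
import csv
import io

def normalize_isbn(raw: str) -> str:
    """Strip hyphens and whitespace so ISBNs compare cleanly."""
    return raw.replace("-", "").strip()

def export_to_csv(vendor_dump: str) -> str:
    """Convert a pipe-delimited vendor dump (hypothetical format:
    isbn|title|branch) into plain CSV you control."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["isbn", "title", "branch"])
    for line in vendor_dump.strip().splitlines():
        isbn, title, branch = line.split("|")
        writer.writerow([normalize_isbn(isbn), title.strip(), branch.strip()])
    return out.getvalue()

# Invented sample record, just to show the round trip.
dump = "978-0-06-112008-4|Example Title A|Main"
print(export_to_csv(dump))
```

Twenty lines, no accounts, no phoning home. The value isn't the code; it's that your holdings now exist in a format no vendor can hold hostage.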

You can hire three entry-level staff for the cost of one practitioner. I know the math feels obvious. But when the system breaks at 2 AM and you need someone who can look at the error code and know what it means and know it happened before and know what you actually did to fix it the last time, there's no Slack channel that substitutes for that. There's no vendor ticket that moves fast enough. There's no manual that covers it because the problem is the intersection of three different systems.

Micro-Apps: The Anti-Platform

A micro-app is a disposable tool you own. No accounts, no phoning home, one job. Build, use, delete. Platforms want stickiness. Micro-apps reject it. They exist to give you leverage before you go back to the negotiation table.

Sit with an LLM, describe the workflow, build the smallest script that gives you feedback. Expect failure on version one. That is normal. Document what you built so the next person isn't guessing. Messy code with clear docs beats clean code with no lineage.
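Here is what version one of such a micro-app might look like — a sketch of the holds-drift problem from earlier, assuming a CSV report with invented column names ("barcode", "expires"); your system's report will differ:

```python
import csv
import io
from datetime import date

def expired_holds(report_csv: str, today: date) -> list[str]:
    """Return barcodes whose holds should already have expired.
    Expects columns 'barcode' and 'expires' (YYYY-MM-DD); both
    column names are invented for this sketch."""
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if date.fromisoformat(row["expires"]) < today:
            stale.append(row["barcode"])
    return stale

report = "barcode,expires\n21234000001,2024-01-02\n21234000002,2099-01-01"
print(expired_holds(report, date(2024, 6, 1)))  # → ['21234000001']
```

Run it weekly, compare against what the ILS claims, and you have feedback the vendor's dashboard never gave you. When the drift stops, delete the script.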

The Ethics of Tainted Tools

LLMs are built on extractive data and underpaid annotation labor. Treating them as untouchable doesn't help practitioners who need leverage today. Instead, govern them locally. Run open-weight models. Keep data on-prem. When you save $15,000 by replacing a vendor, document the lineage and pay it forward. Sponsor open-source, commission local artists, buy from local authors. Reciprocity is the budget line item that keeps you honest.

The Models Aren't Neutral

Training corpora skew white, Western, male, and academic. Stop treating the model as truth. Use retrieval-augmented generation (RAG) so the AI reasons over your curated sources. Human intent in, machine middle, human out. No zero-shot automation on anything that matters.
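The "human in, machine middle, human out" loop works even with a deliberately crude retriever. This sketch ranks curated passages by word overlap with the query — a stand-in for real embedding search, and all the passages are invented — but the structural point holds: the model downstream only ever sees what the librarian curated:

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank locally curated passages by word overlap with the query.
    Crude on purpose: the point is the pipeline shape, not the scoring."""
    q = set(query.lower().split())
    scored = sorted(passages, key=lambda p: -len(q & set(p.lower().split())))
    return scored[:k]

# A curated local corpus (invented examples).
sources = [
    "Gentle middle grade novels about loss and grief for young readers.",
    "Vendor contract termination clauses and export penalties.",
    "Board meeting minutes about budget overruns.",
]
print(retrieve("middle grade books about grief", sources, k=1)[0])
```

Whatever generates the final answer gets only those top passages as context, and a human reads the output before it reaches a patron.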

Messy Is Fine. Insecure Is Not.

You can build scrappy. You cannot skip security. No patron data without privacy review. No health data without HIPAA compliance. No minors' data without COPPA documentation. When in doubt, collect less and store it safely or not at all.

When to Stop Building

  • If the system handles regulated data you cannot secure.
  • If failure would endanger patrons.
  • If it needs 24/7 uptime you can't guarantee.
  • If no one is accountable for maintenance.

Ask yourself what happens if it breaks at 2 AM. If the answer is "something bad," you don't build it. If the answer is "we fall back to the old workflow," build it.

Seven Questions That Will Piss Off Your Vendor

Good vendors answer yes to most of these. Bad vendors hedge.

  1. Can we export our data in a non-proprietary format?
  2. Do we control our own schema?
  3. Does the system run in degraded mode if you go offline?
  4. Does pricing scale with usage instead of dependency?
  5. Are core workflows documented for staff?
  6. Can we terminate without penalties that exceed migration costs?
  7. Could we replace this within 90 days?
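Question 1 is testable before you sign: ask for a sample export and audit it. A hedged sketch, assuming a CSV export and letting you name the columns and row count you expect; adjust to whatever the vendor actually ships:

```python
import csv
import io

def audit_export(export_csv: str, expected_count: int,
                 required_cols: set[str]) -> list[str]:
    """Return a list of problems found in a vendor's sample export.
    An empty list means the export passed this (minimal) audit."""
    problems = []
    rows = list(csv.DictReader(io.StringIO(export_csv)))
    missing = required_cols - set(rows[0].keys() if rows else [])
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if len(rows) != expected_count:
        problems.append(f"expected {expected_count} rows, got {len(rows)}")
    return problems

sample = "isbn,title\n9780000000001,Example Title"
print(audit_export(sample, expected_count=1, required_cols={"isbn", "title"}))  # → []
```

If the sample fails even this audit, you have your answer to questions 1 and 7 in writing, before the contract does the answering for you.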

Why This Matters for Librarianship

I know you think IT staff are replaceable. I know your director looked at your systems librarian salary and thought "we could hire two part-timers for that." I know you've been told that vendor SLAs mean you don't really need institutional knowledge anymore. Here's what I watched happen instead.

A mid-sized library system replaced their head of IT with a help desk contractor and a "cloud-native" vendor platform. Six months later they couldn't verify patron privacy was actually being protected. The contractor had no context. The vendor wouldn't say. The library had no practitioner left to read the contracts they'd already signed. By the time they realized the SLA didn't actually cover the problem, they'd already lost three years of patron trust because staff couldn't explain how data moved through the system.

Patrons already use ChatGPT because it's faster than the catalog. Directors already eye staff budgets when they hear "AI efficiency." The MLS is being hollowed out unless librarians become practitioners. You don't defeat that by making fewer hiring mistakes. You defeat it by building practitioners who can audit a contract, build a small tool, and power down a system without panic. That person is harder to replace than someone who only consumes what they're handed. That person has leverage.

The Cost of Losing Practitioners

When you lose a practitioner, you don't just lose one person. You lose every conversation they had in rooms you weren't in. You lose the emergency triage they could do without a vendor ticket. You lose the person who could tell you the real cost of leaving and the real cost of staying. You lose institutional memory that walked out the door.

I watched another library lose their discovery layer architect to burnout. She'd been asked to "do more with less" so many times that the less became nothing. No budget for tools, no staff to help, no advancement, no recognition. She'd built systems that served 500,000 patrons across four branches and the organization replaced her with an offshore support ticket. Three months in, search relevance tanked. The library couldn't figure out why. Neither could the vendor. The problem required understanding years of local indexing decisions she'd made. With her gone, that knowledge was gone. They eventually rebuilt the whole thing from scratch. Cost: $200,000 and a year of degraded service.

The real cost of losing practitioners isn't the severance. It's the decisions you can't make without them. It's the vendor contracts you sign without anyone reading them. It's the slow enshittification you don't notice because there's no one left who remembers what "good" looked like.

This Window Will Close

Right now regular people can build functioning tools with AI assistance. No CS degree required. Consolidation is already lining up to close that window. Proprietary models, exclusive partnerships, regulatory capture, data-center land grabs. Build your practitioner capacity before the ladder is pulled up.

Go Make Something

Lankes said engage. Slater said refuse. Yampolskiy said you can't fully control it. Practitioners say build the smallest thing that gives you leverage and document the tradeoffs. Describe the tool you wish existed. Build a janky version. Iterate until it does the job.

What Control Feels Like

I want my kid to know how to turn it off. That's the practitioner's position in one sentence. Can you power down and keep serving the patron? Can the library switch vendors and still deliver services? Control is the ability to walk away without losing yourself.

Start Small. Start Now.

  • Reread a contract with exit in mind. Highlight export clauses, penalties, and what happens to workflows if you leave.
  • Identify one annoying workflow. Something repetitive. Build a disposable tool that knocks 20 minutes off. Document it.
  • Give back. If the tool saved money, put 10% toward the ecosystem. Sponsor open-source, buy local, pay your contributors.

You develop judgment by making things, breaking them, and deciding when to stop. If you want help building the next thing, you know where to find me: unhingedlibrarian.com.

References

  • Doctorow, C. (2023). Enshittification. Wired.
  • Lankes, R. D. (2025). Triptych: Death, AI, and Librarianship. Library Journal.
  • OCLC, Inc. v. Baker & Taylor, LLC, Case No. 2:25-cv-00309 (S.D. Ohio, filed March 26, 2025).
  • Slater, K. (2025). Against AI: Critical Refusal in the Library. Library Trends, 73(4), 588-608.
  • Yampolskiy, R. V. (2024). On monitorability of AI. AI and Ethics, 1-19.