The Unhinged Librarian

The EU AI Act Deep-Dive for Libraries: What Actually Changes

By Sam Chada

Library technology consultant with 20 years in the field. I've trained implementation teams, managed complex vendor relationships, and sat in the meetings where they decided the pricing you're paying. I know how this industry works because I've been on both sides of it.

TL;DR
  • EU AI Act (effective 2025-2026) classifies AI systems by risk. High-risk systems require impact assessments and human oversight. Some AI (facial recognition, behavioral profiling) prohibited in public spaces.
  • Library impact: discovery systems with behavioral tracking, patron recommendation algorithms, and security systems with person identification likely fall into high-risk category.
  • US libraries affected because vendors often build to strictest regulatory requirement and apply globally. EU compliance becomes de facto US standard for library AI systems.
  • Library action items: audit AI vendor contracts for EU compliance language, understand whether your current AI systems would meet high-risk classification, and plan compliance costs into budget.

Let me cut through the noise right at the start: The EU AI Act is not a hypothetical future problem. It's already reshaping how software works globally, and your library is affected whether you're in Stockholm or Seattle.

The law entered into force on August 1, 2024, the bans on prohibited systems took effect on February 2, 2025, and the real compliance deadlines hit August 2, 2026 for most AI-related obligations. That deadline is closer than it sounds. And almost nobody in library land is ready.

Here's what most library directors don't realize: You're probably already using AI systems affected by this regulation. Your discovery layer vendor is scrambling. Your cataloging tool is being rewritten. Your database providers are either complying quietly or hoping nobody notices.

This isn't about becoming a lawyer. It's about understanding what your vendors are being forced to do - and what that means for your contracts, your budgets, and your obligations.

The Practical Reality: What Changes for Your Library Right Now

Before we wade into regulatory weeds, here's what actually matters on a Tuesday morning in your library:

1. Your Vendor Contracts Are Being Rewritten

Every meaningful library software vendor with European customers is rewriting their contracts to include EU AI Act compliance language. This is happening now, whether you hear about it or not.

What you'll start seeing: AI disclosure clauses that explicitly list which systems use AI and how. Data usage rights that specify whether your patron data, circulation patterns, or search behavior can be used to train AI. Compliance responsibility allocation (who handles what if something goes wrong).

Translation: Contract renewals that you thought would be routine rubber-stamps are becoming complex documents with new compliance obligations attached. Your budget for legal review just went up.

2. Your AI Tools Will Get More Expensive

Compliance isn't free. High-risk AI systems under the EU AI Act require documented risk assessments, bias testing, ongoing monitoring, human oversight protocols, and regular audits. These aren't theoretical exercises - they're expensive to implement and maintain.

Vendors are going to pass these costs to customers. Some will absorb them in the margin. Most won't.

Expect 10-20% price increases on discovery systems, recommendation engines, and any chatbot or content generation tools over the next two years. Some vendors might unbundle "optional" AI features to avoid high-risk classification and cost - which means you're paying extra for features you used to get included.

3. You'll Need to Make Transparency Decisions About AI You're Already Using

The EU AI Act (and the Colorado, California, and New York laws moving in the same direction) requires disclosure when AI makes decisions about people. That means if your library uses AI to recommend resources, rank search results, answer reference questions through a chatbot, or decide what patrons can access, you probably need to tell people it's AI. Not buried in terms of service. Visible. Clear. Something your average patron can understand.

Your library needs to decide: Do you already disclose AI use? If not, are you going to start? When? How? Who's responsible for updating the website? This sounds simple until you realize your website is maintained by three different staff members who don't talk to each other.

The Risk Framework: Which AI Matters and Why

The EU AI Act uses a risk-based approach. Understanding which category your AI falls into is key to knowing your actual obligations.

Unacceptable Risk (Banned Outright)

These are systems the EU considers so dangerous they're not allowed at all: social scoring of citizens, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), AI that manipulates people through subliminal techniques or exploits their vulnerabilities, and emotion recognition in workplaces and schools.
Honest answer: You're not using these. But it's useful to know the bar - it shows how seriously the EU takes the most dangerous AI applications.

High-Risk (Where Things Get Real)

This is where libraries actually live.

High-risk systems are those that could significantly harm people. The law defines high-risk AI as systems used in areas like education and vocational training, employment, access to essential public and private services, biometric identification, law enforcement, migration, and the administration of justice.
For libraries, the trigger is usually "access to essential services" or "educational and training decisions." If your library is using AI to recommend resources to students, rank search results for researchers, or make any decision that affects what patrons can access - you're potentially in high-risk territory.

Here's the catch: The law's definition is intentionally vague. It's designed to catch edge cases and force organizations to think carefully about impact rather than just checking compliance boxes. "Is recommending academic resources to a student a high-risk decision?" Probably not, technically. But it's close enough that vendors are treating it as high-risk to stay safe.

What High-Risk Requirements Actually Mean

If your AI is high-risk, you (or your vendor, or both) must:

  • Maintain a documented risk management system
  • Govern and test training data, including for bias
  • Keep technical documentation and automatically generated logs
  • Build in meaningful human oversight of decisions
  • Meet accuracy, robustness, and cybersecurity standards
Limited Risk (Transparency is the Main Requirement)

This category includes AI that's lower risk but still requires disclosure: chatbots and conversational assistants, AI-generated or AI-manipulated content (which must be labeled as such), and deepfakes.
The requirement: Users must be clearly told they're interacting with AI. "This response was generated by an AI system" level transparency. It's straightforward compared to high-risk, but still requires clear policies and visible disclosure.
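In practice, that transparency requirement can be as simple as a disclosure string attached to every AI-generated reply before it reaches a patron. A minimal sketch (the wording and function name are my own illustration, not mandated language):

```python
# Sketch: attach a plain-language AI disclosure to a chatbot reply
# before it reaches the patron. The disclosure text is an assumption;
# the legal requirement is that users clearly know they're talking to AI.

DISCLOSURE = "This response was generated by an AI system. Ask staff to verify."

def with_disclosure(reply: str) -> str:
    """Append a visible AI disclosure to any AI-generated reply."""
    return f"{reply}\n\n[{DISCLOSURE}]"

print(with_disclosure("Here are three books on Baltic history..."))
```

The point isn't the code, which is trivial; it's that the disclosure has to be attached at the system level, every time, rather than left to a policy page nobody reads.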

Minimal Risk (General Law Applies)

Everything else. Your email spam filter. Basic recommendation algorithms. Systems that don't make decisions about people or their data.

These don't have specific AI Act requirements, but they're still subject to GDPR, accessibility laws, and other general regulations. No special compliance needed, but don't assume "minimal risk" means "no rules."

The Vendor Mess: Who's Responsible for What

This is where the EU AI Act creates ambiguity that vendors are exploiting.

The law defines three roles: providers (who build and sell the AI system), deployers (who use it - that's your library), and importers/distributors (who put it on the EU market).
The law requires both providers and deployers to comply. It lets them negotiate who does what. This is where bad contracts happen.

Most vendors are trying to push compliance work onto customers (libraries) to minimize their own legal exposure. Look for contract language like:

"Customer is responsible for compliance with applicable AI regulations in their jurisdiction. Vendor provides the AI tools; compliance implementation is the customer's responsibility."

Translation: "If this goes wrong, it's your problem."

A better contract divides responsibility clearly:

"Vendor provides risk assessments and bias testing documentation required for high-risk AI under applicable regulations. Customer is responsible for reviewing and accepting these assessments as accurate for their deployment context."

This is negotiable. Don't accept the first draft that tries to dump everything on you.

The Technical Stuff (That Actually Matters)

Training Data and Bias Testing

Here's something vendors don't advertise: most library AI systems are trained on biased data.

Your discovery system's recommendation engine probably trains on historical search and checkout data. Which reflects existing biases in your collection, your patrons' existing preferences, and historical collection development practices. If your library historically overrepresented certain subjects or viewpoints, the AI learns that bias and amplifies it.

Under the EU AI Act, you (or your vendor) need to document this and mitigate it. Mitigation options include: retraining on more representative data, re-weighting or filtering biased training signals, running regular bias audits against defined metrics, and keeping humans in the loop for consequential recommendations.
If your vendor can't explain how they're addressing training data bias, that's a red flag. It means they either haven't thought about it (incompetent) or don't want to (worse).
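To make the bias problem concrete: here's a toy sketch (the subject categories and counts are invented for illustration; a real audit would use an export from your ILS) of how a quick skew check works. If checkouts over-represent a subject relative to holdings, a recommender trained on checkouts will amplify that skew.

```python
# Sketch: surface representation skew between what the collection holds
# and what a recommendation engine trains on (checkout history).
# All numbers are made-up illustrations, not real library data.

from collections import Counter

holdings = Counter({"history": 400, "stem": 300, "fiction": 300})
checkouts = Counter({"history": 100, "stem": 700, "fiction": 200})

def share(counter: Counter, key: str) -> float:
    """Fraction of the total that one subject represents."""
    return counter[key] / sum(counter.values())

# Ratio > 1 means the training data over-represents a subject relative
# to the collection; a naive recommender learns and amplifies that.
for subject in holdings:
    ratio = share(checkouts, subject) / share(holdings, subject)
    print(f"{subject}: training/collection ratio = {ratio:.2f}")
```

A vendor doing real bias testing should be able to show you something at least this concrete, with their actual metrics and thresholds.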

Documentation and Auditability

High-risk AI systems need to be documented in a way that allows external audits. This means: records of what data trained the system, an explanation of how the model makes decisions (at least at a logic level), logs of the decisions it actually made, measured performance and error rates, and documented limitations.
Most library vendors don't have this documentation ready. They're scrambling to create it. If your vendor says "we don't have that" or "we can't share that," be skeptical about whether they actually understand their own compliance obligations.
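What "auditable" looks like at the simplest level is an append-only, structured log of every AI-assisted decision. A minimal sketch (the field names and schema are my own assumption, not an AI Act requirement):

```python
# Sketch: structured, append-only log of AI-assisted decisions so an
# auditor can reconstruct what the system recommended and with which model.
# Field names are illustrative assumptions, not an official schema.

import datetime
import io
import json

def log_ai_decision(logfile, system, input_summary, output, model_version):
    """Write one JSON record per decision (JSON Lines format)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "input_summary": input_summary,  # summarized, not raw patron data: GDPR still applies
        "output": output,
        "model_version": model_version,  # ties each decision to a specific model build
    }
    logfile.write(json.dumps(record) + "\n")

# Demo with an in-memory file; a real deployment would use rotated log files.
buf = io.StringIO()
log_ai_decision(buf, "discovery-ranking", "query: 'climate policy'",
                ["top result: doc-123"], "ranker-v2.4")
print(buf.getvalue())
```

Note the deliberate choice to log an input *summary* rather than raw patron queries tied to identities - the AI Act's audit trail sits alongside, not above, existing privacy obligations.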

The Edge Cases (The Weird Stuff That Matters)

The "Essential Services" Trap

The EU AI Act says high-risk includes AI used to make decisions about "access to essential services."

Is library access an essential service? Arguably yes - it's public education, information access, a service that many people depend on. Which means AI used to decide what resources patrons can access could be high-risk.

But this hasn't been tested in court. Vendors are interpreting it conservatively to stay safe. You should too.

The International Patron Problem

The EU AI Act applies to AI systems used to make decisions about people in the EU. It doesn't matter where your library is. If you serve EU patrons (international students accessing your library website, researchers using your digital collections, any European IP address), you're arguably in scope.

This is why vendors are implementing EU compliance globally rather than maintaining separate versions. Your non-EU library is getting EU-compliant AI tools whether you asked for them or not.

The Generative AI Question

Generative AI (ChatGPT-style tools) created new compliance questions. If a library is using AI to generate reading recommendations, summaries of articles, or other content - is that high-risk?

The EU AI Act treats generative AI relatively lightly at the deployer level - mainly transparency requirements (users must know it's AI, and AI-generated content must be labeled). But generative AI is also prone to hallucination, which sits awkwardly with accuracy and transparency obligations if the recommendations it produces are fabricated.

This area is still being defined by regulators. Vendors are being cautious, which is good.

What You Actually Need to Do (Practical Checklist)

EU AI Act Library Audit Checklist

Complete this in the next 60 days.

  • ☐ List every software system you use that might involve AI (discovery layer, chatbot, recommendation engine, analytics tools, cataloging systems, content management systems)
  • ☐ For each system, identify: What decision does it make? Who does it affect? What data does it use?
  • ☐ Categorize each system as unacceptable/high-risk/limited-risk/minimal-risk
  • ☐ Document where you currently disclose AI use (website, contracts, patron-facing materials)
  • ☐ Identify gaps in transparency disclosure
  • ☐ Create a list of questions to ask each vendor (see below)
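If a spreadsheet feels too loose, the audit above can be kept as a tiny script. This is a sketch with a deliberately crude triage rubric and made-up system names - it is not an official EU AI Act classifier, just a way to force the "what decision, who's affected, what data" questions into writing:

```python
# Minimal AI-system inventory sketch. System names and the triage rubric
# are illustrative assumptions, not an official EU AI Act classification.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    decision: str           # what decision does it make?
    affects_patrons: bool   # does it affect access or recommendations for people?
    uses_patron_data: bool  # does it train on or process patron data?

def risk_tier(system: AISystem) -> str:
    """Crude first-pass triage: anything deciding about people using
    their data gets flagged for a real high-risk review."""
    if system.affects_patrons and system.uses_patron_data:
        return "review-as-high-risk"
    if system.affects_patrons:
        return "limited-risk (disclosure)"
    return "minimal-risk"

inventory = [
    AISystem("Discovery relevance ranking", "orders search results", True, True),
    AISystem("Reference chatbot", "answers patron questions", True, False),
    AISystem("Email spam filter", "filters staff email", False, False),
]

for s in inventory:
    print(f"{s.name}: {risk_tier(s)}")
```

The output of something like this is exactly what you want in hand before the vendor calls: your own list, your own categorization, in writing.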

Contract Negotiation Checklist (Next 6 Months)

  • ☐ Does your current contract include AI compliance language? If not, request it.
  • ☐ Does the contract clearly allocate compliance responsibility between you and the vendor?
  • ☐ Can you see the vendor's risk assessment and bias testing documentation?
  • ☐ Does the contract specify what happens if the AI makes discriminatory decisions?
  • ☐ Can you opt out of AI features if needed?
  • ☐ What's the vendor's timeline for full EU AI Act compliance?
  • ☐ Are there penalties if the vendor doesn't meet compliance commitments?

Patron-Facing Transparency Checklist (Next 3 Months)

  • ☐ Do you have a policy on when/how to disclose AI use to patrons?
  • ☐ Is AI disclosed clearly on your website (not buried in legal documents)?
  • ☐ Can patrons opt out of AI-powered features easily?
  • ☐ Do staff know how to explain to patrons that something is AI-powered?
  • ☐ If AI makes a recommendation or decision, can patrons understand why?
  • ☐ Do you have a process for patrons to appeal or challenge AI decisions?

The Questions to Ask Your Vendors (Copy-Paste These)

When you contact your vendors about AI Act compliance, use these questions. Write them down. Expect real answers, not marketing speak.

  1. "Which parts of your system use AI? Be specific about what each AI component does." Don't accept vague answers like "our entire platform uses machine learning." You need specifics.
  2. "Under the EU AI Act, how would you classify each AI component? Why?" This forces them to think about risk categorization.
  3. "Can you provide your risk assessment documentation for high-risk AI systems?" If they can't, they don't have it.
  4. "How have you tested your AI for bias? What biases did you find and how did you mitigate them?" Real vendors have bias testing reports. Others have excuses.
  5. "Where does your training data come from? Is it documented and representative?" This reveals whether they actually understand their own systems.
  6. "How do you implement human oversight for high-risk AI decisions?" What's the actual process?
  7. "If your AI makes a decision that affects a patron (like resource recommendations), can the patron know it was AI and appeal it?" This tests their transparency implementation.
  8. "What's your timeline for full compliance with the EU AI Act by August 2, 2026?" If they're not planning for this deadline, they're behind.
  9. "If regulatory changes require you to change how the AI works, how will that be implemented? Who absorbs the cost?" This reveals who bears compliance risk.
  10. "Can I terminate my contract if your AI systems don't meet compliance standards I need?" This protects you if they fail to deliver.

The Uncomfortable Timeline

Here's what's actually happening behind the scenes at library vendors right now:

Early 2026: Vendors are still figuring out compliance. Some have real strategies. Many don't. Expect contract changes and price increases to be announced.

Mid-2026: In the weeks before the August 2 deadline, vendors that aren't ready will start offering "grace periods" or "compliance roadmaps" instead of actual compliance. Don't accept this.

August 2, 2026: Full EU AI Act compliance deadline. Vendors will either be compliant or exposed. You need to know which camp your vendors are in before then.

August 2026-December 2026: If vendors are non-compliant, regulators will start investigations. This could affect your library's liability if you're knowingly using non-compliant systems.

This isn't future-planning. This is current-year crisis planning.

Real Talk: Most Libraries Are Behind

Let's be honest: Your library probably hasn't done this audit yet. Most haven't. You don't have EU compliance experts on staff. Your vendors are still figuring things out. Your board probably hasn't approved budget for compliance work.

But the deadline isn't negotiable.

The good news: You don't need to become experts. You need to:

  1. Understand what you're using (the audit above)
  2. Ask vendors hard questions (the questions above)
  3. Document your compliance efforts (creates a defense if something goes wrong)
  4. Make reasonable decisions based on available information

The bad news: If you do nothing, and your vendor gets caught violating the EU AI Act, and your library was knowingly using that non-compliant system - your library's liability is real. Not legally certain, but real enough to worry about.

The Domino Effect: Why This Matters Outside Europe

The EU AI Act is reshaping global library technology because:

By addressing the EU AI Act now, you're also preparing for the wave of U.S. state regulations that are already here.

Need help with your specific situation? You might use AI in ways I haven't covered here. Library contexts are weird and varied. If you need to talk through your specific systems and compliance obligations, that's what I do. Get in touch.

AI compliance deadline coming up?

Get new posts by email, or book a free 30-minute call if you’re facing a contract, AI policy, or vendor decision.

Get the newsletter Free 30-min call