The EU AI Act became law in August 2024. Full enforcement hits August 2, 2026. If you think this is Europe’s problem, you’re wrong.

(Colorado passed similar AI regulation that hits June 2026. This is spreading.)

The Quick Version

The EU AI Act ranks AI systems by risk level and sets rules accordingly. High-risk systems face strict requirements. Low-risk systems get lighter rules.

For libraries:

  • Your vendor’s AI tools might be subject to this
  • If you serve European patrons, you’re in scope
  • Other jurisdictions are copying this framework (Colorado, California, New York)
  • Compliance costs will trickle down to you

Why Ohio Libraries Care About Brussels

Three reasons:

Global vendors aren’t building two versions—one for Europe, one for everywhere else. Too expensive. So they’re building to EU standards and selling that version worldwide. Your next software update will be EU AI Act compliant whether you asked for it or not.

If your library serves any European users (international students, researchers accessing digital collections), you’re technically in scope. That database vendor you use is sweating about this.

The U.S. is copying this playbook. Colorado passed its AI Act in May 2024 (effective June 30, 2026). California, New York, and several other states followed in 2025. This isn’t going away.

The Risk Pyramid

The EU AI Act divides AI into four categories:

Unacceptable Risk (Banned): Social scoring systems, manipulative AI, real-time facial recognition in public spaces. You’re probably not touching this. If you are, we need to talk.

High-Risk: AI that makes decisions about access to essential services, evaluates people, or manages employment.

If your AI is high-risk, you need:

  • Risk management processes
  • High-quality training data (no biased datasets)
  • Human oversight
  • Transparency (users must know they’re interacting with AI)
  • Detailed documentation

Most library AI tools probably aren’t technically high-risk. But they’re close enough that vendors are playing it safe.

Limited Risk: Chatbots, AI content generators. Users must be told they’re interacting with AI.

Running an AI research assistant chatbot? You need a disclosure. Simple, but required.
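
If you run your own chat front end, the fix can be as simple as a fixed first message. Here's a minimal Python sketch of that idea. The disclosure wording and the send_to_model hook are my own placeholders, not language from the Act, so check the Act's transparency provisions (Article 50 in the final text) before treating any particular disclosure as sufficient.

```python
# Minimal sketch: disclose the AI before the first user turn.
# The disclosure wording and send_to_model() are placeholders, not
# anything prescribed by the EU AI Act itself.

AI_DISCLOSURE = (
    "Heads up: you're chatting with an automated AI research assistant, "
    "not a human librarian. Ask at the reference desk to reach a person."
)

def start_chat_session(send_to_model):
    """Console chat loop that shows the disclosure up front."""
    print(AI_DISCLOSURE)
    while True:
        user_turn = input("> ")
        if user_turn.strip().lower() in {"quit", "exit"}:
            break
        print(send_to_model(user_turn))

# Example: wire in any backend you like.
# start_chat_session(lambda q: f"(model reply to: {q})")
```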

Minimal Risk: Everything else. No specific requirements, but general EU law still applies (GDPR, accessibility).
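
If it helps to see the pyramid as data, here's a rough Python sketch: the four tiers, plus a hand-triage of a few example library tools. The tool names and tier assignments are illustrative guesses on my part, not legal determinations; real classification depends on what the system actually does.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "risk management, data quality, human oversight, documentation"
    LIMITED = "transparency: tell users they're talking to an AI"
    MINIMAL = "no AI-Act-specific duties; GDPR etc. still apply"

# Illustrative guesses only; confirm against vendor documentation.
library_tool_triage = {
    "discovery-layer relevance ranking": RiskTier.MINIMAL,
    "AI research assistant chatbot": RiskTier.LIMITED,
    "resume-screening tool in library HR": RiskTier.HIGH,
}

for tool, tier in library_tool_triage.items():
    print(f"{tool}: {tier.name} -> {tier.value}")
```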

What This Means for Vendor Contracts

Look for:

AI Disclosure Clauses: Vendors should tell you if they’re using AI, what it does, and how it works. Vague answers are red flags.

Data Usage Rights: If your vendor is training AI on library usage patterns, patron behavior, or circulation data—that’s your data. You should know about it. You should have veto rights.

Compliance Responsibility: Who’s on the hook if the AI screws up? Under the EU AI Act, it’s the “deployer” (you) and the “provider” (vendor). Make sure your contract specifies who handles compliance work.

Algorithm Audits: High-risk systems need regular audits. If your vendor is subject to this, they’ll pass costs along. Budget for it. Ask if you can see the audit results.

The British Library Wake-Up Call

The British Library got hit with ransomware in October 2023. Not AI-related, but it exposed something crucial: libraries are terrible at vendor security audits.

The attack shut down the British Library for months. Catalog offline. Digital collections inaccessible. Recovery costs ran to an estimated £7 million.

Now imagine that happening because your AI vendor had a security hole. Or used biased training data. Or violated GDPR because they didn’t understand EU AI Act compliance.

You can’t just trust vendors to handle this.

Questions to Ask Your Vendors

Next sales demo or contract renewal:

  1. “Does this system use AI? If so, what does it do?”
  2. “Is this system considered high-risk under the EU AI Act?”
  3. “What data are you using to train this AI? Where does it come from?”
  4. “Can I see your AI impact assessment or risk documentation?”
  5. “If this AI makes a mistake, who’s legally responsible?”
  6. “How do I disable or opt out of AI features if needed?”
  7. “What’s your timeline for EU AI Act compliance?”

If they can’t answer these clearly, don’t sign the contract.

What You Should Do Now

Short term (next 3 months):

  • Inventory every system you use that might involve AI (see the sketch after this list)
  • Read vendor contracts and look for AI clauses
  • Start asking vendors the questions above
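
That inventory doesn't need to be fancy. A spreadsheet works; so does a short script like this Python sketch. The column names and the example row are assumptions of mine; adapt them to whatever your library already tracks.

```python
import csv

# Sketch of an AI-systems inventory. Field names and the example row
# are assumptions; swap in whatever your library actually tracks.
FIELDS = [
    "system", "vendor", "ai_features", "likely_risk_tier",
    "patron_data_involved", "contract_renewal_date",
]

rows = [
    {
        "system": "Discovery layer",
        "vendor": "ExampleVendorCo",  # hypothetical vendor
        "ai_features": "AI summaries, relevance ranking",
        "likely_risk_tier": "limited",
        "patron_data_involved": "search logs",
        "contract_renewal_date": "2026-07-01",
    },
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```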

Medium term (next 6-12 months):

  • Add AI disclosure requirements to RFP templates
  • Create internal policy for AI tool evaluation
  • Train staff to recognize when they're using AI tools

Long term (next 1-2 years):

  • Budget for increased vendor costs (compliance isn’t free)
  • Develop patron-facing AI transparency policies
  • Monitor U.S. state laws as they evolve

The Bottom Line

The EU AI Act isn’t just European red tape. It’s reshaping how AI tools work globally, and libraries are in the blast radius.

You don’t need to become a lawyer. But you do need to understand the basics, ask vendors hard questions, and plan for a world where AI tools come with compliance requirements.

Most library vendors have no idea how to comply with this yet. They’re figuring it out as they go. If you don’t ask questions, they’ll make decisions on your behalf—decisions that might not be in your best interest.

Ask the questions. Push for transparency. Don’t let vendors hand-wave this away with “we’re working on compliance.”

Your patrons deserve better.

Don’t wait for vendors to figure this out. Make them tell you their plan. Now.


Authenticity note: With the exception of images, this post was not created with the aid of any LLM product for prose or description. It is original writing by a human librarian with opinions.