The Unhinged Librarian
15 min read

Colorado AI Act for Libraries: The Practical Guide

By Sam Chada

Library technology consultant with 20 years in library tech. I've trained implementation teams, managed complex vendor relationships, and sat in the meetings where vendors decided the pricing you're paying. I know how this industry works because I've been on both sides of it - vendor and library.

TL;DR
  • Colorado AI Act (SB 24-205) effective June 2026 requires high-risk AI systems to have impact assessments, human review, and bias audits. Library AI is likely in scope.
  • Vendor impact: compliance costs shift to vendors who build to Colorado standards and sell nationwide (since that's cheaper than building multiple compliance frameworks). Libraries get dragged into vendor legal strategy.
  • 9+ other states have pending similar laws. Rather than adapting to three different frameworks, expect vendors to standardize to strictest state requirement and apply universally.
  • Library action: audit your AI vendor agreements now for compliance requirements, impact assessment clauses, and who bears liability if AI fails compliance audits.

Colorado's AI Act (SB 24-205) was supposed to take effect February 1, 2026. Then in December 2025, the effective date got pushed to June 30, 2026 - giving businesses six more months to figure things out.

That's interesting for two reasons. First, it shows how unprepared even big companies are. Second, it means the real squeeze is happening right now. Vendors are in crisis mode. And libraries are getting caught in the middle.

Here's what you need to know: Colorado's law is basically the American version of the EU AI Act, with some key differences. It focuses heavily on "high-risk AI" and "consequential decisions" about people. And yes, library AI probably qualifies.

This guide is for every library, not just Colorado ones. Why? Because once one major state has an AI law, vendors build to that standard and sell it everywhere. Plus 9 other states have similar laws pending. You're either dealing with this now or dealing with it in 18 months across three different state legal frameworks.

Better to get ahead of it.

What Colorado Actually Changed: The Practical Reality

Colorado's AI Act (CRS 6-1-1701 to 6-1-1707) isn't as detailed as the EU AI Act. It's shorter, scrappier, and in some ways more dangerous because it's more ambiguous.

The core rule: If you're using high-risk AI that makes "consequential decisions" about people, you need to have your act together. Documentation. Bias testing. Human oversight. Impact assessments. All of it.

And here's the kicker: The law defines "high-risk AI" and "consequential decisions" broadly enough to catch things you might not think apply.


Colorado defines a "high-risk AI system" as one that makes, or is a substantial factor in making, a "consequential decision" about a consumer. That's broad. And intentionally so.

The statute then lists the areas where decisions count as consequential:

  • Education enrollment or an education opportunity
  • Employment or an employment opportunity
  • A financial or lending service
  • An essential government service
  • Health-care services
  • Housing
  • Insurance
  • A legal service

For libraries, the trigger is usually education - the statute covers decisions about "education enrollment or an education opportunity." And here's where it gets real.

What's a "Consequential Decision" for Libraries?

This is the question vendors and libraries are arguing about right now.

Narrow interpretation: Only decisions that directly limit someone's access to education (like denying a student library card).

Broad interpretation: Any AI decision that affects someone's ability to pursue education, including resource recommendations, search rankings, collection suggestions, anything that shapes what someone can access.

Colorado's law leans toward the broad interpretation. Because "consequential" doesn't mean "major." Under the statute, a consequential decision is one that has a "material legal or similarly significant effect" on a consumer's access to (or the cost and terms of) things like education, employment, or housing.

Question: Does an AI recommendation that affects what resources a student uses in their research have a "significant, material effect" on their education? Arguably yes. Your vendor is betting yes. So are we.

The Five Core Requirements: What You Actually Have to Do

If your library is deploying high-risk AI, Colorado requires five things. These aren't theoretical. These are compliance requirements enforceable by the Colorado Attorney General.

1. Conduct and Document an Impact Assessment

Before (or immediately upon) deploying high-risk AI, you need to complete a "high-risk AI impact assessment." This is a formal document that includes:

  • The system's purpose and intended use
  • The data it trains on, and known biases in that data
  • Foreseeable risks of algorithmic discrimination, and how you mitigate them
  • How you test the system and monitor it after deployment
  • What transparency you provide to affected users

This isn't a checklist. It's a multi-page document. And it needs to be updated annually or whenever the system significantly changes.

The key word here is "document." You have to be able to show regulators that you did this work. If regulators investigate and you don't have documentation, you're in trouble - even if you actually did the thinking.

2. Maintain and Update a Risk Management Program

Colorado requires "appropriate risk management procedures" for high-risk AI. This means written policies covering:

  • How you identify and assess AI risks
  • Who is responsible for oversight
  • How you monitor for bias and errors on an ongoing basis
  • What happens when something goes wrong (incident response)
  • How often the program gets reviewed and updated

These don't need to be perfect. But they need to exist. And they need to show reasonable diligence - that you're taking AI risks seriously, not just saying you are.

3. Implement Meaningful Human Oversight

High-risk AI can't operate autonomously. Someone human needs to review and approve AI decisions, especially when they affect people's rights.

The law says "meaningful" oversight. What does that actually mean?

It doesn't mean rubber-stamp approval. It means someone with actual authority and competence looking at AI recommendations and deciding whether to accept, modify, or reject them.

For library applications: If AI ranks search results for students, a librarian should review flagged results. If AI recommends collection items, someone should check for bias, missing context, or inappropriate suggestions (these may not be immediately obvious without domain expertise). If AI makes decisions about resource access, a human needs to be able to override it.

The point is: Don't let AI make decisions unilaterally. Even if your AI is really good, Colorado law requires a human to maintain control.
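One way to make "meaningful" oversight concrete is to route AI output into two buckets: auto-accepted, and held for a librarian. The sketch below is a hypothetical illustration - the `Recommendation` fields, the 0.9 confidence threshold, and the flag criteria are all assumptions, not anything the statute or any vendor product specifies:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    score: float    # model confidence, 0-1
    flagged: bool   # e.g. sensitive subject area, low-diversity result set

def route(recs, auto_threshold=0.9):
    """Split AI output into auto-accepted items and items held for a librarian.

    Flagged items always go to a human regardless of confidence, and
    low-confidence items are held too - the model never decides alone
    on anything questionable.
    """
    auto, held = [], []
    for r in recs:
        (held if r.flagged or r.score < auto_threshold else auto).append(r)
    return auto, held

recs = [
    Recommendation("Intro to Chemistry", 0.97, False),
    Recommendation("Contested history title", 0.95, True),
    Recommendation("Obscure journal", 0.60, False),
]
auto, held = route(recs)
print([r.item for r in held])  # ['Contested history title', 'Obscure journal']
```

The design choice that matters: flagged items bypass the confidence score entirely, so a very confident model still can't push a questionable recommendation past the human.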

4. Provide Clear Disclosure and Transparency

When high-risk AI makes consequential decisions about someone, they need to know it was AI. And they need information about:

  • What the system does and why it's being used
  • What kinds of data it relies on
  • Who to contact with questions
  • How to appeal or correct a decision they think is wrong

This doesn't mean you need to hire an AI lawyer to write disclosure language. But you need something. "AI helped rank these results" is better than nothing. "Your recommendations were generated by a machine learning system trained on X, Y, Z data" is better still.

5. Use Quality Training Data

High-risk AI needs training data that's representative and tested for bias. Colorado requires you to:

  • Document what data the system trains on and where it comes from
  • Identify known biases and limitations in that data
  • Test for disparate impact across different populations
  • Take reasonable steps to mitigate the biases you find

For library systems, this gets tricky. Most AI recommendation engines train on historical library data (circulation, searches, checkouts). That data reflects your library's existing collection biases and patron demographic patterns.

If your library's collection historically overrepresented certain subjects or demographics, the AI learns that bias and amplifies it. Under Colorado law, you need to acknowledge this and do something about it.

Options: Reweight training data to reduce bias. Diversify training sources. Add manual review to catch problematic recommendations. Test regularly for disparate impact across different populations.
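One of those options - testing for disparate impact - can be sketched in a few lines. The group labels, the log format, and the 0.8 "four-fifths rule" cutoff below are illustrative assumptions borrowed from employment-discrimination practice, not anything the Colorado statute prescribes:

```python
from collections import Counter

# Hypothetical recommendation log: (patron_group, was_recommended) pairs.
# Real data would come from your discovery system's analytics export.
events = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(events):
    """Rate at which each group's items get recommended."""
    totals, hits = Counter(), Counter()
    for group, recommended in events:
        totals[group] += 1
        if recommended:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by highest; the four-fifths
    rule flags anything below 0.8 for closer review."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(events)
print(rates)                          # {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact_ratio(rates))  # ~0.33 -> flag for review
```

The point isn't the threshold - it's that the test is trivially automatable, so "we test regularly for disparate impact" can be a scheduled script, not an annual scramble.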

The Vendor Problem: What Vendors Are Scrambling With Right Now

Here's what's happening in library vendor boardrooms right now: they're deciding whether to build one compliance framework to Colorado's standard and apply it nationwide, or maintain separate frameworks per state. Most will pick the first option, because it's cheaper. And they're rewriting contract language to push as much compliance responsibility onto customers as possible.

Don't assume your vendor has figured this out. Ask them directly where they stand.

The Contract Negotiation Trap

When you renew your vendor contracts (or negotiate new ones), look for language that pushes compliance responsibility onto you.

Bad language: "Customer is responsible for all compliance with applicable AI regulations."

Better language: "Vendor will provide bias assessment documentation and impact assessment frameworks. Customer is responsible for conducting the assessment using vendor-provided materials and determining applicability in customer's context."

The difference: In the first version, you're responsible for everything, including things the vendor controls (like training data and algorithm design). In the second, the vendor does their part, you do yours.

Negotiate this. It matters when regulators come asking questions.

Real Example: How This Applies to a Specific Library Tool

Let's say you use an AI-powered discovery system that ranks search results based on machine learning. Here's how Colorado law applies:

Step 1: Is This High-Risk?

Question: Does this AI system make decisions that materially affect someone's ability to pursue education?

Answer: Probably yes. Students using your discovery system to find resources for assignments depend on the ranked results. If the ranking system is biased (e.g., consistently downranking certain subjects or viewpoints), it materially affects what educational resources they find.

Conclusion: This is likely high-risk under Colorado law.

Step 2: Impact Assessment

You (or your vendor) need to document:

  • What the ranking model does and what data it trains on
  • Known biases in that data (e.g., historical circulation patterns)
  • Who is affected: students, faculty, public patrons
  • What could go wrong, and how you would catch it

Step 3: Risk Management

You need a plan for:

  • Monitoring the rankings for bias on a regular schedule
  • Responding when someone reports a biased or harmful result
  • Reviewing and updating the plan at least annually, or when the system changes

Step 4: Human Oversight

Who's responsible for reviewing AI decisions? Probably your collection development team or reference librarians. They need to be in the loop - either by reviewing flagged results or by having a process for users to escalate AI recommendations they think are wrong.

Step 5: Transparency

You need to tell users the results are AI-ranked. "These results are ranked by an AI system trained on X data. You can turn off AI ranking here if you prefer chronological/relevance sorting."

Step 6: Training Data

Your vendor needs to document their training data and identify known biases. "We trained on 5 years of library circulation data. Because your library's collection has historically concentrated in X areas, the AI may over-recommend those areas. We mitigate this by weighting training data toward underrepresented subjects."

If your vendor can't explain this, they're not ready for Colorado compliance.
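The reweighting idea in that explanation can be sketched simply. The subject labels and counts below are made up, and inverse-frequency weighting is one common approach - not necessarily what any vendor actually does:

```python
from collections import Counter

# Hypothetical circulation history: the subject of each checkout.
# A collection skewed toward STEM produces STEM-skewed recommendations.
checkouts = ["stem"] * 80 + ["arts"] * 15 + ["local_history"] * 5

def inverse_frequency_weights(records):
    """Weight each subject inversely to its share of the data, so
    underrepresented subjects aren't drowned out during training."""
    counts = Counter(records)
    n_subjects = len(counts)
    total = len(records)
    return {subj: total / (n_subjects * c) for subj, c in counts.items()}

weights = inverse_frequency_weights(checkouts)
print(weights)  # stem ~0.42, arts ~2.22, local_history ~6.67
```

A weight below 1 downweights an overrepresented subject; a weight above 1 boosts an underrepresented one. If a vendor claims to mitigate bias, this is the kind of concrete mechanism they should be able to describe.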

The Enforcement Reality: What Actually Happens

Here's what Colorado's Attorney General is probably going to do:

Year 1-2 (2026-2027): Focus on egregious violations and companies they know are non-compliant. Probably not focusing on libraries initially.

Year 3+ (2028 onward): As compliance becomes normal, expect investigations driven by complaints. If a library patron claims they were denied access to resources because of biased AI, and the library can't show a proper impact assessment - that's a problem.

The threat isn't immediate. But it's real. And it'll get worse as enforcement ramps up.

More immediately: If your vendor gets caught violating the law, they get fined. Which means your contract might get terminated or renegotiated. Which is expensive and disruptive.

Your incentive: Make sure your vendors are compliant. Because their non-compliance can become your problem.

What You Actually Need to Do (Practical Checklist)

Colorado AI Act Library Audit Checklist

Complete by end of Q2 2026 (before June 30 deadline).

  • ☐ List every system you use that uses AI (discovery, recommendation engine, chatbot, content ranking, anything that "learns")
  • ☐ For each system: Ask - does this make decisions that affect patron access to educational resources?
  • ☐ Categorize as high-risk or not high-risk
  • ☐ For each high-risk system: Request impact assessment documentation from vendor
  • ☐ For each high-risk system: Document your current risk management practices
  • ☐ For each high-risk system: Identify who has oversight authority
  • ☐ For each high-risk system: Review current user-facing transparency
  • ☐ Identify gaps in compliance
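If it helps to keep that audit in a structured, queryable form, here's a minimal sketch. The fields and the triage rule are working assumptions for illustration, not a legal test of what counts as high-risk:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    function: str               # what the AI component actually does
    affects_access: bool        # does it shape what patrons can find or use?
    has_impact_assessment: bool = False
    oversight_owner: str = ""   # who reviews its decisions

    @property
    def high_risk(self) -> bool:
        # First-pass triage mirroring the checklist question: anything
        # shaping access to educational resources is presumed high-risk.
        return self.affects_access

inventory = [
    AISystem("Discovery ranking", "VendorX", "ranks search results", True),
    AISystem("Form spam filter", "VendorY", "filters contact submissions", False),
]

# Compliance gaps: high-risk systems with no impact assessment on file.
gaps = [s.name for s in inventory if s.high_risk and not s.has_impact_assessment]
print(gaps)  # ['Discovery ranking']
```

The `gaps` list is your to-do list: every name in it needs vendor documentation requested, an oversight owner assigned, or both.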

Impact Assessment Template (For Each High-Risk AI)

  • ☐ System name and purpose: [What is this AI?]
  • ☐ Training data: [What data does it learn from? Where does it come from?]
  • ☐ Known biases: [What biases exist in the training data?]
  • ☐ Affected populations: [Who does this affect? Describe diversity of populations]
  • ☐ Foreseeable harms: [What could go wrong? Who could be harmed?]
  • ☐ Risk mitigation: [How are you addressing these harms?]
  • ☐ Testing approach: [How do you test for bias and problems?]
  • ☐ Human oversight: [Who reviews AI decisions? When?]
  • ☐ User transparency: [What do users know about the AI?]
  • ☐ Appeal process: [How can users challenge AI decisions?]
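A small script can stamp out one empty assessment document per high-risk system from the template above - the Markdown layout is just one convenient choice, not a required format:

```python
# The template fields above, as (heading, prompt) pairs.
FIELDS = [
    ("System name and purpose", "What is this AI?"),
    ("Training data", "What data does it learn from? Where does it come from?"),
    ("Known biases", "What biases exist in the training data?"),
    ("Affected populations", "Who does this affect?"),
    ("Foreseeable harms", "What could go wrong? Who could be harmed?"),
    ("Risk mitigation", "How are you addressing these harms?"),
    ("Testing approach", "How do you test for bias and problems?"),
    ("Human oversight", "Who reviews AI decisions? When?"),
    ("User transparency", "What do users know about the AI?"),
    ("Appeal process", "How can users challenge AI decisions?"),
]

def assessment_skeleton(system_name: str) -> str:
    """Render an empty Markdown impact-assessment document for one system."""
    lines = [f"# Impact Assessment: {system_name}", ""]
    for heading, prompt in FIELDS:
        lines += [f"## {heading}", f"_{prompt}_", "", "TODO", ""]
    return "\n".join(lines)

doc = assessment_skeleton("Discovery ranking")
```

Run it once per system from your inventory and you have a consistent set of documents to fill in, version, and update annually.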

Vendor Negotiation Checklist

  • ☐ Ask vendor: "Which of your systems are high-risk AI under Colorado law? Why?"
  • ☐ Ask vendor: "Can you provide impact assessments for high-risk systems?"
  • ☐ Ask vendor: "How have you tested for bias? What did you find?"
  • ☐ Ask vendor: "Can we see training data documentation?"
  • ☐ Ask vendor: "What's your timeline for Colorado AI Act compliance?"
  • ☐ Request contract revision: Clear allocation of compliance responsibility
  • ☐ Request: Right to audit AI systems for compliance
  • ☐ Request: Ability to disable AI features if they're non-compliant
  • ☐ Request: Pricing terms that don't increase because of compliance costs

User Transparency Checklist

  • ☐ Review website/app for AI disclosure: Is it clear where AI is used?
  • ☐ Check website for opt-out options: Can users disable AI features?
  • ☐ Test user appeal process: How do users challenge AI recommendations?
  • ☐ Train staff: Can staff explain to patrons when AI is involved?
  • ☐ Document process: Write down your human oversight procedures
  • ☐ Set monitoring schedule: How often do you audit for bias? When?

The Key Questions for Your Vendors (Copy-Paste These)

Don't ask all at once. Work through these in your next vendor meeting or contract negotiation.

  1. "Which parts of your system use AI? What does each AI component do?" Be specific. "Learns from usage patterns" isn't specific enough.
  2. "Under SB 24-205, which of these are high-risk AI systems? Walk me through your analysis." This forces them to think about it.
  3. "Can you provide your high-risk AI impact assessments?" Not a template. Your actual assessments.
  4. "What training data does each system use? Can we see documentation?" Red flag if they can't answer this.
  5. "Have you tested these systems for bias? What methodology did you use?" Bias testing should be documented.
  6. "How do you implement human oversight? Who reviews AI decisions?" What's the actual process?
  7. "If this AI makes a mistake or produces biased results, who's responsible for fixing it?" This reveals who bears the risk.
  8. "What transparency are you providing to users? Can they understand why AI made a decision?" Check if users even know AI is involved.
  9. "What's your timeline for Colorado AI Act compliance? Are you on track for June 30, 2026?" If they're vague, they're behind.
  10. "If compliance requirements change, who absorbs the cost? How will that be handled in the contract?" Protect yourself from surprise costs.

The State Law Domino Effect

Colorado isn't alone. Here's what happened after SB 24-205 passed: at least nine other states introduced similar high-risk AI bills, many modeled directly on Colorado's framework. They differ in definitions, deadlines, and enforcement - which is exactly why vendors will standardize to the strictest one.

The point: Colorado's law isn't an outlier. It's the new normal. If you're not thinking about this, you should be.

Real Talk: Why Your Library Probably Isn't Ready

Let's be honest:

  • You probably don't have a complete inventory of which of your systems use AI.
  • You probably don't have an impact assessment for any of them.
  • Your vendor contracts probably don't mention AI compliance at all.
  • Nobody on staff owns this yet.

You're not alone. Almost no libraries are ready. But the law doesn't care about readiness.

So here's your path forward:

  1. Do the audit. Figure out what AI systems you're actually using.
  2. Ask vendors hard questions. Push them to explain their compliance strategy.
  3. Document your thinking. Write down what you did, what you decided, and why. (This protects you if regulators come asking.)
  4. Make reasonable decisions. Based on what you know, do what makes sense. You're not expected to be perfect, just reasonable.
  5. Monitor and adjust. As guidance becomes clearer, adjust your practices.

That's compliance. It's not sexy, but it works.

The Intersection with Other Laws

Colorado's AI Act doesn't exist in a vacuum. You're also dealing with:

  • State privacy laws (including the Colorado Privacy Act) that govern patron data
  • FERPA, if you handle student education records
  • State library confidentiality statutes protecting patron records
  • The EU AI Act, if your vendors operate internationally

The good news: the requirements overlap heavily. Colorado basically took the EU AI Act and Americanized it, so if you comply with Colorado law, you're most of the way to compliance with the other frameworks. Most of the way isn't all of the way, though - check each law's specifics.


Need help assessing your specific AI systems? It's not always obvious whether something is "high-risk" or what compliance looks like in your context. If you want to talk through your specific tools and what you actually need to do, that's what I do. Get in touch.


Want updates (or backup)?

Get new posts by email, or book a free 30-minute call if you’re facing a contract, AI policy, or vendor decision.
