In May 2024, Colorado Governor Jared Polis signed SB 24-205, the Colorado AI Act, into law. It takes effect June 30, 2026, pushed back from the original February 1, 2026 start date during a 2025 special session.

Colorado just became the first U.S. state to pass comprehensive AI regulation. If you think this only matters for Colorado libraries, you’re wrong.

(The EU did this first with the EU AI Act. Now states are copying the framework.)

What Colorado Actually Did

The Colorado AI Act is the EU AI Act’s American cousin. It focuses on “high-risk AI systems”—those that make or substantially assist with “consequential decisions” about people.

What’s a “consequential decision”? Any decision that has a “material legal or similarly significant effect” on someone’s:

  • Education access or opportunity
  • Employment
  • Financial or lending services
  • Essential government services
  • Healthcare
  • Housing
  • Insurance
  • Legal services

Notice what’s on that list? Education access. That’s libraries.

If your library uses AI to recommend resources to students, make decisions about who gets access to what, evaluate program participation, or automate decisions that affect patron services, you might be dealing with a high-risk system under this law.

Even if you’re not in Colorado.

The “We’re Not in Colorado” Fallacy

Three reasons this matters outside Colorado:

Your vendors operate nationally. They’re not building a Colorado version and a non-Colorado version. Too expensive. They’ll build to Colorado standards and sell it everywhere. Just like they’re doing with the EU AI Act.

Other states followed. California, New York, Connecticut, and Washington all passed similar legislation in 2025. Massachusetts and Illinois have bills pending. Within 18 months of Colorado’s law, nearly a dozen states enacted AI regulations.

Patron coverage matters. If you serve any Colorado residents (distance education, digital collections, interlibrary loan), you’re potentially in scope.

What the Law Requires

If you’re a “deployer” of high-risk AI (using vendor AI tools), Colorado requires:

Impact Assessments

Before you deploy a high-risk AI system, complete an impact assessment:

  • What the AI does and how it works
  • What data it uses
  • Potential risks and mitigation
  • Whether the AI has been tested for bias
  • How you’ll monitor it over time

This isn’t a one-page form. It’s a serious document. Update it annually or whenever the system changes significantly.
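
If it helps to see the shape of one, here’s a minimal sketch of an impact-assessment record as structured data. The field names are my own reading of the statute’s categories, not an official template, and the annual review rule is the one the law describes.

```python
# Illustrative only: not an official Colorado form.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    system_name: str          # e.g., a vendor discovery tool
    purpose: str              # what the AI does and how it works
    data_sources: list[str]   # what data it uses
    known_risks: list[str]    # potential risks and mitigations
    bias_tested: bool         # has the vendor shared bias-testing results?
    monitoring_plan: str      # how you'll watch it over time
    completed_on: date = field(default_factory=date.today)

    def next_review_due(self) -> date:
        """Update annually, or sooner if the system changes significantly."""
        return self.completed_on + timedelta(days=365)

# Hypothetical system, for illustration
assessment = ImpactAssessment(
    system_name="Example Discovery AI",
    purpose="Ranks search results using usage patterns",
    data_sources=["circulation logs", "search queries"],
    known_risks=["popular items crowd out niche scholarship"],
    bias_tested=False,  # a False here should block deployment
    monitoring_plan="Quarterly sample of rankings reviewed by staff",
)
print(f"Next review due by {assessment.next_review_due()}")
```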

Risk Management Program

Policies and procedures for:

  • Identifying and mitigating AI risks
  • Testing AI systems before deployment
  • Monitoring AI performance (a minimal check is sketched after this list)
  • Handling AI failures or errors
  • Regular audits
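
Monitoring doesn’t have to be fancy to count. Here’s an illustrative check, with a made-up log format and an arbitrary 5% threshold, that flags a system for audit when staff keep overriding it.

```python
# Illustrative only: the log format and the 5% threshold are assumptions.
sampled_decisions = [
    {"ai_recommendation": "deny interlibrary loan", "human_action": "overrode_ai"},
    {"ai_recommendation": "rank item first", "human_action": "accepted_ai"},
    {"ai_recommendation": "flag account", "human_action": "overrode_ai"},
    # ...in practice, pull a random sample from your decision log
]

OVERRIDE_THRESHOLD = 0.05  # if staff overrule the AI this often, something is off

overrides = sum(1 for d in sampled_decisions if d["human_action"] == "overrode_ai")
rate = overrides / len(sampled_decisions)
if rate > OVERRIDE_THRESHOLD:
    print(f"ALERT: staff overrode {rate:.0%} of sampled AI decisions; schedule an audit")
else:
    print(f"OK: override rate {rate:.0%} is within the threshold")
```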

Disclosure Requirements

Tell people when AI is being used to make consequential decisions:

  • Clear notice that AI is involved
  • Explanation of what the AI does
  • Information about how to appeal or challenge AI decisions
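
Those three elements can fit in a few lines. Here’s a sketch of a notice generator; the wording and contact address are illustrative, not statutory language.

```python
# Wording is illustrative, not statutory language; the contact address is made up.
def disclosure_notice(system: str, what_it_does: str, appeal_contact: str) -> str:
    return (
        f"Notice: {system} uses artificial intelligence.\n"           # clear notice
        f"What it does: {what_it_does}.\n"                            # explanation
        f"To question or appeal a result, contact {appeal_contact}."  # appeal route
    )

print(disclosure_notice(
    "Our discovery search",
    "ranks and recommends library resources based on usage patterns",
    "ai-review@library.example.edu",
))
```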

Human Oversight

High-risk AI systems can’t operate completely autonomously. You need meaningful human review of AI decisions, especially when they affect people’s rights or opportunities.
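
In code terms, the pattern is simple: the AI drafts, a person decides, and both get recorded. A minimal sketch with hypothetical function names (this is not any vendor’s actual API):

```python
# The AI only suggests; the consequential decision belongs to a person.
def ai_recommend(request: str) -> str:
    """Stand-in for whatever your vendor's tool returns."""
    return "deny extended borrowing privileges"

def decide(request: str, staff_reviewer) -> dict:
    suggestion = ai_recommend(request)
    final, reason = staff_reviewer(request, suggestion)  # human makes the call
    return {"request": request, "ai_suggested": suggestion,
            "human_decided": final, "reason": reason}

# Example: staff overrides the AI and records why.
record = decide(
    "patron asks for semester-long checkout",
    lambda req, ai: ("approve", "documented accommodation on file"),
)
print(record)
```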

Data Protection

  • Use training data that’s representative and tested for bias
  • Protect sensitive data appropriately
  • Document where your data comes from

What This Means for Vendors

The law creates obligations for “developers” (vendors who build AI) and “deployers” (libraries who use it).

Vendors are scrambling. Some are adding AI disclosure clauses, creating impact assessment templates, building bias testing into development, or limiting what their AI can do to avoid “high-risk” classification.

The catch: The law lets developers and deployers agree on who handles what. Your vendor might try to push compliance work onto you.

Watch for contract language like:

  • “Customer is responsible for conducting impact assessments”
  • “Customer shall ensure compliance with applicable AI laws”
  • “Vendor provides tools as-is; compliance is customer’s responsibility”

That’s vendor-speak for “this is your problem, not ours.” Don’t sign without negotiating.

Real Example: Discovery Systems and Bias

Say you use an AI-powered discovery system that ranks search results. It learns from usage patterns to “improve” recommendations.

Is that high-risk under the Colorado AI Act? Maybe.

If the AI is recommending resources to students working on assignments, and those recommendations affect their ability to complete academic work, that could be a “consequential decision” about education access.

Which means you’d need to:

  1. Confirm the vendor tested the AI for bias
  2. Document how the AI works and what data it uses
  3. Disclose to patrons that AI is ranking their results
  4. Have a way for patrons to opt out or appeal
  5. Monitor the AI to ensure it’s not creating discriminatory outcomes (one rough check is sketched below)
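
For step 5, here’s one rough way to look for disparate outcomes, assuming you can tag results by patron group. The groups, the numbers, and the 0.8 cutoff (borrowed from the EEOC’s “four-fifths” hiring heuristic, not from the Colorado AI Act) are all illustrative.

```python
# All data, group labels, and the threshold here are illustrative.
from collections import defaultdict

# (patron group, did the AI surface a relevant source?) sampled from discovery logs
outcomes = [
    ("campus", True), ("campus", True), ("campus", False),
    ("distance", False), ("distance", True), ("distance", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, got_relevant in outcomes:
    totals[group] += 1
    hits[group] += got_relevant

rates = {g: hits[g] / totals[g] for g in totals}
worst, best = min(rates.values()), max(rates.values())
# The 0.8 cutoff borrows the EEOC "four-fifths" heuristic; it is not in the Act.
if best > 0 and worst / best < 0.8:
    print(f"Possible disparity in success rates: {rates}; investigate the ranking model")
```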

Will Colorado’s Attorney General come after a library for this? Probably not. But the vendor could face penalties, which means they’ll pass compliance costs to you. And if a patron complains that the AI blocked their access unfairly, you need a paper trail showing you did due diligence.

FTC Enforcement You Need to Know About

From 2023 through 2025, the FTC has moved aggressively against AI company practices:

  • Rite Aid (2023): The FTC banned Rite Aid from using facial recognition for five years after its systems falsely flagged customers as shoplifters, disproportionately affecting people of color.
  • Amazon (2023): The FTC took action over Alexa’s retention of children’s voice recordings and their use in AI training; Amazon settled for $25 million.
  • OpenAI (ongoing): The FTC opened an investigation into OpenAI’s data practices and whether ChatGPT violates consumer protection laws by generating false information about real people.

The pattern is clear: Regulators are scrutinizing AI companies for deceptive practices, inadequate safety testing, biased algorithms, and privacy violations.

Your library vendors are watching these enforcement actions. Some are proactively improving their practices. Others are hoping they won’t be next.

You need to know which camp your vendors are in.

What You Should Do Right Now

Immediately:

  • List every system you use that might involve AI
  • For each one, ask: Does this make decisions that affect patrons? If yes, how?
  • Email vendors: “Is this system considered high-risk under the Colorado AI Act? What’s your compliance plan?”

Next 6 months:

  • Review vendor contracts for AI-related language and compliance responsibilities
  • Create internal policy: “We will not deploy high-risk AI without completing an impact assessment”
  • Start tracking AI-related decisions (what AI recommended, what humans decided, why); a starter sketch follows
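
That tracking can start as a spreadsheet or a script like this sketch. The filename and columns are assumptions, not a mandated format.

```python
# A minimal audit trail: what the AI recommended, what a human decided, and why.
import csv
import os
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.csv"  # assumed filename
FIELDS = ["timestamp", "system", "ai_recommendation", "human_decision", "rationale"]

def log_decision(system: str, ai_rec: str, human_dec: str, rationale: str) -> None:
    write_header = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "ai_recommendation": ai_rec,
            "human_decision": human_dec,
            "rationale": rationale,
        })

log_decision("Example Discovery AI", "suppress result set",
             "restored results", "items were relevant to the assignment")
```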

Next year:

  • Add AI impact assessment requirements to procurement process
  • Train staff on recognizing when AI is making decisions vs. providing information
  • Develop patron-facing disclosures for AI systems you already use

The Uncomfortable Truth

Most libraries aren’t ready for this.

You don’t have AI expertise on staff. You don’t have legal teams. You don’t have budget to hire consultants. And your vendors are figuring this out in real-time.

But ignorance isn’t a defense. If you’re using AI tools that make consequential decisions, you’re on the hook—whether you understood the law or not.

Two choices:

Don’t use high-risk AI. Stick to basic tools that don’t make decisions about people. Recommend resources manually. Use AI for backend stuff (cataloging, data cleanup) but not patron-facing decisions.

Get serious about compliance. Do the impact assessments. Document everything. Push vendors for transparency. Build internal processes. Budget for it.

There’s no middle ground. “We’ll deal with it later” isn’t a strategy—it’s a liability. And as of June 30, 2026, it’s an active liability.

The Reality Check

Enforcement is still ramping up. Colorado’s Attorney General will focus on big tech companies and egregious violations first.

But the trend is undeniable: AI regulation has arrived. Colorado led, a dozen states followed in 2025, and federal AI legislation is being debated in Congress. Within 2-3 years, you’ll be dealing with overlapping federal, state, and possibly sector-specific rules.

The libraries that get ahead of this—that start asking questions, documenting decisions, and demanding transparency from vendors—will be fine.

The libraries that ignore it will be scrambling in 2027 when their state passes a law, their vendor gets sued, and suddenly they’re trying to do 3 years of compliance work in 3 months.

Don’t be that library.

Email your vendors today. Ask the hard questions. Document everything.


Authenticity note: With the exception of images, this post was not created with the aid of any LLM product for prose or description. It is original writing by a human librarian with opinions.