The British Library Got Taken Down for 3 Months. AI Makes It Worse.
The British Library spent over £7 million recovering from ransomware. They're huge and well-funded. What chance does your library have?
In October 2023, the British Library got hit with ransomware. For three months, their catalog was offline. Digital collections inaccessible. Services crippled. Final cost: Over £7 million.
The British Library is huge, well-funded, and has professional IT staff. If they can get taken down for three months, what chance does your library have?
Now add AI to the mix, and things get worse.
What Happened at the British Library
The Rhysida ransomware gang breached the British Library’s network in October 2023. They encrypted systems, stole data, and demanded payment.
The Library refused to pay the ransom. Good for them. But that decision came with consequences:
- Online catalog down until January 2024
- Digital collections unavailable
- Website functionality severely limited
- Internal systems needed complete rebuilding
- Stolen data (including employee information) ended up on the dark web
Total cost: Over £7 million for the rebuild alone. And that doesn’t count reputational damage or the research that couldn’t happen.
Toronto Public Library: Round Two
In late October 2023, within days of the British Library attack, Toronto Public Library got hit. Core systems were down for months, and full restoration took until February 2024. No online catalog. No holds. No renewals. No public computer access. Branches operated with manual workarounds, checking out books with pen and paper like it was 1995.
Since then: More library systems across North America have been hit. Seattle Public Library spent weeks recovering from a ransomware attack that began over Memorial Day weekend in 2024. Chicago Public Library shut down systems for days in late 2025 after suspicious activity was detected.
The pattern is undeniable: Libraries are targets. Most still aren’t prepared.
Why Libraries Are Targets (And Why It’s Getting Worse)
Ransomware gangs attack libraries because:
- Data value: Personal information (patron accounts, staff records, payment details) has value on the black market.
- Operational dependence: Libraries can’t function without their systems, and that pressure to restore service fast is exactly what ransomware gangs count on when they demand payment.
- Weak defenses: Tight budgets. Cybersecurity isn’t a priority until it’s too late. Patches get delayed. Backups aren’t tested. Staff aren’t trained.
- Vendor vulnerabilities: Libraries use dozens of third-party systems. Each one is a potential entry point.
Now add AI to this mix.
How AI Makes Ransomware Worse
AI-powered phishing: Attackers are using AI (like ChatGPT) to write convincing phishing emails. No more obvious typos or broken English. AI-generated phishing emails look legitimate—proper grammar, context-aware messaging, personalized details. Your staff gets an email that looks like it’s from your ILS vendor. They click. They enter credentials. Attackers are in.
AI tools as entry points: Every AI tool you add is another potential vulnerability. That AI chatbot you deployed to answer patron questions? It’s connected to your network. If it has a security flaw, attackers can exploit it. The more AI tools you use, the larger your attack surface.
AI-generated malware: Attackers are using AI to write malware that adapts and evades detection. Traditional antivirus relies on recognizing patterns. AI-generated malware can change its code dynamically, making it harder to catch.
Data poisoning attacks: If your library uses AI trained on patron data, that training data is a target. Attackers can inject false data into the system—“data poisoning”—causing the AI to behave unpredictably or leak sensitive information. Imagine an AI chatbot that starts giving patrons other people’s account information because the training data was corrupted.
This isn’t speculative: security researchers have demonstrated data poisoning attacks many times over.
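To make the idea concrete, here is a minimal sketch of the simplest poisoning variant, label flipping, against a toy classifier. Everything in it is invented for illustration (synthetic data, a scikit-learn logistic regression, arbitrary poisoning rates); a real attack would target whatever pipeline feeds patron interactions back into a vendor’s model:

```python
# Toy demonstration of label-flipping data poisoning.
# All data here is synthetic and invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the data an AI tool is trained on.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, retrain, report test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's only move
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned {frac:>4.0%} of training labels -> "
          f"test accuracy {accuracy_after_poisoning(frac):.2f}")
```

In this toy, accuracy slides gradually rather than failing loudly, which is what makes poisoning dangerous: nothing crashes, the system just quietly becomes untrustworthy.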
The Vendor Problem (Again)
Most libraries don’t build their own AI tools. You’re using vendor products—discovery systems, chatbots, research assistants, recommendation engines.
Vendors are rushing AI features to market without always thinking through security implications.
Ask yourself:
- Does your AI vendor conduct regular security audits?
- Do they have a bug bounty program to catch vulnerabilities?
- Have they been breached before?
- What happens to your data if they get breached?
- Are they following AI security best practices (like OWASP’s Top 10 for LLM Applications)?
If you don’t know the answers, you’re flying blind.
What You Should Be Doing (But Probably Aren’t)
Most libraries aren’t ready for a ransomware attack. Adding AI to your systems without upgrading security is like cutting another doorway into a house that already has broken locks.
Immediate (this week):
- Test your backups. Not just “Do we have backups?” but “Can we actually restore from them?” The British Library had backups, but recovery still took months. (A minimal restore drill appears after this list.)
- Inventory your AI tools. What AI systems are you using? Who has access? What data do they process? Where is that data stored?
- Review vendor security. For every AI vendor, ask: What’s your incident response plan? Have you been breached? What certifications do you have?
- Train your staff on phishing. Run simulated phishing tests. Make sure staff know how to spot suspicious emails—especially AI-generated ones.
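Here is what a minimal restore drill could look like for a SQLite-based backup. The path, table names, and checks are all assumptions, stand-ins for whatever your ILS actually exports; the point is the habit of restoring to scratch and querying the result, not this exact script:

```python
# Minimal restore drill for a SQLite backup file.
# BACKUP_PATH and EXPECTED_TABLES are hypothetical; substitute
# the paths and tables your own systems actually produce.
import shutil
import sqlite3
import sys
import tempfile
from pathlib import Path

BACKUP_PATH = Path("/backups/ils/patrons-latest.db")  # hypothetical
EXPECTED_TABLES = {"patrons", "loans", "holds"}       # hypothetical

def restore_drill(backup: Path) -> None:
    if not backup.exists():
        sys.exit(f"backup missing: {backup}")
    with tempfile.TemporaryDirectory() as scratch:
        # Restore into a scratch copy so the drill can't touch production.
        copy = Path(scratch) / backup.name
        shutil.copy2(backup, copy)
        con = sqlite3.connect(copy)
        try:
            # 1. Does the file pass SQLite's own consistency check?
            status = con.execute("PRAGMA integrity_check").fetchone()[0]
            if status != "ok":
                sys.exit(f"integrity check failed: {status}")
            # 2. Are the tables we depend on present and non-empty?
            tables = {row[0] for row in con.execute(
                "SELECT name FROM sqlite_master WHERE type='table'")}
            missing = EXPECTED_TABLES - tables
            if missing:
                sys.exit(f"restored DB is missing tables: {missing}")
            for table in sorted(EXPECTED_TABLES):
                # Table names come from our own constant, not user input.
                n = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
                print(f"{table}: {n} rows")
        finally:
            con.close()
    print("restore drill passed")

if __name__ == "__main__":
    restore_drill(BACKUP_PATH)
```

Schedule something like this to run after every backup job. A restore that only happens during a crisis isn’t a drill; it’s a gamble.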
Short-term (next 3 months):
- Implement multi-factor authentication (MFA) everywhere. If you’re not using MFA for staff accounts, vendor systems, and admin access, you’re making it too easy.
- Patch aggressively. Security patches need to be applied immediately, especially for internet-facing systems.
- Segment your network. If an attacker gets into your AI chatbot, they shouldn’t automatically have access to your patron database or internal network. (A quick reachability check follows this list.)
- Create an incident response plan. What do you do if you get hit with ransomware? Who makes decisions? How do you communicate? Write it down. Practice it.
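One cheap way to verify segmentation is to test it from the inside. The sketch below, with hypothetical hostnames and ports, runs on the host (or network segment) where a public-facing AI tool lives and confirms it cannot open connections to internal systems:

```python
# Segmentation smoke test: run FROM the segment where a public-facing
# AI tool lives. Hosts and ports below are hypothetical; list the
# internal services that matter in your own environment.
import socket

# (host, port) pairs this segment should NOT be able to reach.
SHOULD_BE_BLOCKED = [
    ("patron-db.internal", 5432),   # hypothetical patron database
    ("staff-files.internal", 445),  # hypothetical staff file share
]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable all mean "blocked"
        return False

failures = 0
for host, port in SHOULD_BE_BLOCKED:
    if can_connect(host, port):
        failures += 1
        print(f"FAIL: {host}:{port} is reachable from this segment")
    else:
        print(f"ok:   {host}:{port} blocked")

raise SystemExit(failures)  # non-zero exit code if segmentation has holes
```

If any of those connections succeed, your chatbot’s segment is one exploited API away from your patron records.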
Long-term (next year):
- Hire or contract cybersecurity expertise. Get professional help—whether it’s a part-time contractor, a shared IT security position with other libraries, or a managed security service.
- Demand security transparency from vendors. When evaluating new AI tools, make security a top-line requirement.
- Participate in information sharing. Join library security networks, such as the interest groups in ALA’s Core division (which absorbed LITA in 2020), or MS-ISAC if you’re a publicly funded US library. When one library gets breached, others need to know.
The Scenario That’s Already Happening
It’s early 2026. Your library deployed an AI research assistant in 2025 that’s popular with patrons. It answers questions, recommends resources, helps with citations.
One day, an attacker exploits a zero-day vulnerability in the AI tool’s API. They gain access to your network, encrypt your systems, and steal patron data.
Your catalog is offline. Your website is down. Patrons can’t check out books. Staff can’t work. Local news picks up the story. Parents are angry that their kids’ data was stolen.
Your director asks: “How did this happen?” And the answer is: “We didn’t think about AI security.”
This isn’t hypothetical. AI-related security incidents at libraries started happening in late 2025.
AI Security Doesn’t Have to Be Overwhelming
I’m not saying don’t use AI. I’m saying use it securely.
That means:
- Treat AI tools like any other high-risk system
- Demand security transparency from vendors
- Invest in cybersecurity basics (backups, MFA, patching, training)
- Have a plan for when—not if—something goes wrong
The British Library and Toronto Public Library learned the hard way. Don’t wait for your library to be the next case study.
Authenticity note: With the exception of images, this post was not created with the aid of any LLM product for prose or description. It is original writing by a human librarian with opinions.