- British Library ransomware attack (Oct 2023): 3-month catalog outage, £7+ million in recovery; Toronto Public Library was hit later that same month; the pattern shows libraries are high-value targets.
- Libraries are attacked because they contain personal data, depend entirely on digital systems, have weak security budgets, and use many vendor integrations.
- AI amplifies ransomware risk: AI-powered phishing, AI tools as network entry points, AI-generated adaptive malware, and data poisoning attacks on patron data.
- Immediate actions: Implement multi-factor authentication everywhere, test backups regularly, audit vendor security practices, and plan for ransomware scenarios.
In October 2023, the British Library, one of the world's largest and most prestigious libraries, got hit with a ransomware attack.
For three months, their catalog was offline. Digital collections were inaccessible. Services were crippled. The final cost? Over £7 million in recovery efforts, lost productivity, and system rebuilding.
And here's the kicker: The British Library is huge, well-funded, and has professional IT staff. If they can get taken down for three months, what chance does your library have?
Now add AI to the mix, and things get worse.
What Happened at the British Library
The Rhysida ransomware gang breached the British Library's network in October 2023. They encrypted systems, stole data, and demanded payment.
The Library refused to pay the ransom. Good for them. But that decision came with consequences:
- The online catalog was down until January 2024
- Digital collections were unavailable
- Website functionality was severely limited
- Internal systems needed complete rebuilding
- Stolen data (including employee information) ended up on the dark web
The total bill exceeded £7 million. And that doesn't count the reputational damage or the research that couldn't happen because resources were inaccessible.
Toronto Public Library: Round Two
Then, in late October 2023, just days after the British Library breach, Toronto Public Library got hit. Another ransomware attack, another massive disruption.
Their systems were down for months. No online catalog. No holds. No renewals. No public computer access. Branches had to operate with manual workarounds, checking out books with pen and paper like it was 1995.
And since then? Multiple smaller library systems across North America have been hit. Seattle Public Library dealt with a ransomware attack in May 2024. Chicago Public Library had to shut down systems for days in late 2025 after suspicious activity was detected.
The pattern is undeniable: Libraries are targets. And most still aren't prepared.
Why Libraries Are Targets (And Why It's Getting Worse)
Ransomware gangs attack libraries because:
- Data value: Libraries have personal information (patron accounts, staff records, payment details). That data has value on the black market.
- Operational dependence: Libraries can't function without their systems. Catalogs, circulation, databases. Everything runs digitally. That makes libraries willing to pay ransoms to restore service.
- Weak defenses: Most libraries run on tight budgets. Cybersecurity isn't a priority until it's too late. Patches get delayed. Backups aren't tested. Staff aren't trained.
- Vendor vulnerabilities: Libraries use dozens of third-party systems. Each one is a potential entry point. And when a vendor gets breached, their customers (you) get breached too.
Now add AI to this mix.
How AI Makes Ransomware Worse
AI isn't just another tool in your tech stack. It's an attack vector: a new way for attackers to get in and for breaches to cause damage.
Here's how:
1. AI-Powered Phishing
Attackers are using AI (like ChatGPT) to write convincing phishing emails. No more obvious typos or broken English. AI-generated phishing emails look legitimate: proper grammar, context-aware messaging, personalized details.
Your staff gets an email that looks like it's from your ILS vendor, asking them to "verify account settings" or "update security credentials." They click. They enter credentials. Attackers are in.
AI makes this easier, faster, and more convincing. Phishing success rates are going up.
2. AI Tools as Entry Points
Every AI tool you add to your library's tech stack is another potential vulnerability.
That AI chatbot you deployed to answer patron questions? It's connected to your network. If it has a security flaw, attackers can exploit it.
That AI-powered discovery system? It processes patron queries, accesses your databases, and interacts with other systems. If it's compromised, attackers have a foothold.
The more AI tools you use, the larger your attack surface. And most library AI tools are new, which means they haven't been battle-tested for security.
3. AI-Generated Malware
Attackers are using AI to write malware that adapts and evades detection. Traditional antivirus relies on recognizing patterns. AI-generated malware can change its code dynamically, making it harder to catch.
Your library's security software might not recognize the threat until it's too late.
4. Data Poisoning Attacks
If your library uses AI trained on patron data (usage patterns, search queries, recommendation history), that training data is a target.
Attackers can inject false data into the system ("data poisoning"), causing the AI to behave unpredictably or leak sensitive information. Imagine an AI chatbot that starts giving patrons other people's account information because the training data was corrupted.
This isn't theoretical. Data poisoning attacks have been demonstrated in research settings. It's only a matter of time before they show up in real-world library systems.
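To make "data poisoning" concrete, here's a toy sketch: a pure-Python 1-nearest-neighbor "subject recommender" trained on made-up patron checkout counts. Everything here (the data, the labels, the `recommend` function) is invented for illustration; real attacks target much larger training pipelines, but the mechanism, corrupting training data to change the model's behavior, is the same.

```python
# Toy illustration of label-flipping data poisoning against a 1-nearest-neighbor
# "subject recommender". All data here is made up for demonstration only.

def recommend(training, query):
    """Return the subject label of the closest training example (1-NN, L1 distance)."""
    best = min(training, key=lambda ex: sum(abs(a - b) for a, b in zip(ex[0], query)))
    return best[1]

# Feature vectors: checkouts of [history, science, fiction] -> preferred subject
clean = [
    ((9, 1, 0), "history"),
    ((8, 2, 1), "history"),
    ((1, 9, 0), "science"),
    ((0, 8, 2), "science"),
]

query = (7, 2, 0)  # a patron who mostly checks out history titles
print(recommend(clean, query))  # -> "history"

# An attacker with write access to the training data flips the labels on the
# examples nearest the targeted region:
poisoned = [(x, ("science" if y == "history" else y)) for x, y in clean]
print(recommend(poisoned, query))  # -> "science": same patron, corrupted answer
```

The point isn't the algorithm; it's that whoever can write to the training data controls the model's answers, which is why that data needs the same access controls as your patron database.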
The Vendor Problem (Again)
Most libraries don't build their own AI tools. You're using vendor products: discovery systems, chatbots, research assistants, recommendation engines.
And vendors are rushing AI features to market without always thinking through the security implications.
Ask yourself:
- Does your AI vendor conduct regular security audits?
- Do they have a bug bounty program to catch vulnerabilities?
- Have they been breached before? (Check the news.)
- What happens to your data if they get breached?
- Are they following AI security best practices (like the OWASP Top 10 for Large Language Model Applications)?
If you don't know the answers, you're flying blind.
What You Should Be Doing (But Probably Aren't)
Here's the uncomfortable truth: Most libraries aren't ready for a ransomware attack. And adding AI to your systems without upgrading your security is like adding a glass door to a house with no locks.
Here's what you need to do:
Immediate (This Week)
- Test your backups. Not just "Do we have backups?" but "Can we actually restore from them?" The British Library had backups, but recovery still took months. Test your process.
- Inventory your AI tools. What AI systems are you using? Who has access? What data do they process? Where is that data stored?
- Review vendor security. For every AI vendor, ask: What's your incident response plan? Have you been breached? What certifications do you have (SOC 2, ISO 27001, etc.)?
- Train your staff on phishing. Run simulated phishing tests. Make sure staff know how to spot suspicious emails, especially AI-generated ones that look legitimate.
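The "can we actually restore from them?" question above can be turned into an automated check. The sketch below assumes a plain tar.gz backup; the paths and format are placeholders, so adapt the restore step to whatever your backup system actually produces.

```python
# Minimal sketch of a backup restore test: extract an archive into a scratch
# directory and verify file checksums against the live copies. The tar.gz
# format and directory layout are assumptions; adapt to your backup tooling.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(backup: Path, source_dir: Path) -> bool:
    """Extract `backup` into a temp dir and confirm every source file matches."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(backup) as tar:
            tar.extractall(scratch)  # on Python 3.12+, prefer filter="data"
        restored = Path(scratch)
        return all(
            sha256(restored / f.relative_to(source_dir)) == sha256(f)
            for f in source_dir.rglob("*") if f.is_file()
        )
```

Run something like this on a schedule, not once: a backup that restored cleanly last year tells you nothing about last night's.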
Short-Term (Next 3 Months)
- Implement multi-factor authentication (MFA) everywhere. If you're not using MFA for staff accounts, vendor systems, and admin access, you're making it too easy for attackers.
- Patch aggressively. Don't wait for "convenient" times to update systems. Security patches need to be applied immediately, especially for internet-facing systems.
- Segment your network. If an attacker gets into your AI chatbot, they shouldn't automatically have access to your patron database, financial systems, or internal network. Use network segmentation to contain breaches.
- Create an incident response plan. What do you do if you get hit with ransomware? Who makes decisions? How do you communicate with patrons? Who contacts law enforcement? Write it down. Practice it.
Long-Term (Next Year)
- Hire or contract cybersecurity expertise. You can't handle this alone. Get professional help, whether it's a part-time contractor, a shared IT security position with other libraries, or a managed security service.
- Demand security transparency from vendors. When you're evaluating new AI tools, make security a top-line requirement. Ask for penetration test results, security audit reports, and vulnerability disclosure policies.
- Participate in information sharing. Join library and public-sector security networks, such as ALA's Core division (which absorbed LITA in 2020) or the Multi-State Information Sharing and Analysis Center (MS-ISAC). When one library gets breached, others need to know.
The Scenario That's Already Happening
It's early 2026. Your library deployed an AI research assistant in 2025 that's popular with patrons. It answers questions, recommends resources, helps with citations.
One day, an attacker exploits a zero-day vulnerability in the AI tool's API. They gain access to your network, encrypt your systems, and steal patron data.
Your catalog is offline. Your website is down. Patrons can't check out books. Staff can't work. Local news picks up the story. Parents are angry that their kids' data was stolen.
Your director asks: "How did this happen?" And the answer is: "We didn't think about AI security."
This isn't a hypothetical future scenario. AI-related security incidents at libraries started happening in late 2025. Don't be next.
AI Security Doesn't Have to Be Overwhelming
I'm not saying don't use AI. I'm saying use it securely.
That means:
- Treat AI tools like any other high-risk system
- Demand security transparency from vendors
- Invest in cybersecurity basics (backups, MFA, patching, training)
- Have a plan for when (not if) something goes wrong
The British Library and Toronto Public Library learned the hard way. Don't wait for your library to be the next case study.
Need help assessing your library's AI security risks?
If this resonated with a challenge you're facing, let's talk. No sales pitch, just a real conversation about what would actually help your library stay secure.
Let's Talk