Codegen AI agents are transformative, amazing tools, but we’re increasingly seeing them reintroduce old bugs into modern codebases because they were trained on outdated documentation. I’ve been toying with effective ways to communicate this problem since I began helping out with Dosu, which, as you may know, is an AI-powered knowledge management platform designed to automatically keep stakeholders informed and documentation up to date. I figured a listicle could be an eye-catching way to drive the point home (don’t judge), so here goes.
Ok, so why does this happen? AI tools trained on historical data suggest solutions based on outdated knowledge, and zombie problems get reintroduced. As usual, the code AI produces isn’t itself the real problem: it’s everything else that informs the writing of it.
Here are five examples of zombie problems resurrected by AI tools, each highlighting the increasing importance of maintaining an up-to-date, accurate, and comprehensive knowledge base.
1. The Left-Pad time bomb that won't stay dead

In March 2016, Azer Koçulu unpublished the tiny "left-pad" JavaScript package from npm, and build systems around the world broke because of how widely it was (often indirectly) depended on. Despite this being a textbook dependency management failure, AI coding tools continue to suggest recreating similarly fragile dependency patterns, because they are trained on pre-2016 code that treated micro-dependencies as "best practice." GitHub Copilot, for instance, can still recommend replicating left-pad's flawed structure. The root cause is a "documentation gap": dependency management guides lack historical context, so AI agents can't differentiate outdated practices from current ones.
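For a sense of scale, the entire package was roughly a dozen lines of JavaScript. Here's a hypothetical C analogue (the `left_pad` name and signature are mine, for illustration) showing just how little code the world's builds were hanging on:

```c
#include <string.h>

/* Hypothetical C analogue of npm's left-pad: pad `src` on the left
 * with `fill` until the result is at least `width` characters.
 * Assumes dst_size > 0. This is the entire "package" -- the kind of
 * helper that belongs inline in your codebase, not in an external
 * dependency your build can't live without. */
static void left_pad(char *dst, size_t dst_size,
                     const char *src, size_t width, char fill) {
    size_t len = strlen(src);
    size_t pad = (len < width) ? width - len : 0;
    if (pad >= dst_size)
        pad = dst_size - 1;            /* clamp padding to the buffer */
    memset(dst, fill, pad);
    size_t avail = dst_size - pad;     /* space left, including NUL */
    size_t copy = (len < avail) ? len : avail - 1;
    memcpy(dst + pad, src, copy);
    dst[pad + copy] = '\0';
}
```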
2. Buffer Overflows: The zombie vulnerability

strcpy(), known since the 1990s to be dangerous because of buffer overflow vulnerabilities, is still routinely suggested by AI coding assistants. It appears in millions of legacy code examples, despite decades of warnings against it. One analysis found that over 40% of AI-generated C code contained security flaws, buffer overflows chief among them. Old vulnerabilities are reintroduced as new developers unknowingly use AI to write outdated C code. The problem stems from outdated educational resources and code examples, which AI agents cannot differentiate from current best practices.
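A minimal sketch of the two patterns side by side (the function names are mine, for illustration):

```c
#include <stdio.h>
#include <string.h>

/* The legacy pattern AI assistants keep resurrecting: strcpy() blindly
 * trusts that `input` fits in the destination. Any input longer than
 * buf_size - 1 characters writes past the end of the buffer. */
void save_username_unsafe(char *buf, size_t buf_size, const char *input) {
    (void)buf_size;         /* the size is ignored -- that's the bug */
    strcpy(buf, input);     /* overflows when input is too long */
}

/* The long-established fix: snprintf() never writes more than
 * buf_size bytes and always NUL-terminates the result. */
void save_username_safe(char *buf, size_t buf_size, const char *input) {
    snprintf(buf, buf_size, "%s", input);
}
```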
3. SQL Injection: A 20-year-old hack AI frequently re-enables

SQL injection, a well-documented security risk since the late 1990s, occurs when user input is concatenated directly into SQL queries. AI coding assistants, trained on outdated documentation, often generate this vulnerable pattern. As a result, development teams inadvertently ship applications with a legacy security flaw, overlooking the correct solution of parameterized queries because incorrect historical examples are so abundant.
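To make the difference concrete, here's a minimal sketch using SQLite's C API (the find_user_* names and the users table are mine, for illustration):

```c
#include <sqlite3.h>
#include <stdio.h>

/* Vulnerable: user input is spliced straight into the SQL text.
 * An input like  ' OR '1'='1  rewrites the meaning of the query. */
int find_user_unsafe(sqlite3 *db, const char *name) {
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s';", name);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Safe: a parameterized query. The `?` placeholder is bound to the
 * input value, which can therefore never be parsed as SQL. */
int find_user_safe(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        /* read columns here */
    }
    return sqlite3_finalize(stmt);
}
```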
4. OpenSSL's deprecated functions lead to security flaws

OpenSSL 3.0 deprecated low-level functions like AES_encrypt() and RSA_new_method() in favor of the higher-level EVP interface, which is much harder to misuse. AI coding tools, however, frequently suggest these outdated functions because they are prevalent in older documentation (roughly 2012-2020). This steers developers onto deprecated, insecure APIs, building up technical debt and forcing major security rewrites later. AI agents fail to recognize the temporal context of API deprecation, recommending historical usage as current best practice.
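If an assistant hands you AES_encrypt(), the replacement looks roughly like this: a minimal sketch of the EVP interface, with error handling abbreviated and key/IV management omitted (`out` must have room for in_len plus one cipher block):

```c
#include <openssl/evp.h>

/* AES-256-CBC encryption via the EVP interface that OpenSSL 3.0
 * recommends over the deprecated AES_set_encrypt_key()/AES_encrypt()
 * pair. Returns the ciphertext length, or -1 on error. */
int encrypt_evp(const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int in_len, unsigned char *out) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = -1;
    if (!ctx) return -1;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, out, &len, in, in_len) == 1) {
        total = len;
        if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1)
            total += len;                  /* final padded block */
        else
            total = -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return total;
}
```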
5. Heartbleed-style memory mismanagement

The Heartbleed bug in OpenSSL (CVE-2014-0160) was a critical flaw caused by a missing bounds check in the TLS heartbeat extension. AI trained on pre-2014 OpenSSL code and outdated documentation recreates similar vulnerable memory handling patterns. This happens because AI recognizes widely used patterns but can't differentiate historical implementations from secure ones. And because documentation often doesn't clearly mark deprecated methods, AI treats vulnerable and secure approaches as equally valid.
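A simplified sketch of the bug's shape (not the actual OpenSSL code; the echo_* names are mine): a record declares its own payload length, and the vulnerable version trusts it.

```c
#include <stdlib.h>
#include <string.h>

/* Heartbleed in miniature: `claimed_len` comes from the attacker's
 * own record. If it exceeds the bytes actually received, memcpy()
 * reads past the buffer and leaks adjacent memory back to the peer. */
unsigned char *echo_unsafe(const unsigned char *payload,
                           size_t claimed_len) {
    unsigned char *reply = malloc(claimed_len);
    if (!reply) return NULL;
    memcpy(reply, payload, claimed_len);   /* no check vs. real size */
    return reply;
}

/* The missing bounds check: validate the claimed length against the
 * number of bytes actually present before copying anything. */
unsigned char *echo_safe(const unsigned char *payload,
                         size_t claimed_len, size_t actual_len) {
    if (claimed_len > actual_len) return NULL;  /* reject the record */
    unsigned char *reply = malloc(claimed_len);
    if (!reply) return NULL;
    memcpy(reply, payload, claimed_len);
    return reply;
}
```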
The real problem: AI treats outdated information as truth
These examples reveal a deeper issue: without proper context, AI coding assistants treat all documentation as equally valid, regardless of age or accuracy. They can’t distinguish between:
- Historical examples showing how things used to work
- Current examples showing how things should work
- Deprecated patterns that should be avoided
- Security vulnerabilities disguised as functional code
This is where companies like Dosu become critical. Traditional documentation approaches can’t keep pace with the rate at which AI amplifies outdated information. We need automated systems that can:
- Track the relationship between code changes and documentation updates
- Identify when examples become deprecated or insecure
- Maintain context about why certain patterns should be avoided
- Generate documentation that’s explicitly optimized for AI consumption
The stakes are higher than ever
When it was only human developers referencing outdated documentation, the impact was limited: maybe one team would implement a problematic pattern. AI coding assistants scale this problem dramatically, because a single outdated example can influence thousands of codebases simultaneously.
The solution isn’t to abandon AI coding tools; they aren’t going anywhere anyway, and they’re genuinely transformative when used correctly. Instead, we need to fundamentally rethink how we create and maintain technical documentation in an AI-first world.
The future of software development will be built on the partnership between human developers and AI agents. But that partnership only works when the AI has access to accurate, current, and contextual information.
Investing in proper documentation infrastructure and maintenance isn’t just about helping your team—it’s about preventing your AI tools from becoming necromancers that resurrect historical bugs we’ve already killed.
Michael Ludden works on marketing at Dosu. You can find him at ludden.social and learn about AI-powered knowledge management at dosu.dev. And if you’ve got great examples of AI resurrecting zombie problems due to outdated information, we’d love for you to share them in our Discord community!