
"I hate this, sorry."
That's how Steve Ruiz introduced tldraw's announcement that the project would begin automatically closing pull requests from external contributors. Two days later, he published a longer explanation that captured the core problem: "If writing the code is the easy part, why would I want someone else to write it?"
The PRs he received looked formally correct but lacked context or misunderstood the codebase, and their authors went silent when maintainers asked questions. Then Steve noticed something stranger. He'd been using Claude to manage his issue backlog, firing off quick notes that Claude would turn into well-formed issues. Some of those issues were wrong, but that was fine. Steve would read them, catch the mistakes, and either fix or close them. Except now, external contributors were feeding those issues directly to their own AI tools and submitting PRs without ever checking whether the problem was real. As Steve put it: "My poor Claude had produced a nonsense issue, causing the contributor's poor Claude to produce a nonsense solution."
Two months earlier, Angie Jones had hit the same wall at Block's goose project. With over 300 external contributors submitting code, she turned on GitHub Copilot's code review feature, expecting it to help. The other maintainers told her the reviews were too noisy and asked her to turn them off. She spent weeks teaching the AI how to review like a maintainer instead.
Steve closed the gate. Angie taught the tools to behave. Both were responding to the same underlying shift, and both made the right call for their projects.
Why Review Time Now Outpaces Contribution Time
Open source has long run as a gift economy held together by proof of work. Writing code takes time, and so does reviewing it. That mutual investment is what kept things balanced.
AI tools broke that symmetry. When PullFlow analyzed 40 million pull requests, they found that AI agent involvement had grown from 1 in 90 to 1 in 7 over two years. Generating a pull request now takes seconds. Reviewing a PR can still take hours. Maintainers who once reviewed contributions from people investing real effort now face submissions where the contributor spent less time writing the PR than the maintainer will spend reading it.
Daniel Stenberg saw this firsthand with curl. AI-generated security reports started flooding in, prompting him to write: "We are effectively being DDoSed." The curl project has paid out $86,000 in bounties for 78 confirmed vulnerabilities, all reported by humans. Not a single AI-generated security report has ever proven valid.
The OCaml maintainers rejected a 13,000-line AI-generated PR after asking the contributor to explain the architectural decisions. The response: "Beats me. AI decided to do so, and I didn't question it." Reviewing AI-generated code takes more effort than reviewing human-written code because you're checking correctness while trying to infer intent that may never have existed.
The effort to produce a contribution no longer signals the effort required to review it.
How tldraw Chose to Protect Its Vision
Steve Ruiz came to tldraw from a background in fine art and studio work. He spent years developing the SDK with a specific vision for how an infinite canvas should feel. When he first made the repository public, external contributions occasionally improved the codebase. By early 2026, the signal-to-noise ratio had flipped.
He framed the policy as a pause rather than a permanent closure. "This is a temporary policy until GitHub provides better tools for managing contributions." The project still welcomes issues, bug reports, and discussions.
His message to the community acknowledged the awkwardness: "This is going to be a weird year for programmers and open source, especially. For now, whether you've contributed before, are interested in contributing in the future, or just are a friend of the project: thank you and please hang on while we all figure this stuff out."
For projects where the core team holds the context and external code rarely improves on internal decisions, this approach buys time while better tooling emerges.
Most open source projects can't close the gate. They depend on external contributors and need a different answer.
How goose Learned to Work With AI Contributors
Angie Jones came to Block's goose project with two decades of experience in test automation and developer education. She created Test Automation University, holds 27 patents, and has spent years teaching engineers how to work effectively with tools. When AI-assisted contributions started overwhelming the maintainers, teaching felt more natural than shutting down.
"You don't throw in the towel. You don't disable. You tune."
When she assessed Copilot's reviews, she found consistent problems. Comments ran too long, too many "maybe" and "consider" suggestions signaled low confidence, and only about one in five comments caught something the contributor would have actually missed. The AI wasn't bad at reviewing code. It just didn't know what the goose maintainers cared about.
Her solution was a set of instruction files that teach both AI tools and contributors how to work with the project. Her copilot-instructions.md tells Copilot how to review like a goose maintainer, requiring 80% confidence before commenting and skipping style issues that CI already handles. As she puts it: "No one likes a reply guy." Her HOWTOAI.md establishes expectations for contributors: you're accountable for all code you submit, you must never commit code you don't understand, and you should be transparent about significant AI usage.
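To make the idea concrete, here is a minimal sketch of what a reviewer instruction file in this spirit might contain. Only the 80% confidence threshold and the rule to skip CI-handled style issues come from Angie's description; the rest of the wording is illustrative, not her actual file.

```markdown
# Code review instructions (illustrative sketch)

You are reviewing pull requests as a project maintainer.

- Only leave a comment if you are at least 80% confident it points at a real problem.
- Skip formatting and style issues; CI and linters already enforce those.
- Avoid hedged suggestions ("maybe", "consider"); if you are unsure, say nothing.
- Prioritize correctness, security, and behavior the contributor likely missed.
- Keep each comment short, specific, and actionable.
```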
The AI reviewer generates less noise, and contributors understand their responsibilities. Other maintainers can adapt these files to their own projects.
What Other Projects Have Figured Out
curl, Django, and Ghostty each faced the same influx of AI-assisted contributions and developed practices worth adopting.
Separate contribution types by signal-to-noise ratio. curl bans AI-generated security reports while accepting AI-assisted translations. Django added a verification question to its security reporting form that asks about "the meaning of life according to those who inspired Python." Reporters who can't answer it correctly don't get reviewed.
Require disclosure without blanket prohibition. Mitchell Hashimoto's policy for Ghostty requires every PR to state whether the contributor used AI tools. The goal is calibration, not rejection. "If it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so."
Focus on the quality of contributions rather than their origin. Contributions must meet existing quality and licensing requirements regardless of how they were created. A skilled developer using AI assistance might produce better work than either could alone, while someone unfamiliar with the codebase might use AI to generate plausible-looking noise.
Invest in AI agent context. Projects that maintain files such as AGENTS.md and CLAUDE.md provide AI tools with the context they need to follow project conventions before submitting code. The tools filter themselves before maintainers see the results.
Preparing Your Projects for AI Agents
The highest-value place to start is AI agent context files and documentation. Files like AGENTS.md tell AI tools how to work with your project before they ever open a PR. They set expectations for contributors, too, whether they're using AI or not.
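As an illustration, a minimal AGENTS.md might be organized along these lines. There is no fixed spec; the section names and rules below are placeholders to adapt to your project, not recommendations from any of the maintainers mentioned above.

```markdown
# AGENTS.md (illustrative template)

## What this project is
One or two sentences on what the project does and where the core code lives.

## How to build and test
The exact commands an agent should run before opening a PR
(install, lint, test), and a note that all of them must pass.

## Conventions
- Match the existing code style; do not reformat files you are not changing.
- Do not add dependencies without discussing it in an issue first.
- Every behavior change needs a test.

## Contribution expectations
- Open an issue before any change larger than a bug fix.
- Disclose significant AI assistance in the PR description.
- The human contributor is accountable for everything they submit.
```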
Writing and maintaining these files can be tricky. Your contribution guidelines, code style, testing requirements, and architectural decisions already exist, but they're scattered across READMEs, CONTRIBUTING.md files, issue templates, and the heads of long-time contributors.
We're building Dosu to make this trivial by automatically generating and maintaining AI instruction files from your existing documentation and contribution patterns. If you're a maintainer exploring solutions or an engineering leader considering how to structure your team's knowledge for AI tools, we'd love to talk with you.

