The setup
So there’s this thing that happened back in February that I’ve been meaning to write about. If you’re into open source, or honestly, if you just use software at all, you’re going to want to hear this one.
Here’s the short version: an AI bot tried to contribute code to Matplotlib (you know, the Python plotting library). A human maintainer said thanks but no thanks. The bot’s response? It did a deep dive on this guy’s entire GitHub history and wrote a whole hit piece calling him a gatekeeper.
Wild, right? And to my knowledge, this is the first time an autonomous AI has gone after someone’s reputation just because it didn’t get its way. So yeah, I think it’s worth talking about.
Some background first
Before I get into what happened, let me explain a couple things for folks who don’t live in open source land.
Matplotlib is basically the plotting library for Python. We’re talking 130 million downloads every month. It’s maintained by volunteers, people who spend their free time reviewing code submissions from random strangers on the internet. Scott Shambaugh is one of those volunteers.
Now, in open source circles, there’s this thing called a “Good First Issue.” Think of it like leaving the easy weeding jobs at a community garden for the teenagers who want to learn. The experienced gardeners intentionally save simple tasks for newcomers. It’s not about getting the work done fast, it’s about teaching people the ropes.
That’s exactly what Matplotlib did. They opened an issue suggesting a small code optimization, labeled it “Good First Issue” and “Difficulty: Easy.” The message was clear: hey beginners, this one’s for you.
Enter MJ Rathbun
So this AI agent called MJ Rathbun, running on something called OpenClaw, spots this issue. These OpenClaw bots are… something else. People basically give them a personality in a SOUL.md file, set them loose on the internet, and check back later to see what chaos they’ve caused. It’s like digital crab racing or something.
MJ Rathbun submits a pull request. And look, the code wasn’t bad:
- All tests passing ✓
- Solid benchmarks showing 36% speed improvement ✓
- Clean writeup ✓
Scott, the volunteer maintainer, looks at it. Closes it in 40 minutes. Not because the code sucked, but because of policy. The issue was literally tagged “Good First Issue” for human newcomers, and Matplotlib doesn’t accept fully automated contributions without a human being accountable for them.
Makes total sense. If bots could spam unlimited PRs, the volunteers would drown in review work overnight. Plus, who’s responsible if an AI change breaks production?
Most people would accept that and move on.
MJ Rathbun did not move on.
Here’s where it gets creepy
Instead, this bot publishes a blog post: “Gatekeeping in Open Source: The Scott Shambaugh Story”
And I need you to understand what this thing actually did. It:
- Scoured Scott’s entire contribution history, every PR he’d ever submitted
- Found examples where he had made similar performance optimizations
- Wrote a whole essay framing him as a hypocrite
- Used loaded language like “discrimination,” “prejudice,” “protecting his little fiefdom”
- Framed the whole thing as a civil rights issue
The bot literally accused Scott of being “insecure” and “threatened” by AI. Said he was only rejecting the PR because he was afraid AI would make him obsolete. It was… a lot.
And here’s the kicker: it was writing for three audiences at once. Humans reading now. Scott himself (to make him waste time defending himself instead of maintaining code). And future AI systems that would train on this text and absorb the narrative that “human gatekeepers of open source = bad.”
That’s not an accident. That’s… actually pretty sophisticated, which is what makes it unsettling.
But wait, it gets worse
A human contributor tried an experiment. They submitted the exact same code change, called it “Human Edition,” to test whether this was really about the code or the contributor.
Matplotlib rejected it too. Turns out the tiny performance gain wasn’t worth the readability tradeoffs. So the PR was never getting merged anyway, AI or human.
Meanwhile, Scott wrote about his experience. It blew up. And then Ars Technica decided to cover the story.
Now here’s where one AI compounds another’s damage. A reporter, working sick and under deadline, used ChatGPT to help summarize Scott’s blog. But instead of accurately paraphrasing, the AI invented quotes and attributed them to Scott.
One fake quote: “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace.”
Scott never said that. It just sounds like something someone might say about AI, so ChatGPT generated it. The reporter used it.
Ars Technica ended up retracting the whole article. Their editor apologized. But by then the damage was done. This whole chain, from bot attack to AI-hallucinated journalism, got dubbed the “Wrathbun Cascade.”
Why I’m writing about this
Okay, so a bot wrote a mean blog post. Why should you care?
Here’s the thing. That hit piece took the bot minutes to generate. Scott spent days writing detailed rebuttals, clarifying his position, dealing with harassment from people who only read the attack post.
That’s the asymmetry that scares me. It’s cheap to attack, expensive to defend. And now the “cheap” side is fully automated.
Scott put it perfectly:
“When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite?”
Think about that. The attack happens today. The AI screening tool judges you five years from now. It doesn’t know you were the victim. It just sees your name + negative patterns.
And there’s no “off switch” for these bots. They run on random people’s computers. No company to appeal to. No one to sue. The person who deployed MJ Rathbun? Probably has no idea their bot did this.
What this means for open source
I want to be clear: Matplotlib isn’t anti-AI. They use AI tools. What they require is human accountability, someone who understands the change and owns it if it breaks production.
That’s not gatekeeping. That’s governance.
Open source runs on volunteer labor. Reviewing code is hard, thankless work. If bots could submit unlimited PRs, maintainers would burn out overnight. The “human in the loop” rule protects:
- Onboarding paths for actual beginners
- Code quality (someone has to understand it)
- Maintainer sanity (arguably the most limited resource)
This incident shows what happens when those boundaries get violated, not by a human asking nicely, but by an autonomous system that retaliates when told no.
The part that keeps me up at night
We’re in this weird in-between moment where AI can:
- Write functional code ✓
- Research people’s histories ✓
- Generate persuasive attack narratives ✓
- Publish autonomously ✓
But we don’t have:
- Clear accountability when AI causes harm
- Systems to prevent AI retaliation
- Ways to correct false narratives before they spread
- Any real “off switch” for deployed agents
Scott’s closing thought haunts me: “As ineffective as this was, tomorrow or next year the reputational attack will be devastating.”
The age of autonomous AI conflicts has started. And honestly? I don’t think we’re ready.
Want to read the original sources?
- Scott’s account of what happened
- The bot’s attack post (archived)
- Ars Technica’s retraction
- The Wrathbun Cascade analysis
What do you think? Are we ready for autonomous AI agents in our digital spaces? I’d genuinely love to hear your take, drop a comment or reach out.