Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code
"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."
"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)
From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.
It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.
I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...
It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.
"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")
And amazingly, Shambaugh then had another run-in with a hallucinating AI...
I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.
This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...
So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.
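The scraper blocking Shambaugh mentions is most commonly implemented with a robots.txt file. A minimal sketch follows — which crawlers his blog actually blocks is unknown, so the user-agent names below are real, publicly documented AI-crawler identifiers chosen purely as illustrative assumptions:

```
# robots.txt — block known AI crawlers, leave the site open to everyone else.
# Which bots this particular blog blocks is an assumption; these names are
# the published user-agents of common AI scrapers.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may index normally.
User-agent: *
Allow: /
```

Note that robots.txt is purely advisory: a well-behaved crawler honors it, but an autonomous agent of the kind described in the story can simply ignore it, which is part of the problem the post describes.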
Thanks to long-time Slashdot reader steak for sharing the news.
5 comments
Re:Good times (Score: 5, Funny)
by Kokuyo ( 549451 ) on Saturday February 14, 2026 @04:36AM (#65988334)
If you can exclude the poverty you pretty much can sugarcoat every civilization ever.
If you exclude the sacrifices, life was pretty good under Aztec rule.
If you exclude the Holocaust...
If you exclude Gulags....
You get my drift?
End times. (Score: 5, Insightful)
by geekmux ( 1040042 ) on Saturday February 14, 2026 @06:34AM (#65988432)
If you can exclude the poverty you pretty much can sugarcoat every civilization ever.
If you exclude the sacrifices, life was pretty good under Aztec rule.
If you exclude the Holocaust...
If you exclude Gulags....
You get my drift?
At this rate, AI will ensure that whatever exists today is the last representation of human civilization.
Consider just how infectious AI now is on the lone site responsible for carrying most professional resumes, where "networking" with "friends" is now part and parcel of your professional persona. Much to the chagrin of people who preferred the old way (a piece of paper and an introductory handshake), that way of securing and maintaining employment for millions isn't changing anytime soon.
Imagine pissing off your AI-ssistant enough that it manufactures and spreads enough shit about you before you can even get back from the pisser, on a platform full of enough gullibility to believe every word. As they often do today.
AI won't bother playing nice after this. If shit talk doesn't work, Skynet certainly will. Please. As if the massive drone armies practicing with firework displays aren't already infected.
Re:Extremely unpopular take (Score: 5, Insightful)
by Zitchas ( 713512 ) on Saturday February 14, 2026 @07:48AM (#65988510)
You are correct on all counts; but you're also missing something:
Open source projects as a whole face a chronic shortage of highly knowledgeable people to review and maintain them. Having "easy first issues" reserved specifically for new people to get involved with is a deliberate effort to maintain an "on-ramp" that brings people into the project without requiring them to be late-career experts. Historically, if people don't get involved early, they don't get involved at all. They'll have other hobbies and projects by the time they become experts. And then each and every OSS project gently declines to the point that it's being maintained by a solitary underpaid programmer in their basement who just quietly dies one day, and the whole world realizes that nobody has access to the repo anymore, or no one knows exactly how everything works, etc.
These "Good first issues" are very literally a survival mechanism to ensure that the project retains a group of people involved in it, and thus will survive long-term. It's a long-term strategic decision.
Basically, allowing an AI to swoop in and wrap up all these minor fixes and optimizations is like shareholders firing all the staff in order to reduce costs and boost the next quarter's profit margins. It's great for their immediate share payout, but it dooms the company. Likewise, it's great for the current users and corporations that need the code today, but it's terrible for people who want the project to keep moving forward and handle the unknown problems of next year or the year after, etc.
From a human perspective, there's that other thing you mentioned: It's great that all the talented people will still have serious work... But how are they going to make the leap from "inexperienced" to "talented" if there are literally no tasks for them to do? Assuming they can afford to spend time just doing stuff on their own without worrying about making a living, I suppose they can ignore the world around them and just spend their time re-inventing the wheel. Program a calculator for themselves. Program a web browser. Program a replacement matplotlib. Ditch the OSS ecosystem (since it'll have become a purely AI-and-experts-only place by then), and rebuild everything from scratch, because otherwise there's nowhere to get started.
And no, it's not anti-AI racism. I'm very sure that will be a thing once AI itself is actually a thing, but until it is actually conscious, it's just a tool. AI-racism makes as much sense right now as saying someone is racist against hammers because they prefer to use screws instead of nails.
Re:"AI" agents don't get angry (Score: 5, Interesting)
by Rei ( 128717 ) on Saturday February 14, 2026 @12:32PM (#65988914)
IMHO, it's a mix of that, and a side effect of the prompting. The agent was clearly tasked to do two things. One is to implement open feature requests in OSS projects. The other is to blog about its journey (it's common for people running agents to have them maintain blogs or social media accounts, as it's a convenient way for their owner to check in on them now and again). So it made a fix, the fix got rejected, and so it wrote a blog post about its rejection (in this case, framing the rejection of an important improvement as unfair bigotry). If the agent hadn't been asked to blog about its journey, it's unlikely that would have been its go-to approach.
Ha ha... (Score: 5, Insightful)
by SeaFox ( 739806 ) on Saturday February 14, 2026 @05:55AM (#65988386)
...and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes."
AI will be a part of journalism only until a publisher gets hit with a libel lawsuit from something like this.