
How AI has suddenly become much more useful to open-source developers

ZDNet Big Data · April 1, 2026

More open-source developers are finding that, when used properly, AI can actually help maintain both current and long-neglected programs. However, legal and quality issues loom.



ZDNET's key takeaways

  • Top open-source maintainers find that AI has suddenly become much more useful.
  • There are still legal and 'AI slop' problems to overcome.
  • By year's end, AI programming tools should be much more reliable.

With open-source software running pretty much everything, you might think that multiple developers maintain most of the important programs with help from corporate sponsors. You'd be wrong.

As Josh Bressers, VP of security at software supply-chain company Anchore, pointed out last year, the majority of open-source projects, about 7 million out of 11.8 million programs, have only a single maintainer. You might think that those programs are obscure or no longer used. You'd be wrong about that, too.


Bressers looked closely at the JavaScript NPM ecosystem and found that, among the projects downloaded over a million times a month, "about half of the 13,000 most downloaded NPM packages are [maintained by] one person."
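This kind of analysis is reproducible because the npm public registry exposes per-package metadata, including a `maintainers` array. Here is a minimal sketch; the package names and JSON documents below are illustrative stand-ins, not real registry data, and in practice you would fetch each document from `https://registry.npmjs.org/<package-name>`:

```python
import json

# Sample documents in the shape the npm registry returns for a package.
# Real data would come from https://registry.npmjs.org/<package-name>;
# these names and maintainer lists are made up for illustration.
SAMPLE_REGISTRY_DOCS = {
    "tiny-solo-pkg": {"maintainers": [{"name": "solo-dev"}]},
    "big-framework": {"maintainers": [{"name": "alice"},
                                      {"name": "bob"},
                                      {"name": "carol"}]},
}

def single_maintainer_packages(docs):
    """Return names of packages whose metadata lists exactly one maintainer."""
    return [name for name, doc in docs.items()
            if len(doc.get("maintainers", [])) == 1]

print(single_maintainer_packages(SAMPLE_REGISTRY_DOCS))
```

Run across the most-downloaded packages, a loop like this is roughly how you arrive at figures like Bressers' "about half of the 13,000 most downloaded NPM packages" number.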

Ow!

To put it another way, thousands of vital programs are one car accident or heart attack away from being knocked out. That is not good.

AI tools have recently become much better at coding

What can we do about it? You can't wave a magic wand and miraculously find thousands of ready-to-go expert maintainers. Instead, several prominent open-source maintainers have been considering using AI to keep legacy codebases alive or to make them easier to maintain.

That's possible because, believe it or not, AI coding tools have recently become much better at coding. That's not my opinion; at my best, I was only an OK programmer. It's the opinion of Greg Kroah-Hartman, maintainer of the Linux stable kernel.

Kroah-Hartman and I got together at KubeCon Europe in Amsterdam recently. He told me, "Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality."


Then, something wonderful happened. "A month ago," he continued, "the world switched. Now we have real reports. All open-source projects have real reports that are made with AI, but they're good, and they're real. All open source security teams are hitting this right now."

What happened? Kroah-Hartman shrugged: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, 'Hey, let's start looking at this.'"

Now, that doesn't mean that Anthropic's Claude is going to replace Linus Torvalds anytime soon, or even a mid-level programmer at your company. What it does mean, though, is that, when used properly -- no vibe coding here -- AI could help clean up old but still-used code, maintain abandoned programs, and improve existing code.


For example, Dirk Hohndel, Verizon's senior director of open source, posted on LinkedIn that while AI coding tools aren't yet ready to maintain code, he believes they will be soon. "This is almost possible today. And at the rate of improvement these tools have seen over the last couple of quarters, I am convinced that it will be possible with acceptable results at some point this year."

He's not the only one. Ruby project maintainer Stan Lo (st0012) wrote that AI has already helped him with documentation themes, refactors, and debugging, and he explicitly wonders whether AI tools will "help revive unmaintained projects" and "raise a new generation of contributors -- or even maintainers."

Indeed, there's already one AI project, Autonomous Transpilation for Legacy Application Systems (ATLAS), that helps developers translate legacy codebases into modern programming languages. We can expect to see other such AI tools appearing soon. There's a lot of obsolete but still-used code out there that could use a modern refresh.

The lawyers are going to have a field day

Before breaking out the champagne, let's consider several major problems. First, if we can improve open-source code with AI, what's to stop someone from copying and rewriting existing code and then putting it under a proprietary license? The lawyers are going to have a field day with this. Oh, wait! -- they soon will: Dan Blanchard, maintainer of an important Python library called chardet, just released the latest "clean room" version of the program under the MIT license, replacing its GNU Lesser General Public License (LGPL). By "clean room," he means he used Anthropic's Claude to rewrite the library entirely. Claude is now listed as a project contributor.

A person claiming to be the project's original developer, Mark Pilgrim, is not happy. Pilgrim says, "[The maintainers'] claim that it is a 'complete rewrite' is irrelevant, since they had ample exposure to the originally licensed code. Adding a fancy code generator into the mix does not somehow grant them any additional rights."


Blanchard, however, claims that "chardet 7 is not derivative of earlier versions." Did I mention that using AI to modify or clone open-source code will end up in court?

There's another problem: Although it appears that AI is much more useful than it used to be for fixing code issues, there's still a lot of AI slop out there, and open-source project maintainers are drowning in it. Just ask Daniel Stenberg, creator of the popular open-source data transfer program cURL.

Pretty much every open-source project maintainer can tell the same story. In some cases, the AI slop has proven so poisonous that the project itself has died. For example, the Python Software Foundation's Jannis Leidel, the lead maintainer of Jazzband, shut the project down after a "flood of AI-generated spam PRs and issues" overwhelmed it.

Torvalds himself, a wary AI user, warns that while AI generates code quickly, the results can be "horrible to maintain." He views AI as a tool that boosts productivity, but it doesn't replace the need to actually understand what's going on in a program when things break. And, I assure you, things will break.


The Linux Foundation's security organizations, the Alpha-Omega Project and the Open Source Security Foundation (OpenSSF), are addressing this issue by making AI tools available to maintainers at no cost. Kroah-Hartman said of it, "OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving."

While AI is becoming truly useful for open-source developers and maintainers, there are still a lot of legal, coding, and quality issues to address before AI and open-source programming truly work together in harmony.
