Can Open Source Survive AI?
The stories of Stack Overflow, Tailwind, FFmpeg, and curl

In the first month of Stack Overflow’s existence, in late 2008, over 3,700 questions were asked. In the last month of 2025, fewer than 3,100 were asked, and if the trend continues it will only get worse. Over the past few years, ChatGPT and its peers have replaced much of the need for a question-and-answer site: why bother a human when you can ask an AI?1
Stack Overflow was a titan in the developer space. Whenever it would go down for maintenance, every programming discussion board would erupt. People would joke that there was no point in continuing work without the Q&A site. Now, it is silently going stale because people just don’t need it anymore.
The website’s decline wasn’t too surprising. There were signs this would happen from the moment ChatGPT launched, and many predicted it. But LLMs are no longer simple question-and-answer machines; they write code, run it, debug it, and work autonomously.
The problem is that they do this based in part on what they learned from Stack Overflow. LLMs may be a good substitute for Stack Overflow when it comes to answering existing questions. But where will LLMs learn from humans when there is no longer a convenient way to share new expertise? What might replace Stack Overflow? Is there an analogous knowledge sharing capability that needs to be built to feed agents?
The Rise and Fall of Tailwind
Stack Overflow is just the tip of the iceberg. Unlike Stack Overflow, which is not open source, Tailwind arguably did stand to gain from the rise of LLMs. It was already a popular CSS framework — think pre-built IKEA furniture kits, but for the CSS code that controls how websites look (where HTML is the structure and CSS the interior design). Being highly customizable and easy to use, it lent itself well to agentic use by AI. And Tailwind did indeed become more popular. In 2025, Tailwind received orders of magnitude more daily downloads than it did before LLM web search and agentic capabilities were released. Yet Tailwind recently announced that it had to lay off 75% of its engineering team (3 of 4 staff members) as its revenue plummeted.
Tailwind Downloads
A key fact about Tailwind is that it is open source software: it costs developers nothing to download or use. Instead, the developers had a business model that relied on selling a product called Tailwind Plus, a one-time purchase that entitled users to custom components and templates.
Importantly, Tailwind Plus was sold from Tailwind’s official website and docs pages. By virtue of all the direct traffic those pages received, Plus got good visibility, which allowed the developers to support themselves while still making a free product used by millions. With AI, Tailwind’s marketing funnel is now nearly gone. People no longer need to visit the documentation to get answers to their questions; they can just ask the chatbot off to the side of their IDE. This became apparent to everyone after a benign request to add an llms.txt file — so that LLMs could more easily access Tailwind’s documentation — blew up.
The lead developer closed the request saying that they had to focus their time on making the project profitable. After further prompting by developers, he went on to say that “75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business” and that “Traffic to our docs is down about 40% from early 2023 despite Tailwind being more popular than ever. The docs are the only way people find out about our commercial products, and without customers we can’t afford to maintain the framework.” Despite their usage and downloads being up, their profits fell and Tailwind had to let most of their team go.
Shortly after the layoffs went public, Tailwind gained many new corporate sponsors, and Adam Wathan called their situation comfortable given the project’s current size and new sponsors. But is relying on corporate sponsors a scalable or sustainable business model? And what will the impact be on less widely used open source projects?
Agents to Keep Code Safe?
Parallel to the agentic coding takeoff, there has also been an attempt to use AI to identify security holes, including in open source software. Google runs an AI agent called Big Sleep that automatically examines open source code looking for vulnerabilities. The agent is apparently good at its job, having discovered over 80 security holes.2 Yet these reports, far from helping the open web, appear to be harming it: by overwhelming developers with AI slop, they hit resource-constrained open source projects particularly hard.
For example, take the open source library FFmpeg. FFmpeg is known as media’s Swiss Army knife: convert, combine, extract, you name it. As an open source and highly capable library, it is used by companies of all sizes for video and audio support.
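To give a flavor of that Swiss Army knife in practice, here are a few typical invocations (the file names are placeholders, not from any real project):

```shell
# Convert: transcode an MP4 video to WebM (ffmpeg picks suitable codecs)
ffmpeg -i input.mp4 output.webm

# Extract: pull just the audio track out of a video as an MP3 (-vn drops video)
ffmpeg -i input.mp4 -vn -c:a libmp3lame output.mp3

# Combine: concatenate the clips listed in clips.txt without re-encoding
ffmpeg -f concat -i clips.txt -c copy combined.mp4
```

One tool, one flag-driven interface, covering what would otherwise take several specialized programs — which is exactly why so many companies depend on it.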
FFmpeg recently came out against Google’s practice of sending automated security reports, noting that FFmpeg is maintained exclusively by volunteers and that some of the reports amount to “CVE slop” rather than important security holes. One example they cite is of a ‘security’ report about a video format only used once in 1995 — “specifically the first 10-20 frames of Rebel Assault 2”!
The problem is amplified by Google’s policy to disclose security holes 90 days after they are discovered whether or not they were fixed. This may be a good policy from a security perspective. After all, the developers are more likely to fix it if their reputation is on the line. But agents like Big Sleep don’t provide solutions to the bugs they report and the time limit adds pressure to an already underfunded open source ecosystem.
DDoSed by AI Slop
Curl — a command-line tool for transferring data over the internet — has arguably had it even worse than FFmpeg. If a web browser is like walking into a restaurant and ordering a nicely plated meal, curl (lowercase) is like going to the kitchen window and saying “just hand me the raw ingredients”: you get the raw data from the web, no pretty web page needed.
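To make the kitchen-window analogy concrete, here is what asking curl for the raw ingredients looks like (example.com stands in for any URL):

```shell
# Print the raw HTML of a page straight to stdout — no rendering, no styling
curl https://example.com

# Follow redirects (-L) and save the response body to a file (-o)
curl -L -o page.html https://example.com

# Fetch only the response headers (-I), handy for quick debugging
curl -I https://example.com
```

That raw, scriptable access is why curl is baked into so much automation — and into so many devices.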
Curl, as an essential library for data transfer, comes preinstalled on millions of devices across Windows, macOS, and Linux. Because of its importance, the project has run a strong bug bounty program since 2019, paying out an average of $700–$800 per accepted report. But as of January 31st, 2026, that program no longer exists.
In mid-2025 the founder of curl, Daniel Stenberg, took a hard stance against AI generated bug reports, requiring any AI assisted report to jump through extra hoops. He wrote that “a threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time … AI slop is overwhelming maintainers today and it won’t stop at curl but only starts there.”
Notably, these AI-generated bug reports were not coming from large companies like Google, but from random people with access to AI, either via API or through a subscription.
Since then, the situation has not improved but instead worsened, to the point where the project maintainers felt they had to shut down the bug bounty program to ensure their “survival and intact mental health.” Based on comparing their bug reports to open source products of similar sizes, Stenberg believes that their bug bounty was a large part of what was driving the AI slop wave and that shutting down all payments is the best option. Despite the bug bounty shutdown, curl still welcomes high quality AI assisted submissions, just not the slop. This approach may reflect differences in their funding and business model compared with FFmpeg.
The Future of Open Source
Where does that leave the future of open source — and not-for-profit web initiatives — in the AI era?
As much as it may seem otherwise, AI does not in fact write things from scratch. AI is both trained on, and continues to build on, libraries and languages that have to be maintained. Odd couple as it may seem to some, AI needs open source. It needs the continuous provision of high-quality resources that it can openly access.
Open source, however, wasn’t built with AI agents in mind. One of the most successful open source projects, Tailwind, is now struggling to survive and monetize its offerings, to the point where it has had to downsize because of its ‘success’ in the AI era. A project as central as curl has had to shut down its bug bounty program as the only reasonable option. Something clearly has to change.
Open source forms the backbone of the modern software industry, and while the productivity and security gains brought by AI are real, they also come with unforeseen costs. We need to take these problems seriously and begin searching for solutions.
Introducing LLM watermarks to text outputs, submitting tested solutions alongside bug reports, or more companies directly funding projects they use are all small steps that could alleviate the issues identified here. But it’s unclear if these are long-term fixes. Open source can only survive if the incentives to produce — including to create, to contribute — can be sustained in the agentic era.
Thank you to Tim O’Reilly for input on an earlier draft.
Considering the 90-day deadline prior to disclosure, it is possible there are many more.