Weekly Roundup (April 9, 2025)
Image generators without guardrails, AI's drag on web traffic, Amazon's shopping agents, and more.
Happy Wednesday! This week we cover what people are really doing with AI image generators online, how Google’s AI overviews are dramatically changing web traffic patterns (or maybe not at all), Amazon’s new AI agent, and the growing vertical integration of social media platforms with in-house AI models.
What “unrestricted” use of an AI image generator looks like (it’s not pretty). Wired reported last week that an AI image generator's database of user-generated photos was discovered completely unprotected. The website, GenNomis (since taken offline), marketed itself as an "unrestricted" AI photo generator and editor, even though its usage policies ostensibly prohibited child sexual abuse material (CSAM) and claimed to allow only "respectful" content. The exposed database, however, contained many illegal images, including CSAM, “nudified” celebrities, and what appeared to be non-consensual pornographic content.
Even though GenNomis had policies prohibiting illegal use, the lack of enforcement rendered them useless. Commercial pressure to grow a user base quickly can discourage a service from building real guardrails. In that context, written policies mean little; what matters is enforcement in practice.
Are Google’s AI answers in Search discouraging clicks? The internet’s central monetization engine is drying up before our very eyes. Bloomberg reports that website traffic has dropped dramatically since Google began incorporating “AI overviews” into its search results. These overviews summarize information from relevant websites at the top of the results page, so users don’t have to click further. And the longer the overview, the less incentive a user has to visit third-party websites to find out more. This appears to have caused a dramatic fall in revenue for the third-party websites whose content frequently appears in Google’s AI overviews, since they tend to make most of their income from advertisements shown on their own pages.
Note: what a Google AI overview looks like on a search results page. Source: Google Search (April 9, 2025).
As AI fundamentally changes how people interact with information on the internet, the way we monetize and compensate creators for that information also needs to adapt. Google currently risks putting its own services in jeopardy, as its AI-massaged content sucks the oxygen away from the very third-party creators and websites that Google’s services rely on. While music labels and news distributors may be able to strike (fairly static) licensing deals with AI developers, smaller businesses are left even further behind. Whatever new monetization system for internet content does emerge, it needs to learn the lessons of the existing advertising system, which relied on a highly integrated and opaque advertising stack tightly controlled by Google.

Or maybe AI ain’t so bad? A detailed analysis published in Generative AI in the Newsroom by Nick Hagar of Northwestern University found instead that the rollout of AI overviews did not have a negative effect on the traffic received by ten major news providers. This could be because overviews are less likely to appear for news-related search queries. (One publisher reported that only 8% of queries that led to their website included an AI overview.)
The analysis also found no relationship between the number of citations by ChatGPT and whether a publisher explicitly blocked or allowed OpenAI’s web crawler. This suggests that OpenAI may not be respecting the robots.txt restrictions publishers put in place, which is perhaps unsurprising given the wealth of evidence suggesting the same thing.
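For context, robots.txt is a plain-text file at a site’s root that tells crawlers which paths they may fetch; crucially, compliance is entirely voluntary. Here is a minimal sketch of how such rules work, using Python’s standard urllib.robotparser and a hypothetical publisher policy that singles out OpenAI’s documented GPTBot crawler:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical publisher robots.txt: block OpenAI's crawler,
# allow everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks before fetching; nothing enforces this.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

The protocol has no enforcement mechanism: a crawler that ignores the file faces no technical barrier, which is why publishers are left relying on AI companies’ stated commitments.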
Amazon’s personal shopping agentic dreams. As more and more foundation models flood the market, developers are increasingly turning their eyes towards agents: more autonomous models capable of carrying out entire tasks largely independently of human oversight. Amazon’s AI lab released its latest web-browsing agent, Amazon Nova Act. Wired reported last fall that Amazon is already planning to integrate future AI agents into its e-commerce business by having an agent pick a product for you and place it in your shopping cart, even if you didn’t know you needed it (yet).
It’s interesting to see how Amazon envisions the agentic future more broadly. It writes about being able to steer the newly released agent with specific instructions at each step, such as: “don’t accept the insurance upsell.” The head of its AI lab told Wired that the hope is that agents will eventually avoid such upsells on their own. If they still fall for them, are they really ready for deployment?
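For a sense of what that step-by-step steering looks like, here is a minimal sketch in the style of Amazon’s published nova_act examples; the site, the prompts, and the upsell instruction are our own illustrative assumptions, not Amazon’s code:

```python
# Sketch of steering a Nova Act browsing agent one bounded step
# at a time. The prompts below are illustrative assumptions.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.example-store.com") as agent:
    # Each act() call is a single human-specified step, keeping a
    # person in the loop rather than handing over the whole task.
    agent.act("search for a 55-inch TV and open the first result")
    agent.act("add the TV to the cart")
    # The kind of explicit constraint Amazon describes:
    agent.act("proceed to checkout, but don't accept the insurance upsell")
```

The design choice worth noticing is the granularity: constraining the agent at each step is precisely what Amazon hopes to make unnecessary over time.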
While agentic AI might be more impressive than a chatbot that just sits in one place (making the occasional internet search), it comes with greater risks. Amazon has already degraded the quality of consumer search results to extract higher advertising rents from its ecosystem of third-party sellers. So it’s easy to imagine Amazon eventually being tempted to exploit agentic AI’s information asymmetries and autonomy for its own advantage too, absent any meaningful constraints. (That being said, the “faster at any cost” era may be ending at Amazon, reports The Information.) Without defined standards or transparency around AI agents’ intent and efficacy, we may find that agents sometimes extract more value than they create.
Social media platforms’ ongoing vertical integration. Speaking of huge tech companies building AI models, Tech Policy Press published a great review by AI regulation expert Ameneh Dehshiri of how social media platforms are well positioned to exploit their users’ data to build AI models. Last week Elon Musk’s AI company, xAI, acquired his social media platform, X. One of the many possible reasons: the deal removes barriers to data sharing between the two companies. Such vertical integration of social media platforms with in-house AI model development is now evident across all the major platforms, including Meta’s family of apps, TikTok, X, and to some extent YouTube. Meta has trained its models on data from all of its social media platforms (and more); Amazon has immense stores of user shopping data waiting to be exploited; and LinkedIn has already gotten in trouble for training on users’ private conversations.
Beyond social media data being full of bias and hate, Dehshiri argues that this integration further erodes users’ privacy. She writes: “the opacity surrounding data use cannot be treated as a technical inevitability. Instead, it must be seen as a regulatory failure.” We agree that training data sources should be disclosed; see our most recent research paper, which finds that OpenAI’s models were probably increasingly trained on copyrighted, non-public book content. Unregulated vertical integration can entrench the power of already-large technology corporations, risking greater regulatory difficulties down the line as public-private power imbalances and information asymmetries grow.
That’s all, folks. Thanks for reading. If you enjoyed it, please subscribe.