In his multi-volume The Life of Reason, George Santayana famously remarked that “Those who cannot remember the past are condemned to repeat it.” But AI regulations don’t just need to learn from past policy and regulatory regimes. They need to be incorporated into ongoing regulatory experiments that target Big Tech’s existing digital platforms, such as the Digital Services Act and Digital Markets Act in Europe, and into existing intellectual property and third-party content liability laws in the U.S., including the Digital Millennium Copyright Act and Section 230 of the Communications Act of 1934.
Governments keep telling us that AI is an entirely new phenomenon, one that brings new risks and new capabilities and therefore requires new regulatory bodies.
The October 24, 2024 White House memorandum on AI continues to emphasize new, future, unknown risks — “capabilities that might pose a threat to national security”, defined as “models’ capabilities to aid offensive cyber operations, accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models with such capabilities”. The global context is given pride of place here, hinting at a potential weakening of domestic AI oversight in order to meet the global challenge (read “China”). Yet measures that target AI’s abuse through existing platforms remain conspicuously absent from the White House’s AI risk priorities.
AI’s ability to automate cognition at ever higher levels does pose new risks to society. But we don’t have to look far to see that the most immediate risks from AI lie in amplifying existing harms online. All of this suggests that rather than a new agency or a new set of regulations, we need to start by updating existing regulations and risk mitigations for the age of AI.
For one, social media algorithms optimized for engagement – and all their deleterious consequences – are likely to be replicated through AI chatbots. The death of a teen by suicide after excessive engagement with an AI bot on Character.AI raises questions about what the Character.AI algorithm was optimizing for and whether optimizing for ‘helpfulness’ could actually be harmful. Regulating algorithmic optimization in a commercial context is an existing challenge; if it goes unmet, companies deploying AI will only exploit the gap further.
In addition, AI-generated or AI-amplified content from bad actors was ubiquitous before generative AI. Prior to the 2020 election, Russian trolls on Facebook were reaching about 140 million Americans a month via Facebook’s amplification algorithms. On Twitter, 2022 estimates suggested that bots, though a small share of accounts, already produced at least a quarter of all platform content and a notable portion of the information users engaged with. Generative AI will add considerable sophistication to the existing deluge of bots and bot-generated content online. But it shouldn’t confuse us into thinking that we need to start from scratch when thinking through the risks from AI.
In May 2024, leading AI experts, including Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, joined with others to write on “Managing extreme AI risks amid rapid progress”, noting not just that AI risks acting autonomously, guided by an instrumental rationality, but also that:
Harms such as misinformation and discrimination from algorithms are already evident today; other harms show signs of emerging. It is vital to both address ongoing harms and anticipate emerging risks. This is not a question of either/or. Present and emerging risks often share similar mechanisms, patterns, and solutions; investing in governance frameworks and AI safety will bear fruit on multiple fronts.
This makes sense. People still spend most of their time online on existing commercial platforms, consuming content and interacting with others. A July 2024 report from NIST identifies information integrity as one of a dozen key risks from generative AI. In the meantime, 3.27 billion people continue to use at least one of Meta’s core – largely unregulated – products daily (Facebook, WhatsApp, Instagram, or Messenger). If a bad actor wanted to undermine “information integrity”, they would probably still use these platforms.
That’s not to say that websites dedicated to AI chatbots don’t matter. They do. ChatGPT and Llama may already have a similar number of monthly active users as Twitter.* For a commercial AI product, though – setting aside the fine-tuning of LLMs – its safe regulation post-deployment should incorporate safeguards similar to those still absent from existing platforms like Google Search, Facebook, and Amazon’s Marketplace: measures that protect the informational environment and prevent its excessive exploitation for profit. The fresh appetite for regulation that has come with AI gives us an opportunity for a bit of a “do over” with digital platforms, bringing visibility not only to emerging AI businesses but also to their integration into existing businesses, where many of the new vulnerabilities may emerge.
For insights into AI’s actual effects online, regulators should consider mandating enhanced disclosures, to be made by the existing dominant platforms along with the new AI platforms, so long as a threshold for monthly active users is met. Such disclosures might cover things like the distribution of clicks, engagement rates, content shown, and number of monthly users, as well as the methods by which these metrics are measured and optimized. For societal risk assessments, the deployment stage is critical since it is where “the rubber meets the road” – the point at which a technology is iteratively optimized to meet profit targets.
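To make the shape of such a disclosure concrete, here is a minimal sketch of what a periodic reporting record might contain. The field names, the user threshold, and the structure are illustrative assumptions, not a proposed standard or any regulator’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical threshold at which disclosure obligations would apply
# (an assumption for illustration, not a figure from any statute).
MONTHLY_ACTIVE_USER_THRESHOLD = 45_000_000


@dataclass
class PlatformDisclosure:
    """Illustrative transparency report from a large platform or AI service."""
    platform_name: str
    reporting_period: str                          # e.g. "2025-Q1"
    monthly_active_users: int
    click_distribution: dict[str, float] = field(default_factory=dict)        # content category -> share of clicks
    engagement_rate_by_surface: dict[str, float] = field(default_factory=dict)  # feed, search, chat, etc.
    optimization_objectives: list[str] = field(default_factory=list)          # what ranking/response models optimize for
    measurement_methodology: str = ""              # how the metrics above were computed

    def disclosure_required(self) -> bool:
        # The obligation triggers only above the (assumed) user threshold.
        return self.monthly_active_users >= MONTHLY_ACTIVE_USER_THRESHOLD
```

The interesting design choice in any real rule would be less the fields themselves than the requirement, reflected here in `measurement_methodology` and `optimization_objectives`, that platforms disclose how the numbers are produced and what the underlying systems are optimized for, since those are the levers the article argues regulators currently cannot see.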
In the final analysis, the worst approach to AI risk mitigation would be to abandon efforts to strengthen openness and accountability on existing digital platforms. If AI indeed risks “weaken[ing] our shared understanding of reality that is foundational to society”, this is likely to occur through existing digital platforms and other highly networked digital services. Consumed by the dizzying array of potential future AI risks, policymakers need to put their feet back on the ground and reexamine the known risks lurking in today’s digital environment.
* Given that many of these users access the models through company APIs or third-party cloud platforms, end-user numbers are harder to gauge.