Jason Furman in an FT interview with Robert Armstrong: "I don’t want an AI super regulator — I want the Highway Administration, the SEC and the FDA to have expertise in AI so they can understand how it’s used in their different domains, but regulate it just like they regulate auto safety or medical device safety." This seems exactly right. And one model that might inform this approach is the United States Digital Service, which was founded in 2014 to bring digital expertise to US federal government agencies. Rather than imposing mandates and controls on agencies’ use of digital technology, the USDS helped establish digital service units within agencies, building their capacity and competencies from within. I may write more about this model in the future. I (Tim) had a bird's-eye view of the creation of the USDS because it was the brainchild of my wife, Jennifer Pahlka, when she served as Deputy CTO for Government Innovation under President Obama in 2013-2014.
AI Companies Almost Get Bought - A good analysis from Bloomberg’s Matt Levine on whether the huge acquihires of AI startups are the latest move in a cat-and-mouse game with antitrust authorities, who might have had grounds to contest a more normal acquisition. A good reminder that governance is never once and done, but always subject to gaming.
We are not at the end of the AI revolution. As Jeff Bezos used to say about the internet, “It’s still Day One.” In that context, consider this long and important thread from Andrej Karpathy (@karpathy) on X, posted on August 7, 2024: “Reinforcement Learning from Human Feedback (RLHF) is the third (and last) major stage of training an LLM, after pretraining and supervised finetuning (SFT). My rant on RLHF is that it is just barely RL, in a way that I think is not too widely appreciated. RL is powerful. RLHF is not. Let's take a look at the example of AlphaGo…” Karpathy’s point is that training an LLM on a kind of “vibe check” with humans will not lead to the same kind of breakthroughs that happened with AlphaGo. Commenter @ml_anon put the point succinctly: “training to maximize reward of a proxy function is qualitatively different than learning from the actual signal.”
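The distinction is easy to see in a toy sketch (ours, not Karpathy’s): fit a proxy “reward model” to noisy ratings of a narrow slice of behavior, then optimize against it, and the optimum drifts far from what the true signal would reward. The numbers and the linear proxy below are illustrative assumptions, not anything from the thread.

```python
# Toy illustration (ours, not from the thread): optimizing a learned proxy reward
# can diverge sharply from optimizing the true signal. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

def true_reward(x):
    # The "actual signal," e.g. whether a Go game was actually won.
    return -(x - 3.0) ** 2

# "Human feedback": noisy ratings covering only a narrow slice of behavior.
x_rated = rng.uniform(0.0, 2.0, size=50)
ratings = true_reward(x_rated) + rng.normal(0.0, 0.5, size=50)

# Proxy reward model: a simple linear fit, standing in for an RLHF reward model.
slope, intercept = np.polyfit(x_rated, ratings, 1)

def proxy_reward(x):
    return slope * x + intercept

# Optimize each objective over a wider range of behavior.
candidates = np.linspace(-10.0, 10.0, 2001)
best_true = candidates[np.argmax(true_reward(candidates))]
best_proxy = candidates[np.argmax(proxy_reward(candidates))]

print(f"Optimum under the true signal:  x = {best_true:.2f}")   # near 3.0
print(f"Optimum under the proxy reward: x = {best_proxy:.2f}")  # pushed to the edge, 10.0
```

The proxy faithfully fits what it was shown, but maximizing it rewards behavior far outside anything a human rated; a cartoon version of the gap Karpathy is pointing at.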
Amazon Web Services, which hosts the Perplexity crawler, has a terms of service clause prohibiting its users from ignoring the robots.txt standard. Amazon began a “routine” investigation into the company's usage of Amazon Elastic Compute Cloud, according to Wired. A good reminder that regulation is not just something done by governments or AI safety boards. Whether or not cloud hosting providers and other large, powerful internet platforms enforce their terms of service, or whether they decide it is in their interest to look the other way, is an important component of what Gillian Hadfield and Jack Clark call “regulatory markets.” And it highlights what we call commercialization risk: the way that economic incentives and business decisions will influence the course of AI safety.
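For readers who want the mechanics: robots.txt is just a plain-text file a site publishes to tell crawlers what they may fetch, and a well-behaved crawler consults it before requesting pages. A minimal sketch using Python's standard-library parser, with a hypothetical file and hypothetical bot names:

```python
from urllib import robotparser

# Hypothetical robots.txt contents; a real site serves this at https://example.com/robots.txt.
robots_txt = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching; ignoring this check is the behavior
# the AWS terms-of-service clause is aimed at.
print(rp.can_fetch("ExampleBot", "https://example.com/article"))    # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```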
Can Western carmakers derisk in China? While not directly related to AI, this post by Adam Tooze speaks quite directly to the way that economic incentives trump the wishful thinking of policymakers. Food for thought.
Jeanna Matthews, in How Should We Regulate AI?, argues that IEEE 1012, the standard for system and software verification and validation, also provides a sensible framework for assessing AI risk: “... the approach in IEEE 1012 focuses risk management resources directly on the systems with the most risk, regardless of any other factor. It does so by 1) determining risk as a function of both the severity of consequences and their likelihood of occurring and then 2) assigning the most intense levels of risk management activity to the highest risk systems and lower levels of activity to systems with lower risk.” A good common-sense argument in the face of risk frameworks that start from considerations like the number of parameters a system is trained on.
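To make that two-step logic concrete, here is a rough sketch in code; the severity and likelihood categories and the mapping to integrity levels are our simplification for illustration, not the standard's actual tables.

```python
# Illustrative only: IEEE 1012-style thinking assigns the intensity of risk
# management (an "integrity level") based on consequence severity and likelihood.
# The labels and thresholds below are a simplification, not the standard's tables.

SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"improbable": 1, "occasional": 2, "probable": 3, "frequent": 4}

def integrity_level(severity: str, likelihood: str) -> int:
    """Map severity x likelihood to an integrity level from 1 (lowest) to 4 (highest)."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]  # ranges from 1 to 16
    if score >= 12:
        return 4  # most intensive risk management activity
    if score >= 8:
        return 3
    if score >= 4:
        return 2
    return 1

# The point of the framework: effort tracks risk itself, not a proxy like parameter count.
print(integrity_level("catastrophic", "occasional"))  # 3
print(integrity_level("marginal", "improbable"))      # 1
```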
Call for Papers for the 2025 Cambridge Disinformation Summit, King’s College Hall, 23-25 April 2025.