Weekly Roundup: U.S. Election Edition
Unconventional readings on the impact of the U.S. election on AI governance
“Lock her up” – Short-term economic impacts of a Trump presidency. Private prison company Geo Group's share price is up nearly 60% since Trump won, notes Heather Long, an economics columnist at the Washington Post. But when it comes to the potential impact of a Trump presidency on Big Tech, a lot will hinge on whether the incoming administration lets Lina Khan, chair of the FTC, stick around. The NYT argues her days are numbered. But J.D. Vance and several MAGA Republicans admire her, which makes things more complicated than an unfavourable Musk tweet would suggest. As Ben Thompson notes, if anything Trump wants to keep going after Big Tech (which is exactly what Khan is doing) while favouring “little tech”. UPDATE: Bloomberg reports that Gail Slater is helping decide on Khan's replacement, now expected in January.
Anthropic and Palantir walk into a bar – it's a “dual-use” baby. Maintaining the U.S. military advantage is said to rely increasingly on hardware and software capabilities. Anthropic is one of the last major AI companies to sign a military contract with the U.S. government or its contractors. As military use of AI is prioritized, we can expect lighter U.S. government regulation of AI, or at least a growing bifurcation in the regulatory environment between AI's dual uses (military vs. civilian). This is what I see when reading Nabeel S. Qureshi, previously of Palantir, on Twitter:
Stanford on the EU AI Act – “we don't need another hero”, says the U.S. in response? Stanford's Center for Research on Foundation Models recently published an analysis of the requirements companies' foundation models face under the EU's AI Act: it is a litany of fine-grained labels and categories. It makes me think of just how irresistible it will be for the incoming U.S. administration to offer AI companies a low-regulation environment instead.
Let's go Phishing: Real-time voice cloning is here. Standard Intelligence, a self-proclaimed “aligned AGI lab”, just released a new LLM foundation model trained solely on sound. Although not yet fine-tuned, it works like a normal text-based generative model: it continues a sequence of sounds or voices with whatever it predicts is most likely to come next, which means it can clone a voice mid-sentence. You can't yet steer the model's outputs precisely, but that will change with fine-tuning, and serious harm, including more sophisticated phishing attacks, is likely to follow. Try out the voice model here with your own voice; it's nuts.
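To make the mechanism concrete, here is a minimal, self-contained Python sketch of the autoregressive loop such an audio model runs. The random transition matrix stands in for the real transformer, and the codebook size is assumed; nothing here is Standard Intelligence's actual API, only the shape of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an audio foundation model: in reality a large
# transformer predicts the next discrete audio-codec token; here a
# random transition matrix plays that role so the example runs.
# (Hypothetical stand-in, not Standard Intelligence's model or API.)
VOCAB = 1024  # size of the audio-token codebook (assumed)
transition = rng.dirichlet(np.ones(VOCAB), size=VOCAB)

def continue_audio(prompt_tokens, n_new, temperature=0.8):
    """Autoregressively extend a sequence of audio tokens.

    The same loop a text LLM uses: condition on the prompt (here,
    tokens encoding the speaker's voice) and repeatedly sample the
    most-likely-next token.
    """
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        probs = transition[tokens[-1]]        # p(next | last token)
        probs = probs ** (1.0 / temperature)  # temperature scaling
        probs /= probs.sum()
        tokens.append(int(rng.choice(VOCAB, p=probs)))
    return tokens

# A few seconds of the target voice, encoded to tokens by a neural
# codec (hypothetical prompt), conditions the continuation.
prompt = rng.integers(0, VOCAB, size=200).tolist()
cloned = continue_audio(prompt, n_new=400)
print(len(cloned), "tokens; a codec decoder would turn these into audio")
```

The uncomfortable implication: voice cloning is not a bolted-on feature. Conditioning on a few seconds of someone's voice and predicting what comes next is the training objective itself.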
LLMs are now making freedom of information requests. Microsoft released a new model-agnostic agent system called Magentic-One, built on their earlier AutoGen framework. They gave it the ability to execute code, surf the web, and look through files. In their release post they mention a few incidents that illustrate the problems with letting LLMs interact with the outside world: the agents tried “posting to social media, emailing textbook authors, or, in one case, drafting a freedom of information request to a government entity”, and were stopped only by human intervention or lack of tool access.
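If you want human intervention as a guardrail rather than a lucky accident, here is a minimal sketch using AutoGen's classic two-agent API, the framework Magentic-One builds on. Magentic-One's own orchestrator API differs, and the model name, API key, and task message below are placeholders.

```python
# pip install pyautogen
import autogen

config_list = [{"model": "gpt-4o", "api_key": "sk-..."}]  # placeholders

# The LLM-backed agent that proposes actions and writes code.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# The guardrail the incidents above argue for: a proxy that demands
# human approval before any code the assistant writes actually runs.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",  # human signs off on every step
    code_execution_config={"work_dir": "scratch", "use_docker": True},
)

user_proxy.initiate_chat(
    assistant,
    message="Summarize the attached report. Do not contact anyone.",
)
```

Running execution inside Docker (`use_docker=True`) adds a second layer: even approved code runs in a sandbox rather than on the host.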
Measuring the Impact of LLMs on Corporates. A new NBER (read “draft working”) paper by financial economist Andrea L. Eisfeldt offers several methods for approximating the exposure to, and impact of, “AI” at the corporate, financial, and human-capital level. (Ignore the sin of linearity in Figure 3.) Hat tip to Marisa Ferrara Boston for the paper.
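One common construction in this literature, sketched below, is to score a firm by the AI exposure of the occupations it employs. This is a rough illustration only: the occupations, scores, and headcount weighting are my assumptions, not the paper's exact method.

```python
import pandas as pd

# Hypothetical inputs: an occupation-level AI-exposure score (e.g. the
# share of an occupation's tasks an LLM can perform) and each firm's
# workforce mix. All values are illustrative.
occ_exposure = pd.DataFrame({
    "occ": ["paralegal", "developer", "driver"],
    "exposure": [0.72, 0.55, 0.08],
})
workforce = pd.DataFrame({
    "firm": ["A", "A", "B", "B"],
    "occ": ["paralegal", "developer", "developer", "driver"],
    "headcount": [40, 10, 25, 75],
})

df = workforce.merge(occ_exposure, on="occ")
# Each occupation's share of the firm's total headcount.
df["w"] = df["headcount"] / df.groupby("firm")["headcount"].transform("sum")

# Firm-level exposure = headcount-weighted mean of occupational exposure.
firm_exposure = (df["w"] * df["exposure"]).groupby(df["firm"]).sum()
print(firm_exposure)  # firm A skews exposed; firm B's drivers pull it down
```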
AWS Call for AI Research Proposals (funding opportunity). Amazon is inviting researchers to submit proposals for Fall 2024 in four AI-related areas (generative AI; governance and responsible AI; distributed training; and machine learning compilers and compiler-based optimizations). Cash funding and AWS credits are on offer.