d/acc: one year later - Vitalik Buterin’s update on what he calls d/acc: “decentralized and democratic, differential defensive acceleration. Accelerate technology, but differentially focus on technologies that improve our ability to defend, rather than our ability to cause harm, and on technologies that distribute power rather than concentrating it in the hands of a singular elite that decides what is true, false, good or evil on behalf of everyone.” Later on, he says it more plainly: “Build technology that keeps us safe without assuming that ‘the good guys (or good AIs) are in charge’. We do this by building tools that are naturally more effective when used to build and to protect than when used to destroy.”
The essay is rambling, but full of good ideas, including one that we’ve been thinking about as well: if future superintelligent AI ever needs to be controlled, the means of control may lie in robust protections built at the infrastructure layer. Vitalik calls this a “Global ‘soft pause’ button on industrial-scale hardware.”
America is losing the physical technologies of the future. OK, this entry isn’t about AI, but it is about the geopolitical risk of falling behind in fundamental technologies of the future, as the US is doing, because of the politicization of electrical power technologies (solar panels, batteries, EVs and more). As Noah Smith notes, China is utterly dominating these technologies that are reshaping the future of warfare and geopolitical power. How will this intersect with AI? It’s anyone’s guess, but at the least, it reinforces one finding from the Bipartisan House Task Force on AI, covered in the previous Weekly Roundup, that energy development is important to national security through the AI channel.
Future of Life Institute AI Safety Index 2024. What does it say about the real commitment of AI model developers to AI safety, when according to FLI’s panel of experts “evaluat[ing] safety practices of leading AI companies across critical domains,” even Anthropic only gets a C grade?
Things we learned about LLMs in 2024 - Simon Willison’s brilliant take on what we learned about the progress of AI in the last year. Full of pithy insights.
Summary of 2024 Artificial Intelligence Legislation. "Since the beginning of the current legislative biennium, at least 40 measures dealing with disclosures or disclaimers about the use of AI have been considered in 15 states,” according to LexisNexis State Net Insights, July 16, 2024. “Such measures have been enacted in five of those states.” Everyone thinks about the impact of the EU AI Act and other national legislation on the AI marketplace, but with the failure of thoughtful national leadership in the US, we are heading into an era of fragmented state and local rules. This imposes a far higher regulatory burden than a consistent set of national rules. In our opinion, those rules should start out relatively minimal, focusing on regulatory enablers like registration and efforts to standardize management best practices.
Paradigm Shifts of Eval in the Age of LLMs. Lili Jiang makes the case that for anyone developing AI applications, “Evaluation is the cake, no longer the icing”:
a) The relative importance of eval goes up, because there are lower degrees of freedom in building LLM applications, making time spent on non-eval work go down….
b) The absolute importance of eval goes up, because there are higher degrees of freedom in the output of generative AI, making eval a more complex task….
This paradigm shift has practical implications for team sizing and hiring when staffing an LLM application project.
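To make Jiang’s second point concrete: when the output is free-form text rather than a label, “did the model get it right?” stops being a string comparison. Here is a minimal sketch of that difference in Python; the names (Example, exact_match_eval, rubric_eval, call_judge) are our own illustration, not from Jiang’s post, and the judge is a stand-in for whatever grading model or human rubric you actually use.

```python
# Sketch: classic exact-match eval vs. rubric-based eval of free-form output.
# All names here are illustrative; `call_judge` is a hypothetical grading hook.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Example:
    prompt: str
    reference: str  # gold label or reference answer


def exact_match_eval(outputs: list[str], examples: list[Example]) -> float:
    """Low degrees of freedom: a string comparison is enough."""
    hits = sum(out.strip() == ex.reference for out, ex in zip(outputs, examples))
    return hits / len(examples)


def rubric_eval(outputs: list[str], examples: list[Example],
                call_judge: Callable[[str], float]) -> float:
    """High degrees of freedom: each free-form answer is graded against a
    rubric (by a judge model or a human), not matched exactly."""
    scores = []
    for out, ex in zip(outputs, examples):
        rubric = (
            f"Question: {ex.prompt}\n"
            f"Reference answer: {ex.reference}\n"
            f"Candidate answer: {out}\n"
            "Score 0.0-1.0 for factual agreement with the reference."
        )
        scores.append(call_judge(rubric))
    return sum(scores) / len(scores)


if __name__ == "__main__":
    examples = [Example("Capital of France?", "Paris")]
    # Dummy judge so the sketch runs without an API key; swap in a real model.
    dummy_judge = lambda r: 1.0 if "Paris" in r.split("Candidate answer:")[1] else 0.0
    print(exact_match_eval(["Paris"], examples))                          # 1.0
    print(rubric_eval(["The capital is Paris."], examples, dummy_judge))  # 1.0
```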
Sam Altman - algorithmic feeds are the first at-scale misaligned AIs. Hear, hear. I’ve been talking about this since my 2017 book WTF? What’s the Future and Why It’s Up to Us. And of course that was the focus of our prior work on “algorithmic attention rents.” It’s great to hear the same message from Sam.
Shared Code: Democratizing AI Companies. The Collective Intelligence Project’s annual report highlights a lot of groundbreaking work on AI governance. I couldn’t help noticing this particular passage, though:
“A major risk factor in the landscape of AI companies is the institutional ‘default container’ for these organizations — the venture-capital funded startup. These work well for asset-light and high-growth entities, but carry risk when the default outcomes for these are the traditional exits of either IPOs or corporate acquisitions. Organizations tend to do what organizations are designed to do, and in the case of corporations and startups, the pull of profit-maximizing incentives will often force them to drift away from initial public benefit missions.”