Born on the Fourth of July? The Bipartisan House AI Task Force Report
Standards, industrial policy, and the very visible shadow of China
“AI transcends politics – and it can be an area of common ground between Capitol Hill and the incoming Trump administration.”
– OpenAI’s response to the House Report.
This post in brief: The recent Bipartisan House AI Task Force Report is worth reading. Not only because the next administration may actually build upon it, but because it reflects a growing consensus that America’s AI policy should increasingly be shaped by the capabilities and actions of China.
The report
In a nutshell, the report argues that something as dynamic and new as generative AI should be:
Governed by an incremental, principles-driven, regulatory framework;
Used by government itself;
Sector-specific and agency-specific in governance;
And driven by American-led standards, including through a strengthening of NIST.
Before getting to what I think are the key takeaways from the report (“China” and “industrial policy”), I want to mention its emphasis on building “standards” for AI – a focus that aligns with our own thinking at the AI Disclosure Project. We believe that standards can help govern AI, and that standardization is best thought of as codifying best practice from the bottom-up, whether in corporate behaviour, product usage, or technological standards and protocols.
Standards
The report comes out guns blazing – standards really matter and America must lead on them:1
“The strength of the United States in international standards development will be instrumental to its global technological leadership in the development and governance of artificial intelligence.”
The report defines standards as:
“a repeatable, harmonized, agreed upon, and documented way of doing something”
containing “technical specifications, requirements, guidelines, or characteristics”
with the goal of ensuring “that materials, products, processes, and services are fit for purpose.”
The U.S. market-driven approach to standard setting is highlighted, noting that: “The United States has long maintained an industry-led, bottom-up approach to most standard setting. The U.S. standards system protects against poor standards by enabling vibrant deliberation and competition and ensuring that technical merit prevails.”
The report notes several channels to help ‘get the market there’, including international standards (which impact areas such as trade), technical standards adopted by government, and internal standards imposed on government agencies. It also mentions “some kind of incentive-based system” but does not elaborate.
It is most specific when outlining what the Department of Defense (DoD) is doing on AI standards (p. 50): where experimentation is warranted, no single standard is adopted immediately, but in areas where “a common enterprise approach” is deemed critical, standards are mandated. The DoD’s need to evaluate a product over its lifecycle, for example, seems to warrant a common enterprise approach.2
In a recent interview with Kevin Werbach, Tim goes a step further, noting that standards are essential not only to enable meaningful corporate disclosures to governments, but to construct healthy markets from the bottom-up, through “communication and information standards – as a common symbolic language.” People don’t think of disclosures as a kind of communications protocol, Tim notes, yet standardized accounting disclosures enabled a marketplace of accountants and auditors, much as important open protocols made the internet open, viable, and competitive. We need to approach AI disclosures in the same spirit.
That brings us to the dominant theme of the report, which is the international context within which U.S. AI policy is being constructed.
China
Unmistakable throughout the report is China’s role in exerting competitive pressure across every facet of U.S. AI policy, from setting standards and driving public and private R&D spending, to advancing energy infrastructure development.
The cheerleading from “Big AI”3 on this point made me think I could have been at a 4th of July rally. Here’s Chris Lehane, Head of Global Affairs at OpenAI, on the report:
a big day for all of us who want to make sure free, democratic AI wins out over the authoritarian path being pursued by China. Given the stakes, this is a competition the US has to win. The report focuses on how AI is a transformative technology with significant implications for our national security and economic competitiveness, and it prioritizes the imperative that the US lead on AI innovation.
Economists are wary of viewing competition as zero-sum, as a national security framing inherently does, since zero-sum policies – especially tariffs, exchange-rate depreciation, and low-wage competition – may then end up being adopted.
Not everyone has bought into this 4th of July narrative, though. In his recent departure letter as OpenAI’s head of policy research, Miles Brundage argued that cooperation is the safer choice, since “having and fostering such a zero-sum mentality increases the likelihood of corner-cutting on safety and security…[and cooperation will be] essential to managing catastrophic risks”. In a recent Berkeley lecture, Miles goes further, calling for a “CERN for AI” to greatly accelerate AI cooperation.
The Bipartisan Report does acknowledge that cooperation on AI is essential in a military context: “International cooperation will be key to addressing the broader security concerns posed by AI in military contexts.” Beyond that, however, it remains unclear where the U.S.’s AI competition with China should begin and where it should end.
A related branch of the competitiveness tree that the report spends a lot of time on is state-led industrial policy.
AI Safety Policy as AI Industrial Policy
As OpenAI’s Chris Lehane notes:
the report rightly recognizes that infrastructure is destiny when it comes to US-led AI staying ahead, and that Washington needs to invest in modernizing our electric grid and building energy-efficient AI infrastructure
This aligns with Singapore’s approach to AI regulation, which is really a state-led AI industrial policy. And there is a lot of sense to this approach: if you lead in a given technology, then you can have an overwhelming say in the rules of the game – just ask Russia after its banks were shut out of SWIFT, the global interbank messaging network, in an effort led by the U.S.
But you don’t need to “lead” at every point in a value chain to exert pressure at critical nodes. For instance, the U.S. is able to impose sanctions on Chinese access to chips – even those it does not manufacture directly. Its contributions to essential components for chip-making machinery and its dominance in intellectual property for chip design (EDA tools) are sufficient to maintain leverage. And “leading” is not always dependent on machinery and software but also on organizational ‘technologies’. For example, the greater challenge to U.S.-led export controls lies not in technological capability but in operational capacity.4 Remarkably, the Commerce Department has only 11 export control officers worldwide to enforce these sanctions and prevent evasion.
That aside, leading on regulatory standards through leading on technology has been the approach adopted by the U.S., according to the report, and one which Europe, in the form of the Draghi Report, has reawakened to. Yes, the European Union has the EU AI Act, but the left and the centre may be coalescing around an “industrial policy” approach too.
So, while the Bipartisan House AI Task Force Report offers hope to those advocating for a more active state role in AI’s development, the national security lens it adopts is unlikely to offer meaningful guidance on the specifics of AI standards. Instead, it risks fostering a more closed informational environment – one that could hinder the diffusion of best practices, slow technological adoption, and ultimately undermine long-run market dynamism. Ensuring that “technical merit prevails” requires an open and collaborative informational environment where both best and worst practices can be shared, scrutinized, and built upon. We can’t forget the national security implications of AI, but an “arms race” should not be our guiding metaphor.
There is a great 2022 NIST presentation to the Subcommittee on Research and Technology on why U.S. leadership in standard setting matters.
Department of Defense, DoD Adopts Ethical Principles for Artificial Intelligence (February 24, 2020), online: https://www.defense.gov/News/Releases/release/article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/; Department of Defense, Responsible Artificial Intelligence Strategy and Implementation Pathway (June 2022), online: https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF
Big model-developer companies, which are necessarily big given the high fixed costs and high inference (marginal) costs needed to compete in this market.
Or some might simply say because of “globalization” — and the impossibility of stopping evasion through transhipments and corporate shell companies.