Foreword: One Year In
It’s hard to believe it, but it’s already been a year since the founding of the AI Disclosures Project and this Substack. We have learnt a lot. The disruption that AI is causing in knowledge consumption and production has already been far-reaching, though our visibility into these impacts – increasingly shaped by powerful technology companies – remains incredibly weak.
Our goal remains to ensure that commercial incentives do not lead this powerful technology to harm society, firms, and consumers. Disclosures have been our north star. What this means in practice has evolved as we increasingly focus on building governance, observability, and audit trails into the very architecture of AI systems. Digital markets need rules of the road built into them; otherwise they harden around powerful digital monopolies that misbehave.
Looking Back
Asimov’s Addendum now has over one thousand subscribers. We have published three high-impact working papers that highlighted urgent market harms and research gaps, written over 50 posts here, interviewed dozens of experts in the field (thank you), and engaged with policymakers and public policy experts in governments and leading AI labs. Whether you’ve subscribed from the beginning or just found our work, thank you so much for following along, supporting our journey, and offering an open mind and a critical voice.
What Can AI Learn from Accounting?
On this one-year anniversary we are reposting one of our first essays (What Can AI Learn from Accounting Standards?). It’s an essay foundational to our thinking, written by our co-director Tim O’Reilly, and we wanted to share it with our new subscribers. It also shows where our thinking stood when we first started the AI Disclosures Project and how it has evolved since.
At our founding we drew inspiration from the institutional webs that regulate financial disclosures by publicly listed companies – through GAAP accounting, SEC regulations, third-party auditors, investors, and expert citizens. The argument is that much as the standardization of financial disclosures by public companies enabled the development of robust securities markets, standardization of AI operating disclosures by companies using AI would increase trust in AI systems, ease the adoption of AI-based services, and enable innovation. As Tim succinctly puts it: “develop[ing] a systematic disclosure and auditing framework that can become the basis for a set of ‘Generally Accepted AI Management Principles’”.
And while we still believe in that, the struggle of coming up with a standard language for such a nascent technology meant that we didn’t get very far initially. But along the way we stumbled upon two things.
Telemetry
The first is that disclosures are already used internally by tech companies, and have been since the DevOps movement and before. Companies have internal dashboards for monitoring and observability built around logs, traces, and telemetry. These dashboards track performance, help meet security and third-party data-handling requirements, and ensure that operating targets – including commercial ones – are being met. With the rise of telemetry protocols, we realized that disclosures and audit-trail hooks already exist within companies; regulators and third parties just need to use them as a basis for their regulatory efforts.
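As an illustrative sketch (not any company’s actual schema), an audit-trail hook of the kind described above might emit one structured, machine-readable record per model call, so that the same telemetry stream feeding internal dashboards could, in principle, also feed third-party auditors. All field names here are hypothetical:

```python
import json
import logging
import time
import uuid

# Hypothetical audit-trail hook: one structured log record per inference,
# consumable by internal dashboards or, potentially, external auditors.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_inference(model_id: str, latency_ms: float, input_tokens: int,
                  output_tokens: int, policy_flags: list[str]) -> dict:
    """Build and log a structured audit record for a single model call."""
    record = {
        "event": "inference",
        "trace_id": uuid.uuid4().hex,   # correlates with request traces
        "timestamp": time.time(),
        "model_id": model_id,
        "latency_ms": latency_ms,
        "usage": {"input_tokens": input_tokens,
                  "output_tokens": output_tokens},
        "policy_flags": policy_flags,   # e.g. content-safety triggers
    }
    logger.info(json.dumps(record))     # emit as a JSON line
    return record

record = log_inference("example-model-v1", 42.0, 128, 256, [])
```

Because records like this are already routine inside companies, a disclosure regime could standardize their shape rather than invent new instrumentation from scratch.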
Protocols as Market Shaping Devices
The second is that the best way to keep AI markets healthy is to build disclosures into the market’s structure itself. In digital markets, that means the various software layers that allow communication between potentially competing devices or applications. This has seen us turn towards developing protocols and standards as a way of shaping markets and ensuring positive outcomes from the commercialization of AI. Not only can protocols include disclosures that shape markets indirectly, but they are themselves conduits of power that have shaped tech markets since the dawn of HTTP. This has also broadened our focus from model providers alone to the whole AI ecosystem, from applications to the entire web. Our recent piece on MCP is a great example of this line of thinking. We care about an open, fair AI market and how the technological infrastructure can facilitate it.
As we look ahead, our commitment is to continue probing how disclosures, telemetry, and protocols can serve as levers for accountability in AI. Thank you for being part of this first year – we’re just getting started.
and now onto the original post…
What Can AI Learn from Accounting Standards?
Back in June of 2023, I made the observation that AI regulation should begin with mandated disclosures. But what should be disclosed, by whom, to whom, and why? As the debates about AI safety and fairness became more heated, I worried that ill-informed regulations would hamper the growth of the AI industry. So I made the case that
“Regulators should start by formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems…. Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.”
My thinking was rooted in a historical analogy. After the 1929 stock market crash, the term “generally accepted accounting principles” was proposed by the American Institute of Accountants (AIA), a private organization, to bring regularity to what had been a kind of “wild west” of misrepresentation of the financial condition of the companies whose securities had been offered to the public. When the AIA identified best practices for financial reporting, they didn’t make up new rules from whole cloth, but rather standardized accounting practices that had been developed and used by businesses over the course of centuries to understand, control, and improve their finances.
This was followed by government action that enshrined the notion into law. The Securities Act of 1933 (also known as the “Truth in Securities Act”) and the Securities Exchange Act of 1934 required companies selling or intending to sell stock to the public to register with the Securities and Exchange Commission and mandated that the financial condition of the companies represented by those securities be truthfully reported in a standard format on a regular basis. What these laws did not do was to specify the exact details of those disclosures. They left that to the accountants.
Through a variety of organizations and over a period of decades, the accounting profession refined and standardized what they believed to be best practices, until in 1973, this task was centralized in a non-profit organization called the Financial Accounting Standards Board. The FASB is blessed by the SEC as the source of Generally Accepted Accounting Principles, but the SEC itself does not define those principles.
These laws effectively created the conditions for what Gillian Hadfield and Jack Clark (applying this idea prospectively to AI) call “regulatory markets”. Once securities law mandated registration and regular, standardized reporting for companies that wished to issue securities to the public, banks came to require the same standardized reporting when issuing loans, as did investors when considering injecting capital or evaluating whether to buy a smaller company or a line of business from another. Standardized accounting principles then also shaped income tax reporting for businesses and individuals, and as Congress changed the tax code, tax reporting requirements in turn shaped GAAP.
What makes this a regulatory market is that the government itself does not do all the monitoring of the accuracy of the required reporting. Except in the case of tax auditing, compliance is delegated to an ecosystem of auditors and tax professionals who are given privileged access to a company’s finances in order to verify and attest that they are correct. But more importantly, those disclosures that are required to be made public are scrutinized by “the market” of investors, who use them to shape their decisions whether to invest or to short a stock. Government regulations that thoughtfully mandate disclosures that enable the scrutiny of the market will, I suspect, enable far more extensive oversight than any kind of centralized AI auditing agency. (I will write more about regulatory markets in a future post.)
There are a few important lessons here for the development of what might one day be called “Generally Accepted AI Management Principles”:
The SEC requirement for companies to register and provide standardized information is not an outlier. Registration is an essential component of virtually any regulatory regime. Governments not only require businesses to register in order to perform many activities, they also require individuals to register and provide information in order to vote, attend school, get a job, pay taxes, receive government benefits, drive a car, or purchase a weapon. Businesses also require individuals and other businesses to register in order to use services such as email, phone and text messaging, and social media, or to sell apps in an app store or products on Amazon.
Registration should also be required for AI systems. I joined Gillian Hadfield and Tino Cuéllar in making the case that registration is a good starting point for government intervention in AI markets because it provides a foundation for regulation when it becomes clear what exactly needs regulating, but it does not establish a premature regulatory straitjacket. But there’s more to it than that. I envision a world of cooperating AIs, not one big centralized winner-takes-all AI, and in that world, AIs will need to be registered in much the same way that web sites need to be registered (e.g., have an IP address and a domain name) in order to participate in the world wide web. It’s not an arbitrary requirement. It is part of what makes a cooperative system work. I will have more to say on that in the future.
Private industry needs to get its act together to share best practices, agree on how they should be measured and reported, and create an organization that will formalize those best practices. We think it’s particularly important to see beyond evals that are used for marketing purposes, and to understand what controls are actually in place, being monitored and tweaked on a regular basis as a key part of business operations.
That’s where our AI disclosures project at the SSRC comes in. We’re looking to collect information on best practices at the leading AI firms in order to provide insight into what is being done right, and perhaps more importantly, what is being done wrong or not being done at all. It’s too early to establish the standards themselves. We hope that forward-looking AI practitioners, as well as the auditing community that’s been given privileged access to company efforts to evaluate and improve their AI models, will help us by sharing what they consider to be the best practices – not just in the narrow area of AI safety, but in every aspect of business operations. As with the previous generation of technology businesses, we expect that AI businesses will develop various operating metrics that are measured and optimized, guiding management decisions on how best to acquire and serve customers, how to optimize the results from AI models for particular purposes, how to streamline operations, and so on. Some of these will be trade secrets, but many of them will become common practice. There will need to be a lot of thought and debate to understand which information might be disclosed privately to auditors (including, potentially, government auditors) and which should be made public.
We believe that left to themselves, regulators may come up with ideas that sound plausible but don’t actually work. It’s in the best interest of AI companies at all levels to share best practices, so that we can converge on a set of generally accepted practices that will make it easier for regulators, investors, and the public to see those AI developers and service providers who are behaving responsibly and those who are not. (We’d also love to hear from people who are far more knowledgeable than we are about the ongoing development of accounting standards as well as any other regulatory regimes that might provide models for AI regulation.)
Eventually, the AI auditing community (which will likely include but extend far beyond the financial auditors currently looking to add AI auditing to their portfolios) will need to come together to establish the organization that will develop the standards. As during the early efforts to create accounting standards, there are now multiple overlapping efforts that will eventually have to be brought together. (I’ll write more about this in a future post.)
Ultimately, registration, disclosures, and auditing are key enablers of successful markets. Let’s figure out what that looks like for AI.