Back in June of 2023, I made the observation that AI regulation should begin with mandated disclosures. But what should be disclosed, by whom, to whom, and why? As the debates about AI safety and fairness became more heated, I worried that ill-informed regulations would hamper the growth of the AI industry. So I made the case that
“Regulators should start by formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems…. Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.”
My thinking was rooted in a historical analogy. After the 1929 stock market crash, the term “generally accepted accounting principles” was proposed by the American Institute of Accountants (AIA), a private organization, to bring regularity to what had been a kind of “wild west” of misrepresentation of the financial condition of the companies whose securities had been offered to the public. When the AIA identified best practices for financial reporting, they didn’t make up new rules from whole cloth, but rather standardized accounting practices that had been developed and used by businesses over the course of centuries to understand, control, and improve their finances.
This was followed by government action that enshrined the notion into law. The Securities Act of 1933 (also known as the “Truth in Securities Act”) and the Securities Exchange Act of 1934 required companies selling or intending to sell stock to the public to register with the Securities and Exchange Commission and mandated that the financial condition of the companies represented by those securities be truthfully reported in a standard format on a regular basis. What these laws did not do was to specify the exact details of those disclosures. They left that to the accountants.
Through a variety of organizations and over a period of decades, the accounting profession refined and standardized what they believed to be best practices, until in 1973, this task was centralized in a non-profit organization called the Financial Accounting Standards Board. The FASB is blessed by the SEC as the source of Generally Accepted Accounting Principles, but the SEC itself does not define those principles.
These laws effectively created the conditions for what Gillian Hadfield and Jack Clark (applying this idea prospectively to AI) call “regulatory markets.” Once securities law mandated registration and regular, standardized reporting for companies that wished to issue securities to the public, banks came to require the same standardized reporting when issuing loans, as did investors when considering injecting capital or evaluating whether to buy a smaller company or a line of business from another. Standardized accounting principles then also shaped income tax reporting for businesses and individuals, and as Congress changed the tax code, tax reporting requirements in turn shaped GAAP.
What makes this a regulatory market is that the government itself does not do all the monitoring of the accuracy of the required reporting. Except in the case of tax auditing, compliance is delegated to an ecosystem of auditors and tax professionals who are given privileged access to a company’s finances in order to verify and attest that they are correct. But more importantly, those disclosures that are required to be made public are scrutinized by “the market” of investors, who use them to shape their decisions about whether to invest in or short a stock. Government regulations that thoughtfully mandate disclosures open to the scrutiny of the market will, I suspect, provide far more extensive oversight than any kind of centralized AI auditing agency. (I will write more about regulatory markets in a future post.)
There are a few important lessons here for the development of what might one day be called “Generally Accepted AI Management Principles”:
The SEC requirement for companies to register and provide standardized information is not an outlier. Registration is an essential component of virtually any regulatory regime. Governments not only require businesses to register in order to perform many activities, they also require individuals to register and provide information in order to vote, attend school, get a job, pay taxes, receive government benefits, drive a car, or purchase a weapon. Businesses likewise require individuals and other businesses to register in order to use services such as email, phone and text messaging, and social media, or to sell apps in an app store or products on Amazon.
Registration should also be required for AIs. I joined Gillian Hadfield and Tino Cuéllar in making the case that registration is a good starting point for government intervention in AI markets because it provides a foundation for regulation when it becomes clear what exactly needs regulating, but it does not establish a premature regulatory straitjacket. But there’s more to it than that. I envision a world of cooperating AIs, not one big centralized winner-takes-all AI, and in that world, AIs will need to be registered in much the same way that web sites need to be registered (e.g., have an IP address and a domain name) in order to participate in the world wide web. It’s not an arbitrary requirement. It is part of what makes a cooperative system work. I will have more to say on that in the future.
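To make the analogy to web registration concrete, here is a purely illustrative sketch of what a registry entry for an AI system might contain. None of these fields come from any actual proposal or standards body; every name below is hypothetical, chosen only to mirror the domain-name/IP-address parallel in the paragraph above.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIRegistryEntry:
    """Hypothetical registration record for a deployed AI system.

    Loosely analogous to a domain registration: a stable public
    identifier plus enough metadata for other systems (and regulators)
    to find and interact with it. All field names are illustrative,
    not a proposed standard.
    """
    system_id: str     # stable unique identifier, like a domain name
    operator: str      # legal entity responsible for the system
    endpoint: str      # where the system can be reached, like an IP address
    model_family: str  # coarse description of the underlying model
    registered: str    # ISO-8601 registration date

# Example record (all values invented for illustration)
entry = AIRegistryEntry(
    system_id="example-assistant.v1",
    operator="Example AI Corp",
    endpoint="https://api.example.com/v1",
    model_family="large language model",
    registered="2024-07-01",
)
print(asdict(entry)["system_id"])  # → example-assistant.v1
```

The point of even a minimal record like this is that, as with DNS, the identifier itself becomes the hook on which later obligations (reporting, auditing, interoperability) can be hung once it becomes clear what actually needs regulating.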
Private industry needs to get its act together to share best practices, agree on how they should be measured and reported, and create an organization that will formalize those best practices. We think it’s particularly important to see beyond evals that are used for marketing purposes, and to understand what controls are actually in place and being monitored and tweaked regularly as a key part of business operations.
That’s where our AI disclosures project at the SSRC comes in. We’re looking to collect information on best practices at the leading AI firms in order to provide insight into what is being done right, and perhaps more importantly, what is being done wrong or not being done at all. It’s too early to establish the standards themselves. We hope that forward-looking AI practitioners, as well as the auditing community that’s been given privileged access to company efforts to evaluate and improve their AI models, will help us by sharing what they consider to be best practices – not just in the narrow area of AI safety, but in every aspect of business operations. As with the previous generation of technology businesses, we expect that AI businesses will develop various operating metrics that are measured and optimized, guiding management decisions on how best to acquire and serve customers, how to optimize the results from AI models for particular purposes, how to streamline operations, and so on. Some of these will be trade secrets, but many will become common practice. There will need to be a lot of thought and debate to determine which disclosures should be made privately to auditors (including, potentially, government auditors) and which should be made public.
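As a sketch of what a standardized operating-metrics disclosure might look like once such practices converge, consider the following. The categories, metric names, values, and auditor are all invented for illustration; the real taxonomy would have to emerge from the standards process described above.

```python
# Hypothetical standardized disclosure for one reporting period.
# Every name and number here is invented for illustration only.
disclosure = {
    "reporting_period": "2024-Q2",
    "metrics": {
        "safety": {"red_team_findings_resolved_pct": 92.0},
        "reliability": {"api_uptime_pct": 99.9},
        "content_moderation": {"flagged_outputs_reviewed_pct": 100.0},
    },
    "audited_by": "Example Audit LLP",  # placeholder auditor name
}

# Because the fields are standardized, a regulator or investor could
# compare the same metrics across firms, the way GAAP statements are
# compared today.
for category, metrics in disclosure["metrics"].items():
    for name, value in metrics.items():
        print(f"{category}/{name}: {value}")
```

The value is not in any particular metric but in the consistency of the format: comparability across companies and across reporting periods is what made standardized financial reporting useful to markets.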
We believe that, left to themselves, regulators may come up with ideas that sound plausible but don’t actually work. It’s in the best interest of AI companies at all levels to share best practices, so that we can converge on a set of generally accepted practices that will make it easier for regulators, investors, and the public to see which AI developers and service providers are behaving responsibly and which are not. (We’d also love to hear from people who are far more knowledgeable than we are about the ongoing development of accounting standards, as well as any other regulatory regimes that might provide models for AI regulation.)
Eventually, the AI auditing community (which will likely include, but extend far beyond, the financial auditors currently looking to add AI auditing to their portfolios) will need to come together to establish the organization that will develop the standards. As during the early efforts to create accounting standards, there are now multiple overlapping efforts that will eventually have to be brought together. (I’ll write more about this in a future post.)
Ultimately, registration, disclosures, and auditing are key enablers of successful markets. Let’s figure out what that looks like for AI.