Artificial Intelligence Regulation Needs Private Markets



It seems the White House wants to ramp up America's artificial intelligence (AI) dominance. Earlier this month, the U.S. Office of Management and Budget released its "Guidance for Regulation of Artificial Intelligence Applications," directing federal agencies to oversee AI's development in a way that protects innovation without making the public wary. The principles' noble aims respond to the need for a coherent American vision for AI development, one built on transparency, public participation and interagency coordination. But the government is missing something key.

For technological innovation to flourish, regulators have to be innovative too. Enter: Regulatory markets.


A recent paper by Jack Clark and Gillian Hadfield at OpenAI proposes the idea of regulatory markets, in which competitive private regulators would take on the role traditionally held by legislators and government agencies. Artificial intelligence is advancing at breathtaking speed, so regulators are bound to have a hard time both foreseeing all the risks arising from its use and monitoring private-sector activity. That's why the government should focus on setting objectives for aligning AI with society's values, enabling private actors to do the rest.

Regulators in government, as Clark and Hadfield note, need to be technically competent enough to understand the field they're regulating, and it's often difficult for agencies with limited resources to attract enough talent. The result is often regulatory capture by industry and a lack of effective oversight. Given the global scale of AI's impact, the risk of capture is compounded, as regulators find it harder to monitor companies' cross-border operations. In this environment, it's difficult to craft legislation that regulates AI without hampering innovation.

On the other side lies the risk of voluntary standards and industry self-regulation, an approach specifically mentioned in the White House's guiding principles on AI. The lesson of the two Boeing 737 Max crashes is that self-governance has its limits. The Federal Aviation Administration (FAA) delegated much of its oversight authority to Boeing itself, and warning signs of the company's poor practices had been apparent since at least 2011, yet little was done. This approach should be avoided for a technology with implications for every industry, from health care and finance to national security.

Regulatory markets would resolve these concerns. Rather than regulating companies directly, the government would license regulatory corporations that compete to provide high-quality regulatory services. Treating regulation as a service would allow these corporations to operate internationally and to secure more flexible financing by charging fees and collecting fines, giving them the capacity to monitor corporate AI activity effectively. Competition among regulatory services would drive the development of transparent standards for AI.

This would free governments from the burden of monitoring companies directly, allowing them to focus their energies on designing effective regulatory markets and setting high-level objectives for the AI industry. That work would involve crafting licensing requirements for regulators, maintaining competition among them and requiring companies that use AI to subscribe to a regulator.

A competitive system of regulators would be easier for the government to monitor than the entire breadth of the AI industry, as the regulators' operations would be defined by government mandate. Thanks to their freedom to operate across borders, private regulators would also be more privy to companies' information than the government ever could be, and thus more responsive to how those companies develop and deploy artificial intelligence.

Creating global markets and recruiting top talent, however, are not enough to make private regulators effective. Clark and Hadfield designed their proposal in light of the tragedies of the financial crisis and the Boeing 737 Max crashes. Their objectives were to increase regulators' capacity to keep an evolving, globalized industry in check while avoiding the pitfalls that come from too much industry control.

The lesson of the private credit rating agencies whose inflated ratings precipitated the global financial crisis is that government regulators need teeth in order to prevent tragedy. While regulating the regulators is, in theory, an easier task, theory does not guarantee it will be done in practice. Regulatory exemptions for credit agencies and their consolidation before the financial crisis allowed them to operate unchecked, with disastrous results.

To guard against these risks, governments must ensure competition among regulatory companies through the threat of license removal should they fail in their duties, and must carefully craft the mechanisms for their licensing and funding. This would enable regulatory markets for AI to act as true markets, unlike what happened with the credit agencies.

A regulatory market approach enables the dynamism needed for AI to flourish in a way consistent with safety and public trust. The speed of private innovation should not make governments reluctant to act.

Instead, policy needs to get innovative too.

Ryan Khurana is executive director at the Institute for Advancing Prosperity and a technology policy fellow at Young Voices.
