
A.I. and Privacy Concerns Get White House to Embrace Global Cooperation


Two hallmarks of American economic policy under President Trump are a reflexive aversion to regulation and go-it-alone nationalism.

But in technology policy, that stance is changing.

In September, the Trump administration abandoned its hands-off approach and began working closely with the 36-nation Organization for Economic Cooperation and Development to create international guidelines for the design and use of artificial intelligence.

The administration has also started to discuss a new law to protect privacy in the digital age, seeking consensus domestically and common ground internationally. It has fielded more than 200 public-comment filings from advocacy groups, corporations and individuals.

“There is a real desire in the United States to see leadership at the federal level,” said David Redl, a senior Commerce Department official helping to guide the administration’s privacy effort.

On both issues, the administration has “moved from indifference to engagement,” said Julie Brill, a former commissioner at the Federal Trade Commission, who now helps oversee regulatory affairs for Microsoft. “It certainly has been welcome.”

The shift is a pragmatic recognition that regulations that will affect the nation’s tech industry and its citizens are coming, and that if federal officials want a say in them, they must participate.

China — which is not a member of the Organization for Economic Cooperation and Development — has gone its own way, using personal data and artificial intelligence as tools of a government-backed surveillance state. If the United States were another digital island, experts warn, there would be a real danger of a fragmented global marketplace.

Privacy can be seen as the first step toward regulating artificial intelligence more broadly. Vast volumes of data, often personal information, are the fuel of modern A.I. systems.

Several American states, led by California, have passed or proposed privacy laws, threatening to fragment the marketplace in the United States, too. They are following Europe, where a sweeping privacy law took effect in May, harnessing the popular backlash against American tech giants like Facebook and Google.

“Europe is saying, ‘We’re in charge,’ defining the global rules in the next iteration of the digital economy,” said Daniel Weitzner, a researcher at the Massachusetts Institute of Technology who was a policy adviser in the Obama administration.

The new European privacy law, known as the General Data Protection Regulation, lets people request their data online, restricts how businesses obtain and handle information, and opens a door to class-action-style lawsuits and huge fines.

Just how strict or effective enforcement will be remains to be seen. But other nations are adopting similar rules, and tech companies are retooling their data-handling software to comply. Last year, Europe and Japan agreed to allow personal data to flow freely between the two economies, since Japan’s rules were deemed the equivalent of Europe’s.

Mounir Mahjoubi, the French secretary of state for digital affairs, pointed to the European-Japanese pact as the “first impact worldwide” of the European standard.

Mr. Redl of the Commerce Department said the administration had been prompted to seek a new privacy law by action in Europe and state legislatures. Though it has solicited public comments, the administration has not yet drafted a proposed law, and passage would require bipartisan support in Congress.

But the goal, Mr. Redl said, is a federal law that will “harmonize” data privacy rules in the United States and mesh enough with the European standard to avoid a more splintered marketplace.

One sign of the administration’s more cosmopolitan approach to technology policy was a small, private forum on “industries of the future” at the White House in early December. Most of the guests were tech company leaders, including Sundar Pichai of Google, Satya Nadella of Microsoft and Ginni Rometty of IBM.

At that session, White House technology advisers discussed the Organization for Economic Cooperation and Development’s artificial intelligence guidelines and the importance of shaping the outcome. That impressed the tech executives as a newfound embrace of international engagement, according to a person briefed on the meeting, who spoke on the condition of anonymity.

The White House confirmed that its side had brought up the organization’s guidelines at the meeting. In February, President Trump signed an executive order on artificial intelligence that called for not only more investment but also regulation to “foster public trust in A.I. systems.”

In a statement last week, Michael Kratsios, deputy assistant to the president for technology policy, said, “We’re focused on promoting an international environment that supports A.I. research and development and ensures the technology is developed in a manner aligned with our nation’s core civil liberties and freedoms.”

The administration’s emphatic cooperation on artificial intelligence guidelines contrasts with an earlier arm’s-length wariness of the Organization for Economic Cooperation and Development. In 2017, for example, the administration, given its nationalist trade agenda, insisted that the term “free trade” not appear in a ministerial statement, according to two people involved in drafting the language, who would speak only on the condition of anonymity.

The organization’s A.I. initiative has involved meetings and presentations around the world. Groups from engineers to human rights activists are represented on the advisory panels that help draft recommendations, which are tentatively scheduled for ministerial approval in May.

Their guidelines are “soft law” — suggestions, not requirements. But the Paris-based organization has a track record of influencing global policy.

The most recent, eight-page draft lays out rights and responsibilities. Those responsible, it says, include any individual or organization that makes or operates A.I. technology — and these “A.I. actors” should do systematic risk assessments of “privacy, digital security, safety and bias.”

People affected by an A.I.-generated prediction or recommendation, it says, should have the right to challenge the outcome “based on plain and easy-to-understand information” on how an automated decision was made.

The draft recommendations call for global A.I. standards that are “trustworthy” and allow for data to flow fairly freely across borders so that it is “interoperable.” The latter is a vital point for the American side. National laws and approaches will differ, they say, but they should not hobble the global data economy.

No one is entirely satisfied in the collective give-and-take of developing guidelines. Marc Rotenberg, executive director of the Electronic Privacy Information Center, a nonprofit digital rights research and advocacy group, is one of the expert advisers. He is a champion of more forceful guidelines, such as a prohibition on Chinese-style scoring of individuals based on their personal information and online behavior.

But Mr. Rotenberg described the Organization for Economic Cooperation and Development’s effort as “the right synthesis to pursue,” combining “economic development and a fundamental human right to privacy.” The guidelines, he said, should be “a very important policy framework.”
