Microsoft Leaves OpenAI Board as Senate Probes AI Privacy

As artificial intelligence reshapes the tech landscape, regulators and lawmakers are scrambling to keep pace. Microsoft’s withdrawal from OpenAI’s board, an upcoming Senate hearing on AI privacy, and expert calls for a new regulatory approach highlight the complex challenges facing the AI industry and its overseers.

Microsoft Cuts Ties With OpenAI Board

Microsoft has reportedly pulled the plug on its observer seat on OpenAI’s board as regulators on both sides of the Atlantic turn up the heat on AI partnerships. The tech giant’s legal team claims the seat has served its purpose, providing insights without compromising OpenAI’s independence.

This move comes as the European Commission and U.S. regulators scrutinize the cozy relationship between the two AI powerhouses. While the EU has grudgingly allowed that the observer seat didn’t threaten OpenAI’s autonomy, it is still seeking third-party opinions on the deal.

Microsoft’s retreat from the board, initially secured during OpenAI’s leadership drama last November, seems aimed at dodging regulatory bullets. As AI continues to reshape the tech landscape, this strategic step highlights the tightrope walk Big Tech faces: balancing collaboration and independence under the watchful eyes of global regulators.

The Microsoft-OpenAI partnership, valued at over $10 billion, has been a cornerstone of both companies’ AI strategies. It has allowed Microsoft to integrate cutting-edge AI into its products while providing OpenAI with crucial computing resources. The partnership has yielded high-profile products like ChatGPT and the image generator DALL-E, which have sparked both excitement and concern about AI’s rapid advancement.

Senate Dives Into AI Privacy Concerns

The Senate Commerce Committee is set to tackle the thorny issue of AI-driven privacy concerns in a hearing scheduled for Thursday (July 11).

The U.S., despite being home to the tech giants driving AI innovation, lags behind in privacy legislation. States and other countries are filling the void, creating a patchwork of regulations that is becoming increasingly difficult for companies to navigate.

A bipartisan effort, the American Privacy Rights Act, seemed poised for progress but hit a roadblock last month when House GOP leaders pumped the brakes. The bill aims to give consumers more control over their data, including the ability to opt out of targeted advertising and data transfers.

Thursday’s hearing will feature testimony from legal and tech policy experts, including University of Washington and Mozilla representatives. As AI’s reach expands, pressure is mounting on Congress to act. The question remains: Can lawmakers keep pace with the breakneck speed of technological advancement?

AI Safety and Competition: Regulators Face Tightrope Walk

In the rapidly evolving AI landscape, Brookings Institution fellows Tom Wheeler and Blair Levin are calling for a delicate balancing act from federal regulators. As the Federal Trade Commission (FTC) and Department of Justice (DOJ) ramp up antitrust investigations into AI collaborations, the two experts argue in a Monday (July 8) commentary that fostering both competition and safety is crucial — and achievable.

Wheeler and Levin propose a novel regulatory approach, drawing inspiration from sectors like finance and energy. Their model features three key components: a supervised process for developing evolving safety standards, market incentives that reward companies for exceeding those standards, and rigorous oversight of compliance.

To quell antitrust concerns, the authors point to historical precedents where the government allowed competitor collaborations in the national interest. They suggest the FTC and DOJ issue a joint policy statement, similar to one released for cybersecurity in 2014, clarifying that legitimate AI safety collaborations won’t trigger antitrust alarms.

This push comes amid growing anxiety about AI’s potential risks and the concentration of power among a handful of tech giants. With AI development outpacing traditional regulatory frameworks, Wheeler and Levin argue that a new approach is urgently needed.

Their proposal aims to strike a balance between unleashing AI’s potential and safeguarding public interest. As policymakers grapple with these challenges, the authors’ recommendations could provide a roadmap for nurturing a competitive yet responsible AI ecosystem.
