
U.S. creates advisory group to consider AI regulation


The U.S. government has created an artificial intelligence safety advisory group, including AI creators, users, and academics, with the goal of putting some guardrails on AI use and development.

The new U.S. AI Safety Institute Consortium (AISIC), part of the National Institute of Standards and Technology, is tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.

On Thursday, the U.S. Department of Commerce, NIST’s parent agency, announced both the creation of AISIC and a list of more than 200 participating companies and organizations. AISIC’s members include Amazon.com, Carnegie Mellon University, Duke University, the Free Software Foundation, and Visa, along with several major developers of AI tools, including Apple, Google, Microsoft, and OpenAI.
