EU calls for stronger artificial intelligence regulation to protect the future


Image © Parradee Kietsirikul | iStock

The EU is urging organisations to participate in a global initiative aimed at implementing artificial intelligence regulation, in an effort to control AI programmes such as ChatGPT.

ChatGPT, an AI chatbot released in November 2022, can write essays, engage in philosophical debates, and produce computer code, but what are the dangers?

As the development of artificial intelligence (AI) continues to outpace the enactment of legislation to govern its use, the European Commission is taking the lead in launching a collaborative effort with the United States.

The aim is to establish a voluntary code of conduct that companies can adopt, bridging the gap between AI capabilities and regulatory measures.

Voluntary code of conduct proposed at US/EU Trade and Technology Council

During this week’s meeting of the US/EU Trade and Technology Council (TTC), Margrethe Vestager, the executive vice president of the European Commission, presented the proposal.

“We’re talking about technology that develops by the month so what we have concluded here at this TTC is that we should take an initiative to get as many other countries on board on an AI code of conduct for businesses voluntarily to sign up,” she remarked.

Mitigating risks and enacting urgent artificial intelligence regulation

While generative AI tools promise significant economic benefits, concerns are growing about the risks they pose to democracy when used for misinformation and automated decision-making. Leading AI experts have also underscored the need to treat mitigating the threat of AI-induced human extinction as a global priority.

Since the launch of ChatGPT, leading US technology companies such as Google and Microsoft have introduced their own generative AI services, marking the beginning of a new era in digital innovation. The pace of government legislation to address the technology's potential negative impacts has been sluggish, however: even if a deal is reached within the year, implementation may take another two to three years, according to Vestager.

In the interim, Vestager proposes an international agreement among the G7 countries and invited partners such as India and Indonesia. Such an agreement could prove effective if companies in these nations, which together represent about one-third of the global population, commit to a code of conduct while formal artificial intelligence regulation is enacted.

Image © metamorworks | iStock

Global collaboration for artificial intelligence regulation and codes of conduct

During the fourth ministerial meeting of the TTC in Luleå, Sweden, EU executive Vice President Vestager and US Secretary of State Antony Blinken acknowledged the economic opportunities and societal risks associated with AI technologies.

They discussed the implementation of a joint roadmap for trustworthy AI and risk management, emphasizing the need for voluntary codes of conduct. The TTC has established expert groups to identify standards and tools for trustworthy AI, which now includes a focus on generative AI systems.

Vestager aims to present a draft code of conduct with industry input in the coming weeks, seeking support from countries like Canada, the UK, Japan, and India.

Private sector representatives also stressed the need for standards and evaluation methods to regulate AI effectively, emphasising the importance of voluntary collaboration among the EU, US, G7, and other countries to expedite progress in this area.
