UN passes its first global resolution on artificial intelligence


The resolution encourages nations to commit to protecting human rights, safeguarding personal data, and monitoring AI for risks.

The UN General Assembly has adopted a resolution on AI, the latest in a series of government efforts around the world aimed at shaping how the technology is developed and used.

On March 21, the resolution, proposed by the US and co-sponsored by 123 countries including China, was adopted without a dissenting vote among the 193 member states of the United Nations.

The resolution emphasizes protecting personal data, monitoring AI for potential dangers, and upholding human rights.

Although the resolution is not legally binding, many worry that AI could have harmful effects, such as undermining democratic processes, enabling fraud, or causing massive job losses. According to the resolution:

“Improper or malicious design, development, deployment and use of artificial intelligence systems… pose risks that could… undermine the protection, promotion and enjoyment of human rights and fundamental freedoms.”

General Assembly resolutions, unlike those of the Security Council, do not carry the force of law, but they do reflect global opinion. This resolution calls on countries and organizations to work together on regulations that ensure AI systems are safe.

The resolution also aims to give less developed countries a voice in discussions about artificial intelligence (AI) by closing the digital divide between them and more developed nations.

Additionally, it aspires to give developing nations the know-how to take advantage of AI's benefits in areas including agricultural assistance, workforce development, disease detection, and flood prediction.

In November, the US, UK, and more than a dozen other countries reached a detailed international agreement laying out steps to protect AI systems from bad actors. Under the pact, tech firms should build AI systems that are secure by design.

The resolution expressly warns against the “improper or malicious design, development, deployment and use of artificial intelligence systems” in the absence of sufficient safeguards or in violation of international law.

Major tech companies, for their part, broadly agree that AI regulation is necessary, while lobbying to shape any rules to their benefit.

Meanwhile, on March 13, European Union lawmakers gave final approval to the world's first comprehensive AI legislation. The rules are expected to take effect in May or June, pending final procedural steps.

Biometric surveillance, social scoring, predictive policing, “emotion recognition,” and untargeted facial recognition systems are among the technologies the European Union has banned.

In October, the White House issued an executive order with the dual goals of strengthening national security and reducing the risks that artificial intelligence (AI) poses to consumers, workers, and minorities.

