
Europe’s New AI Rules Could Go Global–Here’s What That Will Mean

As applications of artificial intelligence become more advanced, policymakers around the world are grappling with the possibility of unintended consequences: not only a potential existential danger to humanity, but also the more immediate risks of job losses, discrimination and copyright infringement.

The European Union, which represents 450 million citizens throughout Europe, is a pioneer in this regulatory race. Last Friday member countries signed off on the AI Act, which had been agreed on last December by the European Council (a group of EU leaders that shapes the union’s political agenda) and the European Parliament. The bill is expected to become law this year and would impose broad limits on companies whose AI tools are used in Europe, potentially restricting how these tools are developed and used around the world. Since the law’s announcement, however, its text has shifted amid internal political disputes and lobbying, according to a recently leaked draft. And some experts remain concerned about what appears to have been left out.

The AI Act is one of many recent EU laws addressing technological issues, says Catalina Goanta, an associate professor of private law and technology at Utrecht University in the Netherlands. The law bans the use of emotion-recognition software in workplaces and schools, prohibits racist and discriminatory profiling systems, and lays out a strict ethical framework that companies must adhere to when building AI tools.


To be effective, such regulations must be applied across industries as a one-size-fits-all solution, Goanta explains, a tall order in the fast-moving tech sector, where new products drop weekly. “The struggle has been to find a strong balance” between fostering growth and innovation and putting safeguards in place to protect people, she says.

On January 22 a draft of the AI Act leaked by Luca Bertuzzi, a journalist with the European media network Euractiv, revealed how the wording of the law has evolved as it has made its way through the EU bureaucracy. Most notably, the text now contains an exemption for open-source AI models: systems whose source code and training data are freely available. Although these tools, which include Meta’s LLaMA large language model, operate more transparently than “black box” systems such as OpenAI’s GPT-4, experts note that they are still capable of causing harm. Other changes include what Aleksandr Tiulkanov, who previously worked on AI at the Council of Europe, called a “potentially controversial” modification of the definition of AI systems covered by the regulation.

While these changes may seem small, Goanta says they are significant. “The level of complexity of the changes will require very detailed scrutiny” in the coming weeks and months, she says.

“The AI Act is, in essence, an adaptation of EU product regulation,” says Michael Veale, an associate professor of law at University College London. Like other EU consumer-protection laws regulating the safety of toys or food, the AI Act designates certain uses (such as medical imaging and facial recognition at border checkpoints) as “high risk” and holds such AI systems to special requirements. Developers will need to demonstrate to regulators that they are using high-quality, relevant data and have systems in place to manage risks, Veale says.

Essentially, any application that could cause “potential harm to public interests such as health, security, fundamental rights, democracy, etc.” can be considered high risk, Goanta says. But some researchers have argued that the language the law uses to define “high risk” could be interpreted too broadly. Claudio Novelli, who studies digital ethics and law at the University of Bologna, Italy, is concerned that this could deter AI companies from participating in the EU market and could stifle innovation. “Our criticism is not directed at the risk-based approach per se but at the methodology used to measure risk,” he says, although he acknowledges that the current text of the law is an improvement over the original.

Aside from high-risk uses, so-called general-purpose AI providers – companies behind generative AI tools that, like ChatGPT, have many potential applications – will also be subject to additional obligations. They will need to demonstrate periodically that their models’ outputs behave as intended rather than amplifying bias and to test how vulnerable their systems are to hackers and other bad actors. While recent international summits and declarations have identified these risks of general-purpose models, the EU’s AI Act goes further, says Connor Dunlop, European public policy lead at the Ada Lovelace Institute. “Therefore, the AI Act represents the first attempt to move beyond identifying risks and toward mitigating those risks,” he says.

Once the AI Act is adopted, a countdown to its implementation will begin: practices prohibited by the law must stop within six months. Obligations for general-purpose AI will come into force within a year. Anyone developing high-risk AI will have 24 months to comply, while some specialized high-risk uses, such as medical devices that incorporate AI, will have 36 months.

It is not yet clear how the law will be enforced. It establishes an EU AI Office to support member countries, but that office’s exact role has yet to be determined. Veale predicts that member countries will delegate much of the enforcement work to private standards bodies, which some experts worry will not police the rules proactively. “In practice, these requirements will be developed and determined by private standards bodies, which are not very inclusive or accountable,” he says. “It’s total self-certification.”

Whatever enforcement mechanism is put in place “might help provide some social scrutiny,” Veale adds, “but I suspect actual enforcement of the regime will be low.”

Dunlop is also concerned about the extent to which the law will actually be enforced. He points to the EU’s General Data Protection Regulation (GDPR), which enshrines the privacy rights of Internet users, as a cautionary precedent. “Enforcement of other landmark laws, such as the GDPR, has been patchy and slow to get underway, but is now improving,” he says. But “the urgency of the AI challenge means that the EU must urgently turn to implementation and compliance.”

Still, AI companies around the world will have to adapt to EU rules if they want their tools to be used in Europe. In the case of the GDPR, many international companies chose to operate to EU standards globally rather than run multiple versions of their tools across jurisdictions. (This is why many websites ask visitors to set cookie preferences, even outside the EU.)

The new legislation “is important for U.S. companies that want to introduce AI products in the EU, whether for public or private use,” Goanta says. “But it will be interesting to see whether there will be a ‘Brussels effect’: Will U.S. companies adapt to EU rules and increase public-interest protection in their operations overall?”

U.S. regulators are currently taking a “wait-and-see” approach, Novelli says. While the EU is happy to highlight its willingness to crack down on Big Tech, the U.S. is more cautious about deterring investors and innovation. “It is plausible that the United States is monitoring the impact of the EU AI Act on the AI market and stakeholder reactions,” Novelli says, “with the potential goal of capitalizing on any negative feedback and securing a competitive advantage.”
