
Open letter: Tech giants put pressure on EU over AI regulation


  • Technology companies such as Meta, Spotify and Ericsson are putting pressure on the EU for looser AI regulation.
  • According to Reuters, these companies and other stakeholders have participated in the development of codes of conduct for future regulations.
  • Experts warn that overly strict regulations could limit innovation, while others stress the need for transparency.

The EU regulation on artificial intelligence has been in the works for a long time. According to the EU itself, it has been in preparation since the European Commission proposed the first legal framework for AI in April 2021. In fact, it has been in the works for so long that one of its main architects, former EU Internal Market Commissioner Thierry Breton, will no longer be in office to see it become fully applicable.

Breton resigned last week after publicly expressing disagreements with European Commission President Ursula von der Leyen, who reportedly did not intend for him to continue as the person responsible for regulating social networks. In addition, although the law on artificial intelligence officially came into force on August 1, it is not yet fully applicable.

US companies put pressure on the EU

Major US technology companies are now exploiting this situation, trying to water down what was conceived as the world’s first comprehensive legislation on artificial intelligence. OpenAI, the company behind ChatGPT, had already pressured the European Union in June 2023 to soften the regulatory requirements of the new law.

This week, pressure on EU regulations has increased even further. Companies such as Meta (parent company of Facebook), Spotify, Ericsson and SAP published an open letter on Monday calling for “more balanced” regulation in the field of artificial intelligence.

Reuters reports that other major technology companies have now joined this pressure. The world’s leading tech companies are making a last-ditch effort to persuade the EU to take a less stringent approach to regulating generative AI.

Until the planned codes of conduct that will accompany the law are finalized, it remains unclear how strictly the rules will apply to general-purpose AI (GPAI) systems such as ChatGPT, and to what extent the companies behind them could face copyright lawsuits and heavy fines.

EU receives nearly 1,000 applications to work on regulations

According to Reuters, the EU has invited companies, researchers and other stakeholders to help draft these codes of conduct – a process that has drawn almost 1,000 applications, an exceptionally high number, an anonymous source confirmed to the news agency.

While the codes, due to take effect at the end of next year, will not be legally binding, they are intended to provide a checklist that companies can use to demonstrate their compliance with the law.

Boniface de Champris, policy director of the Computer and Communications Industry Association (CCIA) in Europe, a non-profit organization that represents companies such as Amazon, Google and Meta, stressed the importance of these codes to Reuters: “The code of conduct is crucial. If we get it right, we can continue to innovate.” However, he also warned: “If it is too restrictive or too specific, it will be very difficult.”

According to Reuters, OpenAI and Google are among the companies that have applied to participate in the development of the codes. Amazon also told the news agency that it was willing to contribute its expertise to the success of the code of conduct.


However, Maximilian Gahntz, AI policy expert at the Mozilla Foundation, expressed concerns that companies are doing everything they can to avoid transparency. “The AI law offers the best opportunity to shed light on this crucial area and open at least part of the black box,” he said.

Gahntz also warned that, as many of the law’s obligations are fleshed out in the codes, care must be taken to ensure that major AI players do not water down important transparency requirements.

This text was translated from Spanish.
