- The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.
- The deadline for prohibiting certain AI systems and ensuring sufficient technology literacy among staff passed on Sunday.
- Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenue, whichever is higher, for breaches of the EU AI Act.
On Sunday, the European Union officially began enforcing its landmark artificial intelligence law, paving the way for tough penalties and potentially large fines for violators.
The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.
The deadline for prohibiting certain artificial intelligence systems and ensuring staff members have sufficient technology literacy passed on Sunday.
That means companies that don’t comply with the regulations are now subject to penalties.
The AI Act bans certain AI systems that it deems to pose an “unacceptable risk” to citizens.
Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, as well as “manipulative” AI tools.
Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenue, whichever is higher, for breaches of the EU AI Act.
The size of the penalty will depend on the infringement and on the size of the company being fined.
That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law, under which businesses face fines of up to 20 million euros, or 4% of their annual global turnover, for violations.
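To make the “whichever is higher” comparison concrete, here is a minimal illustrative sketch of the two fine ceilings. The function names and the revenue figure are hypothetical, chosen for illustration rather than drawn from any regulator’s guidance:

```python
def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    """EU AI Act ceiling: 35 million euros or 7% of global
    annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)


def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """GDPR ceiling: 20 million euros or 4% of annual global
    turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)


# Hypothetical company with 1 billion euros in annual revenue:
revenue = 1_000_000_000
print(ai_act_max_fine(revenue))  # 70000000.0 -> 7% exceeds the 35M floor
print(gdpr_max_fine(revenue))    # 40000000.0 -> 4% exceeds the 20M floor
```

For any company above a 500-million-euro turnover, the percentage-based cap is what binds, which is why the AI Act’s ceiling outstrips the GDPR’s for large firms.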
Not perfect, but “sorely needed”
It’s important to note that the AI Act isn’t yet fully in effect; Sunday’s deadline is only the first in a long line of upcoming milestones.
Tasos Stampelos, head of EU public policy and government relations at Mozilla, previously told CNBC that while it’s “not perfect,” the EU’s AI Act is “sorely needed.”
“It’s very important to recognize that the AI Act is predominantly product safety legislation,” Stampelos said in a CNBC-moderated panel in November.
“With product safety legislation, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act,” he said.
“Right now, compliance will depend on standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, which will actually stipulate what compliance looks like,” Stampelos added.
In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.
The second draft included exemptions for providers of certain open-source AI models, as well as a requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.
Setting the global standard?
A number of technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry that it could stifle innovation.
In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he’s “really concerned” about Europe’s focus on regulating AI.
“Our ambition seems to be limited to being good regulators,” Constantijn said. “It’s good to have guardrails. We want to provide market predictability, clarity and other things. But it’s very hard to do that in such a fast-moving space.”
Still, some think that having clear rules for AI could give Europe a leadership advantage.
“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.
“The EU AI Act’s requirements around bias detection, regular risk assessments and human oversight aren’t limiting innovation; they’re defining what good looks like,” he added.