
E.U. Agrees on Landmark Artificial Intelligence Rules

The European Union has finalized the AI Act, a comprehensive legal framework governing artificial intelligence. 

This Act introduces a categorization system for AI based on risk levels: minimal, limited, high, and unacceptable. High-risk AI systems are subject to stringent requirements for risk management, transparency, and human oversight.
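As a rough illustration of that tiered structure, the sketch below models the four risk categories and some example obligations. The category names follow the Act, but the obligation lists and the helper function are simplified assumptions for demonstration, not a restatement of the legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act (simplified)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Illustrative, non-exhaustive obligations per tier -- an assumption for
# demonstration purposes, not legal guidance.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.HIGH: ["risk management", "transparency", "human oversight"],
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
# ['risk management', 'transparency', 'human oversight']
```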
General-purpose AI systems (GPAIs), such as LLMs and multimodal models, must provide technical documentation and summaries of their training data, and must comply with EU copyright law.
For GPAIs that pose systemic risk, additional obligations include model evaluations, systemic-risk assessments, adversarial testing, and reporting on cybersecurity and energy efficiency. Non-compliance can trigger penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
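As a purely arithmetic illustration of how that cap works: the €35 million figure and the 7% share come from the Act, while the company turnover below is a made-up value used only to show that the ceiling is the greater of the two amounts.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on AI Act fines for the most serious violations:
    the greater of a fixed EUR 35 million or 7% of global annual turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)


# Hypothetical example: a firm with EUR 2 billion in global annual turnover
# faces a ceiling of EUR 140 million, since 7% exceeds the fixed EUR 35M cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```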
Open-source AI models receive broad exemptions, a potential advantage for companies like Meta and European startups. However, there are concerns about the Act’s impact on innovation within Europe’s AI sector.
How the Act is implemented will be decisive, particularly for smaller companies, in keeping certification processes from becoming burdensome. Expected to take effect no earlier than 2025, the Act establishes the EU as a pivotal player in AI regulation, shaping the global landscape of AI development and application.

