The European Parliament agreed today to adopt new rules governing the future of Artificial Intelligence. Although the authority of Parliament only extends to EU member states, the goal of this legislation is to have a global impact. European Commission President Ursula von der Leyen hailed the agreement as one in which “European values” are being transposed to a new era.
Previous EU agreements on issues like digital privacy and open science have indeed had major global impacts in recent years. The flow of information in today’s world is so integrated that it is impossible (at least for democracies) to create information regulations constrained by national borders. This cuts both ways: good in the sense that we now have a framework for AI regulation, bad in the sense that previous EU laws of this kind were not thoughtfully developed with full input from the world or from all stakeholder groups. The EU’s General Data Protection Regulation (GDPR), for example, went into effect in 2018 and has since caused enormous disruption in clinical research, because GDPR privacy protections are wholly incompatible with research practices that already have their own privacy protection protocols. (A more innocuous outcome of this legislation has been the appearance of cookie banner notices on most websites, even where they aren’t needed.) Plan S (a product not of the European Parliament but of a coalition of national research funders) has similarly whipsawed the scholarly publishing world for the past five years: it initially imposed a stringent set of publishing requirements on global research that proved unworkable, then withdrew those requirements a few months ago in favor of a completely different, and no more workable, approach.
The EU’s new rules group AI systems into three general categories, depending on whether they pose a minimal (or no) risk to citizen rights and safety, a high risk, or an unacceptable risk. High-risk systems, including but not limited to systems dealing with critical infrastructure, recruitment, border control, and criminal justice, will need to comply with strict requirements for risk mitigation, data quality, user information, human oversight, cybersecurity, and more. AI that poses an unacceptable risk will be banned; this includes systems or applications that manipulate the behavior of minors, allow social scoring by governments and companies, or enable predictive policing or biometric identification. In addition, all AI-generated content (including audio and video) will need to be labeled and marked so that it is machine detectable.
A new European AI Office within the European Commission will supervise the implementation and enforcement of these rules. Fines levied on companies for noncompliance will range from 1.5 percent of their global annual revenue for supplying incorrect information to regulators, to 7 percent of their global annual revenue for producing banned AI applications.
Will this effort succeed? No. This is another example of EU policy overreach. The potential uses of AI are only beginning to unfold; it is impossible to predict where the technology will be most useful, and unethical for the EU (or any other body) to impose strict requirements on fields of research and innovation where it lacks the requisite expertise. It is also fantasy to believe that digital identifiers can henceforth be embedded in every bit of AI-assisted content (if you’ve used an online banking chatbot, for example, that’s AI), and naïve to believe that a single small agency in the EU will be capable of monitoring the global traffic of AI (to say nothing of whether it should even try). Even the fines are curious. China is already using banned AI on a massive scale to surveil its citizens; assuming it were ever called to task for this offense, it would surely be willing to pay a 7 percent tax on the technology companies it relies on for software and hardware. For the worst offenders, this would be not a deterrent but a nuisance tax.
Is AI regulation needed? Maybe. But the first step in answering this question (let alone developing an appropriate policy response) is to bring all stakeholders together and figure this out the right way. The EU’s track record on developing thoughtful science communication policies is not the best right now.
Source: the European Commission's press release