Artificial intelligence (AI) combines elements of computer science and engineering to build intelligent computer programs that help solve global problems. AI works by classifying large volumes of data into actionable information through complex algorithms. Although some have argued that AI is still in its infancy, it is already being applied across multiple sectors, for instance in expert systems, speech recognition, natural language processing, and machine learning.
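The idea of classifying large volumes of data into actionable categories can be illustrated with a minimal sketch: a nearest-neighbour classifier in plain Python. The data points and labels below are invented for illustration; real systems learn from far larger datasets with far more complex algorithms.

```python
import math

def classify(point, examples, k=3):
    """Label a new point by majority vote among its k nearest labelled examples.

    `examples` is a list of ((x, y), label) pairs -- toy stand-ins for the
    large labelled datasets that real AI systems learn from.
    """
    by_distance = sorted(examples, key=lambda ex: math.dist(point, ex[0]))
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

# Invented toy data: two clusters of points with known labels.
examples = [
    ((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"), ((0.9, 1.1), "spam"),
    ((5.0, 5.0), "ham"), ((5.2, 4.8), "ham"), ((4.9, 5.1), "ham"),
]

print(classify((1.1, 1.0), examples))  # lies close to the first cluster
```

The principle scales: replace the toy tuples with millions of records and the distance vote with a learned model, and you have the "data in, actionable label out" pattern the essay describes.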
AI's potential across multiple sectors has raised demand for its use and brought great optimism about its ability to substantially improve working processes and enhance human work, fueling an explosion in adoption. In the health sector, for instance, experts continue to test and apply AI in administrative duties, documentation, patient monitoring, medical device automation, and image analysis.
Artificial Intelligence (AI) Regulation Debate
The surge in the adoption of AI has sparked heated debate over the wisdom of introducing regulations to govern its use. Proponents of regulation argue that, left unregulated, AI could end up working against humanity rather than for its greater prosperity. One prominent voice of concern is Microsoft co-founder Bill Gates, who has raised alarms about "superintelligence" and expressed puzzlement that others are not concerned about the issue; Tesla CEO Elon Musk went further, equating the unchecked development of artificial intelligence to "summoning the demon." Proponents across the spectrum have continued to make the case for regulation: there is no telling the lengths to which the designers of these technologies could go in using anonymized data to drive their own agendas or for personal gain.
Opponents of AI regulation, by contrast, argue that it would be impossible to regulate every aspect of AI that affects human life, and they note that lawmakers have generally been unsuccessful at regulating digital technologies in the past. A regulatory regime that tried to cover all uses of AI, they contend, would have to be impossibly broad in scope: it would make no sense to apply the same rules to facial recognition software as to smart refrigerators that order groceries based on consumption patterns. Instead, opponents of regulation propose an incremental strategy in which issues arising from the use of AI are addressed as they emerge, with regulatory frameworks adopted to match the concerns of the time.
Opponents of regulation have equally argued that regulating AI could stifle growth, reducing the prospects of the technology ever achieving its full potential. AI technology experts such as Alex Loizou have actively opposed any form of AI regulation before the technology can be fully understood. As a solution, he has called on legislators to first give the technology time to flourish and evolve, so that all players have a good understanding of it before discussing ways of regulating it.
Emerging Issues regarding Unregulated AI
At the core of the debate over whether to regulate AI is the fact that the technology relies on large volumes of data. Proponents of regulation argue that because data is not tangible property, it can easily be misused if it falls into the wrong hands, interfering with individual privacy rights, database rights, copyright, and confidentiality in many ways. Already, there are several instances of AI applications gone awry, causing serious harm to the people affected.
According to an article in The Guardian, the application of AI has not always yielded the desired outcomes. For instance, overreliance on AI-driven facial recognition and screening systems led to more than 1,000 airline travelers being wrongly flagged. In one case, an American Airlines pilot was detained at least 80 times in the course of his work because his name resembled that of a terrorist leader. In another, black contestants in a beauty contest were denied any win because the AI used to pick the winners had been trained predominantly on images of white women.
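The pilot's case shows how naive fuzzy matching against a watchlist can repeatedly flag an innocent person. A minimal sketch of the failure mode, using Python's standard-library string similarity; the names and the threshold here are invented for illustration, not drawn from any real screening system:

```python
from difflib import SequenceMatcher

def is_flagged(passenger, watchlist, threshold=0.8):
    """Flag a passenger whose name is 'similar enough' to any watchlist entry.

    A crude similarity ratio stands in for the opaque matching logic of a
    real screening system; the 0.8 threshold is an arbitrary assumption.
    """
    return any(
        SequenceMatcher(None, passenger.lower(), entry.lower()).ratio() >= threshold
        for entry in watchlist
    )

# Invented names: an innocent traveller whose name nearly matches an entry.
watchlist = ["John Michael Smith"]
print(is_flagged("Jon Michael Smith", watchlist))  # near-identical spelling: flagged
print(is_flagged("Maria Gonzalez", watchlist))     # dissimilar name: not flagged
```

Every trip, the same near-match fires again: without a human-review or whitelisting mechanism, the system has no way to learn that this particular traveller is not the person on the list.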
Regulatory Response to Unregulated AI
The European Union has been among the quickest to move to regulate the use and application of AI in order to protect citizens of its member states from specific harmful AI-enabled practices. The EU proposes to regulate the digital sector through the General Data Protection Regulation (GDPR), the proposed Digital Services Act, the proposed Data Governance Act, and a proposed Artificial Intelligence Act. The proposed AI regulation introduces a four-tier system of risk for allowing or prohibiting the use of AI, and broadly classifies AI systems as prohibited or highly regulated. A system is deemed highly regulated ("high-risk") if it poses a high risk to human health and safety or to fundamental rights.
AI systems are prohibited if they contravene EU values or present an unacceptable risk to the fundamental rights of EU citizens. It is noteworthy that the recommendations in the proposed regulation stem from the understanding that some algorithms deployed in AI applications have direct consequences for people's lives and decisions. AI is now being used, for instance, to diagnose medical conditions, approve loans, shortlist job candidates, and recommend court penalties. In such cases, as in many others, the impact of AI use is enormous, and this makes regulation imperative.
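The tiered structure described above can be pictured as a simple lookup from a system's use case to its risk tier. The tier names below follow the proposal's risk categories, but the example systems and their mapping are illustrative assumptions, not the regulation's actual annexes:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"        # contravenes EU values / fundamental rights
    HIGH = "highly regulated"          # high risk to health, safety, or rights
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

# Illustrative mapping only -- the real proposal defines categories in its annexes.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "loan approval scoring": RiskTier.HIGH,
    "court sentencing recommendation": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "smart refrigerator grocery ordering": RiskTier.MINIMAL,
}

def tier_for(system: str) -> RiskTier:
    """Look up the (assumed) risk tier for an AI system description."""
    return EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)

print(tier_for("loan approval scoring").value)
```

The point of the tiers is proportionality: the loan-scoring and sentencing systems mentioned above land in the high-risk tier and attract heavy obligations, while the smart refrigerator cited by regulation's opponents sits in the minimal tier and is left largely alone.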
In regulating AI, the EU hopes to:
- Establish, implement, document, and maintain a risk management system
- Ensure transparency and the provision of information to end-users of AI technologies
- Provide a framework for data management and governance
- Ensure that AI systems undergo a conformity assessment procedure before they are released to the market
- Promptly correct instances of AI system non-compliance with existing AI regulation
Benefits of Regulating AI
There is little doubt that regulating AI creates confidence in the technologies being developed, in large part because regulation helps safeguard and protect fundamental human rights. The use of AI has, in several instances, been seen to breach individuals' rights on the grounds of race, religion, and sex. Regulation is expected to bring fairness and reason to the design of technologies intended to improve human lives.
Regulation equally helps ensure that infringements on fundamental human rights are kept at bay as AI is applied across sectors. It may, for example, protect defendants in the criminal justice system from sentencing based solely on machine-learning outputs, ensuring that bad decisions made by machines are not used to deny them their fundamental rights, and it may protect individuals from unlawful detention based on a flawed facial recognition system. In the long term, such frameworks are expected to create a platform for accountable AI systems that are above reproach and that protect users and the general public from the misuse or mishandling of their data.
Should AI be Regulated or Not?
It is apparent that artificial intelligence technologies now touch almost every sphere of our lives. AI can improve our lives in ways we never thought possible, by explaining the reasoning behind certain decisions or events, making accurate predictions, and lessening human workloads. It is equally clear, however, that AI technologies can disrupt human lives and infringe on fundamental rights. It is therefore reasonable to suggest that AI be regulated to minimize the risk to the fundamental rights of all users, but regulation should be approached in a manner that makes sense and does not discourage the use of these technologies. The law should create an enabling framework for responsible AI use that is conscious of the risks involved in applying AI technologies. In the long term, this approach is anticipated to safeguard both the innovators who design and roll out these technologies and their end-users.