Regulating Artificial Intelligence

“Artificial Intelligence is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity.” – Fei-Fei Li

Introduction

Artificial Intelligence (AI) is spreading through our day-to-day lives like wildfire. Its use has increased substantially over the last five years, and it now occupies a large sphere of our personal and professional lives. With this growth has also come fear and uncertainty about the emergence of the new technology and its impact on our livelihoods.

As AI applications multiply, so does our dependence on them, and with it the question of how much control we cede to AI and to those who develop it. As the technology grows and evolves, a global race is underway both to utilize it and to bring in regulations that establish control over it. Major economies and global organizations are therefore working on the promulgation of rules to govern AI, and the need for such rules is pressing. Below, we discuss some of the AI regulations proposed recently.

What is AI

Artificial Intelligence is the replication of human intelligence in machines, where they are programmed to mimic human-like thinking and learning abilities. It involves the development of computer systems that can undertake tasks traditionally requiring human intelligence, including comprehension of natural language, pattern recognition, decision-making, problem-solving, and adaptability to new circumstances.

Artificial Intelligence is revolutionizing numerous aspects of our lives, spanning technology, industry, healthcare, and governance. Tools such as ChatGPT, Bard, and Midjourney are replacing older ways of writing content, developing software, managing logistics, and much more. This rapid adoption makes responsible AI implementation, and the legal challenges that accompany it, all the more important.

European Union

The European Union (herein referred to as the EU) was one of the first global bodies to act on regulating AI. In February 2020, well before general-purpose AI tools entered the mainstream, it published the “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust”. The EU has since drafted a ground-breaking AI Act, first proposed in April 2021, which would introduce a common regulatory and legal framework for AI and is considered the most far-reaching regulation of AI worldwide. The European Parliament adopted its negotiating position on the Act on 14th June 2023, and the Act is expected to come into effect in the later part of 2025. The EU has on various occasions produced path-breaking legislation that countries around the world use as a model, the famous General Data Protection Regulation (GDPR) on data protection being one example, so it will be interesting to see how the EU's AI Act is adopted across the globe.

AI Act by the European Union

The AI Act proposed by the EU focuses on the risk that originates from AI applications or systems: it recognizes the different kinds of risk such systems can pose and regulates them according to the type of risk involved. The Act divides risks into the following categories:

  • Unacceptable Risk
  • High Risk
  • Limited Risk

Unacceptable Risk

Unacceptable-risk AI systems are banned outright because they are considered a threat to people. They pose risks that cannot be permitted under any circumstances. Examples include cognitive behavioural manipulation of people or of specific vulnerable groups, social scoring, biometric identification and categorization of people, and real-time remote biometric identification systems such as facial recognition.

At the same time, certain exceptions are available for law-enforcement purposes, subject to specific safeguards.

High Risk

High-risk AI systems are those that negatively affect the safety of citizens or hinder fundamental rights. Such systems must undergo assessment before they are released on the market.

High-risk AI systems are divided into two categories:

  • AI systems used in products covered by EU product safety legislation, including toys, aviation, cars, medical devices, and lifts.

  • AI systems operating in areas that must be registered in an EU database, such as education and vocational training; employment, worker management, and access to self-employment; law enforcement; and migration, asylum, and border control management.

Limited Risk

Limited-risk AI systems must comply with minimal transparency requirements that allow users to make informed decisions. This category covers systems that generate or manipulate image, audio, or video content, for instance, deepfakes.

The EU has also recommended that general-purpose and generative AI systems disclose when content is AI-generated and be designed to prevent the generation of illegal content.

The approach taken by the United States of America

The United States of America (herein referred to as the US) has taken a different approach to regulating AI from the EU's. At present there is no specific federal law governing AI. Recently, however, the US issued the ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’, dated 30th October 2023. The order asserts federal authority over AI and directs agencies to apply their existing powers, particularly those relating to consumer protection.

On 4th October 2022, the ‘Blueprint for an AI Bill of Rights’ was unveiled. This non-binding framework recognizes five protections for every American in the current age of AI: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback.

The approach taken by India

India recently passed the Digital Personal Data Protection Act, 2023, which addresses data protection, privacy, and consumer protection, including consequences arising from AI. The proposed Digital India Bill, 2023 is also expected to contain specific rules and regulations on AI and its applications. India is a member of the Global Partnership on Artificial Intelligence (GPAI) and collaborates on worldwide AI policymaking.

Principles around which AI regulations are promulgated

The following principles take center stage when regulations for AI are formulated, and they reflect current regulatory trends:

  • Core Principles

AI regulation and guidance under consideration should be consistent with the core principles of AI governance: respect for human rights, sustainability, transparency, and strong risk management.

  • Risk-based approach

Adopting a risk-based approach is essential: regulations should address the risks that originate from AI systems, such as those concerning privacy, transparency, and security.

  • Sector-agnostic and sector-specific

Regulators must keep in mind both sector-agnostic rules and the sector-specific regulations that apply, as well as how AI operates within each sector.

  • Policy Alignment

Rulemaking on AI should be aligned with digital policy priorities such as cybersecurity, data privacy, and intellectual property protection.

  • Private-sector collaboration

Regulators collaborate with the private sector with the core objective of promoting safe and ethical AI, and to weigh the implications of higher-risk AI innovation where closer oversight may be appropriate.

  • International collaboration

International collaboration addresses shared interests, risks, safety, and security concerns around AI. This approach rests on debate, discussion, and deliberation on the global stage.

Conclusion

It’s important to note that the field of AI regulation is dynamic, and laws are continuously evolving. Many countries are in the process of developing or refining their AI regulations to address emerging challenges and opportunities in the AI landscape. As such, staying updated on AI regulations in specific regions is crucial for businesses and organizations working with AI technologies.

LEGALLANDS recognizes the complicated domain of Artificial Intelligence. With a thorough understanding of international regulations, our team of legal experts is at the forefront of providing comprehensive legal counsel and representation to clients navigating this dynamic environment. We are committed to safeguarding our clients' interests, ensuring compliance with global regulations, and resolving complex legal challenges with expertise and diligence.


RAJIV TULI

Managing Partner

Legallands LLP