How many times have you Googled yourself and felt a slight sense of pride when Google accurately listed something cool you have done in its organic search results? I tried the new-age version of this: I asked generative AI tools, Bard and ChatGPT, who I was. Naturally, they both responded that there was no notable public figure or widely known individual with my name.
After a momentary sense of rejection, I got more interested and started crafting more detailed prompts. To my surprise, one of them said I worked with a think tank; let's call it XYZ. The only problem: I have never worked with XYZ!
As a lawyer and a technology enthusiast, I am equal parts thrilled by the promise of AI and worried about the risks it poses. An incorrect output about me working at XYZ is one thing, but what if models foster racial bias in judges' courtroom decisions, or in banks' decisions on granting loans?
With this background, some preliminary questions about the risks of AI come to mind. What data sets are large language models trained on, and how is that data obtained in compliance with data protection laws? Should AI models be licensed, vetted or tested before being made available to the public? Should AI models be held responsible as intermediaries for discrimination, or for sharing misleading information, and if so, to what extent? Should all AI tools be treated equally, or should regulators adopt a calibrated approach to different types of AI tools?
Regulators around the world are cognisant of the risks arising from poorly functioning or under-regulated AI systems, and are asking the same questions. Several countries are already in the process of developing new laws, while others are encouraging the adoption of AI tools in governance. In this article, we look at some of the measures countries around the world are taking towards regulating and using AI.
One of the pioneering pieces of general-purpose legislation in this regard is the proposed EU AI Act. It is expected to be a landmark piece of EU legislation, with the same standard-setting "Brussels Effect" that the EU General Data Protection Regulation had on data privacy. Under the proposed EU AI Act, lawmakers seek to take a risk-based approach, classifying AI systems according to their perceived level of risk, from low to unacceptable.
Unacceptable-risk AI systems, such as those enabling biometric surveillance, emotion recognition, and predictive policing, are sought to be banned outright. High-risk AI systems, such as those used in administering justice or law enforcement, will be subject to strict "conformity assessments" before they can be placed on the market.
Recently, a new category of General Purpose AI Systems has been proposed, covering AI tools with more than one application, such as generative AI models like ChatGPT and Bard. These may be required to comply with additional transparency obligations, such as publishing summaries of the copyrighted data used to train them.
China has also put out draft measures requiring providers of AI models to submit security assessments to the authorities before releasing them to the public. In India, AI regulation is being deliberated within the scope of the Digital India Act, a holistic law seeking to cover digital services and digital markets. Qatar is in the process of drafting legislation to govern AI.
Sector-specific regulators are also trying to understand how their regulatory ambit is affected by AI. In the UK, the Financial Conduct Authority has been tasked with drawing up new guidelines covering AI (think algo-trading), and the Competition and Markets Authority has started assessing the impact of AI on consumers, businesses and the economy through a competition lens (think algo-collusion). In Italy, the data protection regulator temporarily banned ChatGPT over a suspected breach of data protection laws.
Some countries are adopting AI in governance itself, to better understand the nuances of the technology. Others are holding collaborative, multi-stakeholder discussions with industry, civil society and the private sector. In April 2023, the UAE put out a first-of-its-kind practical guide suggesting 100 ways in which the government can use AI!
Earlier this month, a special taskforce comprising 30 government entities was formed in Dubai to harness the power of AI to transform government operations and services. The UAE has also, in partnership with Google, launched the AI Majlis, a collaborative forum that brings together officials from government, academia, and the public and private sectors to discuss AI public policy.
AI will be a large part of all our lives, and it will evolve dramatically in the near future. Overall, while the regulatory interventions around the world are timely, it is imperative that the need to regulate is balanced against the incentive to innovate.
Overregulation, or premature regulation, can stifle the sector’s growth in its early stages. With careful technology design, collaborative policy approaches, and legal frameworks that ensure responsibility and accountability, the harms can be minimised and the potential of the technology can be optimally leveraged.