How are countries planning to regulate AI? An overview

    By Aravind
    Published on July 03, 2023

    Ever since ChatGPT, the generative-AI-powered natural language chatbot built by Microsoft-backed OpenAI, was publicly released, its capabilities have been the subject of a flurry of discussions, including calls for AI regulation.

    For the uninitiated, generative AI (GenAI) refers to AI models that can create content in multiple formats (such as text, audio, video, and 3D models) in response to user-defined prompts, having been trained on large volumes of data. ChatGPT amassed a record number of users and disrupted several industries within a short period, leading many users and experts (such as tech journalists and industry leaders) to believe that it gives humans a tremendous edge in productivity and innovation.
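    In practice, interacting with a GenAI model is as simple as sending a prompt and reading back the generated content. The minimal Python sketch below uses the OpenAI client library (the v0.x interface current as of this writing); the model name and prompt are placeholders, and error handling is omitted for brevity:

        import openai

        # Assumes an OpenAI API key; never hard-code real keys in production.
        openai.api_key = "YOUR_API_KEY"

        # Send a user-defined prompt and print the generated text.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Explain AI regulation in one paragraph."}],
        )
        print(response.choices[0].message["content"])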

    On the other side, the rapid rise of AI and generative language models has also attracted critics, with questions ranging in tone from the cautious (Will AI empower cybercriminals and antisocial elements?) to the borderline dystopian (Will AI render human effort obsolete?).

    Now data regulation bodies are weighing in on this discourse. On March 31, 2023, the Italian Data Protection Authority, Garante, directed OpenAI to temporarily stop processing the data of its Italian users after flagging privacy concerns and a lack of age-based guardrails. The ban was lifted on April 28, with Garante demanding that OpenAI implement an age verification system by the end of September 2023.

    Also read: Exploring ChatGPT and its impact | A two-part series

    The global AI regulation landscape

    The concept of AI regulation dates back to 2016, but Garante's recent action against ChatGPT and open calls by lawmakers and industry leaders for AI regulation have added to the urgency. Some of the nations that have made noteworthy strides in AI regulation include the following:

    The United Kingdom

    The UK government published a white paper that outlines a context-based approach to AI regulation and the scope of its implementation. Besides projecting the UK's vision of becoming an AI superpower by 2030, the document notes that the roadmap to this objective will involve a future-proof framework that fosters AI innovation while addressing its risks and threats in advance.

    OECD countries

    Comprising 38 member countries, the Organisation for Economic Co-operation and Development (OECD) was the first organization to propose an intergovernmental standard for AI technology. Adopted by the OECD member countries in May 2019, this framework established five key principles for forming a trustworthy, innovative ecosystem for AI:

    • Inclusive growth, sustainable development, and well-being
    • Human-centered values and fairness
    • Transparency and explainability
    • Robustness, security, and safety
    • Accountability

    The European Union

    Consisting of 27 countries, the EU has taken a risk-based approach to AI with the AI Act, legislation that has been in the making for two years.

    Some of the frequently asked questions surrounding the AI Act include:

    What is the AI Act?

    The act categorizes the risks AI systems pose to human life and introduces corrective measures and compliance requirements for each category according to its severity.

    What are the types of AI risks according to the AI Act?

    The legislation classifies risks into four categories (a toy illustration follows the list):

    1. Unacceptable: This pertains to AI systems considered a clear threat to people's safety, livelihoods, and rights. It includes systems that subtly manipulate human behavior, certain biometric identification systems, and social scoring tools.
    2. High: This includes AI systems used in critical infrastructure, surveillance, customer profiling, and employee management, as well as devices that could endanger lives if they malfunction (like AI-enabled healthcare and transportation systems).
    3. Limited: This pertains to AI systems that interact directly with human users and carry transparency obligations, such as deepfakes and GenAI.
    4. Minimal: This includes spam filters and AI-enabled entertainment.
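
    To make the tiering concrete, here is a toy Python sketch that maps hypothetical example systems to the four categories and the kind of obligation each tier implies. The tier assignments and obligation summaries are our own illustrative reading of the draft, not the Act's legal text:

        # Toy illustration of the AI Act's four risk tiers (not legal guidance).
        RISK_TIERS = {
            "unacceptable": {"obligation": "prohibited in the EU",
                             "examples": ["social scoring", "subliminal manipulation"]},
            "high": {"obligation": "strict requirements and assessment before deployment",
                     "examples": ["AI-enabled medical device", "employee management tool"]},
            "limited": {"obligation": "transparency (disclose AI involvement)",
                        "examples": ["chatbot", "deepfake generator"]},
            "minimal": {"obligation": "no mandatory obligations",
                        "examples": ["spam filter", "game AI"]},
        }

        def obligation_for(system: str) -> str:
            """Look up the tier a system falls under and the obligation it implies."""
            for tier, info in RISK_TIERS.items():
                if system in info["examples"]:
                    return f"{system!r} is {tier}-risk: {info['obligation']}"
            return f"{system!r} is not classified in this toy mapping"

        print(obligation_for("chatbot"))
        print(obligation_for("social scoring"))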

    What is the scope of the AI Act?

    The act "sets out core horizontal rules for the development, trade, and use of AI-driven products, services, and systems within the territory of the EU that apply to all industries."

    Akin to the GDPR's "security by design" approach, the AI Act's prime objective is to standardize "trustworthy AI by design." On June 14, 2023, the European Parliament passed its draft negotiating position on the AI Act, with 499 votes in favor, 28 against, and 93 abstentions. In its press release, the Parliament stated that Members had expanded the list of AI systems with unacceptable levels of risk, and that generative AI systems will be required to label their content as 'AI generated output' once the law is implemented.

    China

    China has been at the forefront of AI regulation, starting with its 2022 provisions governing algorithmic recommendation systems. More recently, the Chinese government has announced further plans to regulate AI.

    The United States

    The trend of companies licensing GenAI technologies started in the US when Microsoft licensed OpenAI's GPT-3 model and later went on to become OpenAI's multibillion-dollar investor. Despite being one of the pioneering nations in AI innovation, the US has taken only nascent steps toward building a consolidated AI regulation framework.

    However, federal agencies and regulatory bodies have called for AI governance. Recently, the US Chamber of Commerce released a report demanding AI regulation to ensure that the technology doesn't pose risks to national security. Some of the steps taken by the US government and regulatory bodies to push AI-centric regulation include:

    The DOD's five principles of AI ethics: The US Department of Defense (DOD), in an attempt to standardize the ethical adoption and development of AI technologies within the department, adopted five core principles:

    • Responsible: DOD personnel must practice good judgment when dealing with AI-based technologies.
    • Equitable: The DOD must take steps to remove any unintentional bias from AI systems.
    • Traceable: During the deployment of AI technologies, the DOD must ensure that the personnel are equipped with the best practices and tools for AI use, such as transparent, auditable methodologies; data sources; design procedures; and documentation.
    • Reliable: The DOD's AI-based capabilities must have explicitly defined use cases, with the safety, security, and effectiveness of those capabilities being subject to periodic testing.
    • Governable: The DOD must "design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."

    The Blueprint for an AI Bill of Rights: Published by the White House Office of Science and Technology Policy (OSTP), this white paper aims to contextualize AI usage in various social scenarios while helping lawmakers make informed policy decisions. The blueprint intends to "support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems." It recognizes five key areas that require consideration when creating and deploying AI technology:

    • Safe, effective systems
    • Algorithmic discrimination protections
    • Data privacy
    • Notice and explanation (the transparent use of automation)
    • Human alternatives, consideration, and fallback (the freedom for users to opt out of automated systems in favor of their human alternatives)

    NIST's AI Risk Management Framework (AI RMF): The 1.0 version of NIST's AI RMF highlights AI risk management as a key component in developing responsible AI technology. The AI RMF offers a "resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems." Unlike the AI Bill of Rights, the AI RMF is "non-sector-specific and use case agnostic." The document examines the factors that contribute to AI-based risks and establishes four core functions organizations need in order to address these factors (a toy sketch follows the list):

    • Govern: Internalizing a culture of risk management in organizational policies that design, deploy, and manage AI systems
    • Map: Defining the context of AI use cases and the risks associated with them
    • Measure: Analyzing AI risks in quantifiable terms
    • Manage: Mitigating risks that are prioritized in accordance with their level of impact
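
    As a rough illustration of how the Map, Measure, and Manage functions might translate into day-to-day practice, here is a minimal Python sketch of an AI risk register. The scoring scheme and field names are hypothetical and not part of the framework, and Govern, being an organizational culture function, is left out of the code:

        from dataclasses import dataclass

        @dataclass
        class AIRisk:
            """One entry in a toy AI risk register (fields are hypothetical)."""
            description: str  # Map: the use-case context and the risk identified
            likelihood: int   # Measure: 1 (rare) to 5 (frequent)
            impact: int       # Measure: 1 (negligible) to 5 (severe)

            @property
            def score(self) -> int:
                # Measure: quantify the risk as likelihood x impact.
                return self.likelihood * self.impact

        def manage(risks: list[AIRisk]) -> list[AIRisk]:
            # Manage: prioritize mitigation in accordance with level of impact.
            return sorted(risks, key=lambda r: r.score, reverse=True)

        register = [
            AIRisk("Biased output in hiring recommendations", likelihood=3, impact=5),
            AIRisk("Model drift degrading accuracy over time", likelihood=4, impact=3),
            AIRisk("Prompt injection leaking internal data", likelihood=2, impact=4),
        ]

        for risk in manage(register):
            print(f"[score {risk.score:>2}] {risk.description}")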

    A joint statement by the EU-US Trade and Technology Council (TTC): The EU and US issued a joint AI roadmap that expands on their collective approach to AI risk management and to creating trustworthy AI services.

    Is regulation enough for safe, effective AI?

    With multiple frameworks and guidelines overlapping one another, institutions and organizations across the globe have made strides in defining AI best practices. Notably, the need to evaluate context and the need to uphold the trustworthiness of AI technology resonate across these texts.

    However, unlike other technological advancements, AI has larger-scale implications across different sectors and has been notorious for delivering unpredictable outcomes. To prevent AI from going rogue, regulation enforcement systems must match the dynamism of the technology, and this capability gap can be filled by AI TRiSM, a Gartner-proposed framework.

    Also read: AI TRiSM explained - What is AI Trust, Risk, and Security Management?

    AI TRiSM: An important step towards AI regulation

    The majority of guidelines collectively emphasize contextual AI use and development, and this need has given rise to AI trust, risk, and security management (AI TRiSM).

    According to Gartner, AI TRiSM "ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy, and data protection" by including "solutions and techniques for model interpretability and explainability, AI data protection, model operations, and adversarial attack resistance."

    The goals of AI TRiSM are achieved through the deployment of a collection of tools that implement its five pillars (a toy sketch of one pillar follows the list):

    • Explainability: Make the model's behavior interpretable and ensure that it achieves its designated goals and targets.
    • ModelOps: Manage the end-to-end software life cycle of the model.
    • Data anomaly detection: Implement a system for identifying flawed and unusual AI-generated output.
    • Adversarial attack resistance: Implement a mechanism to prevent adversarial attacks, which are attacks that use data to weaken AI and ML systems.
    • Data protection: With data sets forming the backbone of AI's input and output, securing data and maintaining its integrity and privacy is essential to ensuring the trustworthiness and security of the systems and processes.
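
    To give one of these pillars some shape, the sketch below is a deliberately simple, hypothetical take on data anomaly detection: it flags model outputs whose length deviates sharply from the norm using a z-score, routing them for human review. Production TRiSM tooling is far more sophisticated; the metric and threshold here are arbitrary:

        import statistics

        def flag_anomalous_outputs(outputs: list[str], z_threshold: float = 3.0) -> list[str]:
            """Flag outputs whose length is a statistical outlier (toy heuristic)."""
            if len(outputs) < 2:
                return []  # not enough data to estimate a distribution
            lengths = [len(o) for o in outputs]
            mean = statistics.mean(lengths)
            stdev = statistics.stdev(lengths)
            flagged = []
            for output, length in zip(outputs, lengths):
                z = (length - mean) / stdev if stdev else 0.0
                if abs(z) > z_threshold:
                    flagged.append(output)  # unusually short or long: review manually
            return flagged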

    AI TRiSM can be an effective tool for maintaining the trustworthiness, accuracy, and predictability of AI systems. However, one of the main challenges that AI TRiSM currently faces is the lack of a consistent and actionable industry standard to incorporate.

    Consensus: The way forward for AI governance

    The proliferation of AI (and cyberwarfare) has made cybersecurity a pressing environmental, social, and governance (ESG) issue because its outcomes have a direct impact on human lives. With the technology inching towards mimicking human capabilities such as ideation, perception, and creation, nations across the world must make it a top priority to come together, define an expansive industry standard, and create watchdog institutions for AI systems. Such measures encourage context-aware adoption while placing guardrails against the reckless use of AI-powered technologies.

    The way that many intergovernmental organizations, such as the OECD, the EU, and the EU-US TTC, are already building frameworks to govern AI illustrates how clusters of nations can unite to form a broader consensus on AI. Recently, UNESCO called upon all nations to implement its Recommendation on the Ethics of Artificial Intelligence (a global ethical framework created by the organization in November 2021) as soon as possible. With global consensus on AI gradually becoming a reality, it all boils down to how swiftly and decisively countries can implement these regulations.
