February 8, 2024
2023 was a transformative year for the legal, regulatory, and policy landscape around artificial intelligence (“AI”). Public debate as well as commercial and public sector deployment of AI capabilities hit a fever pitch, though many of the legal frameworks that hit major milestones in 2023 predate the generative AI phenomenon.
The European Union’s (“EU”) AI Act overcame near derailment by the emergence of foundation models (so-called “general purpose AI”) and now approaches the finish line, on track to become, in 2024, the first comprehensive AI law on the books, directly regulating AI systems based on inherent risk, with sweeping consequences far beyond EU borders.
For now, the U.S. continues to rely on a largely sectoral, self-regulatory approach to AI. While efforts to develop a federal framework fell short, the landscape remained dynamic: a sweeping White House executive order, private sector commitments around cutting-edge frontier models, regulatory guidance and emergent best practices grounded in the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework 1.0, and statements by agencies including the Federal Trade Commission (“FTC”), Department of Justice (“DOJ”), Equal Employment Opportunity Commission (“EEOC”), Securities and Exchange Commission (“SEC”), and Consumer Financial Protection Bureau (“CFPB”), as well as ongoing efforts by the Senate to develop AI legislative frameworks. At the federal and state level, legislative and regulatory focus sharpened on the allegedly improper use of protected data (for example, personal or copyrighted data) to develop models and improve products and services.
2024 promises more of the same. In a year when half the world’s population is slated to cast a ballot in an election, and as AI increasingly establishes itself as a topic of certain but still-unclear geopolitical import, governments will continue to experiment with different regulatory models to govern foundation models and other types of AI deployments in an effort to achieve political, societal, and geopolitical goals. These developments will occur in parallel with emergent and evolving societal norms around the use and acceptance of AI, and broader understandings about potential risks.
This will take place across legal domains. For example, competition authorities around the world have already signaled increasing scrutiny of the market impacts of leading companies in the AI space. In the EU, the AI Act will require virtually all companies using AI in their products, services, and supply chains on the EU market to assess their risk profile and potential liability under the new framework. Similar comprehensive AI laws and governance tools continue to be proposed and debated elsewhere around the world. In the U.S., the FTC, the California Privacy Protection Agency (“CPPA”), and other federal and state regulators are poised to continue their efforts to establish themselves as key agencies in this fast-evolving space. We also expect to see new AI-related state legislation, a regulatory enforcement focus on data governance and usage in high-risk spaces, such as employment, insurance, and healthcare, and a reimagined intellectual property legal landscape shaped by guidance from the U.S. Copyright Office and court rulings in pending high-profile federal lawsuits.
Our AI Review and Outlook – 2024 focuses on these legal and regulatory developments and also examines other notable policy updates in the U.S. and the EU, with an eye toward the key issues and developments to watch in 2024.
TABLE OF CONTENTS
I. EU POLICY & REGULATORY DEVELOPMENTS
II. U.S. LEGISLATIVE, REGULATORY & POLICY DEVELOPMENTS
III. U.S. SECTOR-SPECIFIC DEVELOPMENTS
A. Intellectual Property
B. Privacy
C. Employment
D. Insurance
IV. SELECT ADDITIONAL INTERNATIONAL DEVELOPMENTS
_________________________
I. EU POLICY & REGULATORY DEVELOPMENTS
A. EU AI Act
In late 2023, the EU reached a long-awaited milestone in comprehensive AI regulation. After almost six months of trilogue negotiations, the European Commission, the Council, and the Parliament reached a political agreement on December 8, 2023 on the provisional rules that will comprise the first global AI regulation, the EU AI Act.[1]
A number of procedural steps remain before the AI Act can be finalized; however, the staggered (and relatively rapid) planned enforcement of certain provisions bears note. Provisions related to prohibited AI systems are set to become enforceable six months after the Act is finalized, and provisions related to so-called General Purpose AI (“GPAI”) become enforceable 12 months after finalization. The rest of the AI Act is expected to become enforceable in 2026.
The “long arm” of the AI Act will impact a broad range of businesses—including, but not limited to, those that intend to provide or deploy AI systems within the EU.[2] The distinct posture of the AI Act, based in part on fundamental and human rights jurisprudence, requires companies to think differently when preparing compliance strategies.
A draft of the final text was released on January 21, 2024, but the provisional agreement must now be formally approved by the EU Member States and the European Parliament. For more details on related developments, please see our previous alerts analyzing the European Commission’s 2021 proposal on the AI Act, the European Council’s common position in December 2022, the European Parliament’s negotiating position in June 2023, and the political agreement reached on December 8, 2023.
B. AI Liability Directive & Product Liability Directive
In September 2022, the AI Liability Directive (“AILD”) and the Product Liability Directive (“PLD”) were proposed as part of a comprehensive package to facilitate the responsible deployment of artificial intelligence in Europe.[3]
The AILD focuses on fault-based liability under national regimes for damages caused by specific AI systems, establishing standardized rules for access to information and the burden of proof. The PLD, on which a political agreement was reached in December 2023, broadens no-fault liability for defective products to encompass intangible products such as software, including software powered by AI. The updated PLD, poised for formal approval, is set to govern products placed on the market 24 months after the directive enters into force. Notably, it introduces provisions for compensating a range of losses, including data corruption, and outlines conditions under which product defectiveness will be presumed in specific scenarios.
C. EDPS Opinions on AILD and PLD
In October 2023, the European Data Protection Supervisor (“EDPS”) issued “Opinion 42/2023 on the Proposals for two Directives on AI liability rules.”[4]
Key points include the EDPS’ emphasis on extending liability rules to AI systems used by EU institutions, advocating for broad procedural safeguards, calling for comprehensive and understandable disclosure of information, and recommending consideration of additional measures to ease consumers’ burden of proving fault or causality. The EDPS also proposed explicitly stating that the AILD does not prejudice Union data protection law and suggested shortening the AILD’s review period to expedite assessment of the effectiveness of its compensation mechanisms.
II. U.S. LEGISLATIVE, REGULATORY & POLICY DEVELOPMENTS
A. White House AI Executive Order
On October 30, 2023, the Biden Administration released its long-awaited Executive Order on AI (“EO”).[5]
Although the EO attempts to address a variety of pressing AI-related issues, it is largely focused on directing federal agencies to develop guidance on the use of AI; creating new standards, including for labeling AI-generated content and for ensuring the safety and security of critical infrastructure; safety-testing models; and detecting and authenticating AI-generated content. The EO’s focus on privacy includes developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
Select notable requirements created by the EO include: (1) affirmative reporting requirements for AI companies developing or intending to develop foundation models; (2) the creation of new standards, including for labeling AI-generated content, and for ensuring the safety and security of critical infrastructure; and (3) the creation of a cybersecurity program that develops AI tools to find and fix vulnerabilities in critical software.
As relevant to the private sector, the EO contains three specific requirements. First, it purports to require that developers of high-capability foundation models report and provide information to the federal government, as discussed below. Second, it imposes separate reporting requirements on companies that acquire, possess, or develop “potential” large-scale computing clusters, including disclosing the existence and location of these clusters and their total computing power. Third, the EO requires the Secretary of Commerce to propose new regulations requiring U.S. cloud service providers to notify the government if non-U.S. individuals or entities using their services begin training large AI models that could be used for malicious purposes.
1. Reporting Requirements for High Capability Foundation Models
Section 4.2(a) seeks to impose affirmative reporting requirements for companies that (1) develop or have the intent to develop “foundation models,” or (2) “acquire, develop or possess” large compute clusters.
The EO requires developers of large, high-capability foundation models to provide information to the federal government about (1) model safety and training, (2) steps taken to protect model weights, and, perhaps most concerning, (3) the results of all red-team safety testing. This requirement is written to apply broadly to a range of foundation models that are considered “dual use.” Importantly, this covers not only models that “exhibit a high level of performance at tasks that pose a serious risk to security, national economic security, national public health or safety,” or any combination of the above, but also models that “could be easily modified” to do so, even if they include technical safeguards that attempt to prevent users from using such “unsafe capabilities.” Accordingly, a company that intends to develop a foundation model that could be modified (including by a third party) to exhibit such risks is subject to this reporting requirement.
As such, companies appear required to assess model risk and report accordingly. The EO does not clearly define who would determine that a foundation model presents such “serious risks” or how such a determination would be made; it appears to leave that determination to companies themselves, while providing examples, including models that make weapons of mass destruction accessible, enable “powerful” cyber-offensive operations against a range of targets, or allow an AI model to evade human control or oversight (including through “deception”).
Simultaneously, Section 4.2(b) would appear to independently establish a temporary registration requirement for models trained using a certain quantity of computing power. The Secretary of Commerce (in consultation with other executive agencies) is directed to establish “technical conditions” for models “that would be subject” to these reporting requirements. The import of these “technical conditions” and their relationship to a determination of “serious risk” and attendant reporting obligations remains to be seen.
2. OMB Implementing Guidance
On November 1, 2023, the Office of Management and Budget (“OMB”) published a draft memorandum to assist in implementing the EO.[6] The guidance in the memorandum is primarily focused on operationalizing standards for federal government actors, but it holds predictive value for companies contracting with government agencies and may be instructive as to the direction of future federal regulation of the private sector.[7]
The key proposed policies would: institute government-wide “minimum practices” to be employed with regard to any “rights-impacting” or “safety-impacting” AI; require agency-specific AI strategies, which would include planning for data sharing, workforce training, and cybersecurity measures; and instruct agencies to designate a Chief AI Officer to oversee all AI use within each agency.[8] OMB is expected to publish the final guidance document in 2024.[9]
3. Next Steps
The U.S. Department of Commerce (“Commerce”), NIST, the Bureau of Industry and Security (“BIS”), the National Telecommunications and Information Administration (“NTIA”), and the U.S. Patent and Trademark Office (“USPTO”) will play a key role in implementing the EO. Commerce has been given 90 days to establish the reporting requirements.[10] On December 19, 2023, NIST released a Request for Information (“RFI”) seeking public comment to support its response to the EO and to develop guidelines for evaluation and red-teaming as well as consensus-based standards.[11] Responses were due Friday, February 2, 2024, and NIST anticipates publishing draft guidelines for public comment in due course.
B. Voluntary Commitments for Frontier AI Models
On July 21, 2023, the White House announced that several major technology companies had made voluntary commitments to ensure the safe development of frontier AI systems.
Among the commitments are efforts to develop mechanisms for marking AI-generated content so that users can recognize that the content derives from an AI system, to conduct internal and external red-team testing of generative AI systems’ safety, and to prioritize research on how AI models can protect privacy and safeguard against potential bias and discrimination.[12] In the following months, additional companies signed on to the White House’s voluntary commitments as well.[13]
C. NIST’s Focus on AI
On January 26, 2023, NIST released the first version of its Artificial Intelligence Risk Management Framework (“AI RMF”).[14]
The AI RMF is designed to assist organizations in mapping out and assessing AI risks and “trustworthiness” in the development and use of AI products, systems, and services. The AI RMF follows direction from Congress for NIST to develop the framework, and was produced in close collaboration with the private and public sectors. It is intended to provide practical guideposts that are adaptable to the rapidly evolving AI landscape, and it outlines core functions that organizations should consider when developing trustworthy AI systems, including governance, risk assessment, and risk management. NIST also established the Trustworthy & Responsible AI Resource Center, which will serve as a repository for current guidance on AI and can assist companies and organizations in institutionalizing the AI RMF. For more details on NIST’s AI RMF, see our alert, NIST Releases First Version of AI Risk Management Framework.
On March 8, 2023, NIST released a draft report that defines certain key terminology and creates a taxonomy of attacks and mitigation techniques relating to adversarial machine learning (“ML”).[15] The report aims to inform standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for the rapidly developing adversarial ML landscape. Specifically, the report outlined three categories of attacks: evasion (where an adversary generates adversarial examples), data and model poisoning (where attacks during the training of a machine learning algorithm introduce integrity violations), and data and model privacy (where attacks seek to reconstruct training data or infer dataset membership). In June 2023, NIST also announced the creation of a Public Working Group on Generative AI, which is intended to build upon the AI RMF and address developments in the AI sector.[16] On December 21, 2023, NIST issued an RFI relating to its assignments under the White House’s AI Executive Order.[17] The RFI spans a range of broad categories, including red-teaming exercises, benchmarking, and watermarking.
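As a concrete illustration of the taxonomy’s shape, the sketch below encodes the three attack categories as a simple data structure; the class, field, and lifecycle labels are our own illustrative choices rather than NIST’s.

```python
# Illustrative only: a minimal encoding of the three attack categories
# described in NIST's draft adversarial ML report. The structure and
# names here are our own; NIST's actual taxonomy is far more granular.
from dataclasses import dataclass


@dataclass(frozen=True)
class AttackCategory:
    name: str
    lifecycle_stage: str  # when in the ML lifecycle the attack occurs (our label)
    adversary_goal: str   # what the attacker is trying to achieve


TAXONOMY = (
    AttackCategory(
        name="evasion",
        lifecycle_stage="inference",
        adversary_goal="generate adversarial examples the model misclassifies",
    ),
    AttackCategory(
        name="data and model poisoning",
        lifecycle_stage="training",
        adversary_goal="introduce integrity violations via corrupted training data",
    ),
    AttackCategory(
        name="data and model privacy",
        lifecycle_stage="deployment",
        adversary_goal="reconstruct training data or infer dataset membership",
    ),
)

if __name__ == "__main__":
    for category in TAXONOMY:
        print(f"{category.name} ({category.lifecycle_stage}): {category.adversary_goal}")
```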
D. U.S. Congressional Actions
Members of the U.S. Congress demonstrated a keen interest in AI in 2023, including by holding AI-related hearings, meeting with key stakeholders, and introducing various bills to regulate AI. However, these efforts were largely fragmented, and the proposals appear unlikely to result in passage and enactment.[18]
Throughout 2023, both the House and Senate held hearings on a range of AI-related topics, including AI regulation, potential risks relating to IP and misinformation, and national security considerations. In April 2023, Senate Majority Leader Chuck Schumer (D-NY) spearheaded a bipartisan effort to develop a comprehensive AI policy framework that “outlines a new regulatory regime” and implements “robust” oversight efforts. This framework focused on four proposed guardrails: (i) identification of who trained the algorithm and who its intended audience is; (ii) disclosure of its data source; (iii) an explanation of how it arrives at its responses; and (iv) transparent and strong ethical boundaries.[19] Speaking at the Center for Strategic and International Studies in June 2023, Senator Schumer referred to the framework as the “SAFE Innovation Framework.”[20]
Although the proposals made little progress, a few themes emerged, most notably transparency: bills introduced in 2023 would, for example, have mandated disclaimers on material generated by AI[21] and imposed transparency requirements on developers of foundation models.[22]
Additional proposals included bills intended to restrict Section 230 immunity for civil claims premised on generative AI,[23][24] prohibit the distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads,[25] and restrict the unauthorized use of the names, images, and likenesses (“NIL”) of artists.[26]
Given the blistering pace of technological development and fast-moving regulatory landscape on the matter of AI, it remains to be seen whether Congress will be successful in passing comprehensive AI legislation in this legislative session. As Justice Kagan noted recently during oral argument, “Congress can hardly see a week in the future with respect to this subject, let alone a year or a decade in the future.”[27]
E. Joint Agency Statement on Bias and Discrimination
On April 25, 2023, officials from the DOJ, FTC, CFPB, and EEOC issued a joint statement stating the agencies would “vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”[28]
While the joint statement is nonbinding, it highlights three features of automated systems (the data and datasets on which they rely, the opacity of models, and their design and use) as potential sources of discrimination that may result in enforcement from these agencies.
F. FTC Enforcement and Policy
In 2023, the FTC doubled down on its focus on AI through an array of blogs, policy statements, and enforcement actions. It began the year by launching its Office of Technology to bolster in-house technical expertise and capacity, signaling its commitment to enforcing consumer protection laws in the high-tech space.[29]
Underlining this ambition, on November 21, 2023, the FTC approved a significant resolution streamlining FTC staff’s ability to issue civil investigative demands (“CIDs”) in investigations relating to AI.[30] In announcing the resolution, the FTC defined AI broadly to include (but not be limited to) “machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The announcement further stated that generative AI “can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans.” The resolution will be in place for 10 years and will likely make it easier for the FTC to launch AI-related investigations.
1. FTC’s Policy Statements, Blog Posts, and Guidance
In May 2023, the FTC issued a policy statement[31] and accompanying press release,[32] warning about misuses of biometric information and the potential harm to consumers.
The statement asserted that the FTC “is committed to combating unfair or deceptive acts related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.” The FTC emphasized that it will scrutinize statements about the collection and use of biometric information and warned that companies should not make false statements about the extent of their collection or use of biometric information, underscoring that “[b]usinesses also must ensure that they are not telling half-truths—for example, a business should not make an affirmative statement about some purposes for which it will use biometric information but fail to disclose other material uses of the information.” The statement recommends that companies continuously monitor compliance with Section 5 of the FTC Act and have a system for receiving and addressing consumer complaints and disputes related to biometric information.[33]
In late June 2023, the FTC published a blog post about generative AI and its impact on competition, expressing concern that a small number of companies could control the essential “building blocks” of generative AI (data, talent, and computational resources) and thus stifle competition.[34]
Specifically, the FTC took the view that the volume and quality of data needed to train a generative AI model may lock out new players in the market who do not have access to large quantities of end-user data. Further, the FTC noted that the minimum resources needed to fully train a model can pose a prohibitive cost of entry, potentially leading to a market where entrants must use pre-trained models that are controlled by a small number of incumbents. As a result, the FTC asserted that it will use its “full range of tools to identify and address unfair methods of competition.”[35]
On May 1, 2023, the FTC published a blog post focusing on the use of generative AI in advertising and the ways in which it could “steer people unfairly or deceptively into harmful decisions.”[36]
The FTC’s concern arises from so-called “unearned human trust”: the tendency to trust output from machines (i.e., “automation bias”) combined with the ability of AI to mimic human interaction. The blog post reiterated that, in the FTC’s view, advertisements should always be clearly labeled as such, and noted that outputs of any generative AI “influenced by a commercial relationship” should be disclosed.
2. FTC Issues Order Prohibiting Use of Facial Recognition System
On December 19, 2023, the FTC announced a complaint and proposed stipulated order (“Order”) against a retail company in connection with the company’s alleged unfair use of facial recognition technology.[37]
Notably, the Order prohibits the company from using any facial recognition system for five years and requires that the company and its third-party vendors delete any images collected from facial recognition systems as well as any algorithms or products derived from such images and photos.
In his accompanying statement, Commissioner Bedoya noted that the settlement “offers a strong baseline for what an algorithmic fairness program should look like” beyond the use of facial recognition, and offered additional comments suggesting that the FTC remains focused on enforcement relating to AI tools used for automated decision-making in particular.[38]
G. SEC
Demonstrating a continued focus on the use of AI in the financial sector, the SEC sent requests for information to several investment advisers on AI-related topics, including marketing documents, algorithmic models used to manage client portfolios, third-party providers, and compliance training.[39]
The use of AI technologies to optimize, forecast, or direct investment-related behaviors or outcomes has accelerated, which has, in turn, increased market access, efficiency, and returns for investors. In a series of statements, SEC Chair Gary Gensler has warned about potential harms that could emerge from the financial industry’s growing adoption of AI, from inadvertent bias and conflicts of interest to a risk of financial instability.
In July 2023, the SEC proposed rules regarding the use of data analytics, including AI, in investor interactions.[40] The proposed rules would require a firm to evaluate whether its use of certain technologies in investor interactions involves a conflict of interest that places the firm’s interests ahead of investors’, and to neutralize the effect of any such conflicts; firms would be permitted to employ tools that they believe would address these risks specific to the technology they use. Lastly, the rules would require a firm to maintain written policies and procedures designed to achieve compliance with the proposed rules and to make and keep related books and records.
H. CFPB
The CFPB significantly increased its focus on AI and automated decision-making tools in 2023, issuing public statements and new guidance as well as proposing new rules focused on creditors and lenders.
On June 1, 2023, the CFPB proposed a rule that would govern the use of so-called “automated valuation models” used by mortgage originators and secondary market issuers to determine the value of a home.[41] The rule would require institutions to take certain steps to minimize inaccuracy and bias, including by “adopt[ing] and maintain[ing] policies, practices, procedures, and control systems to ensure that automated valuation models . . . adhere to quality and control standards.”[42] These standards should ensure a high level of confidence in the valuation, protect against data manipulation, avoid conflicts of interest, require random testing of the models, and comply with applicable nondiscrimination law. The CFPB also released an issue spotlight in June 2023 focused on the potential risks associated with financial institutions’ use of chatbots, including diminished customer service, violations of federal consumer financial protection laws, and harm to consumers.[43]
In addition to the proposed rule, the CFPB also issued a Consumer Protection Circular in September 2023, titled “Adverse Action Notification Requirements and the Proper Use of the CFPB’s Sample Forms Provided in Regulation B,” which contained guidance aimed at ensuring transparency for consumers who receive an adverse decision on an application for credit.[44] The guidance emphasizes that creditors must provide accurate and specific reasons for adverse decisions made by complex algorithms, a requirement that is not automatically satisfied by the use of a sample adverse action checklist.
I. HHS
On December 13, 2023, the U.S. Department of Health and Human Services (“HHS”) issued its first rule regarding the use of AI in healthcare.
Titled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” (“HTI-1”), and issued through the Office of the National Coordinator for Health Information Technology (“ONC”), the final version of the rule followed a comprehensive, months-long rulemaking process.[45]
HTI-1 revises the previous certification criterion to require that health IT offerings facilitate the use of predictive models and algorithms in healthcare decision-making and inform users about that use.[46] The rule broadly defines predictive models and algorithms in healthcare as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.”[47] ONC specifically includes large language models and other models that generally rely on training data in a list of exemplar technologies that would likely meet the rule’s definition of predictive technology.[48]
To be certified, predictive intervention technologies must support a baseline set of “source attributes,” or categories of technical performance and quality information, including the intervention’s purpose, potential out-of-scope uses, development, external validation history, quantitative measures of performance, and any maintenance requirements.[49] Developers seeking certification of predictive health IT products must also ensure the source attribute information their predictive technology draws on is complete and up to date, and must adopt and maintain certain intervention risk management practices.[50] HTI-1 also identifies additional requirements for maintaining certification under the Program.[51] HHS has signaled that additional regulations are on the horizon, including a forthcoming HTI-2.[52]
III. U.S. SECTOR-SPECIFIC DEVELOPMENTS
A. Intellectual Property
1. Copyright Office and Courts Limit Protection for AI-Generated Works
On March 16, 2023, the Copyright Office concluded that AI-generated material may be copyrightable to the extent that it is the result of the author’s “own mental conception, to which [the author] gave visible form.”[53] This guidance followed a February 2023 decision in which the Copyright Office granted copyright protection only to the portions of a book, Zarya of the Dawn, that were deemed the expressive material of the author, and not to the associated images generated by an AI tool.[54] Subsequently, on August 30, 2023, the Copyright Office published a Notice of Inquiry (“NOI”) seeking comment on the copyright law and policy issues implicated by AI systems, and generative AI in particular.[55] Specifically, the Copyright Office sought public comments on: (1) the use of copyrighted works to train AI models; (2) the copyrightability of material generated using AI systems; (3) potential liability for infringing works generated using AI systems; and (4) the treatment of generative AI outputs that imitate the identity or style of human artists.
On August 18, 2023, the District Court for the District of Columbia held that AI-generated output cannot be copyrighted because such work lacks human authorship.[56] Plaintiff Stephen Thaler had attempted to register the output of his generative AI system with the Copyright Office, listing the system as the author and himself as the assignee. The court upheld the Copyright Office’s conclusion, reasoning that the Copyright Act and the Constitution both provide for the granting of copyright to “authors,” who must be human, and that a work generated autonomously by a generative AI system is therefore not eligible for copyright.
2. Courts Begin to Contend With Alleged Copyright Infringement by Generative AI
2023 saw a wave of copyright infringement lawsuits filed in U.S. federal courts in connection with generative AI tools and platforms and the data used to develop them. For example, a group of authors brought a putative class action suit alleging that a major technology company used copyrighted books to train its large language models. On November 20, 2023, the Northern District of California dismissed the copyright infringement claim, reasoning that an allegation that the models were trained on copyrighted materials is insufficient to show that all of the models’ outputs are themselves infringing.[57] Similarly, a group of visual artists brought a putative class action suit alleging that developers of AI art tools used their copyright-protected works to train their models. In October 2023, the Northern District of California dismissed all but one claim, relating to direct copyright infringement by one AI art tool developer. The court held that the plaintiff’s discovery of her copyrighted works on a search platform that shows users whether their works have been used for AI training was sufficient to state a claim for direct infringement.[58] Meanwhile, a number of companies developing generative AI tools have announced the creation of copyright indemnity shields under which the companies will indemnify customers, subject to varying limitations, for certain copyright infringement liability stemming from their use of the companies’ generative AI systems.[59]
3. Copyright Management Information Claims
Many lawsuits against companies developing generative AI technologies assert claims under the Digital Millennium Copyright Act for the removal of copyright management information (“CMI”). On May 11, 2023, the Northern District of California refused to dismiss CMI claims brought by plaintiffs against developers of a code-generating AI system. Plaintiffs alleged that the companies had trained AI programs to “ignore or remove CMI.”[60] The court held that plaintiffs had sufficiently alleged that the companies “intentionally designed the programs to remove CMI from any licensed code they reproduce as output.”[61] By contrast, on October 30, 2023, the Northern District of California dismissed a CMI claim in a separate case because the complaint failed to identify the “particular types of their CMI from their works that they believe was removed or altered” in connection with the use of the plaintiffs’ works in the defendant’s training set.[62] In that putative class action, a group of visual artists alleged that a generative AI company had scraped their works from public datasets and had “stripped or altered” the CMI associated with those works. We expect further development in courts’ rulings on CMI-related claims in 2024.
B. Privacy
Several U.S. states have passed new comprehensive privacy laws, some of which contain obligations directly implicating businesses’ use of AI and automated decision-making technologies (“ADMT”).
On March 15, 2023, the Colorado Attorney General finalized the Colorado Privacy Act (“CPA”) regulations, which include AI- and ADMT-specific requirements relating to notice, opt-outs, and data protection assessments.[63] In late November 2023, the CPPA released discussion draft regulations intended to facilitate CPPA board discussion on ADMT and risk assessments (the “Draft Regulations”).[64] The Draft Regulations provide an expansive definition of ADMT[65] while also proposing ADMT-specific obligations relating to notice, opt-out rights, and risk assessments. Specifically, under the Draft Regulations, consumers would have the right to opt out of ADMT for decisions that produce “legal or similarly significant effects” on an individual, as well as the right to access certain information about a business’s use of ADMT. The draft would also require risk assessments for the use of ADMT, which would need to include a description of why the business seeks to use the ADMT, the “operational elements” of the processing, and the safeguards that the business will put in place to mitigate negative privacy impacts on consumers.[66] The proposal carves out key areas of future discussion for the CPPA Board, including the profiling of children under 16 and the use of consumer information for model training.
These pre-rulemaking Draft Regulations were discussed during the December 8, 2023 CPPA board meeting.[67] Several board members expressed concerns that the discussion draft was overly broad and suggested narrowing the definition of profiling to target ADMT that is particularly concerning and intrusive, so as to avoid regulating low-risk ADMT. We expect that the Draft Regulations will be amended and that certain provisions may be informed by other emerging U.S. and global AI regulations, including the EU’s approach under the draft AI Act.
C. Employment
1. EEOC
Following the EEOC’s publication of a draft Strategic Enforcement Plan (“SEP”) for Fiscal Years 2023-2027 on January 10, 2023, the agency released the final SEP on September 21, 2023, making clear that it will remain focused on the use of AI in employment.
As employers increasingly use technology in employment, the SEP indicates that the EEOC intends to focus on employment decisions, practices, and policies in which employers leverage technology (broadly defined), including machine learning, AI, algorithmic decision-making, and other automated employment decision-making tools. The EEOC will also place special emphasis on eliminating barriers arising from purportedly exclusionary job advertisements, restrictive or inaccessible application systems, and AI systems that intentionally exclude or adversely impact protected groups in recruitment or hiring.
This priority aligns with the EEOC’s ongoing attention to AI and automation in 2023, including issuing its second set of technical guidance on AI,[68] reaching a conciliation agreement requiring a job search website operator to re-write its algorithm following claims of national origin discrimination,[69] and finalizing a consent decree in a case alleging algorithmic age discrimination.[70] Employers can expect more technology-related cases to be brought by the EEOC in addition to ongoing AI regulation at the state and local levels, including in New York and California, and an uptick in proposals from Congress, such as the Algorithmic Accountability Act of 2023.[71]
On May 18, 2023, the EEOC released new technical guidance on employers’ use of AI under Title VII of the Civil Rights Act of 1964 (“Title VII”).[72]
The guidance outlines key considerations that, in the EEOC’s view, help ensure that automated employment tools do not violate Title VII when used to make employment decisions.
The guidance provides that the “four-fifths rule” (under which a selection rate for one group that is less than four-fifths of the rate for the most favorably treated group may indicate adverse impact) merely acts as “a rule of thumb” when analyzing adverse impact with respect to algorithmic decision-making tools and is not necessarily sufficient to show that a tool is lawful under Title VII. Further, the EEOC encourages employers to routinely conduct self-assessments of their AI tools to monitor for potentially disproportionate effects, and states that an employer’s failure to adopt a less discriminatory algorithm that was considered during the development process may give rise to liability.
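To make the rule of thumb concrete, the sketch below walks through the conventional four-fifths computation on hypothetical selection data; the function, group names, and figures are our own illustrative assumptions, not part of the EEOC guidance.

```python
# Illustrative only: a minimal sketch of the conventional "four-fifths rule"
# computation, using hypothetical selection data. Per the EEOC guidance,
# passing this check does not by itself establish that a tool is lawful
# under Title VII; it is merely a screening heuristic.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group advanced by the screening tool."""
    return selected / applicants

# Hypothetical outcomes from an automated screening tool.
groups = {
    "group_a": selection_rate(selected=60, applicants=100),  # 0.60
    "group_b": selection_rate(selected=42, applicants=100),  # 0.42
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flagged = impact_ratio < 0.8  # the "four-fifths" threshold
    print(f"{name}: rate={rate:.2f}, ratio={impact_ratio:.2f}, flagged={flagged}")

# group_b's ratio is 0.42 / 0.60 = 0.70, which falls below 0.80, so the
# rule of thumb would flag the tool for further adverse-impact analysis.
```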
2. New York City Local Law 144
On July 5, 2023, New York City’s Department of Consumer and Worker Protection (the “DCWP”) began enforcing Local Law 144, the broadest law governing AI in employment in the U.S.
Under Local Law 144, an automated employment decision tool (“AEDT”) is defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”[73] Local Law 144 prohibits employers from utilizing an AEDT in hiring and promotion decisions unless it has been the subject of a bias audit by an “independent auditor” no more than one year prior to use. The law also imposes certain posting and notice requirements for applicants and employees who are subject to the use of an AEDT.
For more detailed insights into Local Law 144, please see our prior coverage of the Final Rules, DCWP’s FAQs, and Local Law 144’s Scope.
D. Insurance
The Colorado Division of Insurance has adopted a final regulation, effective November 14, 2023, that requires life insurers operating in Colorado to integrate AI governance and risk-management measures.[74]
These measures must be reasonably designed to prevent unfair discrimination in the use of AI models that leverage external consumer data and information sources, which are defined to include biometric data. Under the regulation, insurers must remediate any instances of detected unfair discrimination. The regulation requires insurers to conduct a comprehensive gap analysis and risk assessments, and imposes specific documentation requirements, including maintaining an up-to-date inventory of AI models and documenting material changes, bias assessments, ongoing monitoring, vendor selection processes, and annual reviews.
IV. SELECT ADDITIONAL INTERNATIONAL DEVELOPMENTS
A. United Kingdom
In 2023, the UK Government demonstrated further support for its proposed “pro-innovation” and “context-specific” AI regulatory regime.
On March 29, 2023, the UK Government published the AI White Paper, which proposes sector-specific oversight of the development and use of AI, empowering existing regulators such as the Information Commissioner’s Office (“ICO”), the Financial Conduct Authority (“FCA”), the Competition and Markets Authority (“CMA”), and the Office of Communications (“Ofcom”) to regulate the use of AI within the scope of their existing remits.[75] Throughout 2023, these and other UK regulators published guidance regarding the use and regulation of AI within their respective remits.
On March 15, 2023, the UK Government responded to recommendations made in the Pro-innovation Regulation of Technologies Review prepared by Sir Patrick Vallance, the Government Chief Scientific Advisor, to clarify issues relating to IP and AI. The UK Government accepted the recommendations and announced that a code of practice on copyright and AI would be developed with the UK Intellectual Property Office (“IPO”) with input from users and rights holders.[80] However, in February 2024, the UK Government announced that it was abandoning plans to develop the code.[81]
On July 7, 2023, the Parliament’s Communications and Digital Committee launched an inquiry into large language models (“LLMs”), seeking public comment as it evaluates the work of the UK Government and regulators, examines how well this work addresses current and future technological capabilities, and reviews the implications of approaches taken elsewhere in the world.[82]
On November 1 and 2, 2023, the UK Government hosted the AI Safety Summit 2023 (the “AI Summit”), which brought together representatives from a broad range of countries, companies, and civil society groups. The AI Summit was primarily built around round-table discussions on global safety and societal risks, as well as sessions focused on the steps that frontier AI developers, national policymakers, the international community, and the scientific community should take. Countries attending the first day of the Summit, including the United States, China, Japan, the UK, members of the EU, Korea, Singapore, and Brazil, agreed to the Bletchley Declaration, which recognizes that AI presents both the potential to enhance human wellbeing and risks, particularly those arising from “highly capable general-purpose AI models, including foundation models.”[83]
At the Summit, UK Prime Minister Rishi Sunak announced the creation of a UK AI Safety Institute (the “Institute”), a new global hub based in the UK and tasked with testing the safety of emerging types of AI,[84] and U.S. Vice President Kamala Harris announced the creation of a U.S. AI Safety Institute housed within NIST.[85]
B. Canada
Canada continued to make progress on its proposed Artificial Intelligence and Data Act (“AIDA”), introduced as part of a broader bill in June 2022,[86] which is intended to promote responsible AI systems in the private sector through a risk-based approach.
Under the risk analysis, harm may be individual or collective, and physical, psychological, or economic, and biased output can arise if an AI system causes disadvantage without justification on the basis of one or more of the prohibited grounds in the Canadian Human Rights Act.[87] In March 2023, the AIDA companion document was issued,[88] laying out a general approach for AIDA, identifying a liability scheme, and providing “key factors” as guidance for companies to assess the high-impact risks of their AI systems. Relatedly, on September 27, 2023, Canada’s Minister of Innovation, Science and Industry announced a voluntary code of conduct for organizations engaged in the development and management of generative AI systems,[89] intended to serve, in effect, as a bridge until the AIDA comes into force.[90]
At the provincial level, as of September 2023, Quebec’s “Act to modernize legislative provisions as regards the protection of personal information” requires that individuals whose personal information is processed exclusively by an automated decision-making system be informed of such processing.[91] The Act also guarantees, upon request, the right to be informed of the personal information used to make an automated decision and the right to have such personal information corrected.
* * *
[1] Council of the EU, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, press release of 9 December 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/.
[2] The scope of some of the broadest jurisdictional hooks, including governing companies that are responsible for generating output from AI tools that have effect in the Union, remains to be seen.
[3] EU updates product liability regime to include software, Artificial Intelligence, Euractiv (Dec. 14, 2023), https://www.euractiv.com/section/digital/news/eu-updates-product-liability-regime-to-include-software-artificial-intelligence/.
[4] EDPS issues opinions on AI liability proposals, International Association of Privacy Professionals (Oct. 13, 2023), https://iapp.org/news/a/edps-issues-opinions-on-ai-liability-proposals/.
[5] Exec. Order 14,110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Nov. 1, 2023).
[6] Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum, 88 Fed. Reg. 75626 (Nov. 3, 2023); The White House, OMB, OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence (Nov. 1, 2023), https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/.
[7] See Mohan & Lamm, Practical Insights for Employers Using AI (Dec. 19, 2023), Gibson Dunn, https://gdstaging.com/wp-content/uploads/2023/12/practical-insights-for-employers-using-ai.pdf.
[8] See Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, Office of Mgmt. & Budget (2023), https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf.
[9] OMB is now reviewing the almost 200 comments it received on the guidance. Comment files can be found on the federal regulations website, https://www.regulations.gov/docket/OMB-2023-0020/comments.
[10] Artificial Intelligence, U.S. Department of Commerce, https://www.commerce.gov/issues/artificial-intelligence.
[11] NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, NIST (Dec. 19, 2023), https://www.nist.gov/news-events/news/2023/12/nist-calls-information-support-safe-secure-and-trustworthy-development-and.
[12] White House, Ensuring Safe, Secure, and Trustworthy AI (2023), https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.
[13] White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI” (Sept. 12, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
[14] NIST Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework.
[15] A. Oprea, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST (Mar. 8, 2023), https://csrc.nist.gov/pubs/ai/100/2/e2023/ipd.
[16] NIST, Biden-Harris Administration Announces New NIST Public Working Group on AI, https://www.nist.gov/news-events/news/2023/06/biden-harris-administration-announces-new-nist-public-working-group-ai.
[17] NIST, Request for Information (RFI) Related to NIST’s Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence (Sections 4.1, 4.5, and 11), https://www.federalregister.gov/documents/2023/12/21/2023-28232/request-for-information-rfi-related-to-nists-assignments-under-sections-41-45-and-11-of-the.
[18] In the first of a series of closed-door meetings with technology industry CEOs and other industry stakeholders, Senator Schumer stated that he does not “just want to put together legislation” because “[i]f you go too fast, you could ruin things.” Mohar Chaterjee & Brendan Bordelon, Senate Starts to Fracture over How to Govern AI, Politico (Sept. 13, 2023), https://www.politico.com/news/2023/09/13/schumer-senate-ai-policy-00115794. Senator Todd Young (R-IN) indicated in August that it is unlikely that Congress will pass “sweeping” regulation of AI. Steven Overly, Congress Unlikely to Pass Sweeping New AI laws, Key GOP Senator Says, Politico (Aug. 3, 2023), https://www.politico.com/news/2023/08/03/congress-ai-laws-todd-young-00109553.
[19] Schumer Launches Major Effort To Get Ahead Of Artificial Intelligence, Senate Democrats (Apr. 13, 2023), https://www.democrats.senate.gov/newsroom/press-releases/schumer-launches-major-effort-to-get-ahead-of-artificial-intelligence.
[20] Chuck Schumer, Majority Leader, U.S. Senate, Remarks at CSIS: Sen. Chuck Schumer Launches SAFE Innovation in the AI Age, https://www.csis.org/analysis/sen-chuck-schumer-launches-safe-innovation-ai-age-csis.
[21] U.S. Rep. Ritchie Torres Introduces Federal Legislation Requiring Mandatory Disclaimer for Material Generated by Artificial Intelligence, Congressional Office of Ritchie Torres (June 5, 2023), https://ritchietorres.house.gov/posts/u-s-rep-ritchie-torres-introduces-federal-legislation-requiring-mandatory-disclaimer-for-material-generated-by-artificial-intelligence.
[22] Eshoo, Beyer Introduce Landmark AI Regulation (Dec. 22, 2023), https://eshoo.house.gov/media/press-releases/eshoo-beyer-introduce-landmark-ai-regulation-bill; AI Foundation Model Transparency Act of 2023 (Dec. 21, 2023), https://beyer.house.gov/uploadedfiles/ai_foundation_model_transparency_act_text_118.pdf.
[24] Hawley, Blumenthal Introduce Bipartisan Legislation to Protect Consumers and Deny AI Companies Section 230 Immunity, U.S. Senate Office of Josh Hawley (June 14, 2023), https://www.hawley.senate.gov/hawley-blumenthal-introduce-bipartisan-legislation-protect-consumers-and-deny-ai-companies-section.
[25] Klobuchar, Hawley, Coons, Collins Introduce Bipartisan Legislation to Ban the Use of Materially Deceptive AI-Generated Content in Elections, U.S. Senate Office of Amy Klobuchar (Sept. 12, 2023), https://www.klobuchar.senate.gov/public/index.cfm/2023/9/klobuchar-hawley-coons-collins-introduce-bipartisan-legislation-to-ban-the-use-of-materially-deceptive-ai-generated-content-in-elections.
[26] Senators Coons, Blackburn, Klobuchar, Tillis Announce Draft of Bill to Protect Voice and Likeness of Actors, Singers, Performers, and Individuals from AI-generated Replicas, U.S. Senate Office of Chris Coons (Oct. 12, 2023), https://www.coons.senate.gov/news/press-releases/senators-coons-blackburn-klobuchar-tillis-announce-draft-of-bill-to-protect-voice-and-likeness-of-actors-singers-performers-and-individuals-from-ai-generated-replicas.
[27] Relentless Inc. v. Dep’t of Com., No. 21-1886 (oral argument Jan. 17, 2024).
[28] Dep’t of Justice, Fed. Trade Comm’n, Consumer Fin. Prot. Bureau, Equal Emp’t & Opportunity Comm’n, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (Apr. 25, 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
[29] S. Nguyen, A Century of Technological Evolution at the Federal Trade Commission, FTC (Feb. 17, 2023), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/02/century-technological-evolution-federal-trade-commission.
[30] FTC, FTC Authorizes Compulsory Process for AI-related Products and Services (Nov. 21, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services.
[31] Fed. Trade Comm’n, Policy Statement of the Federal Trade Commission on Biometric Information and Section 5 of the Federal Trade Commission Act (May 18, 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/p225402biometricpolicystatement.pdf.
[32] Press Release, Fed. Trade Comm’n, FTC Warns About Misuses of Biometric Information and Harm to Consumers (May 18, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-warns-about-misuses-biometric-information-harm-consumers.
[33] Relatedly, on November 16, the FTC announced a “Voice Cloning Challenge” to promote the development of ideas to prevent, monitor, and evaluate malicious uses of voice cloning technology that could harm consumers. FTC, The FTC Voice Cloning Challenge (Nov. 16, 2023), https://www.ftc.gov/news-events/contests/ftc-voice-cloning-challenge.
[34] Fed. Trade Comm’n, Generative AI Raises Competition Concerns (June 29, 2023), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns.
[35] Id.
[36] Fed. Trade Comm’n, The Luring Test: AI and the engineering of consumer trust (May 1, 2023), https://www.ftc.gov/business-guidance/blog/2023/05/luring-test-ai-engineering-consumer-trust.
[37] FTC, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards (Dec. 19, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without.
[38] Statement of Commissioner Alvaro M. Bedoya on FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation, Commission File No. 202-3190 (Dec. 19, 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_commissioner_bedoya_riteaid_statement.pdf.
[39] Richard Vanderford, SEC Probes Investment Advisers’ Use of AI, The Wall Street Journal (Dec. 10, 2023), https://www.wsj.com/articles/sec-probes-investment-advisers-use-of-ai-48485279.
[40] Press Release, SEC, SEC Proposes New Requirements to Address Risks to Investors from Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (July 26, 2023), https://www.sec.gov/news/press-release/2023-140.
[41] Rohit Chopra, Algorithms, Artificial Intelligence, and Fairness in Home Appraisals, CFPB Blog (June 1, 2023), https://www.consumerfinance.gov/about-us/blog/algorithms-artificial-intelligence-fairness-in-home-appraisals/.
[42] Quality Control Standards for Automated Valuation Models, 88 Fed. Reg. 40,670 (June 21, 2023).
[43] Press Release, Consumer Financial Protection Bureau, CFPB Issue Spotlight Analyzes “Artificial Intelligence” Chatbots in Banking (June 3, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-issue-spotlight-analyzes-artificial-intelligence-chatbots-in-banking.
[44] Press Release, Consumer Financial Protection Bureau, CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence (Sept. 19, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-credit-denials-by-lenders-using-artificial-intelligence/.
[45] In addition to collecting public comments, ONC convened a task force consisting of various stakeholder groups (including direct patient care providers, public health groups, patients, health IT developers, standards development organizations, and others) to evaluate the proposed rule and provide draft revisions based on input collected from a range of external subject matter experts. See HTI-1 Proposed Rule Task Force 2023, Report to the Health Information Technology Advisory Committee (June 15, 2023), https://www.healthit.gov/sites/default/files/facas/2023-06-15_HTI-1-PR-TF-2023_Recommendations_Report.pdf.
[46] 45 C.F.R. § 170.315(b)(11) (2024).
[47] 45 C.F.R. § 170.102 (2024).
[48] Dep’t of Health and Human Services, Comments to Rule on HTI-1 (Jan. 2, 2024), at 177, https://www.federalregister.gov/d/2023-28857.
[49] 45 C.F.R. § 170.315(b)(11)(iv)(A) and (B) (2024).
[50] 45 C.F.R. § 170.315(b)(11)(vi) (2024).
[51] U.S. Dep’t of Health and Human Services, HTI-1 Overview Fact Sheet, https://www.healthit.gov/sites/default/files/page/2023-12/HTI-1_Gen-Overview_factsheet_508.pdf.
[52] U.S. Dep’t of Health and Human Services, HHS Finalizes Rule to Advance Health IT Interoperability and Algorithm Transparency (Dec. 13, 2023), https://www.hhs.gov/about/news/2023/12/13/hhs-finalizes-rule-to-advance-health-it-interoperability-and-algorithm-transparency.html.
[53] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (Mar. 16, 2023), https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence.
[54] Letter from U.S. Copyright Office re: Zarya of the Dawn (Feb. 21, 2023), https://copyright.gov/docs/zarya-of-the-dawn.pdf.
[55] U.S. Copyright Office, Artificial Intelligence and Copyright, Notice of Inquiry and Request for Comments, 88 Fed. Reg. 59,942 (Aug. 30, 2023), https://www.govinfo.gov/content/pkg/FR-2023-08-30/pdf/2023-18624.pdf.
[56] Thaler v. Perlmutter, No. 1:22-cv-1564, 2023 WL 5333236 (D.D.C. Aug. 18, 2023).
[57] Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417-VC, 2023 WL 8039640 (N.D. Cal. Nov. 20, 2023).
[58] Andersen v. Stability AI, No. 3:23-cv-00201-WHO, 2023 WL 7132064 (N.D. Cal. Oct. 30, 2023).
[59] Kyle Wiggers, Some Gen AI Vendors Say They’ll Defend Customers from IP Lawsuits. Others, Not So Much, TechCrunch+ (Oct. 6, 2023), https://techcrunch.com/2023/10/06/some-gen-ai-vendors-say-theyll-defend-customers-from-ip-lawsuits-others-not-so-much/.
[60] Doe 1 v. GitHub, Inc., No. 22-cv-06823-JST, 2023 WL 3449131, at *11 (N.D. Cal. May 11, 2023).
[61] Id.
[62] Andersen v. Stability AI, No. 3:23-cv-00201-WHO, 2023 WL 7132064, at *11 (N.D. Cal. Oct. 30, 2023).
[63] 4 Colo. Code Regs. § 904-3.
[64] California Privacy Protection Agency, Draft Automated Decisionmaking Technologies Regulations (“Draft ADMT Regulations”) (Nov. 27, 2023), https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf; California Privacy Protection Agency, New Rules Subcommittee Revised Draft Risk Assessment Regulations (“Draft Risk Assessment Regulations”) (Dec. 8, 2023), https://cppa.ca.gov/meetings/materials/20231208_item2_draft_redline.pdf.
[65] ADMT is defined as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking” and “includes profiling.”
[66] See Draft Risk Assessment Regulations, § 7152.
[67] ADMT is defined as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking” and includes profiling. See Draft ADMT Regulations, § 7001.
[68] For more information, please see Gibson Dunn’s Client Alert, Keeping Up with the EEOC: AI Focus Heats Up with Title VII Guidance (May 23, 2023), https://www.gibsondunn.com/keeping-up-with-the-eeoc-focus-heats-up-with-title-vii-guidance/.
[69] For more information, please see Gibson Dunn’s Client Alert, Keeping Up with the EEOC: 5 Takeaways from its Algorithm Rewriting Settlement (Mar. 23, 2023), https://www.gibsondunn.com/keeping-up-with-the-eeoc-5-takeaways-from-its-algorithm-rewriting-settlement/.
[70] EEOC, iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit (Sept. 11, 2023), https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit.
[71] Wyden, Booker and Clarke Introduce Bill to Regulate Use of Artificial Intelligence to Make Critical Decisions like Housing, Employment and Education (Sept. 21, 2023), https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-bill-to-regulate-use-of-artificial-intelligence-to-make-critical-decisions-like-housing-employment-and-education.
[72] EEOC, EEOC Releases New Resource on Artificial Intelligence and Title VII (May 18, 2023), https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii.
[73] NYC, Int. 1894-2020, Local Law 144, https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9.
[74] Colo. Div. of Ins., Notice of Adoption – New Regulation 10-1-1 Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (effective Nov. 14, 2023), https://doi.colorado.gov/announcements/notice-of-adoption-new-regulation-10-1-1-governance-and-risk-management-framework.
[75] UK Government, AI Regulation: A Pro-Innovation Approach, White Paper (Mar. 29, 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach.
In contrast, note the Artificial Intelligence (Regulation) Bill (“AI Bill”), introduced to the UK Parliament’s House of Lords as a private member’s bill on November 22, 2023, https://bills.parliament.uk/publications/53068/documents/4030. The main purpose of the AI Bill is the creation of an ‘AI Authority’, which would have the function of (inter alia) ensuring that relevant regulators take account of AI and align their approaches, undertaking a gap analysis of regulatory responsibilities in respect of AI, and coordinating a review of legislation to assess its suitability to address the challenges and opportunities presented by AI.
[76] ICO, Generative AI: Eight questions that developers and users need to ask (Apr. 3, 2023), https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/generative-ai-eight-questions-that-developers-and-users-need-to-ask/.
[77] FCA, Innovation, AI & the future of financial regulation (Apr. 17, 2023), https://www.fca.org.uk/news/speeches/innovation-ai-future-financial-regulation.
[78] CMA, Initial review of competition and consumer protection considerations in the development and use of AI foundation models (May 4, 2023), https://www.gov.uk/government/news/cma-launches-initial-review-of-artificial-intelligence-models.
[79] Ofcom, What generative AI means for the communications sector (June 8, 2023), https://www.ofcom.org.uk/news-centre/2023/what-generative-ai-means-for-communications-sector.
[80] UK Government, Summary of the Government’s ongoing programme of work to develop a code of practice on copyright and AI (June 29, 2023), https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai.
[81] UK Shelves Proposed AI Copyright Code in Blow to Creative Industries, Fin. Times (Feb. 4, 2024), https://www.ft.com/content/a10866ec-130d-40a3-b62a-978f1202129e.
[82] UK Parliament, Communications and Digital Committee, Call for Evidence: Large Language Models (July 7, 2023), https://committees.parliament.uk/call-for-evidence/3183.
[83] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023, Policy Paper (Nov. 1, 2023), https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
[84] UK Government, Press Release, Prime Minister launches new AI Safety Institute (Nov. 2, 2023), https://www.gov.uk/government/news/prime-minister-launches-new-ai-safety-institute.
[85] Remarks by Vice President Harris on the Future of Artificial Intelligence (Nov. 1, 2023), https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/.
[86] Parliament of Canada, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (last accessed Jan. 5, 2024), https://www.parl.ca/legisinfo/en/bill/44-1/c-27. See also Department of Justice Canada, Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (Nov. 27, 2023), https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c27_1.html.
[87] Canadian Human Rights Act (R.S.C., 1985, c. H-6), https://laws-lois.justice.gc.ca/eng/acts/h-6/page-1.html.
[88] Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document (Mar. 13, 2023), https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s4.
[89] News Release, Minister Champagne launches voluntary code of conduct relating to advanced generative AI systems, Government of Canada (Sept. 27, 2023), https://www.canada.ca/en/innovation-science-economic-development/news/2023/09/minister-champagne-launches-voluntary-code-of-conduct-relating-to-advanced-generative-ai-systems.html. See Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, Government of Canada (Sept. 2023), https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems.
[90] The voluntary code sets out measures that developers and managers of advanced generative AI systems commit to implementing, consistent with “six core principles” and their commensurate outcomes: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. See Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, Government of Canada (Sept. 2023), https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems.
[91] An Act to modernize legislative provisions as regards the protection of personal information, SQ 2021, c 25 (Sept. 23, 2021), https://www.canlii.org/en/qc/laws/astat/sq-2021-c-25/latest/sq-2021-c-25.html.
Gibson, Dunn & Crutcher’s lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or the following leaders and members of the firm’s Artificial Intelligence practice group:
United States:
Cassandra L. Gaedt-Sheckter – Co-Chair, Palo Alto (+1 650.849.5203, [email protected])
Vivek Mohan – Co-Chair, Palo Alto (+1 650.849.5345, [email protected])
Eric D. Vandevelde – Co-Chair, Los Angeles (+1 213.229.7186, [email protected])
Ryan T. Bergsieker – Denver (+1 303.298.5774, [email protected])
S. Ashlie Beringer – Palo Alto (+1 650.849.5327, [email protected])
Gustav W. Eyler – Washington, D.C. (+1 202.955.8610, [email protected])
Lauren R. Goldman – New York (+1 212.351.2375, [email protected])
Natalie J. Hausknecht – Denver (+1 303.298.5783, [email protected])
Jane C. Horvath – Washington, D.C. (+1 202.955.8505, [email protected])
Martie Kutscher Clark – Palo Alto (+1 650.849.5348, [email protected])
Ari Lanin – Los Angeles (+1 310.552.8581, [email protected])
Carrie M. LeRoy – Palo Alto (+1 650.849.5337, [email protected])
Rosemarie T. Ring – San Francisco (+1 415.393.8247, [email protected])
Ashley Rogers – Dallas (+1 214.698.3316, [email protected])
Alexander H. Southwell – New York (+1 212.351.3981, [email protected])
Sara K. Weed – Washington, D.C. (+1 202.955.8507, [email protected])
Debra Wong Yang – Los Angeles (+1 213.229.7472, [email protected])
Europe:
Robert Spano – Co-Chair, London/Paris (+44 20 7071 4000, [email protected])
Ahmed Baladi – Paris (+33 (0) 1 56 43 13 00, [email protected])
Nicholas Banasevic* – Managing Director, Brussels (+32 2 554 72 40, [email protected])
Patrick Doris – London (+44 20 7071 4276, [email protected])
Kai Gesing – Munich (+49 89 189 33-180, [email protected])
Joel Harrison – London (+44 20 7071 4289, [email protected])
Vera Lukic – Paris (+33 (0) 1 56 43 13 00, [email protected])
Lars Petersen – Frankfurt/Riyadh (+49 69 247 411 525, [email protected])
Asia:
Connell O’Neill – Hong Kong (+852 2214 3812, [email protected])
Jai S. Pathak – Singapore (+65 6507 3683, [email protected])
*Nicholas Banasevic, Managing Director in the firm’s Brussels office and an economist by background, is not admitted to practice law.
*Kate Googins, an associate in the Los Angeles office admitted to practice in Colorado, is practicing under supervision of members of the California Bar.
*Samantha Yi, an associate in the Washington, D.C. office admitted to practice in Maryland, is practicing under supervision of members of the District of Columbia Bar under D.C. App. R. 49.
*John Ryan, a recent law graduate in the Palo Alto office, is not admitted to practice law.
© 2024 Gibson, Dunn & Crutcher LLP. All rights reserved. For contact and other information, please visit us at www.gibsondunn.com.
Attorney Advertising: These materials were prepared for general informational purposes only based on information available at the time of publication and are not intended as, do not constitute, and should not be relied upon as, legal advice or a legal opinion on any specific facts or circumstances. Gibson Dunn (and its affiliates, attorneys, and employees) shall not have any liability in connection with any use of these materials. The sharing of these materials does not establish an attorney-client relationship with the recipient and should not be relied upon as an alternative to advice from qualified counsel. Please note that facts and circumstances may vary, and prior results do not guarantee a similar outcome.