
The increasing reliance on AI technology has led to a surge in complex legal questions surrounding accountability. As AI systems become more integrated into various industries, the risk of AI-related errors or “hallucinations” causing harm to individuals or organizations grows.


This raises a critical issue: who bears the responsibility when AI systems malfunction or produce erroneous results? The legal grey zones around AI liability create uncertainty for corporate counsel, directors, insurers, and policymakers.

Understanding the intricacies of AI accountability is crucial for navigating this uncharted territory. As AI continues to evolve, it is essential to address the legal implications and establish clear guidelines for liability.

Key Takeaways

  • The growing use of AI technology raises complex questions about legal accountability.
  • AI-related errors or “hallucinations” can cause significant harm to individuals and organizations.
  • The legal grey zones around AI liability create uncertainty for various stakeholders.
  • Understanding AI accountability is crucial for navigating the legal implications.
  • Clear guidelines for AI liability are essential for addressing the challenges posed by AI.

The Emerging Liability Crisis in AI

The increasing sophistication of AI has brought to the forefront the issue of liability in AI decision-making and its far-reaching consequences. As AI systems become more integrated into critical sectors such as healthcare, finance, and transportation, the potential for AI-related errors or “hallucinations” to cause significant harm grows.

Real-world Consequences of AI Hallucinations

AI hallucinations, in which an AI system generates or acts on information that has no basis in actual data, can have serious real-world consequences. For instance, in healthcare, an AI system might misdiagnose a condition or recommend an inappropriate treatment, potentially leading to patient harm.

Notable Incidents and Their Impacts

  • In one widely reported incident, an AI-powered autonomous vehicle was involved in a fatal accident, raising questions about whether liability rested with the vehicle’s manufacturer or with the AI developer.
  • In another case, an AI system used in financial services provided incorrect investment advice, resulting in significant financial losses for investors.

These incidents highlight the need for clear guidelines on liability in AI-related incidents.

The Legal Vacuum Around AI Decision-making

Current legal frameworks struggle to address the complexities introduced by AI decision-making processes. Traditional laws often rely on human intent or negligence, which may not be directly applicable to AI systems that operate based on complex algorithms and data processing.

Why Traditional Laws Fall Short

“The law is not equipped to handle the nuances of AI decision-making, creating a legal vacuum that leaves victims of AI-related errors without clear recourse.”

The limitations of traditional laws in addressing AI liability issues underscore the need for new legal theories and frameworks that can accommodate the unique characteristics of AI technology.

As AI continues to evolve, legal systems must adapt to address the emerging liability crisis, ensuring that those harmed by AI-related errors have appropriate avenues for redress.

Understanding AI Hallucinations: Technical and Legal Definitions

Understanding AI hallucinations is essential for addressing the legal and technical challenges they pose in various industries. AI hallucinations refer to instances where artificial intelligence systems produce outputs that are not based on actual data or facts.

What Constitutes an AI “Hallucination”

An AI hallucination occurs when an AI system generates information that is not grounded in reality. This can happen due to various reasons such as flawed training data, algorithmic biases, or system malfunctions.

Technical Mechanisms Behind False Outputs

The technical mechanisms behind AI hallucinations involve complex interactions between the AI’s algorithms and the data it is trained on. For instance, if an AI is trained on biased or incomplete data, it may produce outputs that are not accurate.
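To make this concrete, the short Python sketch below shows one way an ungrounded output might be caught before it causes harm: extracting what look like case citations from a model’s response and checking them against a trusted reference set. The citation pattern, the KNOWN_CASES set, and the function names are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical sketch: flag citations in a model's output that do not appear
# in a trusted reference set (one simple signal of a possible "hallucination").
import re

KNOWN_CASES = {
    "Mata v. Avianca, Inc.",
    "Doe v. GitHub, Inc.",
}

# Deliberately naive pattern for strings that look like case names.
CASE_PATTERN = re.compile(r"[A-Z][\w.]+(?: [A-Z][\w.]+)* v\. [A-Z][\w.]+(?:, Inc\.)?")

def ungrounded_citations(model_output: str) -> list[str]:
    """Return citation-like strings that are absent from the trusted set."""
    return [c for c in CASE_PATTERN.findall(model_output) if c not in KNOWN_CASES]

output = "As held in Mata v. Avianca, Inc. and Smith v. Acme, Inc., the duty applies."
print(ungrounded_citations(output))  # ['Smith v. Acme, Inc.'] -- candidate hallucination
```

A check like this does not prove an output is wrong; it simply routes suspicious content to a human reviewer, which is where the legal analysis of reasonable care tends to focus.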

When Technical Glitches Become Legal Liabilities

Technical glitches, including AI hallucinations, can become legal liabilities when they result in harm or damage to individuals or organizations. The legal implications depend on the context in which the AI is used.

The Threshold for Actionable Harm

For an AI hallucination to be considered actionable harm, it must result in significant consequences, such as financial loss, physical harm, or reputational damage. Determining this threshold is crucial for legal accountability.

The legal and technical communities must work together to establish clear guidelines and regulations regarding AI hallucinations. This includes developing more robust AI systems and legal frameworks that can address the challenges posed by AI hallucinations.

Traditional Liability Frameworks vs. AI Challenges

The rise of AI has exposed significant gaps in traditional liability frameworks, necessitating a reevaluation of current legal standards. As AI systems become increasingly complex, the legal community faces the daunting task of adapting existing laws to address the unique challenges posed by AI.

Product Liability Law’s Limitations

Product liability law, which has traditionally governed disputes related to defective products, is struggling to accommodate AI’s distinctive characteristics. A key issue lies in distinguishing between different types of defects in AI systems.

Design Defect vs. Manufacturing Defect in AI

In AI systems, a design defect refers to a flaw inherent in the system’s design, whereas a manufacturing defect occurs during the production process. For instance:

  • A design defect might involve an inherently biased AI algorithm.
  • A manufacturing defect could result from a faulty hardware component.

The Problem with Applying Negligence Standards

Negligence standards, another cornerstone of traditional liability law, are also being tested by AI’s complexities. The challenge lies in defining “reasonable care” in the context of machine learning algorithms that evolve over time.

Reasonable Care in the Context of Machine Learning

Reasonable care in AI development involves ensuring that systems are designed and deployed with appropriate safeguards (a minimal monitoring sketch follows this list). This includes:

  1. Implementing robust testing protocols.
  2. Continuously monitoring AI performance.
  3. Updating algorithms to address emerging issues.
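The sketch below illustrates the monitoring point in a deliberately simple form: spot-check a sample of human-reviewed outputs and escalate when the observed error rate crosses a threshold. The data structure, the 2% threshold, and the escalation rule are assumptions chosen for illustration, not a recognized legal standard of care.

```python
# Minimal monitoring sketch: escalate when spot-checked error rates drift too high.
from dataclasses import dataclass

@dataclass
class ReviewedOutput:
    output_id: str
    flagged_incorrect: bool  # set by a human reviewer during periodic spot checks

def error_rate(samples: list[ReviewedOutput]) -> float:
    return sum(s.flagged_incorrect for s in samples) / len(samples) if samples else 0.0

def needs_escalation(samples: list[ReviewedOutput], threshold: float = 0.02) -> bool:
    """Trigger an engineering and compliance review when the threshold is exceeded."""
    return error_rate(samples) > threshold

review_queue = [ReviewedOutput("a1", False), ReviewedOutput("a2", True), ReviewedOutput("a3", False)]
print(round(error_rate(review_queue), 2), needs_escalation(review_queue))  # 0.33 True
```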

As AI continues to advance, it is clear that traditional liability frameworks require significant adjustments to effectively address the challenges posed by AI. By understanding the limitations of current laws and adapting them to the unique characteristics of AI, we can work towards creating a more equitable and just legal system for all stakeholders involved.

AI and the Law of Liability: Who Pays When Hallucinations Harm?

The issue of liability for harm caused by AI hallucinations is multifaceted, involving a range of potential defendants from developers to end-users. As AI becomes more pervasive, understanding who bears the responsibility when AI systems fail is crucial.

The Chain of Potential Liable Parties

In the complex ecosystem of AI development and deployment, multiple parties could potentially be held liable for harm caused by AI hallucinations. These include developers, manufacturers, deployers, and end-users.

From Developers to End Users

Developers might be held liable for flaws in the AI’s design or for failing to implement adequate safeguards. Manufacturers could be responsible if the hardware fails or is inadequately designed. Deployers, such as healthcare providers using AI diagnostic tools, may be liable for how they implement and oversee the AI system. End-users, including patients or operators of AI systems, might also bear some responsibility if their actions contribute to the harm caused.

Determining Proximate Cause in AI Systems

One of the significant challenges in assigning liability is determining the proximate cause of harm in complex AI systems. The intricate interactions between different components of AI and the often opaque decision-making processes of AI algorithms complicate this task.

Evidentiary Challenges in Complex Systems

The complexity of AI systems poses substantial evidentiary challenges. Understanding how an AI system arrived at a particular decision can be difficult due to the “black box” nature of many AI algorithms. This makes it challenging to establish causation and, consequently, liability.

In conclusion, the liability for AI-related harm involves a complex interplay of various parties and technical challenges. Addressing these issues requires a nuanced understanding of both the legal and technical aspects of AI systems.

Corporate Liability: Responsibilities of AI Deployers

The deployment of AI systems by corporations raises significant legal and ethical questions regarding liability when these systems cause harm. As AI becomes more integral to business operations, understanding these liabilities is crucial for corporate counsel and directors.

Due Diligence Requirements

Corporations deploying AI must conduct thorough due diligence to mitigate potential liabilities. This includes pre-deployment testing and validation to ensure that AI systems function as intended.

Pre-deployment Testing and Validation

Pre-deployment testing involves rigorous evaluation of AI systems under various scenarios to identify potential failures. Validation ensures that the AI system meets its intended purpose and performs reliably in real-world conditions.
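A minimal sketch of such a validation gate appears below. The test prompts, the grading rule, and the 95% pass threshold are hypothetical; real pre-deployment validation would rely on much richer, domain-specific checks.

```python
# Illustrative pre-deployment gate: run a fixed test suite and block release
# if the pass rate falls below a threshold. All names here are assumptions.
from typing import Callable

TEST_CASES = [
    {"prompt": "Summarize the attached discharge note.", "must_not_contain": "diagnosis not in note"},
    {"prompt": "List the cases cited in this brief.", "must_not_contain": "uncited case"},
]

def passes(case: dict, model_response: str) -> bool:
    # Stand-in grading rule; real checks would be domain-specific.
    return case["must_not_contain"] not in model_response.lower()

def validation_gate(model: Callable[[str], str], pass_threshold: float = 0.95) -> bool:
    results = [passes(c, model(c["prompt"])) for c in TEST_CASES]
    return sum(results) / len(results) >= pass_threshold  # False -> block deployment

stub_model = lambda prompt: "Safe, grounded answer."
print(validation_gate(stub_model))  # True -> proceed to the next review stage
```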

Risk Assessment Protocols for AI Implementation

Effective risk assessment protocols are essential for identifying and mitigating potential risks associated with AI deployment. This includes assessing the potential for AI systems to cause harm and implementing measures to minimize these risks.

Documenting Decision Chains for Liability Protection

Documenting the decision-making process behind AI deployment is critical for liability protection. By maintaining detailed records, corporations can demonstrate their due diligence and compliance with regulatory requirements.
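One low-cost way to preserve that record is an append-only log capturing, for every AI-assisted decision, what went in, which model version produced the output, and who reviewed it. The sketch below is illustrative; the field names and file format are assumptions, not a regulatory requirement.

```python
# Illustrative append-only audit log for AI-assisted decisions.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, prompt: str, model_version: str,
                    output: str, reviewed_by: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # avoid storing raw inputs
        "output": output,
        "human_reviewer": reviewed_by,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line

log_ai_decision("ai_decisions.jsonl", "Assess claim #1042", "triage-model-2.3",
                "Refer to human adjuster", reviewed_by="j.smith")
```

Records like these speak directly to the evidentiary challenges discussed earlier: they make it possible to reconstruct what the system produced and who signed off on it.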

In conclusion, corporations deploying AI systems must prioritize due diligence and risk assessment to mitigate potential liabilities. By understanding and implementing these measures, businesses can navigate the complex landscape of AI regulation and liability frameworks effectively.

Developer Accountability: From Code to Courtroom

The rapid evolution of AI has raised critical questions about the responsibilities of developers in ensuring their creations do not cause harm. As AI technology becomes more pervasive, the legal accountability of those who develop and deploy these systems is coming under scrutiny.

Duty of Care in AI Development

Developers of AI technology have a duty of care to ensure their products are designed and implemented in a way that minimizes the risk of harm to users and third parties. This duty is evolving as AI capabilities become more sophisticated and integrated into various aspects of life.

Evolving Standards of Professional Responsibility

The standards of professional responsibility for AI developers are continually evolving. As technology advances, so too do the expectations for how developers should design, test, and deploy AI systems. This includes staying abreast of the latest research and best practices in AI development to mitigate potential risks.

Documentation and Disclosure Requirements

Proper documentation and disclosure are critical components of AI development. Developers must maintain detailed records of their development processes, including design decisions, testing methodologies, and any known limitations or risks associated with their AI systems.

Transparency as a Legal Shield

Transparency in AI development can serve as a legal shield for developers. By being open about how their AI systems are designed and function, developers can demonstrate their commitment to responsible development practices. This transparency can be crucial in defending against claims of negligence or liability.

In conclusion, developer accountability is a critical aspect of the AI industry’s legal landscape. By understanding and adhering to their duty of care, staying updated on evolving standards of professional responsibility, and maintaining thorough documentation and disclosure practices, developers can mitigate the risks associated with AI technology and legal accountability.

Expert Legal Perspectives on AI Liability

Professor Ryan Abbott: The “Reasonable Robot” Standard

Professional Background: Ryan Abbott, MD, JD, PhD, is a Professor of Law and Health Sciences at the University of Surrey and an Adjunct Professor of Medicine at UCLA.  He is a licensed physician and patent attorney who has been recognized by Managing Intellectual Property as one of the 50 most influential people in IP.

Key Legal Opinion: In his book, The Reasonable Robot: Artificial Intelligence and the Law, Dr. Abbott advocates for “legal neutrality.” He argues that the law should not discriminate between AI and human actions, applying the same standards of care to both.  His framework suggests AI liability should be judged by whether the system acted reasonably under the circumstances—similar to the “reasonable person” standard in tort law—rather than creating a separate, artificial legal standard.

Professor Woodrow Hartzog: A Focus on Privacy and System Design

Professional Background: Woodrow Hartzog is a Professor of Law at Boston University School of Law with a joint appointment in Computer Science. He has testified before Congress on data protection issues and is a leading voice on privacy and technology design. [9]

Key Legal Opinion: Professor Hartzog argues that privacy-as-control models, which rely on individual user consent, are ineffective in the age of complex AI.  He proposes a model where robust privacy protections and ethical considerations are built directly into an AI system’s design. This proactive approach holds companies accountable for preventing harm rather than simply reacting to it after the fact.

Current AI Liability Case Law and Precedents

Case Study 1: Copyright and Fair Use in AI Training

Case: The New York Times Co. v. Microsoft Corp. and OpenAI Inc. (S.D.N.Y., filed 2023)

Key Issue: Whether using copyrighted news articles to train large language models constitutes copyright infringement or qualifies as fair use.

Facts: The New York Times sued Microsoft and OpenAI, alleging that millions of its articles were used without permission to train the models behind ChatGPT and other generative AI tools.  The suit claims that these tools can reproduce NYT content verbatim, competing directly with the newspaper.

Legal Significance: This is a landmark case being closely watched. Its outcome could establish a critical precedent for whether AI training is considered fair use and may determine the viability of hundreds of similar copyright claims filed by authors, artists, and other media organizations against AI developers.

Case Study 2: AI Hallucinations and Lawyer Sanctions

Case: Mata v. Avianca, Inc. (S.D.N.Y. 2023)

Key Issue: Professional responsibility and competence when using generative AI for legal research.

Facts: Two attorneys submitted a legal brief that included six entirely fictitious case citations, complete with fabricated quotes and legal analysis, all generated by ChatGPT. The lawyers admitted they had used the AI tool for research and failed to verify the accuracy of its output.

Legal Significance: The judge sanctioned the attorneys for acting in bad faith and violating Federal Rule of Civil Procedure 11, which requires lawyers to conduct a “reasonable inquiry” into the law.  This case sent a clear message to the legal profession: using AI without verifying its output is not just sloppy, it’s a sanctionable offense that amounts to professional incompetence.

Case Study 3: Open-Source Licenses and Code Generation

Case: Doe v. GitHub, Inc. (N.D. Cal., filed 2022)

Key Issue: Whether an AI code generation tool, trained on public code repositories, violates open-source licenses by removing attribution and other copyright management information.

Facts: A group of programmers sued GitHub, Microsoft, and OpenAI, alleging that GitHub Copilot was trained on their open-source code in a way that stripped it of the required licenses and attributions, a violation of the Digital Millennium Copyright Act (DMCA).

Legal Significance: While the district court has dismissed some claims, the case is proceeding on appeal to the Ninth Circuit.  It raises fundamental questions about how AI can permissibly learn from the vast amount of data available on the internet and what obligations developers have to respect the licenses attached to that data.

Healthcare AI Liability: Special Considerations

The use of AI in healthcare introduces unique liability challenges, particularly concerning the established standards of medical malpractice.

Medical Malpractice in the AI Era

Expert Consensus: Healthcare law specialists emphasize that AI creates new litigation risks tied to accountability, data bias, and the traditional liability framework of negligence vs. strict liability.

Clinician Responsibility: The prevailing view from regulatory bodies like the Federation of State Medical Boards and professional organizations like the American Medical Association is that the ultimate responsibility remains with the clinician. Medical professionals are expected to use AI as a tool, but they must verify its outputs and use their own professional judgment, just as they would with any other diagnostic aid.  They cannot delegate their duty of care to an algorithm.

Key Risk Scenarios: Research has identified critical liability scenarios, including:

  • Healthcare professionals failing to understand an AI tool’s limitations.
  • Patients misinterpreting AI recommendations despite disclosures.
  • AI systems producing flawed outputs due to biased training data or algorithmic errors.

Organizational Liability and the Duty of Reasonable Training

As cases of “AI hallucinations” mount, courts are establishing that organizations have a professional duty to ensure their staff are competently trained to use these powerful tools.

The Legal Crisis of AI-Generated Misinformation

An AI “hallucination” occurs when an AI model generates false, fabricated, or nonsensical information but presents it as factual. This risk is particularly high in professional settings. Research from Stanford has shown that even legal-specific AI tools can hallucinate plausible-sounding but entirely fictitious case law, posing a significant trap for unwary professionals.

Establishing a Legal Standard for AI Competence

Professional Duty: The American Bar Association’s Model Rule 1.1 requires lawyers to provide competent representation, which includes keeping “abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”  Legal experts argue that using generative AI without checking its citations is a clear violation of this duty.

Vicarious Liability: Organizations can be held vicariously liable when their employees misuse AI within the scope of their employment. If an untrained employee’s use of AI harms a client, the company itself faces significant legal and financial exposure.

Court-Mandated Training: In some cases, judges have gone beyond fines and have explicitly ordered sanctioned attorneys to complete continuing legal education courses on the proper use of generative AI, establishing a clear precedent that AI literacy is now a required professional competency.

Practical Compliance Strategies for Organizations

To mitigate these risks, organizations must adopt a proactive approach to AI governance and training.

Implement a Clear AI Use Policy: Establish written guidelines that define acceptable uses of AI, outline confidentiality requirements, and mandate verification protocols for all AI-generated output (a policy-as-code sketch follows this list of strategies).

Conduct Comprehensive Training: Ensure all staff understand the risks of AI, including hallucinations and data bias. Training should cover practical skills like prompt engineering (crafting effective queries) and verification methodologies.

Perform Due Diligence on AI Tools: Before deploying an AI system, conduct thorough testing for bias and safety. Vet vendors carefully, negotiating contractual protections and demanding transparency about the tool’s data sources and limitations.

Document Everything: Maintain meticulous records of AI system design choices, training protocols, human oversight procedures, and any incidents that occur. This documentation is critical for building a legal defense in the event of a lawsuit.
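As a concrete illustration of the first strategy above, an AI use policy can also be mirrored in a machine-checkable form so obvious violations are caught before output leaves the organization. The sketch below is hypothetical: the tool names, prohibited terms, and verification rule are assumptions, not a template endorsed by any regulator.

```python
# Hypothetical "policy as code": a written AI-use policy expressed as data,
# with a simple checker that flags violations before output is released.
AI_USE_POLICY = {
    "approved_tools": {"internal-llm-v2"},
    "prohibited_inputs": ["client name", "ssn", "medical record number"],
    "human_verification_required": True,
}

def check_request(tool: str, prompt: str, verified_by_human: bool) -> list[str]:
    violations = []
    if tool not in AI_USE_POLICY["approved_tools"]:
        violations.append(f"unapproved tool: {tool}")
    for term in AI_USE_POLICY["prohibited_inputs"]:
        if term in prompt.lower():
            violations.append(f"confidential data in prompt: {term}")
    if AI_USE_POLICY["human_verification_required"] and not verified_by_human:
        violations.append("output not verified by a human reviewer")
    return violations

print(check_request("public-chatbot", "Draft a letter with the client name.", verified_by_human=False))
```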

The Future Regulatory Landscape

While Congress has yet to pass comprehensive federal AI legislation, a patchwork of regulations is emerging at the state and international levels.

State-Level Action: States like California, Colorado, and Georgia have already passed laws related to AI, focusing on issues like transparency, bias, and disclosure requirements in specific sectors.

The EU AI Act: The European Union’s AI Act, which takes a risk-based approach to regulation, is setting a global benchmark.  U.S. companies operating in Europe must comply, effectively creating a de facto international standard for AI governance and risk management.

Insurance Market Response: The insurance industry is adapting by offering new AI-specific policies. However, coverage often comes with significant limitations and may require proof of robust human oversight and staff training programs to be valid.

Ethical Considerations in AI Liability

As AI continues to permeate various aspects of our lives, the ethical considerations surrounding its liability have become a pressing concern. The deployment of AI systems raises significant ethical questions, particularly in situations where AI decisions result in harm or damage.

Balancing Innovation and Accountability

One of the primary ethical challenges is balancing the need to encourage innovation with the necessity of holding parties accountable for AI-related harms. Regulators and policymakers must navigate this delicate balance to ensure that the development and deployment of AI are not stifled by overly stringent liability rules.

The Social Cost of Excessive Liability

Excessive liability can have unintended consequences, such as discouraging investment in AI research and development. “The fear of liability can lead to a ‘liability chill,’ where companies are reluctant to innovate due to the risk of potential lawsuits,” notes a legal expert.

Distributive Justice in AI Harm Compensation

The principle of distributive justice is also critical in the context of AI liability. Ensuring that victims of AI-related harms receive fair compensation is essential for maintaining public trust in AI systems.

Ensuring Access to Remedies

To achieve distributive justice, it is crucial to ensure that individuals harmed by AI have access to effective remedies. This may involve the development of new legal frameworks or the adaptation of existing ones to address the unique challenges posed by AI.

In conclusion, addressing the ethical considerations in AI liability requires a nuanced approach that balances the need for innovation with the necessity of accountability. By understanding these ethical dimensions, policymakers and corporate leaders can work towards creating a more equitable and just AI ecosystem.

Regulatory Frameworks and Compliance Strategies

The rapidly changing landscape of AI necessitates a closer look at regulatory frameworks and compliance strategies. As AI becomes increasingly integrated into various sectors, understanding and adhering to regulatory requirements is crucial for minimizing liability.

U.S. Regulatory Landscape for AI Systems

The U.S. regulatory environment for AI is multifaceted, involving various federal agencies. Key among these are:

  • The Federal Trade Commission (FTC), which focuses on unfair or deceptive acts or practices related to AI.
  • The Department of Commerce, which has been exploring the potential of AI through initiatives like the National Institute of Standards and Technology (NIST).

Agency Jurisdiction and Enforcement Priorities

Different agencies have different jurisdictions and priorities. For instance, the FTC has taken a keen interest in AI bias and discrimination, issuing guidelines on the responsible use of AI.

“The use of AI can be both innovative and risky. It’s our job to ensure that it is used in a way that is fair and transparent.” – FTC Commissioner

International Approaches to AI Regulation

Globally, countries are adopting various approaches to AI regulation. A significant development is the EU AI Act.

EU AI Act and Its Global Influence

The EU AI Act is a comprehensive regulatory framework that categorizes AI applications based on risk. It has implications not just for EU businesses but globally, as companies operating in the EU must comply.

Compliance Programs for Mitigating Liability

To mitigate liability, businesses must implement robust compliance programs. This includes:

  1. Conducting regular risk assessments.
  2. Implementing transparency and explainability measures in AI decision-making (see the sketch after this list).
  3. Ensuring ongoing monitoring and updating of AI systems.
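As a toy illustration of the second item, the sketch below perturbs one input at a time to record which features drove a hypothetical scoring model’s output. The model and its weights are invented for the example; production systems would use established attribution tooling.

```python
# Toy explainability sketch: one-at-a-time perturbation of a hypothetical score.
def credit_score(features: dict[str, float]) -> float:
    # Invented toy model, not a real scoring formula.
    return 0.6 * features["income"] - 0.3 * features["debt"] + 0.1 * features["history_years"]

def attributions(features: dict[str, float]) -> dict[str, float]:
    """Contribution of each feature, measured by zeroing it out one at a time."""
    base = credit_score(features)
    return {name: base - credit_score({**features, name: 0.0}) for name in features}

applicant = {"income": 50.0, "debt": 20.0, "history_years": 7.0}
print(attributions(applicant))
# {'income': 30.0, 'debt': -6.0, 'history_years': ~0.7} -- store alongside the decision
```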

By understanding and implementing these regulatory frameworks and compliance strategies, businesses can reduce the risks associated with AI hallucinations and other liabilities.

The Future of AI Liability: Emerging Legal Theories

The future of AI liability is being shaped by emerging legal theories that challenge traditional notions of responsibility. As AI systems become more autonomous and integrated into various aspects of society, the legal community is grappling with how to assign liability when these systems cause harm.

AI Personhood and Direct Liability

The concept of AI personhood is gaining traction as a potential legal framework for addressing AI liability. This theory posits that AI systems could be considered legal entities with their own rights and responsibilities.

The Debate Over AI Legal Status

The debate over AI legal status is complex, with proponents arguing that it could provide a more straightforward way to assign liability. Critics, however, raise concerns about the implications of granting personhood to non-human entities.

Key arguments for AI personhood include:

  • Simplification of liability assignment
  • Potential for more effective deterrence
  • Recognition of AI’s growing autonomy

No-Fault Compensation Systems for AI Harms

Another emerging legal theory is the implementation of no-fault compensation systems for AI-related harms. This approach focuses on providing compensation to victims without the need to establish fault.

Industry-Funded Victim Compensation Funds

Industry-funded victim compensation funds are being explored as a means to ensure that those harmed by AI systems receive adequate compensation. This model is already used in various industries, such as workers’ compensation.

Benefits of no-fault compensation systems include:

  1. Swift compensation for victims
  2. Reduced litigation costs
  3. Increased focus on prevention through industry-wide standards

Algorithmic Auditing as a Legal Requirement

Algorithmic auditing is emerging as a potential legal requirement for AI systems. This involves regular, systematic examination of AI algorithms to ensure they are functioning as intended and not causing unintended harm.
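One common audit check, sketched below, compares outcome rates across groups and flags any group whose rate falls below four-fifths of the best-performing group. The threshold, group labels, and data format are illustrative assumptions; a flag signals the need for deeper review, not a legal conclusion.

```python
# Illustrative disparate-impact check for an algorithmic audit.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += d["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions: list[dict], ratio_floor: float = 0.8) -> list[str]:
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < ratio_floor]

audit_sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
print(disparate_impact_flags(audit_sample))  # ['B'] -- flag for deeper review
```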

By implementing these emerging legal theories, the legal landscape surrounding AI liability can become more robust and better equipped to handle the challenges posed by AI systems.

Conclusion: Navigating the AI Liability Landscape

The AI liability landscape is complex and evolving rapidly. While traditional legal principles offer some guidance, the unique nature of AI demands new strategies for risk management, compliance, and governance. The consensus among legal experts is that accountability will be shaped by a combination of evolving case law, state-level regulation, and industry best practices.

Organizations that proactively implement robust training, oversight, and documentation protocols will be best positioned to innovate responsibly while protecting themselves from significant legal and reputational harm.

FAQ: AI Liability Law

Q1: What is AI liability, and why is it a growing concern?

AI liability refers to the legal responsibility for harm or damage caused by artificial intelligence systems. It is a growing concern due to the increasing use of AI in various industries, leading to a higher risk of AI-related errors and accidents.

Q2: How do traditional liability frameworks apply to AI challenges?

Traditional liability frameworks, such as product liability law and negligence standards, are being applied to AI challenges, but they have limitations. For instance, it can be difficult to determine whether an AI system’s error was due to a design defect or a manufacturing defect.

Q3: What is an AI “hallucination,” and how does it impact liability?

An AI “hallucination” refers to a situation where an AI system produces false or misleading information. This can impact liability as it may lead to harm or damage to individuals or organizations relying on the AI system’s outputs.

Q4: Who is liable when an AI system causes harm?

The chain of potentially liable parties in AI-related harm includes developers, deployers, and end-users. Determining proximate causes in complex AI systems can be challenging, and evidentiary challenges may arise.

Q5: How are insurance markets responding to AI liability?

Insurance markets are developing AI-specific insurance products to address the growing concern of AI liability. However, coverage gaps and exclusions may exist, and corporate leaders must consider risk transfer strategies.

Q6: What are the regulatory frameworks for AI liability?

Regulatory frameworks for AI liability are evolving, with the U.S. regulatory landscape and international approaches, such as the EU AI Act, playing a significant role. Compliance programs can help mitigate liability in AI.

Q7: How can organizations mitigate AI liability risks?

Organizations can mitigate AI liability risks by implementing due diligence requirements, such as pre-deployment testing and validation, and documenting decision chains for liability protection. They should also consider risk assessment protocols and compliance programs.

Q8: What are the emerging legal theories on AI liability?

Emerging legal theories on AI liability include AI personhood and direct liability, no-fault compensation systems for AI harms, and algorithmic auditing as a legal requirement. These theories may shape the future of AI liability.

Q9: How does AI liability impact healthcare?

AI liability in healthcare is a significant concern, particularly in medical malpractice cases. Physician oversight vs. AI autonomy, FDA regulations, and patient harm and causation challenges are some of the key issues.

Q10: What is the role of documentation and disclosure in AI development?

Documentation and disclosure are crucial in AI development, as they can provide transparency and help establish a standard of care. This can also serve as a legal shield for developers.

Legal Disclaimer

This article provides general information for educational purposes only and does not constitute legal advice. The law surrounding artificial intelligence is rapidly changing. You should consult with a qualified legal professional for advice regarding your specific situation.

Affiliate Disclosure

We believe in transparency. This article may contain links to products or services from our partners. If you make a purchase through these links, we may earn a commission at no additional cost to you. We only recommend resources that we believe will provide value to our readers.

Citations and References

[1] Thomson Reuters Institute. (2024). 2024 Report on the State of the Legal Market.

[2] Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

[3] Vladeck, S. I. (2024). The Shadow Docket: How the Supreme Court Uses Stealth Rulings to Amass Power and Undermine the Republic. Basic Books.

[4] National Conference of State Legislatures. (2024). 2023 Artificial Intelligence Legislation.

[5] The National Law Review. (2023). What to Expect in 2024: AI Legal Tech and Regulation.

[6] University of Surrey. Faculty Profile: Professor Ryan Abbott.

[7] Managing Intellectual Property. (2019). The 50 most influential people in IP 2019.

[8] Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.

[9] Boston University School of Law. Faculty Profile: Woodrow Hartzog.

[10] Hartzog, W., & Solove, D. (2022). The Pervasive Problem of Privacy Notices. Boston University Law Review.

[11] The New York Times Company v. Microsoft Corporation, OpenAI, Inc., et al., Case 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023).

[12] Copyright Alliance. (2024). An Update on the Copyright-Related AI Lawsuits.

[13] Mata v. Avianca, Inc., No. 22-CV-1461 (PKC) (S.D.N.Y. June 22, 2023).

[14] Federal Rules of Civil Procedure, Rule 11. Signing Pleadings, Motions, and Other Papers; Representations to the Court; Sanctions.

[15] J. Doe 1, et al. v. GitHub, Inc., et al., Case No. 22-cv-06823-JST (N.D. Cal. Nov. 3, 2022).

[16] Brittain, B. (2024). Microsoft, GitHub, OpenAI win partial dismissal of AI copyright lawsuit. Reuters.

[17] Holland & Knight. (2023). Top 10 Healthcare Law and Policy Issues for 2024.

[18] Federation of State Medical Boards. (2024). Assessing the Use of Artificial Intelligence in Healthcare.

[19] American Medical Association. (2023). AMA reinforces need for physician oversight of AI in medicine.

[20] Terranova, N., et al. (2024). AI and professional liability assessment in healthcare. Frontiers in Medicine.

[21] Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2023). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?.

[22] American Bar Association. Model Rules of Professional Conduct, Rule 1.1, Comment 8.

[23] Restatement (Third) of Agency § 2.04 (Am. Law Inst. 2006).

[24] PYMNTS. (2024). Potential Shifts in AI Accountability: Legal Experts Weigh in on Future Liability Concerns.

[25] Jones, A. (2024, February 28). Lawyer who used AI for research, cited fake cases, must now take classes on AI. ABA Journal.

[26] European Parliament. (2024). Artificial Intelligence Act.