The Two Questions Haunting Every Lawyer in America: Will AI Take My Job — and Who's Responsible When It Fails?
At legal conferences across the country in early 2026, two questions dominate every panel, hallway conversation, and cocktail hour: Will artificial intelligence eliminate my job? And when AI produces flawed legal work, who is to blame?
These are not abstract hypotheticals. In February, Baker McKenzie — one of the ten largest law firms in the world — laid off more than 700 business services employees in what it described as an AI-driven restructuring [1]. Across the profession, over 600 documented cases of AI "hallucinations" in court filings have now been recorded, implicating 128 attorneys at firms of every size [2]. The legal industry, long considered immune to technological disruption, is confronting a reckoning that is at once economic, ethical, and existential.
The Baker McKenzie Shock
When Baker McKenzie announced in February 2026 that it would cut roughly a tenth of its global business services workforce, the firm cited its "increased use of AI" as a driving factor. The layoffs swept across nearly every non-attorney function — IT, knowledge management, administrative support, marketing, secretarial, DEI, and design — affecting offices worldwide, including the firm's offshore centers [1][3].
The firm's official statement was carefully worded: it had undertaken "a careful review of our business professionals functions" aimed at "rethinking the ways in which we work, including through our use of AI, introducing efficiencies, and investing in those roles that best serve our clients' needs" [3].
But industry observers quickly questioned the narrative. Above the Law noted that Baker McKenzie may have given "Biglaw permission to blame AI for mass layoffs," suggesting that the AI rationale provided convenient cover for cost-cutting driven by broader financial pressures [4]. The New York Law Journal predicted that more firms would follow Baker McKenzie's lead, warning that the layoffs were "just the beginning" [5].
The episode crystallized a fear spreading through the profession: that AI is not merely a tool for augmenting legal work, but a justification for eliminating the people who do it.
The Jobs Question: Disruption Without Displacement?
The data tells a more nuanced story than the headlines suggest. According to the Bureau of Labor Statistics, employment in professional and technical services — the category encompassing legal work — has grown steadily from approximately 1.49 million jobs in September 2019 to 1.75 million in February 2026, a gain of roughly 17% over six and a half years [6].
Goldman Sachs initially estimated that 44% of legal tasks could be automated by AI — one of the highest exposure rates of any profession [7]. But a 2025 update significantly revised that figure downward, finding that about 17% of U.S. legal jobs face genuine automation risk, affecting roughly 228,000 lawyers [7]. The distinction is critical: high task exposure does not automatically translate to job elimination when those tasks constitute only a fraction of an attorney's work.
A National Law Review survey of 85 legal professionals found that 58.3% rejected the notion that AI will replace entry-level lawyers within five years, while only 20.2% thought it likely [8]. Harvard Law School's Center on the Legal Profession reported an even more striking finding: none of the AmLaw 100 firms it interviewed anticipated reducing the headcount of practicing attorneys, even as some reported 100x productivity gains on specific tasks [9].
Yet the picture for non-attorney legal workers is considerably bleaker. Paralegals and legal assistants face an estimated 80% to 94% risk of automation, according to widely cited Oxford University research [9]. The Thomson Reuters 2025 Future of Professionals Report found that AI could free up approximately 240 hours per year per legal professional — time that, in the past, might have justified hiring additional support staff [10].
The emerging consensus is that AI will transform rather than eliminate attorney positions while potentially devastating the support roles that make up a significant share of legal employment. As one conference panelist put it: "The lawyer who uses AI won't replace the lawyer who doesn't. But the law firm that uses AI will have far fewer non-lawyers."
The $5 Billion Rush to Automate
The legal technology market is experiencing explosive growth. Law firms increased technology spending by 9.7% in 2025 — what Thomson Reuters called "the fastest real growth likely ever experienced in the legal industry" [11]. Knowledge management spending grew even faster, at 10.5%.
The AI-specific legal technology market is projected to grow from $4.59 billion in 2025 to $5.59 billion in 2026, a 22.3% year-over-year increase [12]. Broader estimates peg the overall legal AI market at $12.49 billion by 2030 [12].
Adoption is accelerating rapidly. Thomson Reuters data shows that 26% of legal organizations are now actively using generative AI, nearly doubling from 14% in 2024 [13]. More than 95% of legal professionals surveyed expect AI to become central to their workflow within five years [13]. And the American Bar Association's 2024 Legal Technology Survey found that 30% of law firms reported using AI technology, up from just 11% in 2023 [14].
The spending surge comes against a backdrop of record law firm profitability. Billable hours grew 2.5% in 2025, hitting 4.4% growth in some months — yet firms are investing in technology that could reduce the need for those hours [11]. The paradox has not gone unnoticed: as Above the Law observed, "the legal market is doing great, which is how you know you're all about to get laid off" [15].
600 Hallucinations and Counting
If the jobs question represents AI's long-term challenge to the legal profession, the ethics crisis is immediate and acute. Since the beginning of 2025, 518 documented cases in the United States alone have involved generative AI producing hallucinated content — fabricated case citations, invented statutes, nonexistent legal precedents — that was submitted to courts [2].
The total database now exceeds 600 cases worldwide, implicating attorneys from solo practices to top-tier firms [2]. The pace shows no sign of slowing.
The consequences for attorneys caught submitting AI-hallucinated work have been severe. In one high-profile case, a federal judge ordered two lawyers representing MyPillow CEO Mike Lindell to pay $3,000 each after AI-generated filings contained fabricated citations [16]. In Pennsylvania, judges flagged suspected AI hallucinations in Commonwealth Court cases, triggering investigations [17]. In Canada, a lawyer faced contempt proceedings for relying on AI-generated case law that did not exist [2].
The ethical framework governing AI use in law is rapidly evolving. The American Bar Association issued guidance in 2024 establishing that lawyers must maintain "a reasonable understanding of AI's capabilities and limitations" and verify all AI-generated output [18]. States have followed with increasingly specific rules:
- Florida now mandates attorneys disclose AI use when it affects client billing [19].
- Texas requires human oversight of all AI-generated legal work to prevent fabricated citations [19].
- California demands that attorneys understand large language models — including hallucination risks — before using them [19].
- New York has focused on the confidentiality implications of using AI to record and transcribe client communications [19].
A growing number of jurisdictions are now debating whether lawyers must affirmatively disclose to clients whenever AI is used to draft substantive motions — a requirement analogous to disclosing the use of outside contract lawyers [19].
The Liability Frontier
The question that may define the legal profession's relationship with AI in the coming years is deceptively simple: when AI causes harm in a legal context, who pays?
As the National Law Review framed it, liability for AI-induced harms will likely turn on four factors: "who marketed the system, who relied on it, who failed to supervise it, and who made unsupported claims about its reliability" [8]. That formulation implicates not just the attorneys who use AI tools, but the technology vendors who build them, the firms that deploy them, and the clients who may demand their use.
The current ethical consensus places the burden squarely on the attorney. "The duty to use AI responsibly attaches to the attorney personally — not the tool, not the vendor," as one ethics opinion stated [2]. But that principle is under strain. When a well-known legal research database produces fabricated citations — as occurred in at least one 2025 sanctions case — the question of vendor liability becomes unavoidable [16].
Corporate general counsel are taking notice. A 2026 analysis identified three critical AI risk shifts: expanding regulatory exposure as states adopt AI-specific rules, growing litigation risk from AI-generated errors, and increasing pressure to demonstrate that AI governance frameworks are not merely performative [20].
Law Schools Race to Catch Up
The pipeline that feeds the legal profession is also being reshaped. Mississippi College School of Law announced that beginning in spring 2026, all first-year students must complete a mandatory AI certification — the first such requirement in the Southeast [21]. The University of Chicago Law School is rolling out required AI modules for all entering students [22]. UC Berkeley Law has launched the first AI-focused Master of Laws degree [14].
Perhaps most symbolically, the University of Miami School of Law now includes an optional essay in its application process requiring prospective students to design a generative AI prompt and evaluate its output [23]. The message is unmistakable: AI fluency is no longer elective for the next generation of lawyers.
Yet the National Law Review survey found deep skepticism about whether these efforts are sufficient. Among the four "core issues" surveyed — the likelihood of near-term artificial general intelligence, AI's impact on entry-level hiring, disciplinary responses to hallucinations, and law school preparedness — the question of whether U.S. law schools are adequately preparing students drew particular concern [8].
The Wider Context
The legal profession's AI reckoning is part of a broader pattern across white-collar industries. ServiceNow CEO Bill McDermott recently warned that AI agents could push unemployment among recent college graduates into the "mid-30s" within the next couple of years, as his company automates 90% of customer service operations. Companies including Block and Atlassian have conducted AI-driven layoffs, and Federal Reserve data shows graduate underemployment at its highest level since the pandemic.
But law has particular characteristics that make its AI transformation both more consequential and more contested than most industries. Legal work is governed by ethical obligations with no parallel in most professions. A software engineer who uses AI to write buggy code faces an irritated product manager; a lawyer who uses AI to cite nonexistent case law faces sanctions, malpractice liability, and potential disbarment. The stakes are categorically different.
And unlike many industries where automation happens invisibly, the legal system's transparency — public filings, court records, judicial opinions — means that AI failures in law are uniquely visible and uniquely embarrassing. Every hallucinated citation becomes a cautionary tale with a named defendant.
What Comes Next
The legal profession in 2026 finds itself suspended between two futures. In one, AI augments lawyers' capabilities without fundamentally disrupting the profession's economics or its labor force. The BLS employment data and the AmLaw 100 hiring intentions support this reading. In the other, the Baker McKenzie layoffs are merely the opening act of a structural transformation that will hollow out legal support roles, compress associate hiring, and concentrate the profession's rewards among a smaller cohort of AI-fluent practitioners.
The truth will likely involve elements of both — and the outcome will depend less on the technology itself than on the ethical, regulatory, and institutional choices the profession makes in the months ahead. The two questions haunting lawyers at every conference in America do not yet have definitive answers. But the legal profession, for the first time in its history, is running out of time to find them.
Sources (23)
- [1] Wake Up Call: Hundreds Laid Off at Baker McKenzie as AI Grows (bloomberglaw.com)
Baker McKenzie is laying off more than 700 business services employees as part of a restructuring driven in part by increased use of AI.
- [2] AI Hallucination Cases Database (damiencharlotin.com)
More than 600 documented AI hallucination cases in court filings worldwide, implicating 128 lawyers across firms of every size.
- [3] Top 10 Biglaw Firm To Conduct 'Massive' Layoff, Leaving Hundreds Jobless Thanks To AI (abovethelaw.com)
Baker McKenzie conducted massive staff layoffs among its global business services team, with cuts across nearly all functions in all offices.
- [4] Did Baker McKenzie Just Give Biglaw Permission To Blame AI For Mass Layoffs? (abovethelaw.com)
Industry observers questioned whether AI was the true driver of the layoffs or convenient cover for broader cost-cutting measures.
- [5] After Baker McKenzie's Cuts, Layoffs Expected in Other Law Firms. Don't Only Blame AI (law.com)
More layoffs predicted across law firms following Baker McKenzie's cuts, with warnings not to blame AI alone for the restructuring trend.
- [6] Bureau of Labor Statistics - Professional and Technical Services Employment (CES6054130001) (bls.gov)
BLS data shows professional and technical services employment grew from 1.49M in 2019 to 1.75M in February 2026.
- [7] Updated, Around 17% of Legal Jobs At AI Risk – Goldman Sachs (artificiallawyer.com)
Goldman Sachs revised its estimate from 44% of legal tasks automatable to 17% of legal jobs facing genuine AI automation risk, affecting roughly 228,000 lawyers.
- [8] 85 Predictions for AI and the Law in 2026 (natlawreview.com)
National Law Review survey of 85 legal professionals found 58.3% rejected the view that AI will replace entry-level lawyers within five years.
- [9] AI Is Reshaping Legal Work—But Is It Stealing Lawyers' Jobs? (bestlawfirms.com)
Harvard Law School's Center on the Legal Profession found none of the AmLaw 100 firms interviewed anticipate reducing headcount of practicing attorneys.
- [10] How AI promises to transform the legal profession (worklife.news)
Thomson Reuters' Future of Professionals Report found that AI could free up approximately 240 hours per year per legal professional.
- [11] Legal Tech Spending Surges 9.7% As Firms Race to Integrate AI (lawnext.com)
Law firms increased technology spending by 9.7% in 2025 — the fastest real growth likely ever experienced in the legal industry.
- [12] AI Market in the Legal Sector Projected to Grow to $12.49 Billion by 2030 (einpresswire.com)
The AI legal technology market is projected to grow from $4.59 billion in 2025 to $5.59 billion in 2026, a 22.3% increase.
- [13] Thomson Reuters Survey: Over 95% of Legal Professionals Expect Gen AI to Become Central to Workflow (lawnext.com)
26% of legal organizations are actively using generative AI, up from 14% in 2024, with 95% expecting it to become central within five years.
- [14] 7 Ways AI Is Changing Law School & Legal Careers (collegesoflaw.edu)
The ABA's 2024 Legal Technology Survey found 30% of law firms using AI technology, up from 11% in 2023.
- [15] Legal Market Is Doing Great, Which Is How You Know You're All About To Get Laid Off (abovethelaw.com)
Record law firm profitability coincides with aggressive technology investment, raising questions about whether firms are investing to reduce headcount.
- [16] A recent high-profile case of AI hallucination serves as a stark warning (npr.org)
A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to pay $3,000 each for AI-generated filings with fabricated citations.
- [17] Judges find suspected AI hallucinations in PA court cases (spotlightpa.org)
Pennsylvania judges flagged suspected AI hallucinations in Commonwealth Court cases, triggering investigations into attorney conduct.
- [18] Lawyer Sanctioned for Failure to Catch AI Hallucination (americanbar.org)
The ABA established that lawyers must maintain a reasonable understanding of AI's capabilities and limitations and verify all AI-generated output.
- [19] State Bar Rules on AI Use: What Lawyers Need to Know About AI Compliance (spellbook.legal)
Multiple states have adopted specific rules requiring disclosure, oversight, and ethical compliance for attorney AI use.
- [20] AI Risk in 2026: 3 Critical Changes for the General Counsel (corporatecomplianceinsights.com)
Three critical AI risk shifts for 2026: expanding regulatory exposure, growing litigation risk from AI errors, and pressure to demonstrate AI governance.
- [21] MC Law Becomes First Law School in the Southeast to Require AI Certification for All Students (mc.edu)
Mississippi College School of Law mandates AI certification for all first-year students beginning spring 2026.
- [22] AI Advances into the Law School Curriculum (law.uchicago.edu)
University of Chicago Law School developing required AI modules for all first-year students launching in early 2026.
- [23] Designing the Future: Miami Law Adds AI Prompt Question for 2026 Applicants (miami.edu)
University of Miami School of Law introduced an application essay requiring prospective students to design and evaluate a generative AI prompt.