Custody Battle Highlights Dangers of AI in Legal Practice

Mar 28, 2026, 2:46 AM


A custody dispute over a 16-year-old Labrador retriever named Kyra has put a spotlight on the perils of relying on artificial intelligence (AI) in the legal profession. The case, which unfolded in California, illustrates how AI-generated citations can mislead lawyers and judges alike, with costly consequences for those involved.
The custody battle arose between Joan Pablo Torres Campos and Leslie Ann Munoz after the dissolution of their domestic partnership. When the family court's order did not specify custody arrangements for Kyra, Torres Campos sought shared custody and visitation rights. Munoz's lawyer, Roxanne Chung Bonar, opposed the request by citing two fictitious California cases: "Marriage of Twigg," which does not exist, and "Marriage of Teegarden," which was incorrectly dated and has nothing to do with pet custody.
Remarkably, the opposing law firm failed to catch the fabrications, and the judge signed an order that incorporated them. The oversight not only compromised the integrity of the judicial record but also exemplified a growing trend of AI fabrications infiltrating legal documents.
Eugene Volokh, a law professor at UCLA, remarked that AI has introduced errors into legal practice that were virtually unheard of previously. Historically, lawyers could expect some degree of honesty in case references, but AI's ability to generate plausible yet fictitious citations has transformed this expectation.
The implications extend beyond individual cases. Federal magistrate judge Mark D. Clarke recently imposed significant sanctions on attorneys who incorporated multiple fabricated citations into their filings, underscoring the judiciary's growing intolerance for such missteps: his ruling paired a $90,000 penalty with the dismissal of a $29 million lawsuit that had relied on AI-generated inaccuracies.
The case has also raised broader concerns within the legal community about the reliability and accountability of AI tools. A database maintained by French researcher Damien Charlotin has documented 1,174 instances of AI hallucinations in legal filings, roughly 750 of them from US courts, and Volokh estimates that many more go unnoticed, posing a potential crisis for legal documentation and public trust.
As the Kyra case progressed, neither side thoroughly verified the citations. Only on appeal did Torres Campos' legal team acknowledge that the cited precedents were fictitious, and the appellate judges declined to overturn the lower court's decision because both parties had neglected to check the authenticity of their references.
In her response to the appellate filing, Bonar doubled down on her claims, insisting that the Twigg case was valid and even adding three more fictitious citations. This led to a $5,000 sanction against her for attempting to shift blame and for failing to acknowledge the inaccuracies promptly.
The spread of AI in legal practice has prompted calls for greater accountability among lawyers and judges alike. David C. Beavens, Torres Campos' attorney, said that all parties in a legal proceeding must take responsibility for verifying the accuracy of the authorities they cite.
As the battle over Kyra illustrates, integrating AI into legal practice carries real risks. The case stands as a cautionary tale: AI tools, however useful, can fabricate plausible-looking authority, and lawyers must verify every citation if the credibility of the legal system is to be maintained.
