Two US federal judges have admitted that staff in their chambers turned to artificial intelligence to help draft court rulings and that the experiment went badly wrong. In a pair of candid letters made public on Thursday by Senator Chuck Grassley, the Chairman of the Senate Judiciary Committee, Judges Henry Wingate of Mississippi and Julien Xavier Neals of New Jersey said that AI tools were used in the preparation of court orders that were later found to be riddled with factual mistakes and legal errors. Both decisions have since been retracted.

Grassley, who had demanded explanations, said, “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law.”

Staff missteps expose limits of AI in the courtroom

In his letter, Judge Neals of the District of New Jersey said that a draft ruling in a securities lawsuit had been released “in error — human error” after a law school intern used OpenAI’s ChatGPT for research without authorization or disclosure. The decision was promptly withdrawn once the mistake was discovered. To prevent a recurrence, Neals said his chambers had since created a written AI policy and enhanced its review procedures.

Judge Wingate, who serves in the Southern District of Mississippi, said a law clerk used the AI tool Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket.” That draft order, issued in a civil rights case, was later replaced after he identified errors. Wingate stated that the event “was a lapse in human oversight,” adding that he has since tightened review procedures within his chambers.

Scrutiny of AI usage in legal work

The episode adds to a growing list of controversies involving AI-generated legal work. Lawyers in several US jurisdictions have faced sanctions in recent years for submitting filings drafted by chatbots that included fabricated case citations and misapplied precedents. Earlier this month, the New York state court system put out a new policy that restricts judges and staff from entering confidential, privileged, or non-public case information into public generative AI tools.

While the legal profession has been quick to explore AI’s potential to improve efficiency, the incidents have exposed the technology’s limitations, particularly its tendency to hallucinate, or generate plausible but false information. In courts, where the integrity and accuracy of rulings and the burden of proof are paramount, such lapses risk undermining public confidence in the justice system.
Grassley, who commended Wingate and Neals for owning up to the mistakes, also urged the judiciary to put in place stronger AI safeguards. The Administrative Office of the US Courts has not released comprehensive guidance on AI use, though several circuit courts are reportedly exploring frameworks for limited, supervised use. Legal scholars, meanwhile, are reportedly proposing a disclosure rule that would require judges to publicly note any use of AI in their opinions or orders, in a manner similar to citation requirements for external sources. The incidents come as federal agencies and professional bodies continue to grapple with questions about AI governance.