"We try and educate judges about this stuff," Andy Wilson, CEO of legal technology company Logikcull, said. "Just knowing what I know, I think it'll be a futile effort."
The order, issued by Judge Brantley Starr of the U.S. District Court for the Northern District of Texas, appears to be the first of its kind. It requires lawyers who file documents on his docket to certify either that the filings contain no content produced by large language model (LLM) AI tools, such as OpenAI's ChatGPT, Harvey.AI, and Google Bard, or that any AI-generated content has been reviewed by a human for accuracy.
"My order is an attempt to keep the pros of generative AI while managing the cons," Judge Starr told Yahoo Finance. "But judges are reactive and resolve what is put before us. So we'll never be as cutting edge as the innovations we eventually face."
Starr said one of the many advantages of AI for legal work is that it can search through mountains of data. The main drawback he sees is the systems' tendency to "hallucinate" by fabricating cases and supporting citations. Hallucinations are instances in which AI-generated text appears plausible but is factually, semantically, or syntactically incorrect.
Starr explained in a post on the court's website that there is also no way to hold a machine to the ethical requirements of practicing law, or to ensure that the technology's creators have not programmed their personal prejudices, biases, and beliefs into the systems.
"As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States…or the truth," the judge wrote.