Last week, lawyers Steven A. Schwartz and Peter LoDuca, and their law firm Levidow, Levidow & Oberman, were fined US$5,000 (AU$7,485) for submitting fake citations in a court filing.

The judge found the lawyers acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court."

In a written opinion, Judge P. Kevin Castel said lawyers had to ensure their filings were accurate, even though there was nothing "inherently improper" about using artificial intelligence in assisting with legal work. 

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote.

“But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Schwartz, who has more than 30 years' experience practising law in the US, was part of a legal team acting for a man suing the airline Avianca. The client, Roberto Mata, had claimed that he was injured after a metal serving cart hit his knee during a flight. 

Unfortunately for the client, Schwartz did his legal research for the case using ChatGPT, without fact-checking whether the cases he cited in his brief, involving other airlines and personal injuries, were real.

Turns out they weren't. 

"He did ask ChatGPT whether one of the cases was real but was happy enough when ChatGPT said yes," Professor Lyria Bennett Moses tells ABC RN's Law Report.

"In fact, ChatGPT told him that they could all be found on reputable databases, and he didn't do any checking outside of the ChatGPT conversation to confirm the cases were real — he didn't look any of them up on a legal database."

Professor Lyria Bennett Moses' research explores the relationship between technology and law. (Supplied)

Professor Bennett Moses is the director of the Allens Hub for Technology, Law and Innovation at UNSW. She says the lesson here is to use such tools with caution.

"[Schwartz] stated in the court [hearing], 'I just never could imagine that ChatGPT would fabricate cases.' So, what it showed is a real misunderstanding of the technology," she explains.

"[ChatGPT] has no truth filter at all. It's not a search engine. It's a text generator working on a probabilistic model. So, as another lawyer at the firm pointed out, it was a case of ignorance and carelessness, rather than bad faith."

Schwartz's lack of due diligence when researching his brief in this personal injury case has caused him great embarrassment, particularly as his hearing has drawn worldwide attention.

After reading a few lines of one of the fake cases aloud, Judge P. Kevin Castel asked: "Can we agree that's legal gibberish?"

When ordering the lawyers and the law firm to pay the fine, the judge said that they had "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question."

In a statement, the law firm Levidow, Levidow & Oberman said its lawyers "respectfully" disagreed with the court that they had acted in bad faith.

"We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth," it said. 

Lawyers for Schwartz told Reuters that he declined to comment, while lawyers for LoDuca were reviewing the decision.

Excerpt from an article by Damien Carrick and Sophie Kesteven, ABC RN.