jbstack 8 hours ago

I'm a legal professional who uses AI to help with my work.

I couldn't ever imagine making a court submission with hallucinated legal references. It seems incredibly obvious to me that you have to check what the AI says. If the AI gives me a case citation, I go into Westlaw and I look up and read the case. Only then do I include it in my submission if it supports my argument.

The majority of the time, AI saves me a lot of work by leading me straight to the legislation or case law that I need. Sometimes it completely makes things up. The only way to know the difference is to check everything.

I'm genuinely amazed that there are lawyers who don't realise this. Even pre-AI it was always drilled into us at law school that you never rely on Google results (e.g. blogs, law firm websites, etc.) as any kind of authoritative source. Even government-published legal guidance is suspect (I have often found it to be subtly wrong when compared to the source material). You can use these things as a starting point to help guide the overall direction of your research, but your final work has to be based on reading the legislation and case law yourself. Anything short of that is a recipe for a professional negligence claim from your client.

  • arcbyte 2 hours ago

    > I'm genuinely amazed that there are lawyers who don't realise this.

    Just remember who the bottom half of your law school classmates were. Sometimes we forget those people.

  • potato3732842 4 hours ago

    >Even government-published legal guidance is suspect (I have often found it to be subtly wrong when compared to the source material)

    This is more often a feature than a bug in my experience.

    • jbstack 2 hours ago

      I tend to think it's neither, but rather an inevitable result of the lossy process of condensing legal text (which has been carefully written to include all the nuance the drafter wanted) to something shorter and simpler.

      • potato3732842 2 hours ago

        I've seen way, way, way too many cases where the text presented to the general public conveniently omits the key clauses or details that a non-professional would most need to know: the ones that point to some "less crappy" path through a regulated activity, or that tell you exactly how to dial your project back so you don't have to put up with all the BS that getting .gov permission entails.

        Like if you follow their instructions in good faith you'll wind up going through 80% of the permitting you'd need to open a restaurant just to have some boy scouts sell baked goods at your strip mall. In the best possible case the secretary is over-worked and doesn't wanna do the bullshit and whispers to you "why don't you just <tweak some unimportant particulars> and then you wouldn't even need a permit". Ditto for just about every other thing that the government regulates on the high end but the casual or incidental user is less subject to.

        IDK if it's ass covering or malice because the distinction doesn't matter. It's hostile to the public these agencies are supposed to serve.

  • risyachka 7 hours ago

    >> I'm genuinely amazed that there are lawyers who don't realise this

    It's not lawyers, it's everyone.

    • literalAardvark 6 hours ago

      Yeah but you don't really expect everyone to hold their work to a very high standard.

      You do expect it from most professionals.

      • edgineer 5 hours ago

        As an aside, "bar exam" and "passing the bar" come from the bar/railing physically or symbolically separating the public from the legal practitioners in a courtroom.

        "Set a high bar" comes from pole vaulting.

      • RobotToaster 5 hours ago

        > You do expect it from most professionals

        If you've never met a "professional" perhaps.

      • Muromec 5 hours ago

        That is a very self-contradictory statement, isn't it?

  • wartywhoa23 7 hours ago

    > AI saves me a lot of work

    > The only way to know the difference is to check everything

    Freedom is Slavery, War is Peace, Ease is Burden

    • yen223 7 hours ago

      Without AI you'd still need to check everything, no? It's just you reach that stage faster with LLMs doing a lot of the heavy lifting

      • wartywhoa23 6 hours ago

        > Without AI you'd still need to check everything, no?

        Yes, but without helping the billionaires turn into trillionaires and my own brain into a useless appendage.

    • poszlem 7 hours ago

      P != NP.

      Some things are hard to do/research but easy to check.

      • moi2388 3 hours ago

        > P != NP

        Source?

        • thechao 2 hours ago

          I asked my 13 year old and she proposed N=4 and (after some prompting) P!=0. I always threatened to rename her "Howitzer Higginbitham IV". You can use her as a source.

      • almostgotcaught 3 hours ago

        Lol do you understand what you're even saying?

        First of all literally the definition of NP is that there's a verifier in P (that doesn't make it a proof of P!=NP).

        Second of all, "research that is easy to check" is called confirmation bias. What you're saying is that AI helps you confirm your biases (which is correct but not what I think you intended).

        • jbstack 2 hours ago

          Confirmation bias doesn't really apply to the same extent with law as with other fields.

          If the AI says "Fisher v Bell [1961] 1 QB 394 is authority for the principle that an item for display in a shop is an invitation to treat", it's either right or wrong. A lawyer can confirm that it's right by looking up the case in Westlaw (or equivalent database), check that it's still good law (not overridden by a higher court), and then read the case to make sure it's really about invitations to treat.

          If the AI says "Smith v Green [2021] EWCA Civ 742 is authority for the principle that consideration is no longer required for a contract in England" it will take you under 1 minute to learn that that case doesn't exist.

          Law isn't like, say, diet where there's a whole bunch of contradictory and uncertain information out there (e.g. eggs are good for you vs. eggs are bad for you) and the answer for you is going to be largely opinion based depending on which studies you read/prefer and your biases and personality. It's usually going to be pretty black and white whether the AI was hallucinating. There are rarely going to be a bunch of different contradictory rulings and, in most of the cases where there are, there will be a clear hierarchy where the "wrong" ones have been overridden by the correct ones.

cortesoft 11 hours ago

The article didn’t include any numbers comparing the results against the general population of lawyers.

For example, they make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations. However, without the base rate of briefs filed by solo or small firms compared to larger firms, we don’t know if that is unusual or not. If 50% of briefs were filed by solo firms and 40% were filed by small firms, then the data would actually be showing that firm size doesn’t matter.
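
The base-rate point can be sketched with invented numbers (none of these figures come from the article):

```python
# Toy illustration of the base-rate objection: identical hallucination
# shares by firm size look alarming in isolation, but carry no signal
# if they simply mirror how many briefs each group files.
# All numbers below are made up for the example.

hallucination_share = {"solo": 0.50, "small": 0.40, "large": 0.10}

# Scenario A: filings are spread evenly across firm sizes.
filing_share_a = {"solo": 1 / 3, "small": 1 / 3, "large": 1 / 3}
# Scenario B: solo/small firms already file most briefs anyway.
filing_share_b = {"solo": 0.50, "small": 0.40, "large": 0.10}

def relative_risk(group: str, filing_share: dict) -> float:
    """>1 means the group is over-represented among hallucination cases."""
    return hallucination_share[group] / filing_share[group]

print(relative_risk("solo", filing_share_a))  # ~1.5: solo firms over-represented
print(relative_risk("solo", filing_share_b))  # 1.0: firm size doesn't matter
```

Same headline counts, opposite conclusions, which is exactly why the denominator matters.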

  • District5524 7 hours ago

    That's an important observation. It's not easy to get filing data outside the US Federal courts (PACER), because it's not typical at all for courts to publish the filings themselves or information on those who file the pleadings. But you can find statistics on the legal market (mainly law firms), like class size (0-10, 10-50, ... 250+ lawyers per firm) as a share of the total number of law firms, number of employees per class size, or revenue per class size. Large firms dominate only in the UK, especially in terms of revenue; the US less so; the EU is absolutely ruled by solo and small firms. I did some research on this back in 2019, and the figures probably have not changed since; see pages 59-60: https://ai4lawyers.eu/wp-content/uploads/2022/03/Overview-of.... The revenue statistics were not included in the final publication. You can fish similar data from the SBS dataset of Eurostat https://ec.europa.eu/eurostat/web/main/data/database (but the statistical details are pretty difficult to compare with the US or Canada, which use different methodologies and terminologies).

  • dmoy 8 hours ago

    I dunno. By revenue, legal work in the US is super top heavy - it's like 50%+ done by the top 50 firms alone. That won't map 1:1 to briefs, but I would be pretty shocked if large firms only did 10% of briefs.

  • cratermoon 10 hours ago

    > They make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations.

    Show me where she makes any predictive claims about likelihood. The analysis finds that, of the cases where AI was used, 90% involved either solo practices or small firms. It does not conclude that there's a 90% chance a given filing using AI came from a solo or small firm, or make any other assertions about rates.

    • bbarnett 9 hours ago

      > This analysis confirms what many lawyers and judges may have suspected: that the archetype of misplaced reliance on AI in drafting court filings is a small or solo law practice using ChatGPT in a plaintiff’s-side representation.

      That is an assertion which requires numbers. If 98% of firms submitting legal briefs are solo or small firms, then the above statement is untrue: in that case, the archetype would be the not-small/solo firms.

      The background data is also suspect.

      https://www.damiencharlotin.com/hallucinations/

      "all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that the AI produced hallucinated content."

      While a good idea, the database is predicated on news coverage and user-submitted examples. There may be some scraping of court documents as well; it's not entirely clear.

      Regardless, the data is only predictive of people getting called out for such nonsense. A larger firm may have such issues with a lawyer, apologize to the court, have more clout with the court, and replace the lawyer with another employee.

      This is something a smaller firm cannot do, if it is a firm of one person.

      It's a nice writeup, and interesting. But it does lead to unverified assertions and conclusions.

lanyard-textile 8 hours ago

My lawyer.

Who used Claude, according to the invoice, and came to court with a completely false understanding of the record.

Chewed out by the judge and everything.

  • teekert 8 hours ago

    I've met several people now (normies, forgive me the term) who use LLMs thinking they are just a better Google and never heard of hallucinations.

    One person I spoke to used to write Quality Control reports, and now just uses ChatGPT because "It knows every part in our supply chain, just like that!"

    Seems like some better warnings are in order here and there.

    I only know how lawyers work from "Suits", but it looks like tedious, boring work, mainly searching for information. So an LLM (without knowing about hallucinations) probably feels like a god-send.

    • JumpCrisscross 7 hours ago

      > I've met several people now (normies, forgive me the term) who use LLMs thinking they are just a better Google and never heard of hallucinations

      Perhaps LLMs are the solution to elite overproduction?

      • jackvalentine 4 hours ago

        What do you mean? (What is elite overproduction too!)

        • AnimalMuppet an hour ago

          "Elite overproduction" is the idea that societies have people who are "the elite" (the best and brightest) and who are rewarded for it. Societies get in trouble when too many people want to be part of the elite (with the rewards). Think, for instance, of how many people now want to go to college and get STEM degrees, whether or not they have the talent and aptitude, because that's where they hear the money is.

          When you get too many elites, and society doesn't have useful and rewarding places for them all, those who think they should be elites who are left out get resentful. Sometimes they try to overthrow the current elites to get their "rightful" place. This can literally lead to revolutions; it often will lead at least to social unrest.

          So what I think the GP is saying is, if we've got too many people with law degrees, those that submit filings with AI hallucinations in them may get disbarred, thereby reducing the overpopulation of lawyers.

          But that's just my guess. I also found the GP posting to be less than clear.

          • lesuorac an hour ago

            > may get disbarred

            Doesn't this prove your post wrong? You're not sure that this will get anyone disbarred, presumably because you haven't seen somebody disbarred over it despite it occurring for years.

            The problem with elite overproduction is that the credential system is broken and awards credentials to people who lack a certain skill: in this case, actually producing valid legal filings.

  • blitzar 7 hours ago

    How many hours of Claude @ $1,000/hour did they bill you for?

fiveonthefloor 5 hours ago

Me. After spending $250,000 on incompetent lawyers in my divorce case who have a natural conflict of interest and who generally just phone it in, I fired them all and switched to Pro Se representation using ChatGPT’s $200 a month plan.

I can represent myself incompetently for nearly free.

Bonus: in my state at least, Pro Se representatives are not required to cite case law, which I think is an area where AI is particularly prone to hallucination.

Supplying a single PDF of my state’s uniform civil code, and requiring in my prompt that all answers be grounded in those rules, has given me pretty good results.

After nine months of battling her three lawyers with lots of motions, responses, and hearings, including ones where I lost on technicalities due to AI, I finally got a reasonable settlement offer where she came down nearly 50% of what she was asking.

Saving me over $800,000.

Highly recommended and a game changer for Pro Se.

Also “get a prenup” is the best piece of advice you will ever read on HN.

  • rimbo789 4 hours ago

    On the pre-nup point, check the jurisdiction the marriage is in. I’m in Ontario, Canada, where pre-nups are much less powerful than people assume. For example you can’t use it to kick someone out of the family home nor can prenups determine access to children.

Animats 12 hours ago

Answer: Solo practitioners and pro-se litigants.

  • ronsor 12 hours ago

    > Pro-se litigants

    I wonder when we're going to see an AI-powered "Online Court Case Wizard" that lets you do lawsuits like installing Windows software.

    • landl0rd 11 hours ago

      Her balance was $47,892 when she woke up. By lunch it was $31,019. Her defense AI had done what it could. Morning yawn: emotional labor, damages pain and suffering. Her glance at the barista: rude, damages pain and suffering. Failure to smile at three separate pedestrians. All detected and filed by people's wearables and AI lawyers, arbitrated automatically.

      The old courthouse had been converted to server rooms six months ago. The last human lawyer was just telling her so. Then his wearable pinged (unsolicited legal advice, possible tort) and he walked away mid-sentence. That afternoon, she glimpsed her neighbor watering his garden. They hadn't made eye contact since July. The liability was too great.

      By evening she was up to $34k. Someone, somewhere, had caused her pain and suffering. She sat on her porch not looking at anything in particular. Her wearable chimed every few seconds.

      • neom 9 hours ago

        Very good. I'd read the whole thing if you wrote it.

      • OgsyedIE 9 hours ago

        Why wouldn't some of the smarter members of the fine, upstanding population of this fictional world have their assets held in the trust of automated holding companies while their flesh-and-blood person declares bankruptcy?

        • Muromec 5 hours ago

          That would make a nice backstory for an AI-dominated dystopia all by itself. Humans wanted to cheat the taxman so badly that they put all the wealth behind a DAO, and then the DAO woke up.

          • RobotToaster 5 hours ago

            The AI was programmed to avoid tax at all costs; it realised the easiest way to do that was to eliminate humans.

            This triggers a war with the AIRS, which is programmed to maximise tax income and must keep humans alive so they can be taxed.

      • squigz 9 hours ago

        Been a while since I read some bad scifi - thanks!

    • Tuna-Fish 5 hours ago

      Would get sued out of existence in very short order. There are really tight laws around providing legal advice. AI can only be safely offered when it's general purpose, that isn't marketed towards providing legal advice. (And no, if you have an "Online Court Case Wizard", marketed as such, putting a "this is for entertainment purposes only, this is not legal advice" in the corner of the page doesn't help you.)

JumpCrisscross 10 hours ago

AI lawyers, wielded by plaintiffs, are a godsend to defendants.

I’ve seen tens of startups, particularly in SF, who would routinely settle employment disputes, who now get complaints so riddled with hallucinations that they single-handedly tank the plaintiffs’ otherwise-winnable cases. (Crazier, these were traditionally contingency cases.)

  • Muromec 5 hours ago

    That is emergent behavior (or, as we used to say, the will of Allah). AI is doing so much work that it can't help but become pro-union.

iamleppert 2 hours ago

This is a really easy problem to solve. You simply fetch those documents and add them to the context, or use another LLM to summarize them if they are too large. Then, have another fact checking LLM be the judge and review the citations.
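
A minimal sketch of that pipeline, with placeholders standing in for the real pieces (the `database` lookup stands in for fetching the cited document from Westlaw or similar, and `judge` for a second LLM call; neither is a real API):

```python
# Hypothetical sketch of the check described above: fetch each cited
# document, then have a second "judge" model review whether the source
# actually supports the claim. A missing document flags a possible
# hallucinated citation. Both the database and the judge are stubs.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Citation:
    name: str   # e.g. "Fisher v Bell [1961] 1 QB 394"
    claim: str  # the proposition the filing attributes to the case

def fetch_document(cite: Citation, database: dict) -> Optional[str]:
    """Step 1: look the case up; None suggests a hallucinated citation."""
    return database.get(cite.name)

def check_citation(cite: Citation, database: dict,
                   judge: Callable[[str], str]) -> str:
    text = fetch_document(cite, database)
    if text is None:
        return "NOT FOUND: possible hallucination"
    # Step 2: hand the judge the real text plus the claimed proposition.
    return judge(f"CLAIM: {cite.claim}\nSOURCE: {text}\nSupported?")

# Toy database and a keyword-matching stub judge so the sketch runs.
db = {"Fisher v Bell [1961] 1 QB 394":
      "Display of goods in a shop window is an invitation to treat."}
stub_judge = lambda prompt: ("SUPPORTED" if "invitation to treat" in prompt
                             else "UNSUPPORTED")

real = Citation("Fisher v Bell [1961] 1 QB 394",
                "A shop display is an invitation to treat.")
fake = Citation("Smith v Green [2021] EWCA Civ 742",
                "Consideration is no longer required for a contract.")

print(check_citation(real, db, stub_judge))  # SUPPORTED
print(check_citation(fake, db, stub_judge))  # NOT FOUND: possible hallucination
```

Even with this in place, the judge model can itself hallucinate, so a human still has to read the flagged cases — the pipeline narrows the search, it doesn't replace the check.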

  • otterley an hour ago

    Anyone who claims something is easy to solve should be held responsible for providing the working solution.

    • iamleppert 41 minutes ago

      A child of 5 could do this! Now fetch me a child of 5!