Why Are People Turning To AI Lawyers?

Artificial intelligence is now being used for legal advice. Large language models like ChatGPT and similar systems have begun giving quick responses to complex legal questions. Research published by Eike Schneiders, Tina Seabrooke and others in 2025 shows that people are open to acting on legal advice produced by AI, sometimes more so than advice from real lawyers. The research ran three experiments using real-world legal problems in traffic, planning, and property law.

The first two experiments found a clear pattern: when people did not know who had written the advice, they said they were more willing to follow the AI’s suggestions than a lawyer’s. The results held even when the second experiment used a new group of participants. This suggests that, unless the source is obvious, many people feel comfortable trusting answers that sound confident, even if they come from a machine.

One reason for this, the study found, is that AI systems often write in a style that sounds both more complex and more direct. The legal advice produced by ChatGPT used fewer words but often felt more decisive. Lawyers, on the other hand, tend to add more detail and avoid bold claims, probably because of the risks involved in giving wrong advice. These differences may shape how confident or useful the advice feels to someone who needs a quick answer.
 

Can People Tell AI Legal Advice Apart From A Lawyer’s?

 
It is not always easy for non-experts to spot when a piece of legal advice comes from a machine. In the third experiment by Schneiders and the team, participants were asked to decide whether each answer was produced by an AI or by a lawyer, without being told the source. Their ability to tell the difference was only slightly better than flipping a coin: according to the study, people guessed correctly about 59% of the time.

Although people could sometimes sense a difference, the gap was small; random guessing would have landed at 50%. The finding shows how easily the confident style of AI can lead people to treat the advice as credible, even if it might be wrong or entirely made up.

What makes this issue more complicated is that some people show a preference for advice that feels like it came from a machine, especially if the writing sounds sure of itself. The researchers note that trust in advice depends on more than just where it comes from, and that sometimes, people simply like the way AI presents information.

 

What Risks Come With AI Legal Advice?

 

The easy access to AI-generated legal advice brings new problems. One major issue is that language models sometimes invent information that is not true, even though it sounds convincing. This can have serious consequences in legal settings. According to Schneiders and colleagues, there have already been court cases in the US where lawyers submitted documents filled with made-up cases and citations written by AI. In one California case, a judge fined two law firms $31,000 after discovering that the brief they filed quoted authorities that did not exist.

This problem arises because AI tools are designed to sound knowledgeable, even when they are not. Laypeople may not know when to question the content or double-check important points with a real lawyer.

What makes things even harder is that people may take confident-sounding advice at face value. Even simple legal matters can quickly turn complex, especially if the information someone acts on is wrong or based on hallucinated cases. When mistakes like these reach the courts, they can cause delays and may lead to much bigger problems for everyone involved.

 

 

What Solutions Are Being Discussed to Make AI Legal Advice Safer?

 
The rise of AI in legal and other high-stakes areas has prompted calls for better rules and more public awareness. One concrete measure is regulation such as the EU AI Act, which requires AI-generated text to be marked in a machine-readable way. Schneiders and the research team point out that this does not always help ordinary readers, who may not notice those marks.

The study also argues that better education about AI is an important part of the solution. People need to know how to question what they read, think carefully about where information comes from, and cross-check legal advice with an expert. Relying on brief disclaimers alone, like those found on ChatGPT or Google Gemini, may not be enough, since most people ignore small print.

Improving AI literacy by helping everyone understand both the benefits and the limits of these tools will become more important as these systems get better at “sounding human”. Practical steps include encouraging users to verify key information and teaching critical thinking in schools and workplaces. Public awareness campaigns could help too, especially for groups most likely to trust AI advice without checking.

 

Will AI Take Over Legal Advice Eventually?

 
AI is already being used by law firms for research and document summaries, and more people are trying out AI for their own legal questions. According to the study, about 45% of people said they would consider using AI for legal help in the future. But the researchers warn that relying too heavily on machines without careful checks could create more problems than it solves.

The research closes by arguing that the real problem is not the technology itself, but how people use it. Rules, public education, and professional oversight will matter much more than the latest update to any chatbot. The safest route, for now, is to treat AI as a tool for getting started or gathering ideas, not as a substitute for real legal advice.