Speaker 1 (00:02):
A court in the Netherlands has ruled a marriage void after it was revealed that the couple’s vows were generated by ChatGPT and did not meet the legal requirements of that nation. It raises the question: how many of us are using AI platforms as a form of legal counsel? And if we do, what’s the potential impact? There is an old saying: the person who represents themself has a fool for a client. Peter Carter is the director of Carter Capner Law. Peter, good afternoon.

Speaker 2 (00:38):
Good afternoon, Kelly.

Speaker 1 (00:39):
Now, Peter, we’ll talk about the courtroom shortly, but first, lots of us these days use a little AI to help write beautiful vows for a wedding or, you know, make it sound like we put in some effort. Why was this couple in the Netherlands thought to have crossed a line? What happened here?

Speaker 2 (00:57):
Well, the vows that ChatGPT generated didn’t comply with the requirements of the law, so they were deficient when someone looked into them in detail. That’s why the court decided to invalidate the marriage.

Speaker 1 (01:14):
Do we know what that specific requirement was that they didn’t meet?

Speaker 2 (01:20):
No, I’m afraid I don’t.

Speaker 1 (01:23):
Not an aficionado—

Speaker 2 (01:24):
Of—

Speaker 1 (01:25):
Not an aficionado of Dutch law there, Peter. That’s fair enough. Well, it’s interesting, isn’t it? Because unless you knew, you might not think twice: I’ll just write my vows, sign the marriage certificate and off we go.

Speaker 2 (01:42):
Well, yes, one would have thought that getting the words right would be enough. So we can probably excuse that couple for that omission, but not legally.

Speaker 1 (01:52):
Okay. So if a legal declaration isn’t used, perhaps you’re not married. Do we know if that’s true in Australia? Because a lot of people write their own vows.

Speaker 2 (02:03):
They do, but the formality in marriage in Australia is covered differently — not in the vows. It’s in signing the paperwork.

Speaker 1 (02:13):
So just sign that paperwork and off you go. Anyway, let’s move to the courtroom. What sort of person might use a chatbot as a lawyer?

Speaker 2 (02:40):
Australia is recorded as having the second highest rate of misuse of AI in courtrooms in the world, behind the US, and the biggest component of that is self-represented litigants who are using it to try to replace their lawyer. But lawyers have been caught out as well.

Speaker 1 (03:04):
Lawyers using AI?

Speaker 2 (03:08):
Yes, for preparing arguments and submissions. The problem is that AI hallucinates legal authorities: it makes up precedent cases that turn out to be absolutely false, and the person submitting them hasn’t done the checking required to make sure there are no hallucinations. Judges are throwing their hands up in the air about it. It’s becoming very dangerous behaviour in the courtroom.

Speaker 1 (03:45):
Right. So I might be citing the case of Higgins Divine versus Carter and it never existed, but ChatGPT has decided it did.

Speaker 2 (03:57):
Exactly. It’s like a mishmash — one case here, one case there — and it pulls in names and principles and presents them as authority.

Speaker 2 (04:12):
It’s really unsatisfactory. It will improve, I’m sure, but it’s not at that stage yet.

Speaker 1 (04:19):
I was reading about a couple of cases in the UK where this tripped up actual lawyers. What would happen in a courtroom in Australia if I was a lawyer and used AI and cited a case that didn’t exist — are there professional repercussions?

Speaker 2 (04:54):
Yes, very serious professional repercussions. It’s probably professional misconduct. There hasn’t yet been a referral to the legal industry watchdog, but it’s been threatened: the Chief Justice in Queensland has warned that’s what will happen if lawyers are caught using it like this. But the biggest offenders are self-represented litigants using AI to produce arguments that turn out to be irrelevant or factually incorrect.

Speaker 1 (05:35):
Peter Carter, my guest this afternoon on ABC Radio Brisbane and Queensland. We’re talking about AI being used in the legal sphere. If AI gets better at citing precedent, could it become helpful for self-litigants?

Speaker 2 (06:29):
It’s helpful already for certain tasks, but not good enough for final output that requires a lawyer’s expertise. I’m hopeful it improves and becomes an antidote to high legal costs. Litigation is very expensive and AI could reduce those expenses greatly.

Speaker 1 (07:00):
There was a case in New York where a 75-year-old used an AI avatar to deliver his oral argument and didn’t disclose it. Would an AI lawyer ever be allowed?

Speaker 2 (07:31):
That’s quite impressive. I think it’s a great idea. It would save time. Load up the argument, pick the avatar, away you go. That might well get some legs in the future.

Speaker 1 (07:51):
Some people use AI for things they see as benign — wills, for example. What do you think?

Speaker 2 (08:03):
Just about every DIY will ends up in litigation, and AI-generated wills at this stage would only accelerate that. Be very careful.

Speaker 1 (08:22):
So be careful using AI for any legal problem.

Speaker 2 (08:29):
Absolutely. Particularly with transactions too. If you think AI can generate a contract, think again. It takes decades of experience to produce good transaction documents. AI isn’t there yet.

Speaker 1 (08:50):
What about a resume?

Speaker 2 (08:53):
You can usually tell if it doesn’t match the personality of the person. Embellishment is misleading and can have legal consequences too.

Speaker 1 (09:24):
So make it sound more professional, sure — but you can’t lie.

Speaker 2 (09:34):
Absolutely not. You can’t be an astronaut or a brain surgeon just because you’re using AI.

Speaker 1 (09:41):
Peter Carter, thank you.

Speaker 2 (09:47):
Thanks, Kelly.

Speaker 1 (09:48):
Peter Carter, director of Carter Capner Law.