Is it Ethical for GenAI to Give Advice in High-risk Situations?
The answer I arrived at was surprising (and a little uncomfortable).
I saw a post the other day on LinkedIn asking, "Is it worse to have no legal advice, or to have some legal advice from GenAI that could be wrong because it hallucinated?" My immediate reaction was, "Of course, it's worse to have bad legal information than none."
But ... is it? I thought about it more, beyond just my initial reaction, and realized the question is more nuanced than I gave it credit for. There are layers to this question (just like ogres). As I explored those layers, I realized my initial reaction might have been too hasty.
Is it ethical to maybe get the wrong information if your other option is no information at all? I thought it would be interesting to break down this issue this week on Byte-Sized Ethics.
Note: Throughout the article, I'm going to refer to the bad information provided by an LLM as "misinformation" instead of hallucination, because there's no functional difference in this situation.
Context is King
I want to start by broadening the question. Is it better to have no information, regardless of the context, than to have misinformation that you don't know is misinformation? Broadening the question serves one purpose: it helps us understand that the answer depends on the subject matter.
For example, if I ask ChatGPT for a bread recipe and it gives me a bad recipe that calls for 4 pounds of salt, I've lost some ingredients and, maybe, some face at the family dinner. In the grand scheme, the impact of the bad recipe was pretty small.
But the situation changes when we start to think about things that can have a greater impact. For example, if I ask ChatGPT for legal advice, that can have a material negative impact on my life. I'm not going to link again to the absurd number of lawyers who have been disciplined by the courts for citing fake cases invented by ChatGPT.
The same situation applies to other subject areas with a low tolerance for mistakes. Medical misinformation comes to mind, and yet we can see big companies already yolo'ing medical research into LLMs (complete with a great example of LinkedIn influencers more in love with their success than concerned about the harm their content causes).
Even medium-risk scenarios are concerning. I was recently laid off, and when I was gearing up for my job search, I tried to use Claude to help me write my resume and match it better to the jobs I was applying for. But Claude wanted to lie to me. He consistently created resumes and content that were patently not true. If I had just used those and not proofread, I would have been in a heap of trouble if I got any interviews from my made-up resumes. (For the record, I corrected all the lies that Claude came up with to make me look better).
The subject matter and the context both matter to the answer here.
Who You Are Matters
This is where things get uncomfortable. I'm a white dude - I live a comfortable life, both in the opportunities I'm presented with and the lifestyle those opportunities enable. It is easy to forget to check my privilege.
For someone who can't afford a lawyer to help with their legal problems, this question isn't actually the one I framed above. Instead, the question becomes, "Is it better to go into court knowing that I know absolutely nothing, or is it better to go in knowing something that may or may not be wrong?"
In a medical context, the question is instead, "Is it better to have no idea what my symptoms might indicate because I can't afford to go to the doctor, or to get what might be bad advice but at least have something to act on?" For others, using their ChatGPT-written resume without proofreading it might be the difference between being able to move out of a shelter, or not.
I initially approached the question as someone in my position making decisions for other folks in different socio-economic situations. That was flawed, because I have the means to get legal advice. My situation gives me the privilege of saying that no information is better than misinformation, because I don't have as much at stake as someone in a more challenging socio-economic situation.
Their decision isn't between "legal advice from a professional or legal advice from ChatGPT." Their decision is between "no legal advice at all and legal advice that might be bad." Their choice is the equivalent of being guaranteed to lose in court or just very likely to lose in court. Or it's the choice between being desperate for work to make rent and trying something, or not trying and getting evicted.
The situations of the actual people impacted are wildly important.
Does the Model even Matter?
We have to remember - it's only happenstance when an LLM provides accurate information. LLMs look for the statistically most likely next word in a sequence, which frequently yields accurate information -- but the LLM isn't trying to answer correctly. And that's before the randomness injected into the system just to vary the results and make them seem more "real."
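To make that concrete, here's a toy Python sketch of what "statistically most likely next word, plus randomness" means in practice. The word scores and the temperature value are made up for illustration - no real model exposes its internals this simply - but the mechanism is the point: the output is picked by probability, not by truth.

```python
# Toy sketch (not any real model's API): next-token sampling with temperature.
# The "answer" is whichever token is statistically likely, plus injected
# randomness -- correctness never enters the calculation.
import math
import random

# Hypothetical next-token scores after a prompt like "The capital of Australia is"
logits = {"Sydney": 2.9, "Canberra": 2.6, "Melbourne": 1.4}

def sample_next_token(logits, temperature=0.8):
    # Softmax with temperature: higher temperature flattens the distribution,
    # adding the variation that makes output feel more "real."
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    probs = {tok: val / total for tok, val in scaled.items()}
    # Pick proportionally to probability -- not by whether the token is true.
    return random.choices(list(probs), weights=probs.values())[0], probs

token, probs = sample_next_token(logits)
print(probs)   # roughly {'Sydney': 0.54, 'Canberra': 0.37, 'Melbourne': 0.08}
print(token)   # often the plausible-but-wrong "Sydney"
```

Nothing in that loop checks whether "Sydney" is the right answer; it only checks whether it's a likely next word.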
But does that matter? Does the fact that ChatGPT isn't trying to be accurate matter when the other option is a guaranteed bad outcome? Does the patent misrepresentation of AI's capabilities matter when your choice is between a guaranteed bad outcome and a merely likely bad outcome?
I would argue that it doesn't.
“…it's only happenstance when an LLM provides accurate information.”
The Verdict
So is it ethical for LLMs like ChatGPT to give legal and medical advice, even knowing their propensity to generate misinformation? I'm going to change my initial assessment and say yes, it is ethical. For me, the calculus is different - I get to choose between "probably good outcomes" and "likely bad outcomes" when deciding whether to use ChatGPT this way. But for others, the choices are "guaranteed bad" and "very likely bad," which is a very different situation.
ChatGPT can't know which socio-economic situation a user is in. I think it's ethical to allow ChatGPT to provide medical and legal advice, even knowing that it might be wrong -- because for a lot of folks, "might be wrong" is still better than "definitely wrong."
This is a rough issue. What do you think? Do you think it's ethical for a citizen to use ChatGPT in high-risk situations? Let me know in the comments.