I've been writing about the high-concept problems with artificial intelligence. I enjoy writing those pieces, but I felt like BSE needed something different. I needed to write something more actionable and grounded. Something that answered the question, "If I don't know much about AI, and I'm reading this newsletter, what do I need to know about?"
My answer to myself was straightforward and immediate: "I want to know how to not f*ck it up." I edited most of the profanity out of this piece; I generally have a mouth like a sailor. But here, the f-bomb feels not only appropriate but necessary. It reflects the anxiety that surrounds artificial intelligence, and how much of a nightmare it can be to parse deep technical discussions and high-concept philosophical musings.
Plus, this is different from anything else I've written before. As I've said, I'm still experimenting and exploring what works for Byte-sized Ethics. I want to see if folks like this kind of content.
So let's get into it: 5 Tips on How Not to F*ck It Up with GenAI.
1. Never Overshare
I told my product team, in no uncertain terms, that they absolutely should not be using ChatGPT to assist with confidential information, like product requirements documents. Because we don't have a good idea of where that information goes, how it gets used, or where it might just whoopsie-daisy pop back up. (It's worth noting that some of them still use it anyway. Can't win 'em all.)
As an individual, it's not a great idea to talk to a black box about your deepest, darkest secrets. Like that thing you did at summer camp, or that weird rash you just can't get to go away. You don't know what's happening with that information, where it will end up, or how it will be used.
A simple rule of thumb here: if you wouldn't say it to someone you just met at the coffee shop or the bar, you shouldn't be telling ChatGPT, Claude, or any other genAI tool about it either.
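If you want that rule of thumb as code, here's a minimal sketch in Python. Everything in it is an assumption for illustration: the patterns, the function name, what counts as a secret. Real data-loss prevention is a much harder problem; this is just the coffee-shop test, automated.

```python
import re

# Hypothetical patterns for things you'd never tell a stranger at the coffee shop.
# Real secret detection is much harder than a few regexes; this is just the idea.
SECRET_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # looks like a US SSN
    re.compile(r"\b\d{9,17}\b"),                          # looks like an account number
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # document markings
]

def passes_coffee_shop_test(prompt: str) -> bool:
    """Return True only if the prompt looks safe to tell a stranger."""
    return not any(pattern.search(prompt) for pattern in SECRET_PATTERNS)

prompt = "Summarize this CONFIDENTIAL product requirements doc for me..."
if passes_coffee_shop_test(prompt):
    print("Okay to send.")
else:
    print("Don't tell a chatbot things you wouldn't tell a stranger.")
```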
2. ChatGPT doesn't understand literally anything
The AI chatbots lie really convincingly. It takes effort to remember that they don't "know" anything, nor do they "understand" your questions. They take your string of text, put it through some super fancy math, and spit out another string of text on the other side.
If anyone is interested, this is philosopher John Searle's Chinese Room thought experiment. Imagine a person sitting in a sealed room who receives slips of paper with symbols on them. They have a rulebook: if you receive this symbol, send out that one. To the person outside the room, it feels like a real conversation, because they put in symbols and get back symbols that make sense as responses. But the person inside the room, doing the symbol matching, doesn't actually understand anything they are communicating.
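The Chinese Room is easy to sketch in code. Here's a toy version in Python; the phrases and replies are all made up. The "room" below produces replies that feel conversational, but it's only matching symbols against a rulebook. Nothing in it understands anything. Real chatbots swap the rulebook for statistics over mountains of text, but the point stands.

```python
# A toy "Chinese Room": a rulebook mapping incoming symbols to outgoing ones.
# The replies feel conversational, but nothing in here understands anything.
RULEBOOK = {
    "hello": "Hi there! How are you today?",
    "how are you?": "I'm doing great, thanks for asking!",
    "what's the weather like?": "Beautiful and sunny, I hope!",
}

def the_room(symbol_in: str) -> str:
    # Look up the incoming symbol and pass the matching one back out.
    return RULEBOOK.get(symbol_in.lower().strip(), "How interesting! Tell me more.")

print(the_room("Hello"))            # Hi there! How are you today?
print(the_room("How are you?"))     # I'm doing great, thanks for asking!
print(the_room("Explain my rash"))  # How interesting! Tell me more.
```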
So don't ask it questions with clear right or wrong answers, unless you plan on double-checking every answer it gives. Stick to questions that are harder to quantify - "How effective do you think this is?" or "Can you help me think through this problem?" Instead of looking for explicit answers, use it to help you look at your questions from different perspectives.
3. GenAI is a Tool, Not a Feature
This one is for my friends in product development. I've already seen completely asinine implementations of genAI solutions just so the sales folks can say they have genAI. You see these things a lot when hype cycles are at their peak and on the downswing. Companies that don't have a clear use case for genAI shove out some half-baked feature that doesn't actually serve a need.
GenAI doesn't actually add new functionality to your website or product. All it does, theoretically, is make things you are already doing easier by not requiring a human to do them every time.
If you build products, you should constantly be asking what problem this solves for your customers. Seriously, refuse to shut up about it. It'll turn out better in the end. As an individual, if you see someone talking about a new genAI feature and you think, "Huh, that's weird. Why would they do that?" it's probably best at that point to walk away or choose the non-genAI route.
4. Be Transparent
This is another one for my product development friends. If you are using AI anywhere in your business that impacts customers, you should, at the very least, be telling your customers you are doing it. It's even better if you tell them you are using AI and give them the option to opt out.
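For the builders, the opt-out version can be as simple as a consent check in front of every AI-powered path. A minimal sketch, with made-up names and a made-up settings object; the point is the shape, not the specifics.

```python
from dataclasses import dataclass

@dataclass
class UserSettings:
    # Hypothetical per-user flag, set from a clearly labeled
    # "this feature uses AI, opt out here" control in your product.
    ai_features_enabled: bool = True

def genai_drafted_reply(message: str) -> str:
    # Stand-in for a real genAI call.
    return f"Thanks for reaching out about: {message}"

def human_drafted_reply(message: str) -> str:
    # Stand-in for routing the message to an actual person.
    return "(queued for a human to answer)"

def draft_reply(message: str, settings: UserSettings) -> str:
    if not settings.ai_features_enabled:
        # The customer opted out: take the non-AI path.
        return human_drafted_reply(message)
    # Be transparent: label AI output as AI output.
    return "[Drafted with AI] " + genai_drafted_reply(message)

print(draft_reply("my invoice", UserSettings(ai_features_enabled=False)))
```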
I recently saw a professional talking about how much of a difference ChatGPT made in his life because he was able to automate low-value work (his words), like emailing his customers.
Imagine being on the other end of that scenario and finding out that your emails were so "low-value" that the person you were paying wasn't even reading them, and was having ChatGPT handle it all. It's not a great feeling, and it's the kind of thing that damages your reputation when your customers find out (and they will. They always do).
As an individual, the more transparent a company is about its use of AI and how you can engage with it, the more likely it is that the company is using genAI responsibly. If you have to comb through the privacy policy or terms of use to find the one or two words they added about their use of genAI (cough*Zoom*cough), it's a good indication they aren't using AI responsibly. Avoid it if you can.
5. Be wary of the "10 Must-have Prompts for X" posts
There are a lot of really low-quality guides on how to talk to genAI bots. It can be hard to separate the high quality from the low quality. The main concern with these guides is that they often tell you to provide way more information than you actually should (see Tip #1).
Don't get me wrong, there could be some gems in the "Must-have prompts" posts, but there's also a lot of "Ask ChatGPT to remember your bank account and routing number for you! Never forget it again!" or "Ask ChatGPT to balance your budget!" (ChatGPT is exceptionally bad at math).
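On the math point: balancing a budget is arithmetic, and arithmetic is exactly what deterministic tools are for, whether that's a spreadsheet or a few lines of ordinary code. A trivial sketch in Python, with made-up numbers:

```python
# Balancing a budget is arithmetic, and code gets arithmetic right every time.
income = 4_200.00
expenses = {
    "rent": 1_600.00,
    "groceries": 450.00,
    "utilities": 180.00,
    "fun": 300.00,
}

total_spent = sum(expenses.values())
print(f"Spent {total_spent:.2f} of {income:.2f}, leaving {income - total_spent:.2f}")
# Spent 2530.00 of 4200.00, leaving 1670.00
```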
Right now, there are very few people in the world who are qualified to be called "prompt engineers," and very few folks who know enough to advise you on how to ask questions of genAI and get good responses. Peruse as you want, but remember that these posts are the AI equivalent of a BuzzFeed listicle. Sure, there might be some good info in there, but they are mostly just dumb.
Closing thoughts
There are lots more things to be said about how to use AI responsibly, but I thought this was a good starter list. It's easy to forget each one of these things, and I remind myself of them constantly. I personally *love* must-have lists, but wow do people recommend some exceptionally bad things when it comes to AI prompts.
What about you? What thoughts do you have on how not to f*ck up with AI?
Regarding point #2, it's always funny to me when someone is shocked by an opinion ChatGPT seemingly has. ChatGPT is trained on the internet, so it tends to hold whatever general consensus the internet has on these things.