We Suck at AI Literacy - But We Can Do Better
What is AI Literacy? Why is it important? How do we get better?
Hi friends. Let me start today with an apology for missing last week. I've mentioned a few times that this is a tumultuous stretch for me - a vacation, a layoff, and a particularly ornery bout of my seasonal affective disorder - and I needed to make some head space for myself. I'm working to get my publishing groove back, so I'm asking for some grace while I find that cadence again.
Relatedly, I'm contemplating dropping down to every other week while I get my bearings. What do you think? Is every other week enough? Let me know in the comments.
Finally, I'd like to thank my first yearly subscriber - my mom. You might say, "Well, of course your mom subscribed, she's your mom." You aren't wrong, but it still means that she cared enough to consistently read and to throw some money at me. So, thanks Mom - I appreciate it. If you'd like to join her, you know what to do. And I'd appreciate it just as much. Well, almost. It's hard to compete with Mom.
With that, on to the main event.
Recently, I was having dinner with a friend who told me that he was unsure about AI and what it meant for the future. We spent the next 20 minutes discussing his concerns--what AI was, what AI wasn't, where we should be concerned, and where the hype was overblown. It was a fun, super productive conversation, and it ended with the mutual understanding that a lot of people just don't 'get' what AI is, how it does things, or what its limitations are.
That's a pretty big problem.
Most folks lack AI literacy. If you asked the average person how ChatGPT works, you'd get a shrug and an "I don't know. It just does." The average person knows remarkably little about a technology that touches almost every aspect of their lives and has an impact on them every single day. They make assumptions about what it's doing, or worse, they just take outputs from AI systems as objectively true. When people don't have literacy in a technology this pervasive, they can't make informed, meaningful decisions about it.
What is AI Literacy?
Ironically, the best definition of AI literacy I've found comes from AI itself. According to Claude 3, AI literacy "refers to having an understanding of what artificial intelligence (AI) is, how it works, and how it impacts society. It involves being knowledgeable about the capabilities, limitations, and implications of AI systems." Being AI literate is foundational to using AI responsibly.
Being AI literate means being able to use Claude 3 and understand, "All the model did was convert my question into numbers, run some statistics on those numbers to predict the most likely next numbers in the sequence, and then translate those numbers back into words to print on the screen." Now, there's a whole shitton more actually going on when I pose a question to Claude 3, but conceptually, that's what's happening. It's not difficult to understand, but it helps me frame the result I get back.
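To make that convert-predict-convert loop concrete, here's a toy sketch in Python. To be clear, this is not how Claude 3 actually works - real models use a learned neural network over a vocabulary of tens of thousands of tokens, not a hand-written probability table - but the shape of the loop (words in, numbers, statistical prediction, words out) is the same idea.

```python
# A toy next-token predictor: text -> numbers -> prediction -> text.
# A deliberately tiny stand-in for a real LLM, which learns its
# probabilities with a neural network instead of a hand-written table.

VOCAB = ["the", "cat", "sat", "on", "mat", "."]
TO_ID = {word: i for i, word in enumerate(VOCAB)}

# Made-up "learned" statistics: given the current token ID, how likely
# is each possible next token ID?
NEXT_TOKEN_PROBS = {
    TO_ID["the"]: {TO_ID["cat"]: 0.7, TO_ID["mat"]: 0.3},
    TO_ID["cat"]: {TO_ID["sat"]: 1.0},
    TO_ID["sat"]: {TO_ID["on"]: 1.0},
    TO_ID["on"]:  {TO_ID["the"]: 1.0},
    TO_ID["mat"]: {TO_ID["."]: 1.0},
}

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    # 1. Convert the words into numbers (tokenization).
    ids = [TO_ID[w] for w in prompt.split()]
    for _ in range(max_new_tokens):
        probs = NEXT_TOKEN_PROBS.get(ids[-1])
        if probs is None:  # "." has no continuation in our toy table
            break
        # 2. Pick the statistically most likely next number.
        ids.append(max(probs, key=probs.get))
    # 3. Translate the numbers back into words.
    return " ".join(VOCAB[i] for i in ids)

print(generate("the"))  # -> "the cat sat on the cat" (ran out of token budget)
```

Nothing in that loop "knows" what a cat or a mat is; it just follows the statistics it was handed. Scale the table up into a neural network trained on much of the internet and the outputs get far more impressive, but the mechanism is the same kind of thing.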
The ultimate goal of AI literacy is to materially educate folks so that they can intentionally and deliberately engage with AI, aware of the risks and limitations of doing so. If you are AI literate, you can use AI responsibly and get the benefits of the technology while avoiding the worst of the risks and challenges.
So my understanding of how Claude works helps me frame my interactions with it. Claude didn't understand my question; it just did some algebra. That doesn't mean Claude is wrong, or that it generated incorrect responses. It just means I know that Claude can be wrong, because it's not programmed to be correct.
The Broader Vision
This isn't the first time this kind of thing has come up. In the early 2000s, when the internet was just starting to become a part of everyday life, the concept of Digital Literacy emerged, and as a society, we dropped the ball. We never built the broad digital literacy we needed to help people engage with the technology that was inundating their lives.
There are lots of reasons for the rise of misinformation and disinformation. But our collective inability to understand the technology that had come to rule our lives opened the door to those challenges.
We could repeat the past if we don't focus on AI literacy today. I won't be hyperbolic and claim the outcome will be worse than our digital literacy failure. But a lack of AI literacy can make things much harder than they need to be going forward.
If it's so simple, why aren't more people AI literate?
There are a couple of reasons, from my perspective. The first is that no one has made a concerted effort to develop an accessible AI literacy curriculum that anyone with a high school education can understand. The resources just don't exist yet. Teaching yourself requires a lot of motivation and the ability to translate semi-technobabble into something you can understand, which is a pretty high bar to clear. We need folks who can take these very heady concepts and make them accessible and easy to understand.
The second is a more tin-foil-hat reason: companies like OpenAI, Microsoft, and the other genAI players have a vested interest in folks not understanding AI. If people don't understand AI, they are easier to (lie) sell to, because the companies can make wild claims - like having developed Artificial General Intelligence - and no one knows any better than to call them on it. They can exploit the misunderstanding to boost usage, increase dependence on their tools, and find ways to make more money. The fewer people who question the results of AI, the more ingrained it becomes in our lives, and the easier it is to manipulate how we engage with it.
The literacy problem isn't limited to the average person. Plenty of folks with very impressive titles have the same lack of AI literacy.
For example, I have a friend. They're brilliant - one of the most driven, motivated, and intelligent people I have ever met. But when they post on LinkedIn about AI, it's clear they don't understand the technology. They make claims and predictions about AI that anyone who knows the field can tell aren't plausible or realistic. They use words like "understanding" and AI's ability to "reason" as predictors of future performance. But knowing how genAI works, we know it only gives the illusion of understanding and reasoning; it isn't actually doing either.
I know of a VP at another company who told his entire department that if they weren't using ChatGPT, they were failing. He told them to use ChatGPT for all of their daily tasks, from writing emails to customers, to reviewing draft contracts, to keeping track of their quarterly goals. He wanted them to use ChatGPT for everything because, in his mind, it would boost productivity.
This is problematic for a lot of reasons, but the most important at the moment is that, by default, consumer ChatGPT conversations can be used to train the model. Because this VP didn't understand how ChatGPT works even at a business level, he put his entire company at risk of unintended disclosure.
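This is exactly the kind of failure basic AI literacy prevents. As a purely hypothetical sketch - not anyone's real tooling - here's roughly what a minimal guardrail might look like: scrub obviously sensitive strings before any text leaves for a third-party model. The patterns are illustrative and send_to_llm is a placeholder, not a real API; an actual deployment would use proper data-loss-prevention tooling and an enterprise plan where inputs aren't used for training.

```python
import re

# Hypothetical patterns for things you never want leaving the building.
# A real deployment would use dedicated DLP tools; this is just the idea.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSNs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),     # email addresses
    (re.compile(r"(?i)\bconfidential\b.*"), "[REDACTED LINE]"),  # flagged lines
]

def scrub(text: str) -> str:
    """Replace sensitive substrings before the text leaves your control."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def send_to_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever model API your team actually calls.
    raise NotImplementedError

draft = "Email jane.doe@example.com re: the confidential Acme merger terms."
print(scrub(draft))  # -> "Email [EMAIL] re: the [REDACTED LINE]"
```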
Finally, there's a frankly embarrassing number of lawyers who've gotten in trouble because they used ChatGPT to write court documents and it hallucinated a bunch of cases. It's driven by these lawyers fundamentally not understanding what ChatGPT is doing and how it works. If you were looking for the legal version of Fuck Around and Find Out, this is probably it.
So what do we do?
Right now, we fight the good fight. Chances are good that if you are reading Byte-Sized Ethics, you have some AI literacy. Your charge is to spread that literacy as far as possible.
Whenever you see someone using words like "understanding" or "reasoning" to describe AI, correct it. Be *that* person on the comment thread or at the meeting. It might seem like a useless gesture, but it helps change the dialogue.
Create your own AI elevator story - come up with a quick story about AI that avoids the worst misconceptions. No more than 30 seconds. This will help you prepare to talk about AI in a lot of different scenarios.
Read - the best way to learn more about how AI works is just to read. Check out the Book Bytes series here on Byte-Sized Ethics for a good place to start.
I also recommend "The Alignment Problem" by Brian Christian as a great primer on the history of AI and its concepts. I haven't written about this one yet, but it's coming.
Help inform people about the challenges and risks that come with using AI in certain contexts.
Take AI courses - there are a few good introductory AI courses on edX, Coursera, and LinkedIn Learning.
What do you think? Do you think AI literacy is important? And what are some other ways to increase AI literacy among non-technical folks?
I'm writing an article right now about how Devin has highlighted the lack of AI literacy among software developers. It's pretty crazy how AI-illiterate most people are, yet how confidently they still make claims about it.