The Proxy Problem: Why AI Can't Truly Understand Human Complexity
Proxies are fundamentally flawed, and AI can only see the map, not the territory.
Before this week’s article, I want to take a moment and celebrate breaking 1,000 subscribers! I am so humbled by the number of people who want to read what I write, and I am so excited to keep writing! Thank you to each of you for your support!
In this vein, I wanted to get your opinion. If you don’t see your option, drop it in the comments!
When you last played a video game, did you stop to chat with the merchant NPC about their hopes and dreams? Of course not—because they don't have any. They exist solely to sell you items. This deliberate simplification works perfectly in games but fails catastrophically when applied to real people. Yet this is precisely the challenge facing artificial intelligence: it can only understand humans through proxies—simplified models that flatten our messy, contradictory nature into manageable categories. This fundamental limitation may be AI's greatest blind spot.
Proxies Flatten People
NPCs are useful in video games because they are flattened. They don't have depth, character, wants, or desires. They fit cleanly into the buckets the game developers created for them: merchant, enemy, quest giver. The merchant is just a merchant; her entire existence is wrapped up in that single role. And that's perfect for a game environment.
But real people aren't that simple—our identities are messy and we refuse to stay in pre-defined buckets. Consider stereotypes: do you know anyone who perfectly embodies every characteristic of a stereotype and nothing outside of it? Of course not. A stereotype functions like an NPC—recognizable, but fundamentally unlike a real person.
When humans are represented as proxies or stereotypes, everything that doesn't fit is ignored. We are flattened—the complexity of identity brushed aside in favor of something easier to understand but ultimately incorrect.
Proxies Add Things People Aren't
A proxy doesn't just remove complexity; it adds characteristics we don't possess. If I say that I’m gay, people might assume that I am fashion-conscious. If I say that I’m a nerd, they might assume I’m not physically fit. If I say I’m an athlete, they might assume I’m not interested in intellectual pursuits.
But I'm all of these things...and none of them completely. I am gay but decidedly not fashion-conscious. I'm a nerd who loves physical activity. I'm an athlete who writes about AI ethics. I embody parts of these identities while violating their stereotypes in other ways.
Proxies Force Zero-Sum Thinking
Proxies create a false binary—you either embody all aspects of a category or none. This is precisely how data scientists often approach classification problems: distilling complex realities into simple yes/no decisions that make analysis cleaner but less accurate.
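To make the classification point concrete, here is a minimal sketch of how a yes/no decision flattens a person. The traits, weights, and threshold are all hypothetical, invented purely for illustration; real pipelines are more elaborate, but the binary cut at the end works the same way.

```python
# A hypothetical proxy-based classifier: several continuous traits
# get collapsed into a single yes/no label.

def classify_as_fan(person: dict, threshold: float = 0.5) -> bool:
    """Collapse a few proxy signals into one binary bucket."""
    # Weighted score over proxy signals (weights are made up).
    score = (0.5 * person.get("bought_merch", 0.0)
             + 0.3 * person.get("watched_games", 0.0)
             + 0.2 * person.get("follows_team_online", 0.0))
    # The thresholded yes/no discards everything the score can't express.
    return score >= threshold

# Someone who watches avidly but never spends money:
casual_viewer = {"bought_merch": 0.0,
                 "watched_games": 0.9,
                 "follows_team_online": 0.4}

print(classify_as_fan(casual_viewer))  # False — the nuance "watches but doesn't buy" is lost
```

The binary label is cheaper to store and easier to act on than the underlying traits, which is exactly why the flattening happens—and why everything that doesn't fit the bucket disappears.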
We experience this daily. Think about bizarre targeted advertisements that completely misunderstand your interests. Or consider political discourse where belonging to a party supposedly means supporting every policy position without nuance.
We Confuse the Map for the Territory
The fundamental error is mistaking the proxy for the reality it represents. When some people think "gay man," they immediately assume a collection of stereotypical attributes that may not apply to any individual gay man. The rich universe of individual variation gets ignored because it breaks the proxy.
While humans can update our understanding when confronted with contradictory information, AI cannot. For artificial intelligence, the proxy is the actual thing, not a representation.
The Problem for AI with Proxies
This is the critical limitation: humans can adapt because we experience the reality behind the proxies. AI cannot—it only ever knows the proxy, never the reality that informed it.
If proxies are inadequate for representing human experience even for humans, how can we expect AI to understand the human experience when all it has to work with are these flawed representations?
This isn't just a technical problem—it's a fundamental constraint. As we increasingly rely on AI to make decisions about people, we must recognize that its understanding will always be limited by the proxies it uses, and that those proxies will never capture the beautiful messiness of human identity.
So how close can we really be to ASI, or even AGI?