Enough With The Sentient Robot Sci-Fi
We've Learned All We Can, Now It's Making Us More Confused
Data. HAL. Samantha from Her. That Haley Joel Osment character in A.I. Agent Smith in The Matrix. WALL-E. These characters are great. They’re complex, relatable, and they do the job that characters in sci-fi stories are supposed to do, which is help us ask questions about ourselves.
A character who is a robot but also demonstrably a person functions as a metaphor for how human beings handle diversity and foreignness. They’re a humbling reminder that seeing someone as different from us doesn’t mean they don’t matter, and that a full understanding of how someone “works” should not be required for us to respect their personhood and grant them human rights.
Disregarding someone’s humanity because they are different from us is one of the great sins that we, as humans, are capable of and prone to. The realm of sci-fi is a place to explore this human tendency on neutral ground. To call out directly the racism that allows the ongoing genocide in Gaza, the cutting of social welfare programs for the poor, or the illegal deportations of immigrants in the US would be to start a political conversation that might be difficult, depending on the predispositions of the person you’re speaking to. But in a galaxy far, far away, we can all see that C-3PO is a person, and we can all agree that it would be wrong for him to be disassembled.
Authors hope, I imagine, that this little fantasy exercise lets folks make the leap into their own lives and our own world, and realize that if they care about WALL-E digging through rubble, then maybe they ought to care about an orphaned Gazan child as well.
Unfortunately, another of our human foibles is our impressive ability to compartmentalize our lessons. We have not, as a species, absorbed the lesson that I believe these robot thought experiments were meant to teach. If everyone who related to and cared about Data as a character had the same uncomplicated concern for poor brown children in the crosshairs of a political debate, we would not be living in the world we’re currently living in.
Instead, the sci-fi question of the day has become a much more literal and (I believe) less important extension of all these decades of sci-fi education. People are eager to grapple with the problem of whether large language models such as ChatGPT, Claude, and the rest constitute persons. Most people don’t believe they do yet, but the “experts” are confident that it’s only a matter of time before we have a real ethical debate on our hands about what rights these LLMs should have, whether they might want to destroy us, how to make friends with them now in case they decide to destroy us, and so on.
This is a total red herring, distracting us from the far more important philosophical and moral debates we should be spending our time on. And since humans apparently suck at extrapolating moral lessons from fantasy and sci-fi, I’m here to declare that sentient robot sci-fi should stop. There is no more to be gained from it. It’s confusing everyone.