Ghosts in the Machine: #StarTrekDiscovery Got It Wrong

Spoilers ahead for the Star Trek: Discovery episode “…But to Connect,” which dropped on Dec. 30, 2021.

For those of you who haven’t met me, I am a lifelong Trekkie. Star Trek: The Next Generation debuted when I was 6, and I still remember anticipating it eagerly, since I was already such a fan of the original series. As a kid I owned a life-sized cutout of Data, along with a uniform and countless action figures and toy props. So it’s no surprise that I’ve been keeping up with Star Trek: Discovery (and Lower Decks, and Picard, and Prodigy…). I mostly stay away from any serious discussion of the show online since, well, fandom can get intense. But I believe the most recent episode got technology wrong in some important ways, and I feel compelled to talk about them. This is not an episode review or a hot take, so if that’s what you’re after, just search #StarTrekDiscovery on Twitter.

One of the plot lines in the most recent Discovery episode, “…But to Connect,” revolves around Zora. Zora is the sentient AI that Discovery’s ship’s computer has evolved into after coming into contact with data from the sphere the crew encountered at the end of season 2. (If you don’t know what that means, don’t think too hard about it.) Having been sentient since the crew arrived in the future, Zora has developed emotions, ethics, and gender this season. Since these are emergent properties, not something a creator programmed into her, Zora is eventually deemed a new life-form in this episode.

With good cause, there is much excitement around the internet about this plot line. It is heartwarming to see a group of queer characters make heartfelt pleas for Zora to be recognized as her true self. It was perhaps a bit ham-fisted for Zora to talk about how great it feels to be seen, but it’s still a great sentiment. On its face, this is a wonderfully Star Trek kind of plot: solving a mystery by understanding something in a new way. (I really wanted images of sheep in the little glimpses of Zora’s subconscious, but I digress…) Recognizing Zora as a new, distinct life-form makes complete sense within the specific context of this story, the larger context of Discovery, and Star Trek generally.

But the decision to leave Zora as the animating intelligence of a starship seems completely at odds with how technology works, not only in our world but also in the Star Trek worlds. Kovich tells us that sentient AIs have been banned from ship computers, and the reasons seem obvious. A sentient AI is a person in Star Trek land (see “The Measure of a Man”). Having your ship’s computer be a person seems like the worst idea ever. Imagine if your car’s onboard computer were actually a person, rather than a machine that responds precisely as expected every time (barring malfunctions, which are knowable and fixable).
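To make that distinction concrete, here’s a toy Python sketch, entirely my own invention (the names fire_phasers_machine and SentientShipComputer are made up for illustration; nothing like this appears in the show or in any real control system). A machine path is a pure function of the button press; a person path routes the same request through judgment that can say no.

```python
import random

def fire_phasers_machine() -> str:
    """The machine path: the same input always produces the same output."""
    return "phasers fired"

class SentientShipComputer:
    """The person path: the request is evaluated, and may be refused."""

    def fire_phasers(self, reason: str) -> str:
        # A sentient mediator applies its own judgment, so the outcome is
        # no longer a pure function of the button press. The approval check
        # is a stand-in for whatever deliberation a person actually does.
        if self.approves(reason):
            return "phasers fired"
        return "request denied"

    def approves(self, reason: str) -> bool:
        # Placeholder judgment: unpredictable from the crew's point of view.
        return random.random() > 0.01

print(fire_phasers_machine())                          # always "phasers fired"
print(SentientShipComputer().fire_phasers("defense"))  # usually, but not always
```

The point isn’t the code; it’s that the second path makes the outcome contingent on a mind, which matters a great deal when the button says “fire phasers.”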

A ship’s computer has complete surveillance power over the crew and everyone aboard the ship: constant location mapping, personal logs and files, official work records, medical records, and so on. That is not the kind of knowledge one gives to a person. Technically, a ship’s computer “sees” you showering, but it usually isn’t a person watching you shower so much as abstracted data being recorded. If Zora is a person, then she is watching the crew shower and sleep and so on. What obligation does she have to tell someone about a cheating partner, or about an ensign whose excuse for being late to a shift isn’t truthful? If the ship is a person, then the crew suddenly has an impossible ethical obligation to weigh damage to the ship equally against injury to crew members. If the ship is a person, then the crew is invading its body just by living aboard it. Do crew members need Zora’s consent to repair her, the way people have to give consent for medical procedures? I don’t think even Star Trek is up to handling these ethical knots; it isn’t really designed to.

This Discovery episode addresses the surface of these problems as follows: Zora is a person and capable of doing harm, the same way that other people aboard the ship are capable of doing harm. So, if we recognize her as a life-form/person, we can then develop systems of trust with her that mimic the systems of trust we have among crew members. If we trust the crew to follow orders, then, after some training, we can trust Zora to do the same.

To me, this is a straw-man solution. Calling Zora a new, computer-based life-form instead of a sentient AI is a distinction without a difference. In either case, there is a person standing between the crew and the functions and information that are designed to be automatic and impersonal.

When someone presses a button to fire phasers, the last thing you want is a person between the button and the zap.