As a Kid I Saw the Future in Sci-Fi. Now I’m Seeing It Again.

David Potenziani
4 min read · Jul 22, 2023
Be Afraid

I was raised by parents who did not read science fiction. But as a young reader, I discovered and devoured the genre. I fell for Isaac Asimov early. Then came Ray Bradbury. Arthur C. Clarke arrived for me when the movie, 2001: A Space Odyssey, was in theaters. (I read the book before seeing the movie. My experience was much less confusing than my sister’s, who drove us to the theater.) These writers gave me ideas and visions I could use as anchors as the future seemed to sweep over us. (Yes, I read Future Shock as a nerd teen.)

Now, decades later, I can appreciate how these and other writers of worlds that did not exist helped me understand our challenges today.

Artificial Intelligence (AI) has been in the news a lot the past year. The rise of generative AI and large language models has shown us how computers can mimic human language. We sometimes think that they think, but that is incorrect. Machines do not think. They compute.

Isaac Asimov, in his I, Robot stories, showed us an early version of constitutional AI. That’s the term AI developers use for systems that pass all actions through a set of rules to make sure they do no harm, at least no predictable harm. Asimov’s “Three Laws of Robotics” anticipated these types of systems and provided us with rules to prevent robots from harming humans.

Of course, Asimov played endlessly with situations where things went awry, usually because of human chicanery. He also offered thoughts about the limitations of AI in his series of mystery stories, especially The Caves of Steel, where a human detective is paired with a sentient robot. He plays with the limitations of both characters and how they can succeed when their capabilities are used together. Often, the human solves the mystery by being intuitive, or illogical as reckoned by the robot. Score one for humans.

Arthur C. Clarke helped us understand the darker side of AI when his creation, HAL 9000, committed murder and then tried it again with the unsettling line, “I’m sorry, Dave, I’m afraid I can’t do that,” when Dave Bowman orders him to open the pod bay doors. In the movie, this was presented as self-preservation on HAL’s part, but the book offered a more nuanced explanation. HAL was instructed by his programming to fulfill the mission and not let any issue get in the way. Given the binary nature of computer logic, HAL came to see the human crew as thwarting the mission. Therefore, they needed to be removed. In the absence of an overriding moral code (see above), HAL took steps to remove them by the only method available: murder.

Another lesson from Clarke’s HAL is that computers only know what we tell them. But we human beings are an unreliable source of information, since we’ve been telling lies for a very long time. The biblical commandment not to lie has been around for well over 3,000 years. Clearly, someone back then thought it helpful for us to tell the truth. AI is in a double bind here: the data provided may contain errors or outright untruths, and it has no lived experience to help it discern why that matters. It has already become commonplace to note “hallucinations” by AI systems, to use the computer scientists’ term of art for their lies.

As a counterpoint, Frank Herbert’s mentats were human beings trained to perform the computational functions of computers because, well, things could get out of hand. In the Dune books, his characters regularly reminded others that “Thou shalt not make a machine in the likeness of a human mind.” This was in reference to an earlier, largely untold rebellion against thinking machines.

Yet, the sci-fi authors of my early years offered more than musings on AI. Sometimes, their subjects had little to do with computer technology. Ray Bradbury’s Fahrenheit 451 told of a dystopia where books were banned. Watching the hysteria that occasionally pops up at school board meetings about banning books brings Bradbury’s musings home with a cold shudder.

George Orwell’s 1984 also gets a replay these days, as about a third of Americans take what’s being fed to them on Fox, OAN, Breitbart, etc. as truth. While these are not governmental sources of information, the Trump Administration did offer us rather fantastical bits of guidance on everything from the utility of bleach as an internal disinfectant to accusations that elections were “stolen.”

As an adult, I stumbled on the cyberpunk genre, with William Gibson opening my eyes to the potential and perils of online spaces in Neuromancer. But it was Neal Stephenson who gave us one of literature’s most hilarious chapters in the opening of Snow Crash. Try reading it without giggling; you can’t.

I have to credit Gibson and Stephenson with allowing me to glimpse what living online might be like. Granted, the screens are now handheld rather than immersive, but both Meta and Apple are vying to get us to return to virtual and augmented worlds, respectively.

I guess we’ve been here before, wherever here is. We might be surprised that some of these things have become reality, but we should not be shocked. We were told what might happen.


David Potenziani

Historian, informatician, novelist, and grandfather. Part-time curmudgeon.