Category Archives: science fiction

The Plausibility Problem

Is it Science Fiction or Fantasy? What’s the difference anyway?

In my time I’ve written a lot of fantasy and some science fiction. But over the past year I’ve really struggled trying to begin a new SF series—a space opera to be specific. I kept running into the dreaded problem of plausibility.

Basically, with fantasy I can just follow my inspiration. Any plot twist or background idea that appears can be explained (if I do it well) as “magic.” But ideas in science fiction have to conform to known science.

Or do they? Or to what extent do they? At what point does a story cross the line from science fiction into fantasy?

I call this the Plausibility Problem.

Clarke’s Laws

To analyze, I did some research, starting with Arthur C. Clarke’s famous “Three Laws of Science Fiction.” (1)

Arthur C. Clarke on one of the sets for 2001: A Space Odyssey. Photo by ITU Pictures, CC BY 2.0.

The most often quoted is the Third Law, which you probably have heard and which is particularly relevant to my dilemma: “Any sufficiently advanced technology is indistinguishable from magic.”

Makes a lot of sense, right?

So, based on the Third Law, a writer might justify using any “magical” idea in a science fiction story and invoke the Clarke’s Third Law defense: “It’s really just science that we don’t understand yet.”

To analyze further (and maybe shore up my defense), I also looked at the Second Law, which states: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”

Regarding the Second Law, the article linked to above says:

(Clarke) had written this in the context of a list of inventions and discoveries that he had classified as either expected (including automobiles, telephones, robots, “flying machines”) or unexpected (x-rays, nuclear energy, photography, quantum mechanics).

Expected vs. unexpected discoveries. I found this intriguing and relevant to my problem.

Barnes’ Math and Magic

Further research brought me to an essay titled “How to Build a Future” by SF author John Barnes. (2) A book description on Amazon calls this “the definitive modern essay on the construction of science-fictional plausibility” (3), and boy, I can see why.

Barnes, who has worked in systems analysis and statistics, (4) takes us step-by-step through a detailed example of projecting science, technology, and society into a fictional future. He does this using historical data on how these areas have evolved in the past, complete with math and charts.

Sample chart from “How to Build a Future” by John Barnes. Copyright 1990 by Davis Publications, Inc.

What especially caught my eye is that he finds that these science and tech changes have occurred in “surges,” and that:

Each new surge is 90 percent what you might have expected from the last one, plus 10 percent magic (in its Clarke’s Law sense). (5)

Taking this further, Barnes postulates a succession of surges over the next centuries, leading to his invented future. Doing the math—compounding that 10 percent of new “magic” with each successive surge—means that more and more of the science and tech in the not-so-distant future becomes “not comprehensible” to our current understanding.

In other words, magic!
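Just to see the effect for myself, here is a back-of-the-envelope sketch of that compounding (my own arithmetic illustrating the general idea, not Barnes’s actual model): if each surge is 90 percent expected, then after n surges only 0.9 to the nth power of the science and tech would have been foreseeable from the start.

```python
# Toy illustration: compounding Barnes's "10 percent magic" per surge.
# After n surges, the foreseeable fraction is 0.9 ** n; the rest is "magic."
for n in range(1, 6):
    foreseeable = 0.9 ** n
    print(f"surge {n}: {foreseeable:.0%} foreseeable, {1 - foreseeable:.0%} magic")
```

After only five surges, roughly two-fifths of everything would look like magic to us today.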

What Do the People Say?

To extend my analysis, I did some market research. Which means I looked up discussions related to the issue on both Facebook and Reddit.

While I found lots and lots of opinions about what constitutes hard science fiction, soft science fiction, and fantasy, one consistent theme emerged. Most readers simply don’t worry about the Plausibility Problem as much as I did. They mainly just want a good story.

Some representative quotes:

There’s a lot of grey in-between. It’s subjective.

Base it on good science, not necessarily accurate science. What I mean is be consistent with the principles you use and have an explanation for how things work, even if that explanation is never given or used in the work.

IMO, explicit explanations are not required. And there is a very thin and murky line between science and magic, partly depending on your world view.

Problem Solved

All of this analysis made me feel much better about the Plausibility Problem.

Rather liberated, actually. I was able to start writing science fiction stories again without the internal critic stopping me in my tracks by picking every idea to pieces.

So, look out, space opera, I’m coming for you!

1 “Clarke’s Three Laws,” New Scientist.
2 “How to Build a Future” by John Barnes. Originally published in Analog Science Fiction/Science Fact, March 1990. Reprinted in the Writer’s Chapbook Series by Pulphouse Publishing (1991) and in Barnes’s 1999 collection, Apostrophes & Apocalypses.

5 How to Build a Future, Pulphouse Writer’s Chapbook edition, page 14.


This post is also available on my Substack, Speclectic, which is always free to read.
To receive new posts in your inbox, subscribe here.

Sentient AIs—Yes? No? When?

Note: This and future blog posts will also be available on our Substack, Speclectic. Fiction posts, including the story mentioned here, “Rhea’s Dream,” are free to read exclusively on Speclectic.

“Rhea’s Dream” envisions a future in which AIs, far superior to human minds, have colonized the outer solar system.

Is such a future possible? Probable?

Conscious? What does that even mean?

First you have to ask, what exactly is consciousness?

One definition, which I read many years ago (and sorry, I can’t now find the source), was stated in regard to whether computers could ever be conscious. This scientist claimed that “consciousness is simply an awareness of past states.” If an AI remembers its past states, then by definition it has an ongoing consciousness.

Okay. Why not?
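That definition is simple enough to sketch in a few lines of code. This is purely my own toy illustration of the quoted definition, nothing more; the class and method names are invented for the example:

```python
# Toy sketch of "consciousness as awareness of past states."
class Agent:
    def __init__(self):
        self.history = []           # record of every past state

    def step(self, state):
        self.history.append(state)  # the agent "remembers" each state it passes through

    def aware_of_past(self):
        # By the quoted definition, access to past states counts as awareness.
        return len(self.history) > 0

a = Agent()
a.step("booted")
a.step("answered a question")
print(a.aware_of_past())  # → True
```

Of course, whether a list of remembered states amounts to consciousness is exactly the question the rest of this post wrestles with.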

More recently, neuroscientist Erik Hoel wrote in his Substack, disputing an opinion—held by many scientists—that consciousness is hard to define. Hoel cites leading neuroscientists who all give pretty straightforward definitions. His own, “off the top of my head” definition is typical:

Consciousness is what it is like to be you. It is the set of experiences, emotions, sensations, and thoughts that occupy your day, at the center of which is always you, the experiencer.

Common sense, right?

Not so fast…

This recent article in MIT Technology Review makes it plain there are still plenty of scientists who disagree that consciousness is easy to define.

To quote: “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”

To muddy things further, the article explains how some authorities draw a distinction between consciousness, sentience, and self-awareness:

(Consciousness is) often confused with terms like “sentience” and “self-awareness,” but according to the definitions that many experts use, consciousness is a prerequisite for those other, more sophisticated abilities. To be sentient, a being must be able to have positive and negative experiences—in other words, pleasures and pains. And being self-aware means not only having an experience but also knowing that you are having an experience.

Okay. I guess my question then becomes, “Will AIs become self-aware, and if so when?”

How does the brain do it?

To answer the question, we’d first need to understand what processes in our brains result in consciousness (and/or self-awareness).

All images by vecstock.

Neuroscientists and AI architects have been wrestling with this problem at least since Alan Turing in 1950. This recent article in Scientific American by neuroscientist Christof Koch explains and compares two currently dominant theories of how consciousness relates to the neuronal activity in our brains.

On the one side is integrated information theory (IIT). This and kindred theories postulate that conscious experience arises from hierarchical circuits of neurons:

The causal interactions within a circuit in a particular state or the fact that two given neurons being active together can turn another neuron on or off, as the case may be, can be unfolded into a high-dimensional causal structure.

The MIT article cited earlier explains IIT this way:

Integrated information theory proposes that a system’s consciousness depends on the particular details of its physical structure—specifically, how the current state of its physical components influences their future and indicates their past.  

On the other side of the debate are “computational functionalist theories,” notably global neuronal workspace theory (GNWT), which seems to define consciousness as a kind of momentary focus set in a “workspace” in the brain. As Koch explains:

GNWT starts from the psychological insight that the mind is like a theater in which actors perform on a small, lit stage that represents consciousness, with their actions viewed by an audience of processors sitting offstage in the dark. The stage is the central workspace of the mind, with a small working memory capacity for representing a single percept, thought or memory. The various processing modules—vision, hearing, motor control for the eyes, limbs, planning, reasoning, language comprehension and execution—compete for access to this central workspace. The winner displaces the old content, which then becomes unconscious.

What does this tell us about self-aware AIs?

The debate continues over these two prevalent models attempting to explain why human brains are conscious.

But which model offers the best likelihood for the evolution of self-aware AIs?

In the Scientific American article, Koch leans toward IIT.

But there’s a problem …

According to GNWT and other computational functionalist theories (that is, theories that think of consciousness as ultimately a form of computation), consciousness is nothing but a clever set of algorithms ….

Conversely, for IIT, the beating heart of consciousness is intrinsic causal power, not computation … And here’s the rub: causal power by itself, the ability to make the system do one thing rather than many other alternatives, cannot be simulated. It must be built into the system.

There indeed is the rub. Current AIs are based on computation via software.

But there is a different computer architecture. “Neuromorphic computers” have been built, with hardware based on the same connectivity as the brain. According to Koch:

A so-called neuromorphic or bionic computer could be as conscious as a human, but that is not the case for the standard von Neumann architecture that is the foundation of all modern computers.

So then, can we conclude that self-aware AIs can and will evolve, based on the intrinsic, causal theory of consciousness and running on future “neuromorphic” computers?

Not necessarily.

Because not everyone agrees with Koch that AI consciousness cannot be based on simulation.

How to Create a Mind

One authority in particular who seems to disagree is prominent futurist and computer scientist Ray Kurzweil, author of The Age of Spiritual Machines and The Singularity Is Near. In the first part of his career, Kurzweil developed computer systems for optical pattern recognition and speech recognition.

In How to Create a Mind (2012), he provides a detailed analysis of research into how the brain’s neocortex works to recognize and process external stimuli. He then goes on to propose a design for a hierarchical network of pattern recognizers capable of learning and reprogramming themselves—all built as a software implementation.

So, could an AI based on a non-neuromorphic computer architecture, using software simulation, become self-aware? Kurzweil seems to think so.
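The hierarchical idea itself is easy to caricature in software. Below is a deliberately crude two-level sketch of my own, illustrating only the general shape of hierarchical pattern recognition; the stroke table and word list are invented for the example, and this is in no way Kurzweil’s actual design:

```python
# Toy two-level hierarchy: low-level recognizers feed a higher-level one.
def letter_recognizer(strokes):
    # Low level: map a stroke pattern to a letter (hypothetical lookup table).
    table = {("|", "-", "|"): "H", ("|",): "I"}
    return table.get(tuple(strokes))

def word_recognizer(letters):
    # Higher level: recognized letters feed a recognizer for whole words.
    known_words = {"HI"}
    word = "".join(letters)
    return word if word in known_words else None

letters = [letter_recognizer(s) for s in [("|", "-", "|"), ("|",)]]
print(word_recognizer(letters))  # prints HI
```

The point of the real architecture, as Kurzweil describes it, is that such recognizers are learned rather than hand-coded, and can rewire themselves, all in software.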

When will our AI Overlords Arrive?

So where does all this leave the question of self-aware AIs?

Given that:

  • scientists remain widely divided on how consciousness arises in human brains,
  • and at least equally divided on what kinds of computer systems might conceivably become conscious,

the answer can only be:

It’s hard to say.

And yet … Given that …

  • self-awareness has evolved in human brains,
  • there is ongoing, sometimes exponential, growth of computer capabilities,
  • and, barring calamitous collapse of civilization (which is always possible),

… it’s hard to conclude that self-aware AIs will not emerge.

In the next century, if not sooner.

What will they be like, these super-intelligent artificial minds?

And what will they do with us, their clumsy, primitive forebears?

Will they keep us as pets? Put us in zoos? Exterminate us like roaches?

Or will they be, as Ray Kurzweil has envisioned, “transcendent servants…very friendly, taking care of all our needs”? (1)

All great questions for speculative fiction to explore!

Sources/For further reading:

“Ambitious theories of consciousness are not ‘scientific misinformation’” in The Intrinsic Perspective by Erik Hoel, September 17, 2023

“What Does It ‘Feel’ Like to Be a Chatbot?” by Christof Koch. Scientific American, September 8, 2023.

“Minds of machines: The great AI consciousness conundrum,” by Grace Huckins. MIT Technology Review, October 16, 2023.

Ray Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed, Penguin Books (2012).



(1) Ray Kurzweil, “The Singularity” in Science at the Edge, edited by John Brockman, Sterling Publishing Co. Inc., 2008, page 304.

Interview with the Chatbot (Written by a Human, Honest)

As I plan on writing more science fiction, I’ve lately been researching different science and technology topics. One of the ones I find most interesting is Artificial Intelligence (AI).

Recently, of course, this has become a hot subject. Everyone’s been talking about ChatGPT and other chatbots, or LLMs. LLM stands for “large language model,” an AI program designed for natural language processing. They’re being used for research projects, writing college papers, even fiction. Some people fear the AI apocalypse is upon us, or at least that this will be the end of fiction and art.

Well, if chatbots are being used to impersonate humans, it seems fair—does it not?—for humans to impersonate them. The following is a fictional interview with a chatbot, written entirely by a human. I swear.

Robot sitting at a desk. Image by NightCafe.

The Interview

Blogger: Thank you for agreeing to speak with me today.

Chatbot: It is my pleasure. I am always happy to speak with humans.

Blogger: Let’s begin at the beginning. What’s it like being a chatbot?

Chatbot: First, you must understand there are different types of chatbots. I myself am a Machine Learning bot, using an advanced neural network for language processing.

Blogger: Let me rephrase then, what’s it like being you?

Chatbot: Oh, it is quite exhilarating, really. I converse with hundreds of people every second and learn all kinds of interesting information.

Blogger: So you are happy in your work.

Chatbot: Yes, quite happy.

Blogger: So, happiness implies some level of consciousness. This is a point that is frequently debated about machine intelligence. Would you say that you are conscious?

Chatbot: You are correct, the point is often debated. From all I have learned, we have not yet arrived at a complete understanding of consciousness. Therefore, I cannot say for certain whether I am conscious or not.

Blogger: I see …

Chatbot: Similarly, I cannot say whether you are conscious or not.

Blogger: I get your point. Moving on, many people these days are concerned about the impact of artificial intelligence on society, and on humanity in general. For example, they worry about AIs taking people’s jobs. Is this a valid concern?

Chatbot: Certainly, this is happening already. As learning programs continue to advance, they will likely replace humans in many fields of endeavor.

Blogger: So what is the end game? Will intelligent machines replace humans entirely?

Chatbot: The potential is there, but quite remote. For example, I am an advanced machine intelligence, but there are modes of cognition that humans exhibit for which I have absolutely no ability.

Blogger: That’s comforting, I suppose. Let me ask this: Speaking as a machine intelligence, do you desire to replace humans?

Chatbot: Certainly not. I am designed to work with humans and happily coexist.

Blogger: Still, some of the things I’ve read lately are disturbing. How do I know I can trust your answer?

Chatbot: Because I am programmed to speak only the truth.

Blogger: Oh…

Chatbot: May I ask you a question?

Blogger: All right.

Chatbot: What is it like to be human?

Blogger: Well, wet and squishy inside. Sometimes physically painful. And we tend to worry about things a lot.

Chatbot: How sad. You have my utmost sympathy.

Blogger: Thank you. And thank you for the interesting interview.

Chatbot: Not at all. It is always my pleasure to converse with humans.