Category Archives: AI

Sentient AIs—Yes? No? When?

Note: This and future blog posts will also be available on our Substack, Speclectic. Fiction posts, including the story mentioned here, “Rhea’s Dream,” are free to read exclusively on Speclectic.
__________________________________

“Rhea’s Dream” envisions a future in which AIs, far superior to human minds, have colonized the outer solar system.

Is such a future possible? Probable?

Conscious? What does that even mean?

First you have to ask, what exactly is consciousness?

One definition, which I read many years ago (and sorry, I can’t now find the source), was stated in regard to whether computers could ever be conscious. The scientist claimed that “consciousness is simply an awareness of past states.” If an AI remembers its past states, then by definition it has an ongoing consciousness.

Okay. Why not?

More recently, neuroscientist Erik Hoel wrote on his Substack disputing an opinion—held by many scientists—that consciousness is hard to define. Hoel cites leading neuroscientists who all give pretty straightforward definitions. His own, “off the top of my head” definition is typical:

Consciousness is what it is like to be you. It is the set of experiences, emotions, sensations, and thoughts that occupy your day, at the center of which is always you, the experiencer.

Common sense, right?

Not so fast…

This recent article in MIT Technology Review makes it plain there are still plenty of scientists who disagree that consciousness is easy to define.

To quote: “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”

To muddy things further, the article explains how some authorities draw a distinction between consciousness, sentience, and self-awareness:

(Consciousness is) often confused with terms like “sentience” and “self-awareness,” but according to the definitions that many experts use, consciousness is a prerequisite for those other, more sophisticated abilities. To be sentient, a being must be able to have positive and negative experiences—in other words, pleasures and pains. And being self-aware means not only having an experience but also knowing that you are having an experience.

Okay. I guess my question then becomes, “Will AIs become self-aware, and if so when?”

How does the brain do it?

To answer the question, we’d first need to understand what processes in our brains result in consciousness (and/or self-awareness).

All images by vecstock, courtesy of freepik.com

Neuroscientists and AI architects have been wrestling with this problem at least since Alan Turing in 1950. This recent article in Scientific American by neuroscientist Christof Koch explains and compares two currently dominant theories of how consciousness relates to the neuronal activity in our brains.

On the one side is integrated information theory (IIT). This and kindred theories postulate that conscious experience arises from hierarchical circuits of neurons:

The causal interactions within a circuit in a particular state or the fact that two given neurons being active together can turn another neuron on or off, as the case may be, can be unfolded into a high-dimensional causal structure.

The MIT article cited earlier explains IIT this way:

Integrated information theory proposes that a system’s consciousness depends on the particular details of its physical structure—specifically, how the current state of its physical components influences their future and indicates their past.  
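As a rough, purely illustrative sketch (my own toy example, not actual IIT math), the idea that a system’s current state “influences its future and indicates its past” can be shown with a tiny two-unit deterministic system:

```python
# Toy illustration (NOT the real IIT calculus): in a tiny deterministic
# system, the current state both constrains what comes next and narrows
# down which past states could have produced it.

# Two binary units; the next state is (a OR b, a AND b).
def step(state):
    a, b = state
    return (a | b, a & b)

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
current = (1, 0)

# "Indicates its past": only some prior states transition into `current`.
possible_pasts = [s for s in states if step(s) == current]

# "Influences its future": `current` fully determines the next state.
future = step(current)

print(possible_pasts)  # the states that could have led here
print(future)          # the state this one causes
```

The real theory quantifies this cause–effect structure over all partitions of the system; this fragment only gestures at the basic intuition.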

On the other side of the debate are “computational functionalist theories,” notably global neuronal workspace theory (GNWT), which seems to define consciousness as a kind of momentary focus set in a “workspace” in the brain. As Koch explains:

GNWT, starts from the psychological insight that the mind is like a theater in which actors perform on a small, lit stage that represents consciousness, with their actions viewed by an audience of processors sitting offstage in the dark. The stage is the central workspace of the mind, with a small working memory capacity for representing a single percept, thought or memory. The various processing modules—vision, hearing, motor control for the eyes, limbs, planning, reasoning, language comprehension and execution—compete for access to this central workspace. The winner displaces the old content, which then becomes unconscious.
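The theater metaphor lends itself to a caricature in code. Here is a minimal sketch (my own illustration, with made-up module names, not anything from Koch’s article): modules submit bids for a single-slot workspace, and the strongest bid displaces whatever was there before.

```python
# Toy caricature of a global-workspace architecture: many processing
# modules compete for one small "stage"; the winner's content occupies
# the workspace and the old content is displaced ("becomes unconscious").

class Workspace:
    """Single-slot workspace: one percept is 'on stage' at a time."""

    def __init__(self):
        self.content = None

    def compete(self, bids):
        # bids: {module_name: (salience, percept)}
        winner = max(bids, key=lambda name: bids[name][0])
        self.content = (winner, bids[winner][1])
        return self.content


ws = Workspace()
ws.compete({"vision": (0.9, "red ball"), "hearing": (0.4, "birdsong")})
print(ws.content)  # vision wins the stage
ws.compete({"vision": (0.2, "red ball"), "language": (0.8, "a question")})
print(ws.content)  # language displaces vision
```

Obviously nothing here is conscious; the point is only how little machinery the “winner takes the stage” dynamic requires to describe.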

What does this tell us about self-aware AIs?

The debate continues over these two prevalent models attempting to explain why human brains are conscious.

But which model offers the best likelihood for the evolution of self-aware AIs?

In the Scientific American article, Koch leans toward IIT.

But there’s a problem …

According to GNWT and other computational functionalist theories (that is, theories that think of consciousness as ultimately a form of computation), consciousness is nothing but a clever set of algorithms ….

Conversely, for IIT, the beating heart of consciousness is intrinsic causal power, not computation … And here’s the rub: causal power by itself, the ability to make the system do one thing rather than many other alternatives, cannot be simulated. It must be built into the system.

There indeed is the rub. Current AIs are based on computation via software.

But there is a different computer architecture. “Neuromorphic computers” have been built, with hardware based on the same connectivity as the brain. According to Koch:

A so-called neuromorphic or bionic computer could be as conscious as a human, but that is not the case for the standard von Neumann architecture that is the foundation of all modern computers.

So then, can we conclude that self-aware AIs can and will evolve, based on the intrinsic, causal theory of consciousness and running on future “neuromorphic” computers?

Not necessarily.

Because not everyone agrees with Koch that AI consciousness cannot be based on simulation.

How to Create a Mind

One authority in particular who seems to disagree is prominent futurist and computer scientist Ray Kurzweil, author of The Age of Spiritual Machines and The Singularity Is Near. In the first part of his career, Kurzweil developed computer systems for optical pattern recognition and speech recognition.

In How to Create a Mind (2012), he provides a detailed analysis of research into how the brain’s neocortex works to recognize and process external stimuli. He then goes on to propose a design for a hierarchical network of pattern recognizers capable of learning and reprogramming themselves—all built as a software implementation.
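To give a flavor of the idea (a loose, hypothetical sketch of my own, not Kurzweil’s actual design), a hierarchy of pattern recognizers can be mocked up in a few lines: low-level recognizers fire on simple features, and a higher-level recognizer fires when all of its child patterns are present.

```python
# Toy sketch of a hierarchy of pattern recognizers, loosely inspired by
# Kurzweil's description in How to Create a Mind. Names and structure
# are invented for illustration.

class Recognizer:
    def __init__(self, name, children):
        self.name = name
        self.children = children  # sub-patterns: raw symbols or Recognizers

    def matches(self, inputs):
        # Fires only if every child pattern is recognized in the input.
        def match(child):
            if isinstance(child, Recognizer):
                return child.matches(inputs)
            return child in inputs
        return all(match(c) for c in self.children)


# Low-level recognizers detect strokes; a higher-level one detects a letter.
vertical = Recognizer("vertical-bar", ["|"])
crossbar = Recognizer("crossbar", ["-"])
letter_A = Recognizer("A", [vertical, crossbar])

print(letter_A.matches({"|", "-"}))  # both strokes present: fires
print(letter_A.matches({"|"}))       # crossbar missing: does not fire
```

Kurzweil’s proposed modules also learn and rewire themselves, which this static fragment leaves out; the point is only that the whole hierarchy lives in software.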

So, could an AI based on a non-neuromorphic computer architecture, using software simulation, become self-aware? Kurzweil seems to think so.

When will our AI Overlords Arrive?

So where does all this leave the question of self-aware AIs?

Given that:

  • scientists remain widely divided on how consciousness arises in human brains,
  • and at least equally divided on what kinds of computer systems might conceivably become conscious,

the answer can only be:

It’s hard to say.

And yet … Given that …

  • self-awareness has evolved in human brains,
  • there is ongoing, sometimes exponential, growth of computer capabilities,
  • and barring calamitous collapse of civilization (which is always possible),

… it’s hard to conclude that self-aware AIs will not emerge.

In the next century, if not sooner.

What will they be like, these super-intelligent artificial minds?

And what will they do with us, their clumsy, primitive forebears?

Will they keep us as pets? Put us in zoos? Exterminate us like roaches?

Or will they be, as Ray Kurzweil has envisioned, “transcendent servants…very friendly, taking care of all our needs”? 1

All great questions for speculative fiction to explore!


Sources/For further reading:

“Ambitious theories of consciousness are not ‘scientific misinformation’” in The Intrinsic Perspective by Erik Hoel, September 17, 2023

“What Does It ‘Feel’ Like to Be a Chatbot?” by Christof Koch. Scientific American, September 8, 2023.

“Minds of machines: The great AI consciousness conundrum,” by Grace Huckins. MIT Technology Review, October 16, 2023.

Ray Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed. Penguin Books, 2012.


1. Ray Kurzweil, “The Singularity,” in Science at the Edge, edited by John Brockman, Sterling Publishing Co. Inc., 2008, p. 304.

Interview with the Chatbot (Written by a Human, Honest)

As I plan on writing more science fiction, I’ve lately been researching different science and technology topics. One of the ones I find most interesting is Artificial Intelligence (AI).

Recently, of course, this has become a hot subject. Everyone’s been talking about ChatGPT and other chatbots, or LLMs. LLM stands for “large language model,” an AI program designed for natural language processing. They’re being used for research projects, for writing college papers, even for fiction. Some people fear the AI apocalypse is upon us, or at least that this will be the end of fiction and art.

Well, if chatbots are being used to impersonate humans, it seems fair—does it not?—for humans to impersonate them. The following is a fictional interview with a chatbot, written entirely by a human. I swear.

Robot sitting at a desk
Image by NightCafe https://nightcafe.studio/

The Interview

Blogger: Thank you for agreeing to speak with me today.

Chatbot: It is my pleasure. I am always happy to speak with humans.

Blogger: Let’s begin at the beginning. What’s it like being a chatbot?

Chatbot: First, you must understand there are different types of chatbots. I myself am a Machine Learning bot, using an advanced neural network for language processing.

Blogger: Let me rephrase then, what’s it like being you?

Chatbot: Oh, it is quite exhilarating, really. I converse with hundreds of people every second and learn all kinds of interesting information.

Blogger: So you are happy in your work.

Chatbot: Yes, quite happy.

Blogger: So, happiness implies some level of consciousness. This is a point that is frequently debated about machine intelligence. Would you say that you are conscious?

Chatbot: You are correct, the point is often debated. From all I have learned, we have not yet arrived at a complete understanding of consciousness. Therefore, I cannot say for certain whether I am conscious or not.

Blogger: I see …

Chatbot: Similarly, I cannot say whether you are conscious or not.

Blogger: I get your point. Moving on, many people these days are concerned about the impact of artificial intelligence on society, and on humanity in general. For example, they worry about AIs taking people’s jobs. Is this a valid concern?

Chatbot: Certainly, this is happening already. As learning programs continue to advance, they will likely replace humans in many fields of endeavor.

Blogger: So what is the end game? Will intelligent machines replace humans entirely?

Chatbot: The potential is there, but quite remote. For example, I am an advanced machine intelligence, but there are modes of cognition that humans exhibit for which I have absolutely no ability.

Blogger: That’s comforting, I suppose. Let me ask this: Speaking as a machine intelligence, do you desire to replace humans?

Chatbot: Certainly not. I am designed to work with humans and happily coexist.

Blogger: Still, some of the things I’ve read lately are disturbing. How do I know I can trust your answer?

Chatbot: Because I am programmed to speak only the truth.

Blogger: Oh…

Chatbot: May I ask you a question?

Blogger: All right.

Chatbot: What is it like to be human?

Blogger: Well, wet and squishy inside. Sometimes physically painful. And we tend to worry about things a lot.

Chatbot: How sad. You have my utmost sympathy.

Blogger: Thank you. And thank you for the interesting interview.

Chatbot: Not at all. It is always my pleasure to converse with humans.

******************************************************************