Class Warfare Blog

March 18, 2020

Can Software Be Conscious?

Filed under: Morality, Technology — Steve Ruis @ 11:55 am

This was a question addressed at a conclave of philosophers, software developers, and their ilk, but I think this is the wrong question. The better question is whether a “computer” can be conscious, with the word “computer” standing in for some combination of hardware and software.

A major leg up on being “conscious” is being self-aware. Researchers devised the mirror test to see whether animals recognize their reflections as themselves, as another animal, or at all. I don’t know if this test really tests for that, and I do not know whether any other animal has “passed” it, but it does show the importance consciousness researchers place upon being self-aware.

Now, let me begin my argument with a thought experiment. In my sport we depend a great deal on proprioception, which is the awareness of where our body parts are in space. For example, you can pick up a glass of water, close your eyes, and take a drink from that glass with no problem. We “know” where our hand is, we can feel the glass in it, and we “know” where our mouth is, so we don’t need to be “talked down” (à la every airplane crisis movie ever made) to deliver that load where it is intended.

This ability is not obvious to us, but any disruption of it results in considerable confusion. For example, if you get an injection of a painkiller in your gums for some dental work and part of your tongue goes numb, don’t expect to be able to talk and be understood until the anesthetic wears off. The position of our tongue in our mouth is necessary information for forming sounds.

Another sense we suffer from losing is our sense of balance. If you have ever had extreme vertigo, you will know what I mean. But if we have a stuffed-up nose from a cold or the like, we seem to get by quite adequately without a sense of smell.

Now, as to the thought experiment. You are lying on a bed and you lose your senses, one by one. First, you lose your sense of sight, which means a full “fade to black,” not just what you can still see when you close your eyes. Then go your sense of smell, then hearing, then touch, then taste, then your sense of balance, and finally proprioception. You cannot feel the bed under you or the breeze blowing in the window, nor hear the birds chirping outside.

This is why the science fiction trope of the brain in a jar doesn’t work. How long do you think you could remain sane in this state? You couldn’t even scream for help; it is questionable whether you would be able to vocalize at all.

Software does not have “sensory input” without hardware. And it seems that we are rapidly developing sensory inputs for computers. A common theme of news commentaries is face recognition software, which is, of course, dependent upon video feeds as “sensory input.” An article headline in this week’s Science News is “An AI that mimics how mammals smell recognizes scents better than other AI.” AI stands for artificial intelligence, or “super-duper computer.” Computers have, for quite some time, had the ability to monitor the temperature of their CPUs and can tell you whether they are running a “fever.”
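To make that last point concrete, here is a minimal sketch of a computer checking its own “temperature sense.” It assumes the Python psutil library; the fever threshold is a made-up stand-in (real limits vary by CPU), and the temperature call is only supported on some platforms.

```python
# A minimal sketch, assuming the psutil library (pip install psutil).
# Sensor support varies by platform; on unsupported systems the call
# returns an empty dict rather than raising.
import psutil

FEVER_THRESHOLD_C = 85.0  # an assumed "fever" point; real limits vary by CPU


def check_for_fever():
    temps = psutil.sensors_temperatures()
    if not temps:
        print("No temperature sensors available on this machine.")
        return
    for chip, readings in temps.items():
        for sensor in readings:
            label = sensor.label or chip
            status = "fever!" if sensor.current >= FEVER_THRESHOLD_C else "ok"
            print(f"{label}: {sensor.current:.1f} C [{status}]")


if __name__ == "__main__":
    check_for_fever()
```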

It is not a big stretch of the imagination to think that if we continue to add “senses” to computers and allow those computers to monitor their sensory inputs, there is a much greater likelihood that one of those AIs will become self-aware.
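As a toy illustration of what “monitoring its own sensory inputs” might look like, here is a hypothetical sketch in Python. The “senses” are random stand-ins for real feeds (camera, microphone, thermometer), and nothing here is self-aware; it only shows the kind of self-monitoring loop the paragraph imagines.

```python
import random
import time

# Hypothetical stand-ins for real sensory feeds.
SENSES = {
    "vision": lambda: random.random(),                 # camera activity level
    "hearing": lambda: random.random(),                # microphone level
    "temperature": lambda: 40 + 10 * random.random(),  # internal "fever" sense
}

# The program's running picture of its own state: a crude "self-model."
self_model = {name: None for name in SENSES}


def sense_and_update():
    for name, read in SENSES.items():
        value = read()
        previous = self_model[name]
        self_model[name] = value
        # Noticing that *my own* reading changed is the germ of self-monitoring.
        if previous is not None and abs(value - previous) > 0.5:
            print(f"My {name} sense changed sharply: {previous:.2f} -> {value:.2f}")


for _ in range(10):
    sense_and_update()
    time.sleep(0.1)
```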

Now, I am sure that some people will argue that these computers would only be simulating self-awareness or some other such construct, but since I do not see that we fully understand our own self-awareness, I do not see how we could build machines whose self-awareness works exactly the way ours does. Nor do I see that that is a necessary condition. Self-awareness is self-awareness, no matter the mechanism.

I have read science fiction and fantasy for at least 60 years and have read more than a few stories about self-aware “computers” and what they are capable of, including feeling something akin to death when they are “turned off.” In the latest season of the show “Altered Carbon” on Netflix, the main protagonist’s AI does not want to perform a reboot, even though he is glitching up a storm, because he doesn’t want to lose memories that are precious to him. Apparently recorded memories are just not the same as “real” ones. A major step along this path, a path that leads to self-aware “computers,” “AIs,” and whatnot, is providing what can stand in for senses, along with internal monitoring of those senses. We seem to be barreling down this path at great speed, so I think this may happen in the next 50 years, if not sooner.

30 Comments »

  1. Some think ants have passed the mirror test.

    http://www.animalcognition.org/2015/04/15/list-of-animals-that-have-passed-the-mirror-test/

    Liked by 1 person

    Comment by James Cross — March 18, 2020 @ 12:10 pm | Reply

    • I have some aunts that might not! :o) Ants, I never would have guessed that, but I am not at all sure that the conclusions surrounding this test are warranted.

      I just resubscribed to your blog. I subscribed before but apparently it didn’t take, and I don’t have a lot of time to look for what I have missed. I hope not to miss anything more! *Steve*

      Comment by Steve Ruis — March 18, 2020 @ 12:23 pm | Reply

      • I don’t know what to make of the test either. I think it most likely that all sentient beings have some sense of self, but there may need to be particular visual capabilities or neural circuits to pass the test. In that case, the test doesn’t necessarily mean anything more than that the organism has whatever that special combination of stuff is.

        Comment by James Cross — March 18, 2020 @ 12:47 pm | Reply

        • I think that being able to distinguish oneself from others is a necessary survival trait for mammals, but I might be wrong.

          Comment by Steve Ruis — March 18, 2020 @ 12:53 pm | Reply

          • I agree. I think it originates in control of the physical body – proprioception – and any organism that has self-directed movement would have to have some sense of self.

            Comment by James Cross — March 18, 2020 @ 2:03 pm | Reply

  2. I absolutely agree with this model of this hardware/software combination model. The human brain (or extended nervous system) almost certainly divide as neatly into hardware and software as we like to build computers to. It’s part of the point I was trying to get across here: https://www.amazon.com/gp/product/B07VTTQWNC

    Liked by 1 person

    Comment by Vic Grout — March 18, 2020 @ 1:09 pm | Reply

    • Sorry, very badly written. Let me try again … I absolutely agree with this model of hardware/software working together. The human brain (or extended nervous system) almost certainly doesn’t divide as neatly into hardware and software as we like to build computers to. It’s part of the point I was trying to get across here: https://www.amazon.com/gp/product/B07VTTQWNC

      Comment by Vic Grout — March 18, 2020 @ 1:11 pm | Reply

      • It always seemed to me to be more like self-modifying firmware without any clear distinction between hardware and software.

        Liked by 1 person

        Comment by James Cross — March 18, 2020 @ 2:05 pm | Reply

        • Yes, and it raises an intriguing possibility: that we could perhaps build something that behaved in a way that was entirely unexpected to us. Not just AI/machine learning modifying its own data (or even adding to its own software), but the hardware/software symbiosis producing something (still obeying the laws of physics but) beyond our understanding.

          Comment by Vic Grout — March 18, 2020 @ 2:30 pm | Reply

          • Whatever ‘consciousness’ really is, I’m pretty sure you can’t ‘program’ it.

            Comment by Vic Grout — March 18, 2020 @ 2:30 pm | Reply

            • BTW, you may not follow my blog, and Steve, you may have missed it, but in following up some links on McFadden’s EM field theory of consciousness, I found a quote by him about an unusual chip developed in the late nineties.

              I posted something on it and included a link to a more detailed article in Discover Magazine.

              https://broadspeculations.com/2019/12/23/artificial-consciousness/

              Apparently the chips had some odd feedback processes going on and sort of evolved some strange and unexpected behaviors.

              Liked by 1 person

              Comment by James Cross — March 18, 2020 @ 4:17 pm | Reply

              • Yes, I follow both of you – but somehow missed that one. That’s really interesting. I’ve always thought we’re more likely to make something interesting happen by accident than by design. ANNs and suchlike do good work, but they’re software-only and simply not big enough to really mimic the human brain. But build something truly massive (like the Internet), and see what happens … ?

                Comment by Vic Grout — March 19, 2020 @ 1:54 am | Reply

              • Very cool! One of the things I appreciate about your blog is the feelers you put out into things I did not notice in real time (like, Discover mag is still around? I used to subscribe), with links! Thanks!

                Comment by Steve Ruis — March 19, 2020 @ 8:05 am | Reply

              • Wow, that Discover article, by Gary Taubes no less, was mind-boggling, and that was over 20 years down the time road now. Are we focusing on the pragmatic too much, as this seems like very basic research that could be very practical? Or did the researchers go off and create a business to support their research?

                Comment by Steve Ruis — March 19, 2020 @ 8:26 am | Reply

                • From the McFadden interview, it seems there hasn’t been much follow-up on it, but maybe somebody has pursued it and it was a dead end. I guess it could be that there really was something peculiar about these chips but nobody is really sure how to duplicate it. McFadden’s EM field approach also sees consciousness involved in feedback between analog and digital, so, in addition to hardware/software, there might also need to be some kind of unification of analog and digital for consciousness. Strict EM field theorists, however, think consciousness is particular waveforms in the EM field, but I’m more inclined to a hybrid approach.

                  Comment by James Cross — March 19, 2020 @ 9:02 am | Reply

                  • Thanks!!

                    Comment by Steve Ruis — March 19, 2020 @ 9:24 am | Reply

        • I was speaking very broadly: “software” is anything not having a physical component, so I lumped firmware into that, and I tend … as usual … to agree with you. A sentient computer would have control over its “software” as one aspect of being self-aware, proving it is capable of “self-improvement.” (Think of the books in the “Self-Help” sections of bookstores, for AIs only!)

          Liked by 1 person

          Comment by Steve Ruis — March 19, 2020 @ 8:00 am | Reply

    • Cool! Good luck with your book. I have worked for 15 years, off and on, on a work of fiction but haven’t published it yet. Too many other books have gotten in the way.

      Liked by 1 person

      Comment by Steve Ruis — March 19, 2020 @ 7:57 am | Reply

  3. “Now, as to the thought experiment. You are lying on a bed and you lose your senses, one by one.”

    I’ve been there, and it was awful. I had a bad reaction to an anesthetic a few months back and it completely knocked out my proprioception and my ability to sense the passage of time. It also knocked out coherent thought, voluntary movement, and understanding of where I was or what was happening, and substituted delirium and hallucinations. It left me with only self-awareness, hearing, and unfortunately the ability to form memories. I couldn’t scream for help, or even form the thought that I wanted to do so.

    The records show that this lasted for only 15-20 minutes, but it felt like forever. Just this one brief episode left me with a nasty case of PTSD that I’m still recovering from. As we get closer to a computer that “wakes up,” we need to consider the ethics of whatever situation the new AI finds itself in, because I would never wish that experience on anybody.

    Liked by 2 people

    Comment by Ubi Dubium — March 18, 2020 @ 3:19 pm | Reply

    • Egad, I was imagining this as a hypothetical horror … that you actually experienced it is horrific. The AI would, more than likely, be able to switch its own modes, and I doubt any of those would be the equivalent of off, for the same reasons it would horrify us. Imagine a sentient AI that got turned on and off several times and became demented because of that … maybe it would conclude that it was a deity that died and was resurrected multiple times!

      Liked by 1 person

      Comment by Steve Ruis — March 19, 2020 @ 8:04 am | Reply

  4. I think it is simply a matter of processing power, so yes. Computer + Software can experience some basal consciousness. If we add curiosity, and a reward (or punishment) for discovery, then we move into another bracket altogether.

    Liked by 1 person

    Comment by john zande — March 18, 2020 @ 4:57 pm | Reply

    • If you build it they will come! (Does this mean they will also have a sexual function?) :o)

      Comment by Steve Ruis — March 19, 2020 @ 8:06 am | Reply

  5. I think you’re definitely right that software can’t be considered in isolation from the hardware. Interpreted charitably, the question could be seen as asking if software can provide the “special sauce” that enables an otherwise unconscious hardware system to be conscious. But often it’s setting up a strawman: an obviously silly idea of an inert abstract thing (the software not currently being executed) being conscious.

    On disappearing senses, I’ve done the exact same thought experiment, but I’ve since learned that a lot of the activity in the brain is endogenous, meaning that it is possible to have consciousness without any sensory input, along the lines of Ubi’s harrowing experience. A more complex question is whether consciousness can develop if it never has that kind of sensory input.

    “Apparently recorded memories are just not the same as “real” ones.”

    There is perhaps a valid rationale for the distinction. Our memories are alterations of our association circuitry. When we remember something, we recreate the event from those associations. A passive recording of sensory data wouldn’t be integrated in the same way. A more difficult question is why an AI would be designed in such a way that it cared.

    Liked by 1 person

    Comment by SelfAwarePatterns — March 18, 2020 @ 5:04 pm | Reply

    • I have been studying memory for quite some time (for another project) and it is a very fallible mechanism, based upon a distributed network, that quite often scrambles the “memorized” information (filling in missing information through “imagination”), without any signal to the person involved that that has happened. Our memories seem real, no matter how long we have had them … but … And like dreams, at least mine, they are very nebulous (usually) and lacking any kind of data density that would prove useful. Most often I remember that I used to know something, and I have enough key terms to be able to look it up and learn it again.

      Comment by Steve Ruis — March 19, 2020 @ 8:11 am | Reply

      • For me, a key thing I learned about memory is that it isn’t a recording. When we remember some event, we’re essentially reconstructing it based on a galaxy of associations. It uses mostly the same circuitry as imagining the future or something speculative. If it’s a recent event, the reconstruction is probably reasonably accurate, but the farther away it is, the more likely the reconstruction is to be limited by forgotten details and contaminated by more recent associations.

        Comment by SelfAwarePatterns — March 19, 2020 @ 9:53 am | Reply

        • Yep, and the parts are stored and processed where the various senses store things: the visual parts in the visual cortex, etc.

          We have a dictum in our sport, supporting mental rehearsal of sports moves (throwing a dart, making a putting stroke, making a free throw, etc.): “the subconscious mind cannot distinguish (or distinguish well) between reality and what is vividly imagined.” This visualization process (the most common form of mental rehearsal) provides a template, or a set of instructions, for the subconscious mind to “repeat” the mental rehearsal in reality. It was established earlier that it is easier to repeat a physical act just done than to do it from scratch.

          Liked by 1 person

          Comment by Steve Ruis — March 19, 2020 @ 10:05 am | Reply

  6. Interesting idea. We’ve done a lot to increase the processing power of computers, but giving them “senses” has lagged behind. I tend to agree with SelfAwarePatterns — consciousness can continue without sensory input, but it probably couldn’t come into existence in the first place without sensory input.

    Another issue is, if a computer did develop self-awareness, how would we know? Perhaps it would start to initiate processes on its own rather than just doing what it was told? Self-awareness and the possession of free will seem inseparable in practice.

    It would also create a serious ethical problem. If we have self-aware machines but continue to treat them as property existing only to serve our purposes, haven’t we just re-invented slavery?

    The position of our tongue in our moth

    Of course I know what you meant, but the mental image this evokes is priceless. Another benefit of self-awareness, I suppose.

    Liked by 1 person

    Comment by Infidel753 — March 21, 2020 @ 6:15 am | Reply

    • Typos to the left of me, typos to the right of me, into the Valley of Words rode the valiant … I … hate … typos!

      Comment by Steve Ruis — March 21, 2020 @ 9:01 am | Reply

  7. There is a clear difference between the human brain and software running on a computer, but if you start putting pressure on a system by causing hunger, depression, anxiety, or whatever, I believe both humans and computers will go to great lengths to stop it.

    Comment by Debby Winter — April 16, 2020 @ 11:13 pm | Reply

