Thoughts on the nature of consciousness

This was on the end of a big post on the ExI list, responding to some crap. I wanted to pull it out and put it here because it captures something I’ve been thinking about for a while.

How about I go out on a limb and say what I think about subjective consciousness? I can’t say how it works, but I have some ideas on why
it exists and what it’s for.

(Some quick definitions: I use the term “subjective consciousness” or “subjective conscious experience” to mean the first-person, felt experience of existing. This is in contrast to things like decision making, processing input, storing & retrieving memories, having ideas, thinking things through, and all the other good stuff which doesn’t appear to use or require subjective conscious existence. Even the reflexive understanding of yourself as a being in the world is just more information processing, distinct from “subjective consciousness”.)

It seems to me that subjective consciousness is simply a module of the mind, which exists for something very specific: to feel things. Qualia like the “redness of red” and emotions like anger share the property of being felt; they are the same kind of thing. It’s clear to me at least that this is a functional module, in that it takes information from other parts of the brain as input (for example, the currently imagined representation of the world, whether that is current or a reloaded past), produces feelings (how? No idea), then outputs them back to the other parts of the brain, affecting them in appropriate ways. The other parts of the brain do everything else; they create all our “ideas” (and then we get to feel that “aha” moment), they make all our decisions (to which are added some feelings of volition), they do all the work. The feelings produced/existing in the subjective consciousness module are like side effects of all that, but they go back in a feedback loop to influence the future operation of the other parts.
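Purely as an illustration of the input → feel → feedback loop described above, here’s a toy sketch in Python. Every name here is invented for the example; it models the proposed information flow, not any actual neural mechanism, and the interesting step (how feeling happens at all) is explicitly a placeholder.

```python
class FeelingModule:
    """Toy model of the proposed architecture: takes a represented
    situation as input, produces a feeling, and feeds modifiers
    back out to the other (non-conscious) parts of the brain."""

    def feel(self, representation: dict) -> str:
        # The mysterious step: somehow a felt quality is produced.
        # A real account of qualia would go here; this is a stub.
        return representation.get("threat", "neutral")

    def process(self, representation: dict) -> dict:
        feeling = self.feel(representation)  # how? no idea
        # The feedback step: the feeling goes back out to bias
        # decision making, memory tagging, and raw perception.
        return {
            "decision_bias": feeling,
            "memory_tag": feeling,
            "perception_tuning": feeling,
        }

module = FeelingModule()
print(module.process({"threat": "fear"}))
```

The point of the sketch is just the shape: the module doesn’t decide or reason, it only feels and broadcasts, and everything else happens on the other side of the interface.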

Why would you have something like this? What can this do that a non-subjectively conscious module couldn’t? Why not just represent emotions (with descriptive tags, numerical levels, canned processing specific to each one), why actually *feel* them? To me that’s as big a question as how. I can’t explain that.

What’s interesting though is how the purpose of the mechanism of feeling seems to be to guide all the other areas, to steer them. eg: some bits of the brain determine that we are in a fight-or-flight situation. They decide “flight”. They inform the feeling module (subjective consciousness) that we need to feel fear. The feeling module does that (“Fear!”), and informs appropriate other parts of the brain to modify their processing in terms appropriate to fear (affecting decision making, tagging our memories with “I was scared here”, even affecting our raw input processing). So we feel scared and do scared things. Probably most importantly, we can break “not enough information” deadlocks in decision making with “well what would the fearful choice be” – that’s motivation right there.
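The deadlock-breaking idea is concrete enough to sketch. Here’s a hypothetical tiebreaker in Python: when the evidence for each option is equal, a normal decision procedure stalls, but an emotional bias picks one anyway. The option names, scores, and the `decide` function are all made up for the example.

```python
def decide(options, evidence, emotion=None):
    """Pick the option with the best evidence; on a tie, let the
    current emotion break the deadlock instead of stalling."""
    best = max(evidence.values())
    tied = [o for o in options if evidence[o] == best]
    if len(tied) == 1 or emotion is None:
        return tied[0]
    # "Well, what would the fearful choice be?" -
    # emotion as tiebreaker, i.e. motivation.
    if emotion == "fear" and "flee" in tied:
        return "flee"
    return tied[0]

# Equal evidence for fight and flight: fear resolves the deadlock.
print(decide(["fight", "flee"], {"fight": 0.5, "flee": 0.5}, emotion="fear"))
```

It’s a blunt rule, which is exactly the point of the paragraph above: no reasoning happens, the feeling just tips the scales.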

It’s a blunt instrument, which might be useful if you didn’t have much else in terms of executive processes. It is really weird in our brains, though, because we do: we have fantastic higher-level processing that can do all kinds of abstract reasoning and complex planning and sophisticated decision making. Why do we also need the bludgeons of emotions like anger, restlessness, boredom, happiness? So we have roughly two systems doing similar things in very different ways, which you’d expect to fight. And thus the human condition 🙂

But where it would not be weird is in a creature without all this higher level processing stuff. Never mind how evolution came up with it in the first place (evolution is amazing that way) but given that it did, it would be a great platform for steering, motivating, guiding an unintelligent being. So what I’m getting at is, it’s a relic from our deep evolutionary past. It’s not higher cognitive functioning at all. Probably most creatures are subjectively conscious. They don’t have language, they might not have much concept of the past or future, but they feel, just as we do (if in a less sophisticated way). They really have pleasure and pain and the redness of red. And suffering.

We have a conceit that we (our subjectively conscious selves) are *really* our higher order cognitive processes, but I think that’s wrong.

We take pride in our ideas, especially the ones that come out of nowhere, but that should be a clue. They come out of “nowhere” and are simply revealed to the conscious us. “Nowhere” is the modern information processing bits of the brain, the neocortex, which does the heavy lifting and informs “us” of the result without the working.

We claim our own decisions, but neuroscience, as well as simple old psychology, keeps showing us that decisions are made before we are aware of them, and that we simply rationalize volition where it doesn’t exist. How do we make decisions? Rarely in a step-by-step derivational, rational way. More often they are “revealed” to us, they’re “gut instinct”. They come from some other part of the brain which simply informs “us” of the result.

We think of the stream of internal dialogue, the voice in the mind, as truly “us”, but where do all those thoughts come from? You can’t derive them. It’s like we are reading a tape with words on it, which comes from somewhere else; it’s being sent in, again, by another part of the brain that “we” don’t have access to. We read them, the subjective-consciousness module adds feelings of ownership to them, and decorates them with emotional content, and the result feeds back out to the inaccessible parts of the brain, to influence the next round of thoughts on the tape.

In short, I think that the vast majority of the brain is stuff that our “subjective” self can’t access except indirectly through inputs and outputs. Most of the things that make us smart humans are actually out in this area, and are plain old information processing stuff, you could replace them with a chip, and as long as the interfaces were the same, you’d never know. I think the treasured conscious self is less like an AGI than like a tiny primitive animal, designed for fighting and fucking and fleeing and all that good stuff, which evolution has rudely uplifted by cobbling together a super brain and stapling it to the poor creature.

I hope I’m right. If this is actually how we work, then the prospect of seriously hacking our brains is very good. You should be able to replace existing higher-level modules with synthetic equivalents (or upgrades). You should be able to add new stuff, as long as it conforms to the interfaces (eg: add thoughts to the thought tape? take emotional input and modify accordingly?).
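The “same interface, swappable implementation” idea is a standard one in software, and a toy sketch makes the claim concrete. Everything below is invented for illustration (the `ThoughtSource` protocol, both cortex classes): the conscious loop only sees what comes over the interface, so it can’t tell which implementation sits behind it.

```python
from typing import Protocol

class ThoughtSource(Protocol):
    """The interface: anything that can put words on the thought tape."""
    def next_thought(self) -> str: ...

class BiologicalCortex:
    def next_thought(self) -> str:
        return "thought from the old wetware"

class SyntheticCortex:
    def next_thought(self) -> str:
        return "thought from the upgrade"

def conscious_loop(source: ThoughtSource) -> str:
    # The "subjective self" only reads the tape; it has no access
    # to whatever produced the thought, so swapping the source
    # behind the interface is invisible from here.
    return source.next_thought()

print(conscious_loop(BiologicalCortex()))
print(conscious_loop(SyntheticCortex()))
```

As long as the interface holds, the loop code never changes; that’s the sense in which “you’d never know”.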

Also, as to correlates of subjectively conscious experience in the mind, we should be looking for something that exists in most animals, not just in us. That might narrow it down a bit 😉
