Philosophy of Engineering and Artifact
in the Digital Era


an exploratory workshop

February 6-8, 2009
“Stefan cel Mare” University of Suceava

Peter (Piotr) Boltuc


A Philosopher's Take on Robot Consciousness

 

Machines can be conscious if any organism can. Computers are already conscious in terms of soft AI: they can perform operations as if they followed the processes identified as thinking in humans. Machines also perform impressive tasks in terms of hard AI: they can perform operations using the same, or very similar, mechanisms humans use in their thinking. Soft AI is always identified with functional consciousness; yet large subclasses of machines operating within hard AI are also functionally conscious.

Depending on one’s chosen definition of phenomenal consciousness, the class of its designata is more or less inclusive. We shall use a definition that fully accounts for the ‘phenomenal’ aspect of consciousness [Alexander]. By this standard, phenomenal consciousness takes into account phenomenal information, i.e. information that comes through perceptual qualities: smell, color, touch, etc. Strictly speaking, only robots can be phenomenally conscious, since any receptors of those ‘perceptual qualities’ would count as robotic devices of sorts. It is quite clear these days that we can have robots equipped with phenomenal consciousness so defined; whether such devices already exist, and what detailed arrangements would satisfy this somewhat vague criterion, is debatable and debated [Franklin et al.].

It is vital to understand that the use of phenomenal information, which is the differentia specifica of phenomenal consciousness, does not imply that there is such a thing as what it is like to be such a robot. This is the only philosophical point I am trying to make in this paper, but it is a crucial one. There is a difference between me having certain perceptions (for me, and you for you) and another subject having qualitatively identical perceptions [Unger]. This is because I feel, see, and perceive only my own perceptions. This common-sense fact seems trivial, but it is hard to describe and consequently it escapes certain philosophers. In particular, first-person perception in this sense is clearly private (this is the privileged-access problem); hence it is questioned by strict verificationists. Such a first-person perspective was often viewed as the locus of the soul [Eccles], which is why some hard-core naturalists object to it. Yet the difference between perceptions that an organism reacts to but is not consciously aware of, and those that it is aware of, is clear [Chalmers, Nagel]. Hence, its rejection is just silly; if convoluted arguments in the philosophy of language and other arcane philosophical domains seem to suggest otherwise, so much the worse for those domains. We call the kind of consciousness that we have good reasons to believe has first-person awareness of the kind presented above h-consciousness [Boltuc/Boltuc].

H-consciousness is important since there are good reasons to believe that it is the source, or the locus, of the value attributed to persons [Shalom; Boltuc ‘88], though we do not make this argument here [Boltuc ‘07]. It is clearly what makes the difference between subjects and objects in the world; hence, doctors are likely to discontinue the treatment of patients who have lost all forms of consciousness. H-consciousness seems to be a necessary condition of one’s status as a moral patient strictly understood (some ethical theories introduce environments, even information, as moral patients, but those use a broader notion of moral patient [Leopold; Floridi]), but it is not a sufficient condition: rats, for instance, which are clearly conscious, have little moral standing by some standards and are not moral patients at all by others.

As a product of human and animal brains, h-consciousness should be explainable in the language of natural science. It is probably produced in the thalamus, and more detailed hypotheses, such as the dialogue-of-hemispheres hypothesis and the quantum hypothesis of consciousness, have been formulated in the last decades. These are at least examples of research programs of the kind that should someday lead to a naturalistic explanation of h-consciousness. Such an explanation would consist in giving us a detailed understanding of how h-consciousness is produced. This is the position of naturalistic non-reductionism: we do not reduce first-person consciousness to any material phenomena (it may be, and probably is, a new and different sort of material phenomenon), but we also claim that it is fully explainable by such phenomena.

Once h-consciousness is clearly understood, we should, in principle, be able to engineer it. Since h-consciousness is likely to be a complex biological function of the thalamus, or even a feature of the interaction of the brain hemispheres, it is unlikely to be just a computational process. It is probably more like a new substance, say at the bio-chemical or quantum level, which is why it should be of the kind that can be bioengineered, not just programmed on a computer. What can be programmed is an engineering function, of the kind used in industry, which would guide a robotic device to build it out of inorganic, or perhaps just organic, matter. We should be able to understand how h-consciousness functions, and once we do, we should be able to figure out how to build it.

 
