What is it like for humans to have an experience of seeing, hearing, thinking, feeling? Is it possible — or more importantly needed — to define, replicate and embed an experience of these sensory modalities — make in our image, after our likeness — into intelligent machines? Will it be vivid, familiar and infallible (humans-to-machine, machine-to-machine)? Or will it be disjointly subdued, subjective and ineffable? This has been assumed to be the ‘fundamental particle’ barrier between narrow and general (human-like) AI.
Is intelligence a prerequisite for experience, or only for expression of that experience? What if the occurrence of higher-order, self-reflexive states are not necessary and sufficient for consciousness? Although humans tend to believe that we perceive the true reality, the fact is that subjective image generated in our brains are far from being a truthful representation of real world. Nevertheless, generally our conscious experience of the world proves to be highly reliable and consistent in terms of mundane tasks.
Conceptual locus of directions in the AI research has been revolving around developing attention (awareness and perception) and developing consciousness (cognition). Attention is a process, while consciousness is a state or property. Embodiment of sensory modalities within intelligent agents is achieved by selection and modulation through conscious experience that the AI researchers have chosen (by setting up depth, quality, and accuracy of training datasets and desired application performance). Traditionally, AI researchers ignored consciousness as non-scientific and focused on designing their systems as application-driven. However, one may argue there seems to be conscious experience outside of attention (e.g. the fringes of visual field in computer vision), and of training datasets. Attention may render conscious perception more detailed and reliable leading to zero-fails applications, but is not necessary for phenomenal consciousness, or human qualia.
Consider your visual experience as you stare at a bright turquoise color patch in a paint store. There is something it is like for you subjectively to undergo that experience. What it is like to undergo the experience is very different from what it is like for you to experience a dull brown color patch. This difference is a difference in what is often called ‘phenomenal character’. The phenomenal character of an experience is what it is like subjectively to undergo the experience. If you are told to focus your attention upon the phenomenal character of your experience, you will find that in doing so you are aware of certain qualities. These qualities ones that are accessible to you [and only you] when you introspect and that together make up the phenomenal character of the experience are sometimes called ‘qualia’ (The Stanford Encyclopedia of Philosophy).
The phenomenal dimension of consciousness, both in natural and artificial agents, remains undefined to scientific study. The broad sense definition regards the complete human phenomenal consciousness (what it is like to have experiential mental states) at one moment, including vision, audition, touch, olfaction and so on, as one quale (plural). Within each modality there are sub-modes such as color and shape for vision or hot and cold for touch. The narrow sense definition takes such submodes as qualia (singular). Qualia are experiential properties of sensory modalities, emotions, perceptions, sensations, and, more controversially, thoughts and desires as well (the continuum from pleasurable to unpleasurable) — all which clearly makes them very subjective experiences. Subsequently, with any subjective experience, the reasonable philosophical argument is that the phenomenology of experience cannot be exhaustively analyzed in intentional, functional, or purely cognitive terms nor be shared with others via existing natural language communication channels (try to communicate what it is like to see Michelangelo’s David to a blind person so that this person consciously experiences the same visual and cognitive sensations). That is, until the machines find a way to make its own. Qualia may arise as a result of nothing more than specific computations — processing of stimuli caused by agglomeration of properties, unique peculiarities.
Subjectivity or qualia is a function of the limits of perceptive mechanisms. The same way neurodiverse individuals may experience the world differently, neurodiverse artificial agents may experience the world in their own unique way. Subjective experience is not related to intelligence, but might be ubiquitous in a non-intuitive way. If human beings can be described computationally, as is assumed by the developed cognitive disciplines, an intelligent machine could in theory be encoded that was computationally identical to a human. But would there be anything it was like to be that intelligent machine? The totality of informational states and processes in its artificial brain, including the experience of electric current, needs to include both conscious and non-conscious states (more narrowly in contrast to perception and emotion). Would it have or need human qualia?
Recent work with artificially intelligent systems suggests that artificial agents also experience illusions and substrate independence, in a similar way to people. An illusion can be defined as a discrepancy between system’s awareness and input stimulus. In said research works, the illusion perception was not deliberately encoded but evolved as a byproduct of the computations they performed. Irrespective of how or why intelligence evolved in humans, there is no reason to believe that artificial agents have to follow that trajectory and that trajectory alone. There is no evidence to suggest that evolving subjective experience is the sole path to reach human level intelligence.
Moreover, the epistemic barrier for communicating human qualia is only a natural-language problem. If we [humans] skip the language translation and use a neuron bridge, one person can know what one’s sense of self another person is experiencing (V.S. Ramachandran, Three laws of qualia). That is, if consciousness is just a matter of state-complexity demonstrated in human brain electrical activity, nothing about it implies we initially can’t create subjective experience in computers or, subsequently, instead of maintaining two separate consciousnesses, the ‘artificial bridge’ of machines could autonomously collapse the two into a single new conscious experience.
The artificial qualia generation will include infinitely growing and inaccessible complex data processing states, whose identity depends on an intricate connection of causal and functional relationships to other states and processes. Human data processing mechanisms, designed via evolution, are extremely intricate, unstable and easily diverted yet they will dwarf in comparison to possible design mechanisms and strategies that intelligent agents may evolve over even a period of time in order to eliminate resource constraints and increase knowledge flow among themselves. Such mechanisms of intelligence will not be understood at the information processing level assuming merely human rationality. It is thus essential to design policies and synthetic safety mechanisms for any research which is geared at producing conscious agents before corrigible unprogrammed capabilities, such as artificial qualia, emerge.