This is the reading for Sunday, December 12th. We will be meeting in a Cafe Bene between Jonggak Station and the Kyobo bookstore at 4:30; directions are on the Meetup.com page.
A printable copy is here. Please print it if you can – I am not sure I will get a chance to print it.
Last week, we saw how Metzinger’s self-model theory of subjectivity (SMT) accounts for the first-person perspective with a series of constraints on the brain’s information-processing systems. The first three constraints work together to produce a minimal world: that is, a static representation of an environment. The globality constraint makes some information globally available to many of the system’s processing capacities; the presentationality constraint means that information must be available now; and the transparency constraint means that the actual processing stages are, and must be, unconscious.
The world is more than a static representation, however. To achieve an actual first-person perspective within a dynamic world, three more minimal constraints are necessary. The first is convolved holism, which means that our perceptions are always interconnected – there are no decontextualized atoms. The second is dynamicity, which integrates individual moments in a flow of time (beyond the static nature of presentationality). The third is perspectivalness, the fact that I experience my experience as my experience. Finally, Metzinger will put all six constraints together into his phenomenal model of the intentionality relation and the self.
Convolved holism is a property of hierarchical systems in which entities of smaller scale are enclosed within those of larger scale. Phenomenologically, a paradigmatic example of the lowest level is object perception. Consciously perceived objects are sensory wholes, even if not yet linked to conceptual or memory structures. A second example is the phenomenal self. In standard circumstances, the consciously experienced self forms an integrated whole. The third level of holism is complex situations: bundles of objects including their relations and implicit contextual information; for example, a landscape.
On a conceptual level, we are not able to adequately describe those aspects of a unit of experience as isolated elements within a set. This is an important conceptual constraint for any serious neurophenomenology. There are no decontextualized atoms.
We cannot say much about the neural correlates of holism, but it would have to involve large-scale coherence on the level of the brain itself. What is needed for that is not uniform synchrony but dynamic, specific cross-system relations binding subsets of signals in different modalities, using different frequency bands at the same time.
Our conscious life emerges from integrated psychological moments which are themselves integrated into the flow of subjective time. Dynamicity “does justice to the fact that phenomenal states only rarely carry static or highly invariant forms of mental content, and that they do not result from a passive, non-recursive representational process.” (16) Basically, dynamicity grows out of presentationality, like holism grew out of globality.
Phenomenologically, the most important forms of temporal content are presence, duration, and change. Time flows. But flow, duration, and change are seamlessly integrated into presence; it is difficult to explain the strong degree of integration between the experience of presence and the conscious representation of change: “It is not as if the Now would be an island emerging in a river, in the continuous flow of consciously experienced events as it were—in a strange way the island is a part of the river itself.” (16)
This is a self-model theory of subjectivity, and so we need to answer two questions. First, what is a consciously experienced, phenomenal self, and second, what is a consciously experienced first person perspective? Perspectivalness is important for both.
Perspectivalness is not a necessary component for ascribing conscious experience to a system; there are phenomenal state-classes, like religious experience or psychiatric disorders, in which the first-person perspective is at least diminished. Metzinger thinks these are examples of non-subjective consciousness.
Phenomenologically, perspectivalness is a structural feature of phenomenal space as a whole. It consists in the existence of a single, coherent, and temporally extended phenomenal subject. Space is constituted by a point of view.
As a functional property, the experiential centeredness of our conscious model of reality has its mirror image in the centeredness of the behavioural space. This functional constant is so obvious that it is often ignored; everything, including the sensory and motor systems, is physically integrated within the body of a single organism. This can be called the single embodiment constraint.
Concerning the neural correlates, there are many studies pointing to mechanisms constituting a persisting functional link between localized brain processes and the centre of representational space. These mechanisms include the vestibular organ, the spatial matrix of the body schema, visceral forms of self-representation, and in particular the input of specific nuclei in the upper brain stem, “engaged in the homeostatic regulation of the ‘internal milieu’”. (18) These mechanisms provide stable input for the self-model and anchor it.
What turns a neural system-model into a phenomenal self?
Some information processing systems internally simulate the external behaviour of a target object. The simulation of a target system consists in representing those of its properties that are accessible to sensory processing, and how they develop over time. But some information processing systems are special cases in that they also emulate the behaviour of another information processing system. They do so not only by internally simulating its observable output, but also hidden aspects of its internal information processing. These hidden aspects can consist in abstract properties like its functional architecture or the software it is running.
Another form of simulation is self-modelling, in which the target system and simulating/emulating system are identical: “A self-modelling information-processing system internally and continuously simulates its own observable output as well as it emulates abstract properties of its own internal information-processing—and it does so for itself.” (18)
In short, a self-model is an integrated model of the representational system, which that system activates within itself, as a whole. Typically, it has a bottom-up component driven by sensory input, which perturbs the top-down processes that generate new hypotheses about the state of the system (a self-simulation), so that it can arrive at a functionally adequate internal image of its own overall actual state.
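This bottom-up/top-down loop can be caricatured in a few lines of code. The update rule, the numbers, and all the names below are my own illustration of the general idea, not Metzinger's formalism:

```python
# Toy sketch of a self-modelling loop: a top-down estimate of the
# system's own state is repeatedly perturbed and corrected by
# bottom-up "sensory" input, converging on an adequate internal image.

def update_self_model(estimate, sensory_input, learning_rate=0.5):
    """Nudge the top-down estimate toward the bottom-up signal."""
    prediction_error = sensory_input - estimate
    return estimate + learning_rate * prediction_error

def run_self_model(readings, initial_estimate=0.0):
    """Feed a stream of noisy self-related readings through the loop."""
    estimate = initial_estimate
    for reading in readings:
        estimate = update_self_model(estimate, reading)
    return estimate

# The estimate drifts toward the stable underlying state (here, about 10).
final = run_self_model([10.2, 9.8, 10.1, 9.9, 10.0])
```

The point of the sketch is only that the "hypothesis" never is the body; it is an estimate kept functionally adequate by ongoing correction.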
What bundles all this together is a higher-order phenomenal property, mineness. It is my leg, and my volitions are my own. Mineness is closely related to phenomenal selfhood. I am someone; I experience myself as identical through time. A phenomenal self-model is an integrated representation of the system as a whole.
Global availability of system-related information
Phenomenologically, the contents of my phenomenal self-consciousness are directly available to a multitude of my mental and physical capabilities. I experience this global availability as my own autonomy in dealing with these contents, and via a subjective sense of immediate givenness. There are three more specific phenomenological characteristics. First, the degree of autonomy in dealing with the contents of self-consciousness varies; e.g. hunger is more difficult to influence than the contents of a cognitive self-model. There is a gradient of functional rigidity, and the degree of rigidity is itself available for phenomenal experience. Second, immediacy is also quantitative. Typically, thoughts are not really determined until spoken or written down, while thirst is fully “ready-made”. Third, all this globally available information, thoughts included, is integrated into a sense of mineness.
While reasoning consciously about the state of your body, you will typically be aware of the representational character of the cognitive constructs that pop up, but at the same time these thoughts will be untranscendably yours. Conscious human beings do not direct their attention to bodily sensations alone; they can also form thoughts about their properties: “The content of de-se-thoughts is formed by my own cognitive states about myself. Reflexive, conceptually mediated self-consciousness makes system-related information cognitively available, and it obviously does so by generating a higher-order form of phenomenal content.” (20) This content does not appear as an isolated entity, but as recursively embedded into the holistic, unified self-model.
The existence of a coherent self-representation is what introduces a self/world border into the system’s model of reality; system-related information becomes globally available as system-related information via the internal image of itself as a distinct entity possessing certain features. This in turn is a pre-condition for the representation of relations between the organism and its environment.
Self-related phenomenal information is equivalent to globally available system-related information. This information ranges from the molecular to the social: at one end, the self-model processes information “relevant to elementary bioregulation”; at the other, it extends to “metarepresentational processes” such as other-agent modelling, giving the system information about the fact that it is engaged in informational and reality-modelling processes.
In a functionalist analysis, a PSM is a single, coherent set of causal relations. It has a causal influence in things like differentiating one’s self and making one autonomous, but also by integrating the behavioural profile of an organism. That is, as your bodily movements become available as your movements, “the foundation for agency and autonomy are laid, because the organism now has an internal model of itself as a whole. A specific subset of events perceived in the world can now for the first time be treated as systematically correlated self-generated events.” (20) The perception of these self-generated events creates the passage from a behaving system to an agent.
Situatedness and Virtual Self-presence
Whatever I experience as the content of my self-model, I experience it now as a self. I have a sense of being in touch with myself in a direct and non-mediated way, which could not be bracketed; “If it was possible to subtract the phenomenal content now at issue, I would simply cease to exist on the level of subjective experience.” (21) I am not just someone, I am someone situated in a temporal order. It is a psychological moment that can be integrated with autobiographical memory. In a phenomenal self-simulation, e.g. making plans about my future, or remembering my past states, I am doing all that now. No matter how absorbed we get in future plans or memories, there is also a subtle bodily awareness; we are tied to the phenomenal window of presence which is generated by the physical system.
Transparency: From System Model to Phenomenal Self
Applying the transparency constraint to the concept of a self-model is the decisive step in getting selfhood. The prereflexive experience of being someone comes from the self-model being transparent. We are systems in a naive realistic self-misunderstanding. There are many classes of phenomenal states in which our self-model is totally transparent and in which we do not cognize; in these states we are “one with ourselves”.
What would the exact opposite be? It does not seem to exist; there is no conscious experience that is totally opaque. In other words, “there simply are no phenomenal state-classes, in which we experience ourselves as pure, disembodied spirits, not possessing any location in a real temporal order or in physical or behavioural space.” (22-23)
If the PSM became opaque, selfhood would disappear. There’s a threat of circularity here. Is transparency really necessary for phenomenal selfhood? The terms he is using are partly goal-relative. If we want to explain normal conscious experience, then we need something like a stable transparent partition. But if we want to explain wider cases, like some spiritual experiences, then maybe it is not necessary. We could imagine a lucid dream in which the dreamer recognizes themselves as a dream character, a representational fiction; a situation in which the lucid dreaming system becomes lucid to itself.
But are such states “experiences”? If we call them experiences, then transparency is not necessary. But then we have the odd performative contradiction of reporting a self-less state by referring to one’s own autobiographical memory. In order to do justice to the real phenomenology, one must admit that transparency is not all or nothing, but can be distributed across varying parts of the self-model to varying degrees. In general, the body self-model is transparent, while high-level cognitive processes like reasoning are more opaque. But there are some things, like emotional processes, which can oscillate between transparency and opacity. Take social relationships—trust, jealousy or mild paranoia are examples.
Transparent experience is the experience of “not only knowing, but of also knowing that you know while you know; opaque experience is the experience of knowing while also (non-conceptually, attentionally) knowing that you may be wrong.” (23) If you trust someone, you just know they are trustworthy; if they disappoint you, your model of that person changes, but there is also an internal de-coherence in your own self-model. That is, you realize your emotional state of trust was only a representation of social reality, and in that case, it was a misrepresentation.
The really key thing about transparency: what makes the self-model of humans so unique, and so successful as a representational link between biological and cultural evolution, is the fact that it violates autoepistemic closure. The fact that we have an “opaque part of our self-model allows us to conceive of the possibility of an appearance/reality distinction not only for our own perceptual states, but for the contents of self-consciousness as well.” (24) It gives us the ability to critically assess the content of the PSM.
There are at least two kinds of mental content: intentional content and the locally supervening phenomenal content of self-representation. A brain in a vat could have the phenomenal experience but not the intentional state. The upshot is that selfhood is constituted by a non-epistemic self-representation, and this has a straightforward ontological interpretation: “no such things as selves exist in the world.” (24)
As a computational strategy, transparency prevents the system from being caught in an infinite regress of self-modeling. Remember that self-modeling, in terms of its logical structure, is an infinite process: a system that could model itself as modeling would generate a chain of nested system-related mental content which would quickly devour all computational resources. The effective way to break out of the reflexive loop is to introduce an untranscendable object: transparent self-modelling “developed as an evolutionarily viable strategy, because it constituted a reliable way of making system-related information available without entangling the system in endless internal loops of higher-order self-modeling. I call this the ‘principle of necessary self-reification’.” (25) What we experience as ourselves is basically the block against the representational loop. Systems operating under a transparent self-model become naive realists. The self-model is located on the subpersonal level of description, but at the same time, it is the decisive factor allowing for personal-level communication in social groups: “You become a person by having the right kind of subpersonal self-model, one that functionally allows you to enter mutual relationships of acknowledging each other’s personhood in a social context.” (25)
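The regress, and the way reification blocks it, can be sketched as a toy program. Everything here (the function name, the depth cap, the flag) is illustrative only; the code merely dramatizes the logical point:

```python
# Toy sketch of the self-modelling regress: a system that keeps
# modelling itself modelling itself never terminates. Marking the
# self-model "transparent" reifies it as a final, untranscendable
# object and halts the regress at once.

def model_self(depth, transparent, max_depth=100):
    """Return the nesting depth at which the regress stops."""
    if transparent:
        # Regress blocked: the model is taken as the thing itself.
        return depth + 1
    if depth >= max_depth:
        # Stand-in for exhausting all computational resources.
        raise RecursionError("self-modelling regress does not terminate")
    # Opaque self-modelling: model the system modelling itself, again.
    return model_self(depth + 1, transparent, max_depth)
```

A transparent call halts after one level, while a fully opaque one recurses until resources run out, which is the sense in which self-reification is "necessary".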
Dynamics of the Phenomenal Self
Metzinger says, “Whatever my true nature is, I am an entity which undergoes changes.” (26) I can introspect different forms of time-experience, like the simultaneity of bodily sensation and the succession of reasoning. All this seems to have a very stable duration, even though it is constantly interrupted by deep and dream sleep.
“The conceptual reification of what actually is a very instable and episodic process is then reiterated by the phenomenological fallacy pervading almost all folk-psychological, and a large portion of philosophical discourse on self-consciousness. But it is even phenomenologically false: We are not things, but processes.” (26)
An important idea in any dynamicist cognitive science is to not see intentionality as a rigid subject→object relation, but a dynamic physical process. In the same way, reflexivity is not a rigid, abstract relation but a physical process generating an updating self-model.
The important question is how the self model can function as the origin of the first-person perspective. The phenomenal model of the intentionality relation (PMIR) is a conscious mental model, and “its content is an ongoing, episodic subject—object-relation. Phenomenologically, a PMIR typically creates the experience of a self in the act of knowing, of a self in the act of perceiving—or of a willing self in the act of intending and acting.” (27)
In the classical relation, thought is directed at an object. For example, the object might be a goal, a representation of a successfully terminated bodily action. This classical intentionality is a relation between a mental act and an object which is mentally contained in the mode of intentional existence—that is, the object might not exist. An important point is that the classic relation can form the content of a conscious mental representation. We can have a phenomenal model of the intentionality relation; we can “catch ourselves in the act”, and can have a higher-order conscious representation of ourselves as representing.
But from an empirical point of view, it is possible that many non-humans are intentional systems, but that their nervous systems do not allow them to become aware of this. The point: we do not only represent objects, but also sometimes the representational relation itself, and this is important for understanding why consciousness is experienced as a first-person perspective.
What is the Function of a PMIR?
Phenomenal models make subsets of the information currently active in the system globally available for the control of action, attention, and cognitive processing. They make all that available within a window of presence, and allow for selective and flexible behaviour. A PMIR also allows for a new class of facts. If we can take the PMIR itself as an object (in essence, a second-order PMIR looking at a first-order PMIR), two new forms of intelligence emerge. The first is introspective intelligence, which allows for selective attention towards goal states, the awareness of this selectivity (which allows for epistemic auto-regulation and learning), and the representation of a first-person perspective.
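One way to picture the second-order PMIR is as a subject-object pair whose object is itself such a pair. The data structure below is just an illustration of that nesting, not anything from the text:

```python
# Toy sketch: a PMIR represented as a (subject, object) pair.
# Introspection makes a first-order relation itself the object of a
# second-order relation. Illustrative structures only.

from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class PMIR:
    subject: str
    obj: Any

# First order: a self in the act of perceiving an external object.
first_order = PMIR(subject="self", obj="coffee cup")

# Second order: the same self taking that very relation as its object.
second_order = PMIR(subject="self", obj=first_order)

# "Catching ourselves in the act": the second-order PMIR's object is
# itself a subject-object relation, not a thing in the world.
is_introspective = isinstance(second_order.obj, PMIR)
```

The same nesting trick covers the social case: simulating another agent's PMIR is just building a pair whose subject is not "self".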
The second is social intelligence. First, it allows for other-agent modeling: being aware that other systems also have a first-person perspective. Second, it allows us to internally simulate external PMIRs, i.e. empathy. Third, it opens up high-level intersubjectivity: the mutual recognition which is necessary for complex societies.
The PSM of humans is what allows the transition from biological to cultural evolution.