In their comprehensive analysis of the cognitive neuroscience literature, Maxwell Bennett and Peter Hacker (B&H) strongly criticize the use of psychological predicates in sub-personal explanations; they assert that such use creates conceptual confusions and leads to meaningless explanations. They are thus committed to abandoning all such usage at the sub-personal (i.e., neuroscientific) level of explanation. In this essay, I focus on B&H’s argument to show how, contrary to their main thesis, an outright ban on psychological predicates in sub-personal explanations can limit a cohesive understanding of observed phenomena; on the positive side, I argue that such uses can have important explanatory advantages.
In everyday casual parlance, there are many properties we use to refer to, or explain, people’s behavior that would not, in the same vein, make much sense when applied to inanimate objects such as chairs, tables, or cars. Properties of the kind ascribed to people (or other non-human animals) are usually expressed in intentional predicate form: “planning a trip”, “imagining an island”, “seeing a bird”, “believing in fairy tales”, and so on, denoting psychological phenomena that pertain to the whole organism. Yet there are instances, such as reflexive reactions to a painful stimulus or other involuntary responses, that we ascribe to sub-personal levels, that is, to parts of the organism such as the brain. We often say things like “my eyes see” or “the brain believes”. This practice is usually not problematic in quotidian applications. But in technical domains, where the precise meanings of terms play an important role in explanations, such ascriptions can prima facie seem problematic.
Explanations in the psychological sciences often describe observed phenomena at personal or sub-personal levels. The intentional capacities studied are usually representational in nature. Daniel Dennett, in his book Content and Consciousness, provided one of the first in-depth introductions to the distinction between the personal and the sub-personal level of explanation. One of his main points of emphasis was that when, for example, a person is in pain or having a thought, that person, in the capacity of being a person, does not do anything in order to recognize that he or she is in pain or having a thought. Being in such states is attributed to sub-personal (i.e., neurobiological) levels that are distinct from personal-level explanations. Dennett distinguishes the sub-personal and personal levels of explanation by their mechanical and non-mechanical properties (Drayson, 2014). Yet he also acknowledges that the two levels are intertwined, and he stresses the importance of understanding their relations in our explanations by pointing to personal-level attributes (i.e., human agency). As Elton puts it: “Typically, such explanations allude to the interconnected operation of a person's parts, e.g., the machinations of her brain. Does this render them uninteresting for the philosopher, for the theorist who seeks to better understand what human agency is? Not at all, replies Dennett, for distinguishing the levels ‘gives birth to the burden of relating them.’” (Elton, 2000)
Such intertwined relations, however, tend to elicit the use of psychological (or intentional) predicates in descriptions of phenomena and hence raise philosophical questions, and especially criticisms from scientific circles. For instance, in the domain of cognitive neuroscience, which mainly studies sub-personal processes, is it acceptable to use intentional predicates when explaining brain mechanisms? Maxwell Bennett and Peter Hacker (B&H) have criticized such common usage as unacceptable since, they claim, it creates conceptual confusions and leads to meaningless explanations. The core of their criticism centers on what they call the “mereological fallacy”, which consists in “...ascribing to a part of a creature attributes which logically can be ascribed only to the creature as a whole” (Bennett and Hacker, 2022). They are therefore strongly committed to ridding sub-personal explanations in cognitive neuroscience of such predicates.
Although I can see how, prima facie, the use of intentional predicates in sub-personal explanations can seem to cause conceptual confusions, I argue that B&H’s dismissive proposal severs the continuity between personal and sub-personal levels of explanation. Preserving this continuity is important because it helps us understand how different levels of the system contribute to the uniformity of the organism as a whole in solving problems. Psychological predicates help maintain this uniformity by functioning as guiding elements that link personal and sub-personal levels of explanation. As a plausible solution to B&H’s worries, I propose that psychological predicates be contextually redefined at each level of explanation, with their meaning constrained by (1) the type of problem that the respective system tries to solve, and (2) how it goes about solving it.
I will provide only a very brief overview of B&H’s criticisms and point to a few examples from their work that are relevant to the discussion here.
One of the chief complaints that stands out from B&H’s work concerns the prevalent cases in which cognitive neuroscientists ascribe various representations to the brain when trying to explain sub-personal processes; they often employ terms such as computational “symbols”, “representata”, “images”, or cognitive “maps”. The use of intentional predicates is often tied to some kind of representation, for instance “the eyes or the brain seeing an image” or “the brain has or uses a mental map”. In all such cases, B&H claim that it makes no sense to speak of representations of the external world in the brain. The authors call for a complete dismissal of the concept from theoretical explanations, asserting that “...the term “representation” is a weed in the neuroscientific garden, not a tool - and the sooner it is uprooted the better” (Churchland, 2005). In order for representational concepts to be meaningfully applied, they argue, (1) one must first presuppose agency, and (2) the supposed agent must use a given representation correctly, according to some criteria or pre-specified rules. But, they claim, no sub-personal explanation carries any meaning in this regard (Bennett and Hacker, 2022).
In what follows, I consider B&H’s points (1) and (2) above to show why their conclusion does not follow.
As an example of the first point, about the presupposition of agency, consider cognitive “maps” as a case of representation, for which B&H argue as follows:
“... a map is a pictorial representation, made in accordance with conventions of mapping and rules of projection. Someone who can read an atlas must know and understand these conventions, and read off, from the maps, the features of what is represented. But the ‘maps’ in the brain are not maps, in this sense, at all. The brain is not akin to the reader of a map, since it cannot be said to know any conventions of representations or methods of projection or to read anything off the topographical arrangement of firing cells in accordance with a set of conventions. For the cells are not arranged in accordance with a set of conventions at all, and the correlation between their firing and the features of the perceptual field is not a conventional one but a causal one.” (Blakemore, 1990)
This perspective illustrates a misconstrual of the role cognitive “maps” play in sub-personal explanations. There is ample evidence in the scientific literature pointing to brain regions (e.g., the hippocampus) that enable navigational behavior by invoking neuronal-level ‘maps’ (O’Keefe & Nadel, 1978; Moser et al., 2008) without presupposing any agency. These structures are not only found in humans but extend across other mammals and across species such as birds and insects in a continuum, and their morphology (i.e., spatial arrangement) correlates with the types of problems the organism is adapted to solve. ‘Having and using’ such cognitive ‘maps’ is therefore meaningful in sub-personal explanations and serves as a guide to understanding the navigational behavior of the whole organism in its environment. Moreover, such evidence extends the intentional paradigm from humans to other biological species and provides support for the plausible evolutionary selection of representational states. Uprooting the term ‘maps’ from neuroscientific explanation would therefore deprive us of attempts to understand how different levels of organization relate in solving problems.
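A minimal sketch may help show what a neuronal-level ‘map’ amounts to without any map-reading agent. In the toy Python example below, a hypothetical population of place-cell-like units fires as a function of the animal’s position, and that position can be recovered from the firing pattern alone; the relation is causal and statistical, not conventional. The Gaussian tuning curves, the decoding rule, and all names are illustrative assumptions, not a claim about how the hippocampus actually works.

```python
import numpy as np

# Hypothetical 'place cells': each fires most strongly near a preferred position.
preferred_positions = np.linspace(0.0, 10.0, 20)   # 20 cells tiling a 10 m track

def population_activity(position, width=1.0):
    """Firing rates of all cells at a given position (Gaussian tuning curves)."""
    return np.exp(-((position - preferred_positions) ** 2) / (2 * width ** 2))

def decode_position(rates):
    """Read the position back off the firing pattern (population-vector average).
    No conventions or map-reading agent are involved: location and firing are
    linked only causally/statistically."""
    return float(np.sum(rates * preferred_positions) / np.sum(rates))

rates = population_activity(3.2)       # the animal is at roughly the 3.2 m mark
print(round(decode_position(rates), 2))  # recovers approximately 3.2
```

On this toy picture, calling the population of cells a ‘map’ marks the fact that its activity systematically co-varies with, and can stand in for, locations in the environment; nothing in the sketch presupposes an agent who reads it.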
An objection here might be that “the brain predicts” and “I, qua person, predict”, for example, carry different meanings within their respective levels (i.e., sub-personal vs. personal); the two cases differ mechanistically in how, and in the context in which, they achieve an act of ‘prediction’. I would agree with this if we were only considering the systems in isolation. But the use of intentional predicates becomes advantageous when we try to understand the entire system and its interrelations. Intentional predicates capture how the organism and its constituent parts work together to solve problems, especially in the sense that the parts, in this case the organization of the brain, by virtue of being parts of the whole organism, contribute to solving a problem that requires predictive capacities. To accuse the users of such intentional notions of committing a ‘mereological fallacy’ would limit the scope of our attempts to reach a ‘full-fledged’ understanding of the phenomena in question.
A more appropriate perspective may be to contextually redefine what we mean by a particular psychological predicate at each level. For instance, the predicate PREDICT in a sub-personal context could point to how the hierarchical organization of neuronal layers processes incoming sensory inputs at increasing levels of complexity, resulting in crude internal representations that can be understood as predictions of the external environment (similar to the generative model posited by the predictive coding hypothesis). On the other hand, the predicate PREDICT used in personal-level explanations can be understood in the context of action, as the organism as a whole interacts dynamically with its environment (e.g., in social contexts) to solve problems by adjusting its overall behavior. This dynamical interaction between the sub-personal and personal levels is critical not only for our understanding of how different levels are intertwined and work systematically in a continuum, but also as a matter of survival for the organism as a whole. This illustrates the crucial role psychological predicates can play in helping maintain uniformity in our understanding of a system by functioning as explanatory elements that link the personal and sub-personal levels of explanation.
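To make the sub-personal reading of PREDICT concrete, the Python sketch below shows a deliberately simplified, single-layer caricature of predictive coding: an internal estimate is repeatedly adjusted to reduce the error between a generated prediction and the incoming sensory signal. The function names, the linear generative mapping, and the learning rate are illustrative assumptions of mine, not part of B&H’s discussion or of any particular neuroscientific model.

```python
def predictive_coding_step(mu, x, weight, lr=0.1):
    """One toy predictive-coding update: nudge the internal estimate `mu`
    so that the generated prediction better matches the sensory input `x`.
    (Illustrative only; real predictive-coding models are hierarchical.)"""
    prediction = weight * mu          # generative model: predict the input
    error = x - prediction            # prediction-error signal
    mu = mu + lr * weight * error     # update the estimate to reduce the error
    return mu, error

# A 'sub-personal' loop: the estimate converges and the error shrinks,
# which is what licenses the contextually redefined predicate PREDICT here.
mu, weight = 0.0, 2.0
x = 3.0                               # hypothetical sensory input
for _ in range(50):
    mu, error = predictive_coding_step(mu, x, weight)
print(round(mu, 3), round(error, 3))  # mu converges to 1.5, error to ~0
```

Nothing in this loop involves the organism acting in its environment; that is precisely why, on the proposal above, PREDICT at the personal level must be redefined in terms of whole-organism behavior rather than error minimization within a circuit.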
As an example of B&H’s second point, about rule-governed use, consider the computational theory of cognition, on which neural-level information processing is understood as rule-governed manipulation of symbols with semantic (meaningful) properties, often attributed to representations. B&H argue that there can be no such symbols in the brain and that it is a mistake to think that the brain can use, or mean anything by, a symbol (Dennett, 2007). But to dismiss such a hypothesis outright would carry a substantial explanatory cost for our theoretical understanding of the processes under study. Regardless of whether the hypothesis turns out to be true or false, it would be premature to declare it meaningless. On the contrary, the rule-governed model of the computational theory of cognition, in conjunction with semantically loaded representational content, has provided significant insights into our understanding of the brain and behavior that extend from sub-personal to personal-level explanations.
There is a body of empirical evidence showing how neurons can be understood to be ‘computing’ (used as a predicate here) in ways analogous to the rule-governed logical operations of silicon-based computing devices. For instance, the neuroscientist Albert Gidon and colleagues have shown that the dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks (Gidon et al., 2020). Sub-personal computations of this kind solve ‘local’ problems that correspond to the type and amount of sensory data presented to them. Such neuronal-level activities extend to larger assemblies and ultimately to the brain as an organ that contributes to the continuum of problem solving for the whole organism. At the personal, organismal level, intentional states, for example the belief that there is a tiger chasing me, appropriately take effect. It is at this level that the contextually redefined predicate ‘compute’, also used at the sub-personal level, takes on a meaning based on the type of problem the system solves and the way it solves it. This illustrates how the predicate ‘compute’, denoting rule-governed operation, can play an important role in sub-personal explanations, and how its meaning, when contextually redefined, can aid our understanding of how the system as a whole goes about solving problems.
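As a rough illustration of what ‘computing’ might mean at this sub-personal level, the toy Python sketch below models a single dendrite-like unit whose response is strongest at intermediate input and suppressed at stronger input, which lets one unit realize an XOR-like logical operation. This is a simplified caricature loosely inspired by the kind of finding reported by Gidon et al. (2020), not their actual model; the thresholds and function names are illustrative assumptions.

```python
def dendritic_unit(a, b, low=0.5, high=1.5):
    """Toy dendrite-like unit: it 'fires' when the summed input exceeds `low`
    but is suppressed when the input exceeds `high` (a non-monotonic response
    loosely inspired by graded dendritic calcium spikes)."""
    total = a + b
    return 1 if low < total <= high else 0

# With binary inputs, this single unit behaves like the logical XOR operation:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", dendritic_unit(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The point of the sketch is modest: a physical unit with no agency can stand in an input-output relation that we naturally and usefully describe with the rule-governed predicate ‘compute’, and that description is what gets contextually redefined when we move up to the personal level.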
B&H’s dismissive account deprives us of a uniform and cohesive understanding of observed phenomena across the personal and sub-personal levels. The use of psychological predicates in sub-personal explanations can be advantageous: it helps us preserve explanatory continuity across hierarchical levels of organization. This is important because it helps us understand how different levels of the system contribute to the uniformity of the organism as a whole in solving problems. By functioning as guiding elements, psychological predicates link personal and sub-personal levels of explanation and thereby aid our understanding.
Reference List
Bennett, M.R. and Hacker, P.M.S. (2022) Philosophical foundations of neuroscience. John Wiley & Sons.
Blakemore, C. (1990) ‘Understanding images in the brain’, in Images and Understanding: Thoughts about Images, Ideas about Understanding. Cambridge: Cambridge University Press, pp. 257–283.
Churchland, P.M. (2005) ‘Cleansing Science’, Inquiry, 48(5), pp. 464–477. doi:10.1080/00201740500242001.
Dennett, D.C. (2007) ‘Philosophy as naïve anthropology: comment on Bennett and Hacker’, in Bennett, M., Dennett, D., Hacker, P. and Searle, J., Neuroscience and Philosophy: Brain, Mind, and Language. New York: Columbia University Press.
Drayson, Z. (2014) ‘The Personal/Subpersonal Distinction’, Philosophy Compass, 9(5), pp. 338–346. doi:10.1111/phc3.12124.
Elton, M. (2000) ‘The personal/sub‐personal distinction: An introduction’, Philosophical Explorations, 3(1), pp. 2–5. doi:10.1080/13869790008520977.
Figdor, C. (2014) ‘On the proper domain of psychological predicates’, Synthese, 194(11), pp. 4289–4310. doi:10.1007/s11229-014-0603-2.
Figdor, C. (2018) Pieces of mind: The proper domain of psychological predicates. Oxford University Press.
Gidon, A. et al. (2020) ‘Dendritic action potentials and computation in human layer 2/3 cortical neurons’, Science, 367(6473), pp. 83–87. doi:10.1126/science.aax6239.
Musholt, K. (2017) ‘The personal and the subpersonal in the theory of mind debate’, Phenomenology and the Cognitive Sciences, 17(2), pp. 305–324. doi:10.1007/s11097-017-9504-4.
Putnam, H. (1967a) ‘Psychological predicates’, Art, mind, and religion, 1, pp. 37–48.
Putnam, H. (1967b) ‘The nature of mental states’, Art, mind, and religion, pp. 37–48.
Solomon, R.C. (1976) ‘Psychological Predicates’, Philosophy and Phenomenological Research, 36(4), pp. 472–493.
van Buuren, J. (2015) ‘The philosophical–anthropological foundations of Bennett and Hacker’s critique of neuroscience’, Continental Philosophy Review, 49(2), pp. 223–241. doi:10.1007/s11007-015-9318-4.