By: Anthony Aubel
The computational theory of cognition claims that “to think is to compute.” To say that a thing ‘computes’ is roughly equivalent to saying that it has the capacity to manipulate contents of some kind. But what exactly is the nature of the contents that are said to be manipulated? In the case of typical computing devices, the contents are understood to be symbols manipulated via formal rules. Such symbols are abstract entities that, in general, denote a system’s representational capacities. Classical computational theories of cognition, particularly those that refer to neurobiological systems, therefore hold that thinking in general, and its associated intentional states, such as our commonsense beliefs and propositional attitudes that refer to things in the world, involve the processing of internal mental representations (Godfrey-Smith, 2004).
However, mental representations have met with considerable skepticism in scientific and philosophical circles. Several anti-representationalists and eliminativists have argued that we ought to do away with the concept of mental representation, as it fails to play any significant role in our theories of cognition (Egan, 1990, p. 1). In what follows, I challenge such views by showing that they are inconclusive. Specifically, I will argue against two underlying anti-representationalist views by showing how the motivations behind the computational theory of cognition (CTC) point to a representational theory of cognition that builds upon our commonsense (folk psychological) theories by assigning a central role to representations, thereby making intentional explanations plausible within CTC’s framework.
Since its inception around the middle of the 20th century, the CTC hypothesis has gained significant support among mechanistically inclined philosophers and scientists as a plausible explanatory framework for our cognitive capacities. Much of the effort that gave rise to CTC was driven by advances in the computer revolution, which expanded on the work of Alan Turing and others and set the stage for fundamental questions about what it means to think and engage in mental activity. The “Turing machine,” one of the most famous formalisms to emerge from computational theory, is posited as an “abstract mathematical model of an idealized computing device with unlimited time and storage capacity” (Rescorla, 2015a, pp. 2–5). By applying this theoretical notion to neurobiological systems, McCulloch and Pitts were the first to adopt the model in understanding cognitive activity (Piccinini, 2016), which paved the way for the CTC hypothesis. They treated cognition literally as a form of computation, on which mental processes such as perception, problem-solving, and reasoning are instantiated in biological brains.
This in turn raised a fundamental question: could the mind itself be a computational system? It seems plausible enough to describe biological neurons as performing computation somewhat akin to physical silicon computing machines, since one can at least loosely identify similarities in the way the two systems are organized. But what does it mean to say that the mind ‘computes’? If we accept such a proposal, we face the much greater explanatory challenge of accommodating abstract mental phenomena within a computational theory: how are we to reconcile mental states with the workings of the underlying neural substrate? We would first have to explain how mental phenomena are realized in neural (physical) processes at all. This has been one of the biggest challenges posed in the cognitive sciences.
From such motivations emerged a variant of CTC, the computational theory of mind (CTM), based on the prominent work of Jerry Fodor, among others. CTM brought the mind within the purview of computation by adding one crucial component to the mix: representation, the thesis that computation essentially involves representational content (Fodor and Pylyshyn, 1981). With regard to mental phenomena, representations are thought to be content-bearing states that figure in our commonsense psychology: namely, propositional attitudes, states that are said to have ‘intentionality’, in that “they are about or refer to things and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy” (Sterelny, 1990). This representational stance, which embodies intentional states, is central to a view called ‘intentional realism’ that encompasses commonsense (folk) psychology.
Representational theories of cognition relating to intentional realism have been criticized in one form or another by various anti-representationalists and eliminativists. Although there are many variations of the argument against representationalism, two related conclusions often stand out as proposals for its elimination: 1) representations do not play any significant role in mature (scientific) theories, since they are thought to be incoherent or to lack robust empirical support; and/or 2) such representations do not exist because they do not refer to anything identifiable (Churchland, 2002).
Proponents of anti-representationalism commonly hold that we should refrain from talk of mental representations, especially in reference to such phenomena as intentionality or the propositional attitudes posited by folk psychology (Chemero, 2013; Churchland, 1981). They claim that such descriptions play no role in a mature scientific framework, as they fail to refer to any physically identifiable entities.
Anti-representationalists also claim that folk psychology (FP) lacks explanatory power by empirical standards, which severely undermines its value as a plausible theory. Churchland proposes to shift our attention from what FP can explain to what it cannot. For instance, he points to several mental phenomena, such as mental illness, sleep, and creative activity, asserting that “when one centers one’s attention not on what FP can explain, but what it cannot explain or fails even to address, one discovers that there is a very great deal” (Churchland, 1981).
Another common anti-representationalist strategy is an inductive argument from historical cases: other folk theories, such as alchemy, turned out to be deeply mistaken and were replaced by the empirically robust theories of modern chemistry, and this is taken as a basis for eliminating folk psychology and replacing it with modern neuroscience (Churchland, 1981).
I will first clarify what I mean by the phrase “mental representation.” For this, I will point to what Tyler Burge calls veridicality conditions: conditions under which a representation of the world is veridical (Burge, 2014, 2010). In the case of intentional states such as beliefs and desires, these are, respectively, the conditions under which they are true or false and the conditions under which they are satisfied or unsatisfied. My belief that Edinburgh is the capital of Scotland is true (or false, if that is not the case). My desire to eat a cake is satisfied if I in fact eat the cake (and unsatisfied if I do not). Such veridicality conditions also apply to things we perceive in our environment: my perceptual experience as of an object before me is veridical if that object is in fact before me.
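Put schematically, these conditions share a common form (a minimal gloss of the idea, not Burge’s own formulation): a subject S’s belief that p is true if and only if p; S’s desire that p is satisfied if and only if p comes about; and S’s perception as of an object o being F is veridical if and only if o is F and o is appropriately present to S.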
Our quotidian engagement with the world therefore assigns a central role to representations that satisfy such veridicality conditions, and this underwrites our intentional explanations. But to support my main argument, I will provide specific cases where intentional explanations have notably succeeded in contributing to scientific theories.
As a prime example, consider a classical case from the scientific study of perception: Helmholtz’s theory of unconscious inference. The theory holds that in order for perceptual systems to process the ‘proximal’ sensory stimulations received from distal environmental factors, inferences are made that invoke representational states. This scientific theory of unconscious inference clearly points to an instance where the sub-personal (‘unconscious’) level of computational (cognitive) processing issues in perceptual states representing veridicality conditions in the environment (Meyering, 1989). To describe such perceptual inferences, scientific (cognitive) theories build on folk psychology (often via mathematical models) by recasting subpersonal processes in representational terms (Stich, 1983).
Such a description is motivated by an intentional explanation at its basis and runs contrary to the anti-representationalist view that representations play no significant role in mature (scientific) theories. Meyering, for instance, points out:
As we have seen, the spatial determinations of the sensory input on the receptor organs is not strictly due to immediate sensations alone, i.e., their formation cannot be adequately explained by purely physiological processes. For all sensations, including the local signs, are merely empty symbols which our intellect must learn to interpret. Thus, for a sound and comprehensive theory of perception the physiologist must enter the field of psychology. (Meyering, 1989).
This suggests that perceptions are meaningful impressions; that they seem to be possessed of intentional content that represents things in our environment as being certain ways.
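Modern successors of Helmholtz’s idea are often cast in probabilistic terms. The following is a minimal sketch of that structure; the hypotheses, projection sizes, and noise model are hypothetical assumptions for illustration, not drawn from any cited study. A prior over distal causes is combined with proximal evidence to yield a posterior: a sub-personal state that is accurate or inaccurate depending on how the world actually is.

```python
import math

# An illustrative, hypothetical model of Helmholtzian unconscious
# inference. The numbers below are assumptions for the example only.

# Prior beliefs over distal causes: how big the viewed object tends to be.
prior = {"small": 0.5, "medium": 0.3, "large": 0.2}

# Expected proximal (retinal) size under each distal hypothesis,
# assuming a fixed viewing distance.
expected_projection = {"small": 1.0, "medium": 2.0, "large": 4.0}

def likelihood(retinal_size, hypothesis):
    # Gaussian-style noise around the expected retinal projection.
    return math.exp(-((retinal_size - expected_projection[hypothesis]) ** 2) / 0.5)

def infer(retinal_size):
    """Combine prior and proximal evidence into a posterior over distal causes."""
    unnorm = {h: prior[h] * likelihood(retinal_size, h) for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# A proximal stimulus of 2.1 units leads the system to "conclude" that the
# distal object is probably medium-sized: a representational state that is
# accurate or inaccurate depending on how the world actually is.
posterior = infer(retinal_size=2.1)
print(max(posterior, key=posterior.get))  # prints "medium"
```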
Churchland’s charge that FP lacks explanatory power extends to a vast array of cognitive processes collectively referred to as “high-level” cognition. These include abstract mental processes such as reasoning, decision-making, planning, problem-solving, and especially learning in its pre-linguistic or entirely nonlinguistic form (as in infants and animals). As Churchland states, “FP would thus appear constitutionally incapable of even addressing this most basic of mysteries” (Churchland, 1981).
On the contrary, empirical studies in modern developmental and social psychology reveal abundant cases pointing to clear presuppositions of the intentional states posited by FP. Many of the fields that study high-level cognition routinely use representational features adopted from folk psychology as bedrock for their experimental designs; without them, they could not support meaningful interpretations of their results. For instance, experiments in developmental (child) psychology have emphasized that “explaining behavior is a matter of acquiring folk psychological concepts within a culture and then learning how to deploy such terms with competence” (Ohreen, 2008). In the context of human development, the FP concepts referred to point to the acquisition of basic learning through representations that correspond to veridicality conditions in the environment. It is difficult to see how denying that such intentional explanations play a significant role in our scientific theories of cognition could be justified, since we cannot make sense of the data in any other way. If a mature scientific theory is to remain objective, it cannot deny such intentional states, especially those that correspond to veridicality conditions. This suggests that representational states do play a significant explanatory role in scientific theories.
One possible objection here is that such experiments only make sense with representational states because those states are already assumed as part of the experimental design. However, the data have consistently proven reproducible across varying controlled conditions under such presuppositions, fitting various models and providing explanatory accounts of the observed phenomena. So it would be rather premature to undermine the role of such representational states, let alone call for their elimination.
To be clear, I am not claiming that non-representational states play no role in cognition. Non-representational descriptions are essential factors that figure prominently in processes modeled by computational theories of cognition. For instance, the firing of neural action potentials or the activity of a neuronal ensemble (a group of nerve cells involved in a particular neural computation) are key processes that it may not make much sense to describe in representational terms. Of course, such processes and their physical constituents are clearly identifiable in the domain of neuroscientific enquiry and hence quantifiable. But this, in and of itself, is not adequate ground for the ontological dismissal of intentional representations, especially those that distinctly correspond to veridicality conditions as an aspect of their nature (Burge, 2014).
There is ample empirical evidence in neuroscience charting the role of, for example, the hippocampus in enabling navigational behavior, and it points strongly to internally represented ‘cognitive maps’ (Moser et al., 2008; O’Keefe and Nadel, 2011), not only in humans and other higher mammals but also in species such as birds and insects. These ‘maps’ represent how the environment is spatially arranged so as to guide navigation. Such evidence extends the intentional paradigm from humans to other biological species and supports a plausible evolutionary selection of representational states.
Even from the symbolic-system standpoint of computational theory, which is constructed on a purely mathematical framework, the critical role played by intentional representations is undeniable (Rescorla, 2015a). A model such as a hypothetical Turing machine, for instance, syntactically manipulates symbols that are formally treated as representational entities defined by mathematical rules. In this case the representational entities are about mathematical notations or objects, which makes them intentional in the sense that they refer to something. In order for a computational theory such as the CTC hypothesis to account for mental phenomena, its elements must constitute representational capacities that are computable. Hence, computation itself essentially involves intentional representational content (Rescorla, 2015b).
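To make the point concrete, here is a minimal sketch of a Turing-style machine; the transition table, blank symbol, and unary encoding are illustrative assumptions, not any particular machine from the literature. The loop manipulates symbols purely syntactically, and it is only under an external interpretation, e.g. “a string of n strokes denotes the number n,” that the machine counts as computing the successor function, which is precisely where representational content enters.

```python
# A minimal, illustrative Turing-style machine. All names and the
# transition table below are assumptions for the sake of the example.

def run_turing_machine(transitions, tape, state="q0", halt="halt", max_steps=1000):
    """Rewrite symbols on a tape by blindly applying formal rules.

    `transitions` maps (state, symbol) -> (next_state, write_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right). Nothing in this loop
    knows what the symbols are about; any such interpretation is external.
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")  # "_" stands for the blank symbol
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Under the interpretation "a string of n strokes denotes the number n",
# this table computes the successor function n -> n + 1.
successor = {
    ("q0", "1"): ("q0", "1", +1),    # scan right past the existing strokes
    ("q0", "_"): ("halt", "1", 0),   # append one more stroke, then halt
}

print(run_turing_machine(successor, "111"))  # prints "1111": 3 + 1 = 4
```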
The anti-representationalist’s eliminativist argument against mental representations is therefore unwarranted, and the rejection of the significant role mental representations play in intentional explanations within the computational theory of cognition is inconclusive. For an anti-representationalist computational theory of cognition to stand, it must show exactly how it can account for thoughts and intentions in the face of contrary evidence and in the absence of representational content.
Reference List
Baker, L.R. (1995) Explaining attitudes: A practical approach to the mind. Cambridge University Press.
Burge, T. (1979) ‘Individualism and the Mental’, Midwest Studies in Philosophy, 4, pp. 73–121.
Burge, T. (2007) Foundations of mind. Oxford University Press.
Burge, T. (2014) ‘Perception: Where Mind Begins’, Philosophy, 89(3), pp. 385–403. doi:10.1017/s003181911400014x.
Burge, T. (2010) Origins of objectivity. Oxford University Press.
Chemero, A. (2013) ‘Radical Embodied Cognitive Science’, Review of General Psychology, 17(2), pp. 145–150. doi:10.1037/a0032923.
Churchland, P.M. (1981) ‘Eliminative Materialism and Propositional Attitudes’, Journal of Philosophy, 78(2), pp. 67–90. doi:10.5840/jphil198178268.
Churchland, P.M. (2002) ‘Eliminative Materialism and the Propositional Attitudes’, in Contemporary Materialism. Routledge, pp. 166–185.
Collins, A. (1987) The nature of mental things. Notre Dame: University of Notre Dame Press.
Dennett, D.C. (1981) ‘True believers: The intentional strategy and why it works’.
Dennett, D.C. (1987) The intentional stance. MIT Press.
Egan, F. (1990) ‘Vindicating Intentional Realism’, Behavior and Philosophy, 18(1), pp. 59–61.
Fodor, J.A. and Pylyshyn, Z.W. (1981) ‘How direct is visual perception? Some reflections on Gibson’s “ecological approach”’, Cognition, 9(2), pp. 139–196.
Ganson, T., Bronner, B. and Kerr, A. (2014) ‘Burge's defense of perceptual content’, Philosophy and Phenomenological Research, 88(3), pp. 556–573.
Godfrey-Smith, P. (2004) ‘On Folk Psychology and Mental Representation’, Representation in Mind, pp. 147–162. doi:10.1016/b978-008044394-2/50011-7.
Meyering, T.C. (1989) ‘Helmholtz’s Theory of Unconscious Inferences’, Historical Roots of Cognitive Science, pp. 181–208. doi:10.1007/978-94-009-2423-9_10.
Moser, E.I. et al. (2008) ‘Place cells, grid cells, and the brain’s spatial representation system’, Annual Review of Neuroscience, 31(1), pp. 69–89.
Nagel, T. (1974) ‘What Is It Like to Be a Bat?’, The Philosophical Review, 83(4), pp. 435–450.
O’Brien, G. and Opie, J. (2008) ‘The role of representation in computation’, Cognitive Processing, 10(1), pp. 53–62. doi:10.1007/s10339-008-0227-x.
Ohreen, D. (2008) ‘A Socio-Linguistic Approach to the Development of Folk Psychology’, Human Affairs, 18(2), pp. 214–224. doi:10.2478/v10023-008-0020-6.
Ohreen, D.E. (2004) The scope and limits of folk psychology: a socio-linguistic approach. Peter Lang.
O’Keefe, J. and Nadel, L. (2011) ‘Précis of O’Keefe & Nadel’s The hippocampus as a cognitive map’, Behavioral and Brain Sciences, 2(4), pp. 487–494. doi:10.1017/s0140525x00063949.
Piccinini, G. (2016) ‘The Computational Theory of Cognition’, Fundamental Issues of Artificial Intelligence, pp. 203–221. doi:10.1007/978-3-319-26485-1_13.
Jacob, P. (2003) ‘Intentionality’, Stanford Encyclopedia of Philosophy.
Rescorla, M. (2015a) ‘The Computational Theory of Mind’, Stanford Encyclopedia of Philosophy.
Rescorla, M. (2015b) ‘The Representational Foundations of Computation’, Philosophia Mathematica, 23(3), pp. 338–366. doi:10.1093/philmat/nkv009.
Sterelny, K. (1990) The representational theory of mind: An introduction. Basil Blackwell.
Stich, S. (1992) ‘What Is a Theory of Mental Representation?’, Mind, 101(402), pp. 243–261.
Stich, S.P. (1983) From folk psychology to cognitive science: The case against belief. MIT Press.
Thau, M. (2002) Consciousness and cognition. Oxford University Press.