Is the Mind a Collection of Evolved Cognitive ‘Modules’?

By: Anthony Aubel

The modularity hypothesis claims that the mind, like its underlying neural structure, may be composed exclusively of ‘modules’: systems with highly specialized functions. Arguments for or against modularity tend to fall into two broad areas: one questions the extent to which modules can accurately characterize the human mind, the other concerns what ‘modules’ are in the first place. One prominent theory, advanced by Jerry Fodor, limits ‘modules’ to a subset of cognitive structures that operate largely beyond our conscious control, whereas massive modularity (MM), a hypothesis supported by several prominent evolutionary psychologists, assigns a far more pervasive functional role to modules. Indeed, MM claims that the entire cognitive architecture, including the mind, is a collection of evolved cognitive ‘modules’ that carry out specialized functions. Although there has been extensive debate between the two sides, the use of the term ‘modules’ in their explanations has created substantial confusion. While the question of what attributes should be assigned to modules has been the focus of the debate, closer examination reveals that the sides are often talking past each other because their analyses reside at different levels of explanation.

In this paper, I use Dennett’s distinction between personal and sub-personal levels of explanation to evaluate some of the arguments for and against massive modularity, and show how a failure to account for the appropriate level of explanation can be the cause of persistent confusion in ongoing debates. From this analysis, I conclude that both sides provide plausible descriptions of the modular organization of the mind insofar as their claims are understood within the relevant level of explanation: the MM hypothesis is plausible insofar as it is understood at a mechanistic (sub-personal) level of explanation, whereas arguments against it make sense from an intentional (personal-level) stance. While both sides have important points to contribute, criticisms advanced from either side that cross over between levels fail to cohere.

In his book Content and Consciousness (1969), Daniel Dennett makes an explicit distinction between personal and sub-personal levels as a basis for explaining human behavior. Since its introduction, many researchers studying the mind have adapted the distinction in different ways to fit their explanatory frameworks. Perhaps the best way to understand it is through Dennett’s example of the phenomenon of ‘pain’. A personal level explanation would involve the words or sounds we associate with an ache, which indicate an experience of pain that is simply picked out by the person undergoing it; in Dennett’s terms, the person just knows that he or she is in pain, without providing a mechanical account of how they know (Dennett, 1969). Personal level explanations often invoke, directly or indirectly, the concept of ‘agency’, since it is assumed that there exists a subject, the experiencer; or, in some cases, a distinct central system with broad access to the information available to the organism, which integrates incoming stimuli that somehow translate into what is experienced as a sensation of pain. Moreover, personal level explanation involves intentional attributes that encompass the agent’s beliefs, propositional attitudes, desires, and other aspects that fall under the classical meaning of ‘intentionality’ in the philosophy of mind; that is, the peculiar characteristic of mental states “…to be about, to represent, or to stand for, things, properties and states of affairs” (Pierre, 2003). Hence the personal level of explanation is an intentional level of analysis.

A sub-personal level explanation, by contrast, gives an account of the underlying neural processes that give rise to the sensation of pain in purely mechanistic terms. ‘Pain’ is interpreted as some physical damage to the nerves that activates physiological responses allowing the organism to evade the situation. The claim here is that there is an evolutionary advantage to ‘pain’ aversion, since it tends to preserve the fitness of the organism and thereby contributes to its survival. There is, however, no explicit reference to the phenomenon of ‘pain’ itself; only a description of the organism’s behavior from an evolutionary, purely mechanistic, standpoint (Barkow, Cosmides and Tooby, 1995). This characterization will become pertinent in the discussion below of massive modularity from an evolutionary psychology perspective.

In what follows, I will use ‘intentional’ to refer to the personal and ‘mechanistic’ to refer to the sub-personal levels of explanation.

Jerry Fodor, in his book The Modularity of Mind (1983), advanced a set of criteria restricting modularity to a subset of cognitive systems involved in ‘automatic’ or ‘reflexive’ processing. A primary example is the sensory receptors in the eye, which receive visual stimuli as input and transmit them to perceptual centers for further processing, or to the motor-control parts of the brain involved in the coordination of movement. Such systems deal primarily with the processing of incoming data. Fodor identified several distinguishing features of modules, but in what follows we will consider only two of the main ones: informational encapsulation and domain specificity.

Roughly, encapsulation means that a module’s access to information external to it is strictly limited, and domain specificity means that the information a module processes is of a very specific kind. Fodor did not regard these features as strictly defining ‘modules’, but claimed that they co-occur in functionally specialized systems: “I am not, in any strict sense, defining my terms... the notion of modularity ought to admit of degrees” (Fodor, 1983). His classification pushed modules to the periphery of the cognitive architecture, making them distinct from central systems, while characterizing their properties as inflexible, independent, and autonomous. Such systems, he maintained, function below the level of the central systems responsible for ‘higher’ cognitive processes such as reasoning, decision-making, belief, imagination, and propositional attitudes. For Fodor, ‘encapsulation’ means that a system has very limited access to the information that is readily available to the organism’s central cognitive system. Central systems, in turn, have limited control over what modules do, but they themselves are unencapsulated, meaning they have access to most, if not all, information available to the entire organism. In effect, central systems cannot themselves be modular.

The way Fodor analyzes modules is consistent with his foundational commitments to ‘intentional realism’ and the ‘Representational Theory of Mind’, both of which he defends. It will help the case that follows to lay these positions out briefly here, since they provide evidence of the level of explanation at work. Commonsense (or folk) psychology attempts to explain behavior by ascribing mental states with representational content that are assumed to be causally efficacious in producing the behavior. The attributes afforded to representations here are intentional (i.e. they are about something), and commonsense psychology thus entails intentional realism. Fodor articulates representational content that he considers to be intentionally real in the context of folk psychological generalizations, where mental states such as beliefs or desires, in virtue of a central agency, are 1) semantically evaluable and 2) causally efficacious (Egan, 1990). In other words, the presence of agency is necessary for ascribing meaning to content and effecting behavior. Intentionality therefore plays a critical role in Fodor’s explanatory account.

My argument here is that Fodor’s analysis of modularity makes sense only within an intentional level of explanation. For one, the way he uses ‘encapsulation’ presumes that there is another system, namely the central cognitive faculties, from which information is encapsulated. This presumption strongly implies that central systems are to be treated as a distinct level of processing. Modules are understood as inflexible, independent, and autonomous, while central systems, unlike modules, have full access to the organism’s intentional states (i.e. beliefs, attitudes, desires, etc.). Fodor uses this distinction to argue against MM. He claims that since massive modularity assumes isolated and inflexible units largely disconnected from each other and from centralized systems, such a cognitive architecture will fail to cohere and is hence implausible. But this analysis clearly arises from an intentional (personal) stance; the argument only makes sense if one accepts the distinction. However, as we will see below from an evolutionary psychology standpoint, this argument does not make much sense at the mechanistic (sub-personal) level.

Proponents of evolutionary psychology (EP), especially Carruthers and Barrett, have argued for an extension of modularity, pushing for the concept to be much more pervasive than Fodor originally proposed. In fact, they adopted the term ‘module’ from Fodor’s book in support of the hypothesis of massive modularity, reasoning as follows: since natural selection shaped the human cognitive architecture through many individuated subsystems, each serving a specific function, it is plausible that the mind shares the same organizational structure. Therefore, they hold that all of the cognitive architecture, including the mind, is modular (Cosmides and Tooby, 1994b). Here, however, we can see how the term ‘modular’ makes sense under a mechanistic level of description. Used in an inductive argument, for instance, it can serve as a plausible hypothesis for an evolutionary explanation of the organization of the mind. The inductive inference for MM can loosely be put as follows: neural organization at every level is observed to be ‘modular’, with each part serving a specific function to solve a particular problem; the entire modular organization supports the organism’s fitness and helps it adapt to its environment; since the mind is also part of the functional organization of the organism, it too is modular. This reasoning makes sense from a mechanistic standpoint because there is no need for any reference to a distinct central system (in the Fodorian sense) from which the modular systems are encapsulated.

Yet there is no explicit reference from either side in the ongoing debates about MM that points to a clarification of the levels of explanation. This failure to account for the underlying levels of explanation is a major source of confusion in these debates. It makes no sense, under MM, to claim that any distinct part of the system, including Fodor’s central system, has access to all (or most) information available to the entire organism, since the whole organism is itself claimed to be composed entirely of mechanisms; there can be no central place where all the information comes together. Such an explanation is therefore meaningless at the mechanistic level. It is clear that the two sides are talking past each other because they fail to account for these different levels of explanation.

Domain specificity is another feature that has caused confusion in debates around modularity. It describes information processing by modules that serve a particular function in response to specific kinds of inputs. As with informational encapsulation, Fodor meant to use domain specificity as a criterion for modules that lie outside the control of any central agency. His point is that central processes exercise no control over the functioning of such peripheral modules; the information encapsulated in a module is of a specific kind, belonging to the domain within which the module functions. We can see here how his explanation takes on an implicit intentional stance: it is ‘we’, the central agent, who lack control over the functioning of the modules.

A prime example illustrating this lack of central control is the Müller-Lyer illusion, one of the most famous optical illusions in psychology. Briefly, the illusion consists of two straight lines of the same length, one with an open fin at each of its ends and the other with closed fins. Viewers typically judge the line with the open fins to be longer, even though both lines are actually the same length (see figure below):

The Müller-Lyer illusion (figure adapted from SEP, “Modularity of Mind”)

The fact that the illusion persists despite viewers’ knowledge that the lines are in fact the same length indicates that central domains have no functional control over modular processes. According to MM, however, the entire system is composed of modules. This suggests that any part of the functional organization of the mind that is sensitive to the illusory effect takes part in the process without any entailment of intentional attributes. Domain specificity here relates to the specific functions carried out by the modules involved in the cascade of mechanisms that produce the observed effect, without any need to invoke agency.

From an evolutionary psychology perspective, such mechanisms are described as input/output processes that dictate the organism’s behavior, and they fit the functional level of explanation (Dennett, 1969). Over multiple generations, systems receive inputs from the environment and produce outputs that are selected for their reliability in serving the fitness of the organism. The traits that get passed on to subsequent generations reflect the modules’ functional role in solving specific problems faced by the whole organism (Tooby and Cosmides, 1992). This is key to the organism’s ability to adapt to and cope with the challenges of its environment. For evolutionary psychologists who subscribe to MM, all processes, from the organismal-behavioral level down to the molecular (e.g. enzymatic) level, are therefore modular. The entire system is hierarchically organized, with modules serving specific functions at each level to solve specific problems. Hence descriptions of modules at the functional level are almost always in mechanistic terms. It therefore makes little sense for evolutionary psychologists to defend MM in terms of domain specificity and informational encapsulation as specified at the intentional (personal) level. Yet it can be perfectly coherent to use these terms at a mechanistic level, where they are understood as intrinsic features of modules that enable the input/output processes of the entire functional organization of the cognitive architecture.

We can see here how a debate about whether the mind is a collection of evolved cognitive modules results in the sides talking past each other. While Fodor approaches the issue from an intentional level that posits central agency, proponents of MM describe the system entirely in terms of modular mechanisms, without invoking any need for central agency. Yet considerations of which level of explanation each side argues from do not explicitly figure in their respective analyses.

One might counter the criticism offered here by claiming that the levels of explanation are implicitly baked into the arguments on both sides; that, in virtue of their arguments, the sides are in fact attempting to convince us that the correct level at which to understand modularity is precisely the level they argue from. After all, evolutionary psychologists tend to adhere strictly to a scientific (physicalist) framework in which invoking any kind of agency cannot clearly be supported, whereas Fodor’s analysis is intrinsically dualistic (Barrett, 2014); this characterization is exactly what one would expect of an intentional level of explanation. However, failure to explicitly address the appropriate level of explanation can create widespread confusion. Suppose, for instance, that information is encapsulated in modules and that very specific functions are carried out within each modular domain. What would this mean for MM? Since the entire system is understood to be composed of modules, it would mean that the mind contains encapsulated modules, each designed to autonomously solve a particular problem, with none of its processes shared with or accessible to external domains. But this would misrepresent what evolutionary psychology is proposing: Pietraszewski and Wertz (2021) argue that ascribing Fodorian modules, which would require separate, ‘bounded’ computers for each adaptation, fundamentally misunderstands evolutionary psychology’s proposal. Solving particular problems as part of the organism’s adaptive process requires a fully integrated mechanistic description, which need not constrain every module exclusively to a unique adaptive problem (Tooby and Cosmides, 1992; Barrett, 2015).
Yet we can see how, under an intentional level of explanation, the evolutionary psychologist’s thesis can be misconstrued as proposing a one-to-one relation in which each encapsulated, distinct module functions separately to solve each adaptive problem. From a mechanistic level, however, the explanation is perfectly coherent, since it need not invoke any central agency from which the system is presumed (under the intentional stance) to be isolated. Both sides thus seem to have valid points, but it is the failure to recognize the distinct levels of explanation from which they approach the problem that causes the confusion.

Proponents or critics of MM might entirely reject any need for the distinction between the personal (intentional) and sub-personal (mechanistic); they need not commit to such an analysis. Regardless of whether they reject the distinction, however, they still position themselves on one side of an explanatory divide by how they structure their analyses. A Fodorian, for instance, may find it appropriate to use intentional attributes to describe certain mental phenomena because doing so allows for a more meaningful explanatory account. Interpreting the Müller-Lyer illusion, for example, may make more sense under an intentional level of explanation, since that interpretation presumes a lack of central control over peripheral visual modules. An evolutionary psychologist who subscribes to the MM hypothesis, on the other hand, may find it more meaningful to describe the phenomenon mechanistically, in terms of the domain-specific functional organization of the modules that give rise to the illusory effects. In either case, the levels of explanation naturally divide according to how the descriptions are analyzed.

So where should the debate go from here? We have seen how failure to account for the appropriate level of explanation can cause confusion across debates. Although mechanistic (sub-personal) level explanations make sense in support of MM, an intentional (personal) level argument against it causes confusion because it rests on presumptions (i.e. central agency) that do not belong at a mechanistic level, while an intentional level explanation can be perfectly coherent when it is understood with respect to a personal level description. This analysis leads us to question whether it is fruitful even to have the kind of debates around modularity presented here. In our investigations of the mind, it would perhaps be more productive to allow descriptions at each level to develop in parallel, since each side has something important to contribute. Insofar as they remain at their appropriate level of explanation, the ensuing developments should enrich our understanding while minimizing unnecessary confusion.

Reference List:

Barkow, J.H., Cosmides, L. and Tooby, J. (1995) The adapted mind: Evolutionary psychology and the generation of culture. Oxford University Press.

Barrett, H.C. (2005) ‘Enzymatic Computation and Cognitive Modularity’, Mind and Language, 20(3), pp. 259–287. doi:10.1111/j.0268-1064.2005.00285.x.

Barrett, H.C. (2012) ‘A hierarchical model of the evolution of human brain specializations’, Proceedings of the National Academy of Sciences, 109(supplement_1), pp. 10733–10740. doi:10.1073/pnas.1201898109.

Barrett, H.C. (2014) The shape of thought: How mental adaptations evolve. Oxford University Press.

Bechtel, W. (2003) ‘Modules, Brain Parts, and Evolutionary Psychology’, Evolutionary Psychology, pp. 211–227. doi:10.1007/978-1-4615-0267-8_10.

Carruthers, P. (2006) The architecture of the mind. Oxford University Press.

Barrett, H.C. and Kurzban, R. (2006) ‘Modularity in cognition: Framing the debate’, Psychological Review, 113(3), pp. 628–647.

Cosmides, L. and Tooby, J. (1994a) ‘Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science’, Cognition, 50(1-3), pp. 41–77.

Cosmides, L. and Tooby, J. (1994b) ‘Origins of domain specificity: The evolution of functional organization’, Mapping the mind: Domain specificity in cognition and culture, 853116.

Dennett, D. (1969) ‘Personal and sub-personal levels of explanation’, Content and consciousness, pp. 17–20.

Egan, F. (1990) ‘Vindicating Intentional Realism’, Behavior and Philosophy, 18(1), pp. 59–61.

Fodor, J.A. (1983) The modularity of mind. MIT press.

Frankenhuis, W.E. and Ploeger, A. (2007) ‘Evolutionary Psychology Versus Fodor: Arguments For and Against the Massive Modularity Hypothesis’, Philosophical Psychology, 20(6), pp. 687–710. doi:10.1080/09515080701665904.

Pierre, J. (2003) ‘Intentionality’, Stanford Encyclopedia of Philosophy [Preprint].

Pietraszewski, D. and Wertz, A.E. (2021) ‘Why Evolutionary Psychology Should Abandon Modularity’, Perspectives on Psychological Science, 17(2), pp. 465–490. doi:10.1177/1745691621997113.

Prinz, J. (2006) ‘Is the mind really modular’, Contemporary debates in cognitive science, 14, pp. 22–36.

Sperber, D. (2001) ‘In defense of massive modularity’, Language, brain and cognitive development: Essays in honor of Jacques Mehler, 7, pp. 47–57.

Tooby, J. and Cosmides, L. (1992) ‘The psychological foundations of culture’, The adapted mind: Evolutionary psychology and the generation of culture, 19.

Tooby, J. and Cosmides, L. (2015) ‘Conceptual Foundations of Evolutionary Psychology’, The Handbook of Evolutionary Psychology, pp. 5–67. doi:10.1002/9780470939376.ch1.