Functionalism (philosophy of mind)

Functionalism is a position in the philosophy of mind (not to be confused with the psychological notion of one's Theory of Mind). It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role, that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs.[1] Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.

Functionalism occupies a theoretical level between physical implementation and behavioral output.[2] It therefore differs from its predecessors, Cartesian dualism (which posits independent mental and physical substances) and Skinnerian behaviorism and physicalism (which admit only physical substances), because it is concerned only with the effective functions of the brain, through its organization or its "software programs".

Since mental states are identified by a functional role, they are said to be multiply realizable; in other words, they can be manifested in various systems, perhaps even computers, so long as the system performs the appropriate functions. Just as computers are physical devices with an electronic substrate that perform computations on inputs to give outputs, so brains are physical devices with a neural substrate whose computations on inputs produce behaviors.

Multiple realizability

An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states just are their functional roles, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles. Thus, mental states are individuated much as a valve is: a valve can be made of plastic or metal or whatever material, as long as it performs the proper function (say, controlling the flow of liquid through a tube by blocking and unblocking its pathway).
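The valve analogy has a close parallel in software interfaces. The sketch below is only a loose programming analogy, not anything drawn from the functionalist literature itself; the class and method names are invented for illustration. Two differently built realizers satisfy the same functional role, and a caller that interacts only with the role cannot tell them apart:

    from typing import Protocol

    class Valve(Protocol):
        """The functional role: control flow by blocking or unblocking a pathway."""
        def set_open(self, is_open: bool) -> None: ...
        def is_open(self) -> bool: ...

    class PlasticValve:
        """One physical realizer of the role."""
        def __init__(self) -> None:
            self._open = False
        def set_open(self, is_open: bool) -> None:
            self._open = is_open
        def is_open(self) -> bool:
            return self._open

    class MetalValve:
        """A differently built realizer of the very same role."""
        def __init__(self) -> None:
            self._position = 0          # 0 = closed, 1 = open
        def set_open(self, is_open: bool) -> None:
            self._position = 1 if is_open else 0
        def is_open(self) -> bool:
            return self._position == 1

    def flow(valve: Valve) -> str:
        """Cares only about the causal role, not the material."""
        valve.set_open(True)
        return "flowing" if valve.is_open() else "blocked"

    assert flow(PlasticValve()) == flow(MetalValve()) == "flowing"

On this picture, the "mental state" corresponds to the interface, not to either implementation, which is exactly the sense in which the functional role is multiply realizable.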

However, some functionalist theories combine with the identity theory of mind and deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis[3] and David Malet Armstrong.[4] According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential to them. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) but one that performed the same functions as human mental states (e.g., they tend to yell "Yowzass!" when poked with sharp objects, etc.), then we would say that their type of mental state is perhaps similar to ours, but too different to count as the same. For some, this may be a disadvantage to FSTs. Indeed, one of Hilary Putnam's[5][6] arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types

Machine-state functionalism

Artistic representation of a Turing machine.

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam[5][6] in the 1960s. This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" developed by Alan Turing (called Turing machines), computers capable of computing any given algorithm. Putnam himself, by the mid-1970s, had begun questioning this position; the beginnings of his opposition to machine-state functionalism can be seen in his Twin Earth thought experiment.

In non-technical terms, a Turing machine is not a physical object but an abstract machine built upon a mathematical model. Typically, a Turing machine has a horizontal tape divided into rectangular cells arranged from left to right. The tape itself is infinite in length, and each cell may contain a symbol. The symbols used for any given "machine" can vary. The machine has a read-write head that scans cells and moves left and right. The action of the machine is determined by the symbol in the cell being scanned and a table of transition rules that serve as the machine's programming. Because the tape is infinite, a traditional Turing machine has unbounded space and time in which to compute any particular function or any number of functions. In the example below, each cell is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:

  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.

An extremely simple example is a Turing machine which writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:

     State One                  State Two                  State Three
B    write 1; stay in state 1   write 1; stay in state 2   write 1; stay in state 3
1    go right; go to state 2    go right; go to state 3    [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it halts.
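Because the machine table completely specifies the machine, it is easy to simulate. The following is a minimal Python sketch of the three-state machine above; the representation choices (a sparse dictionary for the tape, tuples for the table entries) are implementation conveniences, not part of the formalism:

    # Machine table for the three-state machine above:
    # (state, scanned symbol) -> (action, next state)
    TABLE = {
        (1, 'B'): ('write1', 1),
        (1, '1'): ('right', 2),
        (2, 'B'): ('write1', 2),
        (2, '1'): ('right', 3),
        (3, 'B'): ('write1', 3),
        (3, '1'): ('halt', 3),
    }

    def run(max_steps=100):
        tape = {}               # sparse tape: unwritten cells read as blank 'B'
        head, state = 0, 1      # start at cell 0 in state one
        for _ in range(max_steps):
            action, state = TABLE[(state, tape.get(head, 'B'))]
            if action == 'write1':
                tape[head] = '1'
            elif action == 'right':
                head += 1
            else:               # 'halt'
                break
        return ''.join(tape.get(i, 'B') for i in range(min(tape), max(tape) + 1))

    print(run())                # prints '111', matching the walkthrough above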

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

The above point is critical to an understanding of machine-state functionalism. Since Turing machines are not required to be physical systems, "anything capable of going through a succession of states in time can be a Turing machine".[7] Because biological organisms "go through a succession of states in time", any such organism could also be equivalent to a Turing machine.

According to machine-state functionalism, the nature of a mental state is just like the nature of the Turing machine states described above. If the rational functioning and computing skills of these machines can be shown to be comparable to those of human beings, it follows that Turing machine behavior closely resembles that of human beings.[8] Therefore, it is not a particular physical-chemical composition that is responsible for the particular machine or mental state; it is the programming rules which produce the effects. To put it another way, any rational preference is due to the rules being followed, not to the specific material composition of the agent.
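To make the analogy between machine states and mental states vivid, here is a deliberately cartoonish "machine table for a mind". This is only an illustration of the general idea; Putnam's actual proposal used probabilistic automata, and the states and stimuli below are invented:

    # (mental state, stimulus) -> (behavioral output, next mental state)
    # Each "mental state" is individuated purely by this causal profile.
    MIND_TABLE = {
        ('calm', 'tack'):    ('yell', 'pain'),
        ('calm', 'aspirin'): (None,   'calm'),
        ('pain', 'tack'):    ('yell', 'pain'),
        ('pain', 'aspirin'): ('sigh', 'calm'),
    }

    def step(state, stimulus):
        output, next_state = MIND_TABLE[(state, stimulus)]
        return output, next_state

    print(step('calm', 'tack'))     # ('yell', 'pain')
    print(step('pain', 'aspirin'))  # ('sigh', 'calm')

Nothing in the table says what 'pain' is made of; it says only what 'pain' does, which is the machine functionalist's point.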

Psycho-functionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psycho-functionalism.

The fundamental idea of psycho-functionalism is that psychology is an irreducibly complex science and that the terms we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions; further, such a redefinition would not be desirable even were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances, and so on; this is what matters for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of these mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role designated for them within our best scientific psychological theory. If some mental state postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for the explanation of human behavior but which are not recognized by ordinary folk psychological language, then these states do exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur, not by the intrinsic properties of the phonemes that compose them. In the case of ordinary-language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:

Mental state M is the state that is caused by P and causes Q.

For example, the state of pain is caused by sitting on a tack and causes loud cries and higher-order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about mental states and the (largely fictitious) propositional attitudes they describe. Hence, their proponents are known as analytic or conceptual functionalists. The essential difference between analytic functionalism and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in determining which mental-state terms and concepts are genuine and which functional identifications may be considered genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.
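The style of definition at work here is the Ramsey-Lewis method for defining theoretical terms, which Lewis applied to mental vocabulary. Below is a schematic rendering for the pain example, simplified to the two causal clauses mentioned above (the predicate names are illustrative):

    % Folk theory with the mental term made explicit:
    %   T(pain): sitting on a tack causes pain, and pain causes loud cries.
    % Replace the mental term with a variable and existentially quantify
    % (the Ramsey sentence of the theory):
    \exists x\,\bigl[\mathrm{causes}(\mathrm{tack}, x) \land \mathrm{causes}(x, \mathrm{cries})\bigr]
    % Analytic functionalism then defines the term by its role:
    %   pain = the unique state x satisfying the open sentence above.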

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be seen easily if we label the set of mental facts that occur at the higher level M1 and the set of mental facts that occur at the lower level M2. Given the transitivity of supervenience, if M1 supervenes on M2, and M2 supervenes on P (physical base), then M1 and M2 both supervene on P, even though they are (allegedly) totally different sets of mental facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Mechanistic functionalism

Mechanistic functionalism, originally formulated and defended by Gualtiero Piccinini[9] and Carl Gillett[10][11] independently, augments previous functionalist accounts of mental states by maintaining that any psychological explanation must be rendered in mechanistic terms. That is, instead of mental states receiving a purely functional explanation in terms of their relations to other mental states, like those listed above, functions are seen as playing only one part of the explanation of a given mental state, with the other part being played by structures.

A mechanistic explanation[12] involves decomposing a given system, in this case a mental system, into its component physical parts, their activities or functions, and their combined organizational relations.[9] On this account the mind remains a functional system, but one that is understood mechanistically. This account remains a sort of functionalism because functional relations are still essential to mental states, but it is mechanistic because the functional relations are always manifestations of concrete structures—albeit structures understood at a certain level of abstraction. Functions are individuated and explained either in terms of the contributions they make to the given system[13] or in teleological terms. If the functions are understood in teleological terms, then they may be characterized either etiologically or non-etiologically.[14]
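As a rough illustration of the difference this makes, the toy decomposition below pairs each functional contribution with a concrete structure. The component names are invented examples for illustration, not claims about actual neural organization:

    from dataclasses import dataclass

    @dataclass
    class Component:
        structure: str   # the concrete part (e.g. a brain area or circuit)
        function: str    # the contribution it makes to the containing system

    # A mechanistic explanation cites parts, their functions, and how they
    # are organized, not functional roles alone.
    memory_system = [
        Component('hippocampus',       'bind items into episodes'),
        Component('prefrontal cortex', 'maintain and manipulate items'),
    ]

    for part in memory_system:
        print(f'{part.structure} -> {part.function}')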

Mechanistic functionalism leads functionalism away from the traditional functionalist autonomy of psychology from neuroscience and towards integrating psychology and neuroscience.[15] By providing an applicable framework for merging traditional psychological models with neurological data, mechanistic functionalism may be understood as reconciling the functionalist theory of mind with neurological accounts of how the brain actually works. This is because mechanistic explanations of function attempt to provide an account of how functional states (mental states) are physically realized through neurological mechanisms.

Physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction between the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

China brain

Ned Block[16] argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron. (The tremendous difference in speed of operation of each unit is not addressed.) According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism, since it would allow this to be a legitimate description of a mind.

Some functionalists believe the Chinese nation would have qualia, but that because of its size it is impossible for us to imagine it being conscious.[17] Indeed, it may be the case that we are constrained by our theory of mind[18] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all kinds of hardware, or they will not exist at all and are illusory.[19]

The Chinese room

The Chinese room argument by John Searle[20] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding, through the use of a purely functional system. In short, Searle describes a person who speaks only English, alone in a room with baskets of Chinese symbols and an English rule book for moving the symbols around. The person is then instructed by people outside the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply on the basis of these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules, arguing that functionalism is an insufficient theory of mind.

In connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. Of course, Searle responds that there is nothing more than syntax going on at the higher level as well, so this reply is subject to the same initial problems. Furthermore, Searle suggests the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block.[16][21] This thought experiment involves supposing that there is a person, call her Jane, who was born with a condition which makes her see the opposite spectrum of light that is normally perceived. Unlike normal people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[22]

David Chalmers tries to show[23] that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. The general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least in the human case. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile could be maintained irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that it begs the central question: how could Chalmers know that the functional profile can be preserved, for example while the conscious subject's brain is being supplanted with a silicon substitute, unless he already assumes that the subject's possibly changing qualia would not be a determining factor? And while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to. Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem, that there is no argument, just an antithetical assumption at their base, can also be raised against both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: his point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible (and it is perfectly possible that we might subsequently discover this fact by other means), Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam,[24] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H₂O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H₂O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H₂O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs, one that includes the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is arguably false. Further, this basic assumption is directly antithetical to functionalism, so the Twin Earth argument does not constitute a genuine argument against it: the assumption entails a flat denial of functionalism itself (which would say that the two waters do not produce different mental states, because the functional relationships remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[21] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must have little (perhaps nothing) in common in any of their mental states. But this is counterintuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out.

One possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic–synthetic distinction, as many seek to avoid).

Triviality arguments

According to Ned Block, if functionalism is to avoid the chauvinism of type-physicalism, it becomes overly liberal in "ascribing mental properties to things that do not in fact have them".[16] As an example, he proposes that the economy of Bolivia might be organized such that the economic states, inputs, and outputs would be isomorphic to a person under some bizarre mapping from mental to economic variables.[16]

Hilary Putnam,[25] John Searle,[26] and others[27][28] have offered further arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism reduces to behaviorism or it collapses into complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly distinct from one another, such a mapping will always exist, so any system would count as a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
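The core of these triviality constructions can be stated in a few lines. The sketch below is an illustrative toy, not Putnam's or Searle's own formalism: given any sequence of pairwise-distinct physical states, a mapping that "realizes" any desired run of mental states exists simply by construction:

    # A physical system drifting through distinct states over time:
    physical_trajectory = ['s0', 's1', 's2', 's3']          # assumed pairwise distinct
    # Any run of "mental" states we care to ascribe to it:
    desired_mental_run  = ['calm', 'pain', 'calm', 'joy']

    # Because the physical states are distinct, this mapping is well defined:
    realization = dict(zip(physical_trajectory, desired_mental_run))

    # Under the mapping, the system "implements" the mental run exactly,
    # which is why unconstrained mappings trivialize functionalist realization.
    assert [realization[s] for s in physical_trajectory] == desired_mental_run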

Peter Godfrey-Smith has argued further[29] that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and for each possible input, such a mapping will always exist, so there will be a transducer layer which will produce whatever physical behavior is desired.

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[29]

References

  1. ^ Block, Ned. (1996). "What is functionalism?" a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement, Macmillan. (PDF online)
  2. ^ Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
  3. ^ Lewis, David. (1980). "Mad Pain and Martian Pain". In Block (1980a) Vol. 1, pp. 216–222.
  4. ^ Armstrong, D.M. (1968). A Materialistic Theory of the Mind. London: RKP.
  5. ^ a b Putnam, Hilary. (1960). "Minds and Machines". Reprinted in Putnam (1975a).
  6. ^ a b Putnam, Hilary. (1967). "Psychological Predicates". In Art, Mind, and Religion, W.H. Capitan and D.D. Merrill (eds.), pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
  7. ^ Putnam, H. (1967). “The Mental Life of Some Machines,” in H.-N. Castaneda (Ed.), Intentionality, Minds, and Perception. Detroit, MI: Wayne State University Press, p. 183.
  8. ^ Putnam, H. (1967). “The Mental Life of Some Machines,” in H.-N. Castaneda (Ed.), Intentionality, Minds, and Perception. Detroit, MI: Wayne State University Press, pp. 179-180.
  9. ^ a b Piccinini G (2010). "The mind as neural software? Understanding functionalism, computationalism, and computational functionalism". Philosophy and Phenomenological Research. 81 (2): 269–311. doi:10.1111/j.1933-1592.2010.00356.x.
  10. ^ Gillett, C. (2007). “A Mechanist Manifesto for the Philosophy of Mind: The Third Way for Functionalists”. Journal of Philosophical Research, invited symposium on “Mechanisms in the Philosophy of Mind”, vol.32, pp. 21-42.
  11. ^ Gillett, C. (2013). “Understanding the Sciences through the Fog of ‘Functionalism(s)’”. In Hunneman (ed.) Functions: Selection and Mechanisms. Dordrecht: Kluwer, pp.159-81.
  12. ^ Machamer P.; Darden L.; Craver C. F. (2000). "Thinking about mechanisms". Philosophy of Science. 67 (1): 1–25. doi:10.1086/392759.
  13. ^ Craver C. F. (2001). "Role functions, mechanisms, and hierarchy". Philosophy of Science. 68 (1): 53–74. doi:10.1086/392866.
  14. ^ Maley C. J.; Piccinini G. (2013). "Get the Latest Upgrade: Functionalism 6.3.1". Philosophia Scientiae. 17 (2): 135–149. doi:10.4000/philosophiascientiae.861.
  15. ^ Piccinini G.; Craver C. F. (2011). "Integrating psychology and neuroscience: Functional analyses as mechanism sketches". Synthese. 183 (3): 283–311. CiteSeerX 10.1.1.367.190. doi:10.1007/s11229-011-9898-4.
  16. ^ a b c d Block, Ned. (1980b). "Troubles With Functionalism", in (1980a).
  17. ^ Lycan, William (1987). Consciousness. Cambridge, Massachusetts: MIT Press. ISBN 9780262121248.
  18. ^ Baron-Cohen, Simon; Leslie, Alan M.; Frith, Uta (October 1985). "Does the autistic child have a "theory of mind"?". Cognition. 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210.
  19. ^ Dennett, Daniel (1990), "Quining Qualia", in Lycan, William G. (ed.), Mind and cognition: a reader, Cambridge, Massachusetts, USA: Basil Blackwell, ISBN 9780631160762.
  20. ^ Searle, John (1980). "Minds, Brains and Programs". Behavioral and Brain Sciences. 3 (3): 417. doi:10.1017/s0140525x00005756. Archived from the original on 2001-02-21.
  21. ^ a b Block, Ned and Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review 81.
  22. ^ Block, Ned. (1994). Qualia. In S. Guttenplan (ed), A Companion to Philosophy of Mind. Oxford: Blackwell
  23. ^ Chalmers, David. (1996). The Conscious Mind. Oxford: Oxford University Press.
  24. ^ Putnam, Hilary. (1975b). "The Meaning of 'Meaning'", reprinted in Putnam (1975a). (PDF online, archived June 18, 2013, at the Wayback Machine)
  25. ^ Putnam, H. (1988). Reality and representation. Appendix. Cambridge, MA: MIT Press.
  26. ^ Searle J (1990). "Is the brain a digital computer?". Proceedings and Addresses of the American Philosophical Association. 64 (3): 21–37. doi:10.2307/3130074. JSTOR 3130074.
  27. ^ Chalmers D (1996). "Does a rock implement every finite-state automaton?". Synthese. 108 (3): 309–333. CiteSeerX 10.1.1.33.5266. doi:10.1007/bf00413692.
  28. ^ Copeland J (1996). "What is computation?". Synthese. 108 (3): 335–359. doi:10.1007/bf00413693.
  29. ^ a b Godfrey-Smith, Peter (2009). "Triviality Arguments against Functionalism". Philosophical Studies. 145 (2). Archived from the original (PDF) on 2011-05-22. Retrieved 2011-02-06.

Further reading

  • Armstrong, D.M. (1968). A Materialistic Theory of the Mind. London: RKP.
  • Baron-Cohen S.; Leslie A.; Frith U. (1985). "Does the Autistic Child Have a "Theory of Mind"?". Cognition. 21: 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210.
  • Block, Ned. (1980a). "Introduction: What Is Functionalism?" in Readings in Philosophy of Psychology. Cambridge, MA: Harvard University Press.
  • Block, Ned. (1980b). "Troubles With Functionalism", in Block (1980a).
  • Block, Ned. (1994). Qualia. In S. Guttenplan (ed), A Companion to Philosophy of Mind. Oxford: Blackwell
  • Block, Ned (1996). "What is functionalism?" (PDF). a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement, Macmillan.
  • Block, Ned and Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review 81.
  • Chalmers, David. (1996). The Conscious Mind. Oxford: Oxford University Press.
  • Crabb, B.G. (2005). "Fading and Dancing Qualia - Moving and Shaking Arguments", Deunant Books.
  • DeLancey, C. (2002). "Passionate Engines - What Emotions Reveal about the Mind and Artificial Intelligence." Oxford: Oxford University Press.
  • Dennett, D. (1990). "Quining Qualia". In W. Lycan (ed), Mind and Cognition. Oxford: Blackwell.
  • Levin, Janet. (2004). "Functionalism", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), E. Zalta (ed.). (online)
  • Lewis, David. (1966). "An Argument for the Identity Theory". Journal of Philosophy 63.
  • Lewis, David. (1980). "Mad Pain and Martian Pain". In Block (1980a) Vol. 1, pp. 216–222.
  • Lycan, W. (1987) Consciousness. Cambridge, MA: MIT Press.
  • Mandik, Pete. (1998). Fine-grained Supervenience, Cognitive Neuroscience, and the Future of Functionalism.
  • Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
  • Polgar, T. D. (2008). "Functionalism". The Internet Encyclopedia of Philosophy.
  • Putnam, Hilary. (1960). "Minds and Machines". Reprinted in Putnam (1975a).
  • Putnam, Hilary. (1967). "Psychological Predicates". In Art, Mind, and Religion, W.H. Capitan and D.D. Merrill (eds.), pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
  • Putnam, Hilary. (1975a). Mind, Language, and Reality. Cambridge: CUP.
  • Searle, John (1980). "Minds, Brains and Programs". Behavioral and Brain Sciences. 3 (3): 417. doi:10.1017/s0140525x00005756. Archived from the original on 2001-02-21.
  • Smart, J.J.C. (1959). "Sensations and Brain Processes". Philosophical Review LXVIII.

China brain

In the philosophy of mind, the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?

Early versions of this scenario were put forward in 1961 by Anatoly Dneprov, in 1974 by Lawrence Davis, and again in 1978 by Ned Block. Block argues that the China brain would not have a mind, whereas Daniel Dennett argues that it would. The China brain problem is a special case of the more general problem whether minds could exist within other, larger minds.It is not to be confused with the Chinese room argument proposed by John Searle, which is also a thought experiment in philosophy of mind but relating to artificial intelligence.

Cognitive module

A cognitive module is, in theories of the modularity of mind and the closely related society of mind theory, a specialised tool or sub-unit that can be used by other parts to resolve cognitive tasks. The question of their existence and nature is a major topic in cognitive science and evolutionary psychology. Some see cognitive modules as an independent part of the mind. Others also see new thought patterns achieved by experience as cognitive modules.Other theories similar to the cognitive module are cognitive description, cognitive pattern and psychological mechanism. Such a mechanism, if created by evolution, is known as evolved psychological mechanism.

Computational theory of mind

In philosophy, the computational theory of mind (CTM) refers to a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher and cognitive scientist Jerry Fodor in the 1960s, 1970s and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology. In the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008).The computational theory of mind holds that the mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation. This is to say that computation can be implemented by silicon chips or neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM, therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system.Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object, but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity. In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics. (See below in semantics of mental states).

Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the Computational Theory of Cognition (CTC) states that neural computations explain cognition. The Computational Theory of Mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. That is to say, CTM entails CTC. While phenomenal consciousness could fulfill some other functional role, computational theory of cognition leaves open the possibility that some aspects of the mind could be non-computational. CTC therefore provides an important explanatory framework for understanding neural networks, while avoiding counter-arguments that center around phenomenal consciousness.

Consciousness

Consciousness is the state or quality of awareness or of being aware of an external object or something within oneself. It has been defined variously in terms of sentience, awareness, qualia, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood or soul, the fact that there is something "that it is like" to "have" or "be" it, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is. As Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."Western philosophers, since the time of Descartes and Locke, have struggled to comprehend the nature of consciousness and identify its essential properties. Issues of concern in the philosophy of consciousness include whether the concept is fundamentally coherent; whether consciousness can ever be explained mechanistically; whether non-human consciousness exists and if so how it can be recognized; how consciousness relates to language; whether consciousness can be understood in a way that does not require a dualistic distinction between mental and physical states or properties; and whether it may ever be possible for computing machines like computers or robots to be conscious, a topic studied in the field of artificial intelligence.

Thanks to developments in technology over the past few decades, consciousness has become a significant topic of interdisciplinary research in cognitive science, with significant contributions from fields such as psychology, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. The majority of experimental studies assess consciousness in humans by asking subjects for a verbal report of their experiences (e.g., "tell me if you notice anything when I do this"). Issues of interest include phenomena such as subliminal perception, blindsight, denial of impairment, and altered states of consciousness produced by alcohol and other drugs, or spiritual or meditative techniques.

In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.

Embodied cognitive science

Embodied cognitive science is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments.

Embodied cognitive science borrows heavily from embodied philosophy and the related research fields of cognitive science, psychology, neuroscience and artificial intelligence. From the perspective of neuroscience, research in this field was led by Gerald Edelman of the Neurosciences Institute at La Jolla, the late Francisco Varela of CNRS in France, and J. A. Scott Kelso of Florida Atlantic University. From the perspective of psychology, research by Michael Turvey, Lawrence Barsalou and Eleanor Rosch. From the perspective of language acquisition, Eric Lenneberg and Philip Rubin at Haskins Laboratories. From the perspective of autonomous agent design, early work is sometimes attributed to Rodney Brooks or Valentino Braitenberg. From the perspective of artificial intelligence, see Understanding Intelligence by Rolf Pfeifer and Christian Scheier or How the body shapes the way we think, also by Rolf Pfeifer and Josh C. Bongard. From the perspective of philosophy see Andy Clark, Shaun Gallagher, and Evan Thompson.

Turing proposed that a machine may need a human-like body to think and speak:

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again, I do not know what the right answer is, but I think both approaches should be tried (Turing, 1950).

Explanatory gap

In philosophy of mind and consciousness, the explanatory gap is the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced. It is a term introduced by philosopher Joseph Levine. In the 1983 paper in which he first used the term, he used as an example the sentence, "Pain is the firing of C fibers", pointing out that while it might be valid in a physiological sense, it does not help us to understand how pain feels.

The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and caused considerable debate. Bridging this gap (that is, finding a satisfying mechanistic explanation for experience and qualia) is known as "the hard problem".To take an example of a phenomenon in which there is no gap, imagine a modern computer: as marvelous as these devices are, their behavior can be fully explained by their circuitry. By contrast, it is thought by many mind-body dualists (e.g. René Descartes, David Chalmers) that subjective conscious experience constitutes a separate effect that demands another cause that is either outside the physical world (dualism) or due to an as yet unknown physical phenomenon (see for instance quantum mind, indirect realism).

Proponents of dualism claim that the mind is substantially and qualitatively different from the brain and that the existence of something metaphysically extra-physical is required to "fill the gap". Similarly, some argue that there are further facts—facts that do not follow logically from the physical facts of the world—about conscious experience. For example, they argue that what it is like to experience seeing red does not follow logically from the physical facts of the world.

The nature of the explanatory gap has been the subject of some debate. For example, some consider it to simply be a limit on our current explanatory ability. They argue that future findings in neuroscience or future work from philosophers could close the gap. However, others have taken a stronger position and argued that the gap is a definite limit on our cognitive abilities as humans—no amount of further information will allow us to close it. There has also been no consensus regarding what metaphysical conclusions the existence of the gap provides. Those wishing to use its existence to support dualism have often taken the position that an epistemic gap—particularly if it is a definite limit on our cognitive abilities—necessarily entails a metaphysical gap.Levine and others have wished to either remain silent on the matter or argue that no such metaphysical conclusion should be drawn. He agrees that conceivability (as used in the Zombie and inverted spectrum arguments) is flawed as a means of establishing metaphysical realities; but he points out that even if we come to the metaphysical conclusion that qualia are physical, they still present an explanatory problem.

While I think this materialist response is right in the end, it does not suffice to put the mind-body problem to rest. Even if conceivability considerations do not establish that the mind is in fact distinct from the body, or that mental properties are metaphysically irreducible to physical properties, still they do demonstrate that we lack an explanation of the mental in terms of the physical.

However, such an epistemological or explanatory problem might indicate an underlying metaphysical issue—the non-physicality of qualia, even if not proven by conceivability arguments is far from ruled out.

In the end, we are right back where we started. The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.

At the core of the problem, according to Levine, is our lack of understanding of what it means for a qualitative experience to be fully comprehended. He emphasizes that we don't even know to what extent it is appropriate to inquire into the nature of this kind of experience. He uses the laws of gravity as an example, which laws seem to explain gravity completely yet do not account for the gravitational constant. Similarly to the way in which gravity appears to be an inexplicable brute fact of nature, the case of qualia may be one in which we are either lacking essential information or in which we're exploring a natural phenomenon that simply is not further apprehensible. Levine suggests that, as qualitative experience of a physical or functional state may simply be such a brute fact, perhaps we should consider whether or not it is really necessary to find a more complete explanation of qualitative experience.

Levine points out that the solution to the problem of understanding how much there is to be known about qualitative experience seems even more difficult because we also lack a way to articulate what it means for actualities to be knowable in the manner that he has in mind. He does conclude that there are good reasons why we wish for a more complete explanation of qualitative experiences. One very significant reason is that consciousness appears to only manifest where mentality is demonstrated in physical systems that are quite highly organized. This, of course, may be indicative of a human capacity for reasoning that is no more than the result of organized functions. Levine expresses that it seems counterintuitive to accept this implication that the human brain, so highly organized as it is, could be no more than a routine executor. He notes that although, at minimum, Materialism appears to entail reducibility of anything that is not physically primary to an explanation of its dependence on a mechanism that can be described in terms of physical fundamentals, that kind of reductionism doesn't attempt to reduce psychology to physical science. However, it still entails that there are inexplicable classes of facts which are not treated as relevant to statements pertinent to psychology.

Folklore studies

Folklore studies, also known as folkloristics, and occasionally tradition studies or folk life studies in Britain, is the formal academic discipline devoted to the study of folklore. This term, along with its synonyms, gained currency in the 1950s to distinguish the academic study of traditional culture from the folklore artifacts themselves. It became established as a field across both Europe and North America, in coordination with Volkskunde (German), folkeminner (Norwegian), and folkminnen (Swedish), among others.

Functional psychology

Functional psychology or functionalism refers to a psychological school of thought, a direct outgrowth of Darwinian thinking, that focuses attention on the utility and purpose of behavior as it has been shaped over the course of human existence. Edward L. Thorndike, best known for his experiments with trial-and-error learning, came to be known as the leader of the loosely defined movement. The movement arose in the U.S. in the late 19th century in direct contrast to Edward Titchener's structuralism, which focused on the contents of consciousness rather than the motives and ideals of human behavior. Functionalism rejects introspection as a method, on the grounds that it investigates the inner workings of human thinking rather than the biological processes underlying human consciousness.

While functionalism eventually became its own formal school, it built on structuralism's concern with the anatomy of the mind, shifted attention to the functions of the mind, and later gave way to the psychological approach of behaviorism.

Functionalism

Functionalism may refer to:

Functionalism (architecture), the principle that architects should design a building based on the purpose of that building

Functionalism in international relations, a theory that arose during the interwar period

Functionalism (philosophy of mind), a theory of the mind in contemporary philosophy

Functionalism versus intentionalism, a historiographical debate about the origins of the Holocaust

Structural functionalism, a theoretical tradition within sociology and anthropology

Biological functionalism, an anthropological paradigm

Index of philosophy of mind articles

This is a list of philosophy of mind articles.

Alan Turing

Alexius Meinong

Anomalous monism

Anthony Kenny

Arnold Geulincx

Association for the Scientific Study of Consciousness

Australian materialism

Baruch Spinoza

Biological naturalism

Brain in a vat

C. D. Broad

Chinese room

Conscience

Consciousness

Consciousness Explained

Critical realism (philosophy of perception)

Daniel Dennett

David Hartley (philosopher)

David Kellogg Lewis

David Malet Armstrong

Direct realism

Direction of fit

Disquisitions relating to Matter and Spirit

Donald Davidson (philosopher)

Dream argument

Dualism (philosophy of mind)

Duration (Bergson)

Edmund Husserl

Eliminative materialism

Embodied philosophy

Emergent materialism

Evil demon

Exclusion principle (philosophy)

Frank Cameron Jackson

Fred Dretske

Functionalism (philosophy of mind)

G. E. M. Anscombe

Georg Henrik von Wright

George Edward Moore

Gilbert Harman

Gilbert Ryle

Gottfried Leibniz

Hard problem of consciousness

Henri Bergson

Hilary Putnam

Idealism

Immaterialism

Indefinite monism

Instrumentalism

Internalism and externalism

Intuition pump

J. J. C. Smart

Jaegwon Kim

Jerry Fodor

John Perry (philosopher)

John Searle

Karl Popper

Kendall Walton

Kenneth Allen Taylor

Ludwig Wittgenstein

Mad pain and Martian pain

Mental property

Methodological solipsism

Michael Tye (philosopher)

Mind

Mind-body dichotomy

Monism

Multiple Drafts Model

Multiple realizability

Naming and Necessity

Naïve realism

Neurophenomenology

Neutral monism

Noam Chomsky

Parallelism (philosophy)

Personal identity

Phenomenalism

Philosophy of artificial intelligence

Philosophy of mind

Philosophy of perception

Physicalism

Pluralism (philosophy)

Privileged access

Problem of other minds

Property dualism

Psychological nominalism

Qualia

Reflexive monism

René Descartes

Representational theory of mind

Richard Rorty

Ron McClamrock

Self (philosophy)

Society of Mind

Solipsism

Stephen Stich

Subjective idealism

Supervenience

Sydney Shoemaker

Tad Schmaltz

The Concept of Mind

The Meaning of Meaning

Thomas Nagel

Turing test

Type physicalism

Unconscious mind

Wilfrid Sellars

William Hirstein

William James

Interactionism (philosophy of mind)

Interactionism or interactionist dualism is the theory in the philosophy of mind which holds that matter and mind are two distinct and independent substances that exert causal effects on one another. It is one type of dualism: traditionally a type of substance dualism, though more recently it has also sometimes been formulated as a form of property dualism.

Jerry Fodor

Jerry Alan Fodor (April 22, 1935 – November 29, 2017) was an American philosopher and cognitive scientist. He held the position of State of New Jersey Professor of Philosophy, Emeritus, at Rutgers University and was the author of many works in the fields of philosophy of mind and cognitive science, in which he laid the groundwork for the modularity of mind and the language of thought hypotheses, among other ideas. He was known for his provocative and sometimes polemical style of argumentation and as "one of the principal philosophers of mind of the late twentieth and early twenty-first century. In addition to having exerted an enormous influence on virtually every portion of the philosophy of mind literature since 1960, Fodor's work has had a significant impact on the development of the cognitive sciences."

Fodor argued that mental states, such as beliefs and desires, are relations between individuals and mental representations. He maintained that these representations can only be correctly explained in terms of a language of thought (LOT) in the mind. Furthermore, this language of thought itself is an actually existing thing that is codified in the brain and not just a useful explanatory tool. Fodor adhered to a species of functionalism, maintaining that thinking and other mental processes consist primarily of computations operating on the syntax of the representations that make up the language of thought.
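Fodor's claim that mental processes operate on the syntax of representations can be illustrated with a small sketch. The following Python toy (all names here are invented for this example; it is not Fodor's own formalism) derives new representations purely by matching structural shape, never by consulting what the symbols mean:

```python
# Toy illustration of computation over the *syntax* of structured
# representations, in the spirit of a language-of-thought story.
# Class and rule names are invented for this sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str           # an unstructured symbol, e.g. "it_rains"

@dataclass(frozen=True)
class If:
    antecedent: object  # any representation
    consequent: object  # any representation

def modus_ponens(beliefs):
    """Close a set of representations under a purely syntactic rule.

    The rule fires whenever a representation of shape If(p, q)
    co-occurs with p itself; the symbols' meanings play no role.
    """
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if isinstance(b, If) and b.antecedent in derived \
                    and b.consequent not in derived:
                derived.add(b.consequent)
                changed = True
    return derived

# From "if it rains, the street is wet" and "it rains", the system
# tokens a new representation, "the street is wet".
beliefs = {If(Atom("it_rains"), Atom("street_is_wet")), Atom("it_rains")}
print(modus_ponens(beliefs))
```

The point of the sketch is that the inference rule is defined entirely over representational form: replacing the atom names with gibberish would not change which conclusions get drawn.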

For Fodor, significant parts of the mind, such as perceptual and linguistic processes, are structured in terms of modules, or "organs", which he defines by their causal and functional roles. These modules are relatively independent of each other and of the "central processing" part of the mind, which has a more global and less "domain specific" character. Fodor suggests that the character of these modules permits the possibility of causal relations with external objects. This, in turn, makes it possible for mental states to have contents that are about things in the world. The central processing part, on the other hand, takes care of the logical relations between the various contents and inputs and outputs.

Although Fodor originally rejected the idea that mental states must have a causal, externally determined aspect, in his later years he devoted much of his writing and study to the philosophy of language because of this problem of the meaning and reference of mental contents. His contributions in this area include the so-called asymmetric causal theory of reference and his many arguments against semantic holism. Fodor strongly opposed reductive accounts of the mind. He argued that mental states are multiply realizable and that there is a hierarchy of explanatory levels in science such that the generalizations and laws of a higher-level theory of psychology or linguistics, for example, cannot be captured by the low-level explanations of the behavior of neurons and synapses. He also emerged as a prominent critic of what he characterized as the ill-grounded Darwinian and neo-Darwinian theories of natural selection.

Knowledge argument

The knowledge argument (also known as Mary's room or Mary the super-scientist) is a philosophical thought experiment proposed by Frank Jackson in his article "Epiphenomenal Qualia" (1982) and extended in "What Mary Didn't Know" (1986). The experiment is intended to argue against physicalism—the view that the universe, including all that is mental, is entirely physical. The debate that emerged following its publication became the subject of an edited volume—There's Something About Mary (2004)—which includes replies from such philosophers as Daniel Dennett, David Lewis, and Paul Churchland.

Lawrence Shapiro

Lawrence Shapiro is a professor in the Department of Philosophy at the University of Wisconsin–Madison in the United States. His research focuses on the philosophy of psychology. He also works in the philosophy of mind and the philosophy of biology.

Mad pain and Martian pain

"Mad Pain and Martian Pain" is a philosophical article written by David Kellogg Lewis. Lewis argued, that a theory of pain must be able to reflect the most basic intuitions of both functionalism and identity theory. Because of such, he proposes the existence of two beings both in pain – one whose physical explanation of pain differs from ours and one whose reaction to pain differs from ours. Lewis states that any complete theory of the mind should be able to explain how each being is in pain.

Models of Consciousness

Models of consciousness are used to illustrate and aid in understanding and explaining distinctive aspects of consciousness. Sometimes the models are labeled theories of consciousness. Anil Seth defines such models as those that relate brain phenomena such as fast irregular electrical activity and widespread brain activation to properties of consciousness such as qualia. Seth allows for different types of models including mathematical, logical, verbal and conceptual models.

Philosophical zombie

The philosophical zombie or p-zombie argument is a thought experiment in the philosophy of mind and the philosophy of perception that imagines a being whose mere conceivability is taken to disprove the idea that physical stuff is all that is required to explain consciousness. Such a zombie would be indistinguishable from a normal human being but lack conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain. The argument sometimes takes the form of hypothesizing a zombie world, indistinguishable from our world but lacking first-person experiences in any of the beings of that world.

Philosophical zombie arguments are used in support of mind-body dualism against forms of physicalism such as materialism, behaviorism and functionalism. The zombie argument is directed against the idea that the "hard problem of consciousness" (accounting for subjective, intrinsic, first-person, what-it's-like-ness) could be answered by purely physical means. Proponents of the argument, such as philosopher David Chalmers, argue that since a zombie is defined as physiologically indistinguishable from human beings, even its logical possibility would be a sound refutation of physicalism, because it would establish the existence of conscious experience as a further fact. However, physicalists like Daniel Dennett counter that philosophical zombies are logically incoherent and thus impossible.
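The reasoning above is often given a modal formalization along roughly the following lines (a standard reconstruction in the literature; the notation is supplied here for illustration). Let P be the conjunction of all physical truths and Q some truth about conscious experience:

\[
\begin{aligned}
&1.\quad \Diamond\,(P \land \lnot Q) &&\text{(a zombie world is conceivable, hence possible)}\\
&2.\quad \text{physicalism} \;\rightarrow\; \Box\,(P \rightarrow Q) &&\text{(the physical truths necessitate the phenomenal truths)}\\
&3.\quad \therefore\ \lnot\,\text{physicalism}
\end{aligned}
\]

Since premise 1 is equivalent to \(\lnot\Box(P \rightarrow Q)\), the two premises jointly yield the conclusion. On this reconstruction, Dennett-style objections target premise 1: if zombies are logically incoherent, the step from conceivability to possibility never gets started.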

Property dualism

Property dualism describes a category of positions in the philosophy of mind which hold that, although the world is composed of just one kind of substance (the physical kind), there exist two distinct kinds of properties: physical properties and mental properties. In other words, it is the view that non-physical, mental properties (such as beliefs, desires and emotions) inhere in or supervene upon certain physical substances (namely brains). As a doctrine, property dualism posits a duality of properties rather than of substances.

Substance dualism, on the other hand, is the view that there exist in the universe two fundamentally different kinds of substance: physical (matter) and non-physical (mind or consciousness), and consequently also two kinds of properties which inhere in those respective substances. Substance dualism is therefore more directly exposed to the mind-body problem. Both substance and property dualism are opposed to reductive physicalism. As a doctrine, substance dualism posits a duality of substances, not merely of properties.

Supervenience

In philosophy, supervenience refers to a relation between sets of properties or sets of facts. X is said to supervene on Y if and only if some difference in Y is necessary for any difference in X to be possible. Equivalently, X is said to supervene on Y if and only if X cannot vary unless Y varies. Here are some examples.

Whether there is a table in the living room supervenes on the positions of molecules in the living room.

The truth value of (A) supervenes on the truth value of (¬A).

Molecular properties supervene on atomic properties.

The quality of Nixon’s moral character supervenes on how he is disposed to act.

These are examples of supervenience because in each case the truth values of some propositions cannot vary unless the truth values of some other propositions vary.
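The definition above admits a compact modal statement (a standard formalization; the notation is supplied here for illustration). Writing \(w \sim_A w'\) to mean that situations w and w' are exactly alike with respect to their A-properties:

\[
X \text{ supervenes on } Y \;\iff\; \Box\,\forall w\,\forall w'\,\big(w \sim_Y w' \;\rightarrow\; w \sim_X w'\big).
\]

The contrapositive, \(w \not\sim_X w' \rightarrow w \not\sim_Y w'\), is exactly the informal gloss used above: there can be no difference in X without a difference in Y.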

Supervenience is of interest to philosophers because it differs from other nearby relations, for example entailment. Some philosophers believe it possible for some A to supervene on some B without being entailed by B. In such cases it may seem puzzling why A should supervene on B and equivalently why changes in A should require changes in B. Two important applications of supervenience involve cases like this. One of these is the supervenience of mental properties (like the sensation of pain) on physical properties (like the firing of ‘pain neurons’). A second is the supervenience of normative facts (facts about how things ought to be) on natural facts (facts about how things are).

These applications are elaborated below. But an illustrative note bears adding here. It is sometimes claimed that what is at issue in these problems is the supervenience claim itself; for example, that what is at issue with respect to the mind-body problem is whether mental phenomena do in fact supervene on physical phenomena. This is incorrect. It is by and large agreed that some form of supervenience holds in these cases: pain happens when the appropriate neurons fire. The disagreement is over why this is so. Materialists claim that we observe supervenience because the neural phenomena entail the mental phenomena, while dualists deny this. The dualist’s challenge is to explain supervenience without entailment.

The problem is similar with respect to the supervenience of normative facts on natural facts. It is agreed that facts about how persons ought to act are not entailed by natural facts but cannot vary unless natural facts vary, and this rigid binding without entailment might seem puzzling.

The possibility of "supervenience without entailment" or "supervenience without reduction" is contested territory among philosophers.

