Meaning and World Order
Introduction
Fodor is aiming for a naturalization of meaning. Why:
- If the semantic and the intentional are real properties of things [re: he needs both for the RTM], it must be in virtue of their identity with (or maybe their supervenience on?) properties that are themselves neither intentional nor semantic. If aboutness is real, it must really be something else.
- Motivation for intentional irrealism: the ontology of a physicalist view of the world has no place for intentional categories.
Problems encountered with a naturalistic account of meaning:
(1) It is assumed that what an explanation of the attitudes needs to account for is their semantic evaluability.
a. Conditions for semantic evaluation of an attitude: fix a context for the tokenings of certain symbols.
But what is it to fix a context for a system of mental representations?
b. It is necessary to fix an interpretation (of the primitive nonlogical vocabulary), from which we can then construct a truth definition
‘In short: Given RTM, the intentionality of the attitudes reduces to the content of mental representations. Given a truth definition, the content of mental representations is determined by the interpretation of their primitive nonlogical vocabulary.’ (p98)
Problem: specifying the interpretation and content (Twin Earth, brisket examples).
Aim of chapter 4: - Minimum: to answer some objections to the causal theory.
- Maximum: to show how the causal theory would pan out fully.
Ground Rules:
I want a naturalized theory of meaning; a theory that articulates, in nonsemantic and nonintentional terms, sufficient conditions for one bit of the world to be about (to express, represent, or be true of) another bit. I don’t care – not just now at least – whether this theory holds for all symbols or for all things that represent.
In other words: semantic evaluation of mental representations need not be universalisable to the rest of the world. So smoke (sign) may indicate fire (source) without being about anything, whereas the mental representation ‘dog’ will be about the source, dogs.
The Crude Causal Theory
A necessary and sufficient condition for reliable causation is the nomological relation (same as nomic? he seems to use the terms interchangeably) between the property of being an instance of horse and the property of being a tokening of the symbol ‘horse’.
Linguistic acts are not dealt with in the CCT. Example on page 99: John sees that Mary’s hair is on fire and tokens ‘Mary’s hair is on fire’, but he is not obliged to utter it. There may be many reasons to say X or to refrain from saying X; the relation between mental representations and linguistic acts is nomic but not necessary (i.e. contingent, though lawful), i.e.:
Cause →(1) mental representation →(2) linguistic act. The connection at arrow (2) is causal, but not necessary.
So for the CCT: ‘a symbol expresses a property if it’s nomologically necessary that all and only instances of the property cause tokenings of the symbol.’
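A rough schematic rendering of the CCT (my own formalization, not Fodor’s notation; read the box as ‘it is nomologically necessary that’):

\[
\text{‘horse’ expresses the property } \textit{horse} \;\iff\; \Box\,\forall x\,\big(x \text{ is a horse} \;\leftrightarrow\; x \text{ causes a ‘horse’ tokening}\big)
\]

The left-to-right half of the biconditional is the ‘all’ condition; the right-to-left half is the ‘only’ condition.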
However, it is precisely with the ‘all’ and ‘only’ conditions of the CCT that it meets potentially knock-down objections. Fodor uses the following method:
- To deal with the ‘only’ part, assume that all horses cause ‘horses’; the conclusion reached is ‘that the causal theory doesn’t need to require that only horses do, consonant with ‘horse’ meaning HORSE’.
- To deal with the ‘all’ part, Fodor reaches the conclusion that ‘horse’ can mean HORSE even though neither all horses cause ‘horses’ nor only horses do.
Misrepresentation (the disjunction problem)
A → ‘A’ (→ = tokens); this tokening is veridical (true).
Obvious reply: sometimes: (1) A → ‘A’
(2) (B → ‘A’) & (B ≠ A)
(2) is not veridical; it is therefore a misrepresentation.
Problem: but then the relation is not A → ‘A’ but (3) AvB → ‘A’, in order to maintain the nomic relation.
A viable causal theory needs to admit both (1) and (3). Under (1), B → ‘A’ is a misrepresentation; under (3), B → ‘A’ is true. The CCT is unable to distinguish the two.
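One way to lay the dilemma out schematically (a restatement in the spirit of the above, not Fodor’s own wording): the CCT must assign ‘A’ one of two covering laws,

\[
\Box\,\forall x\,\big(x \text{ is } A \leftrightarrow x \text{ causes ‘A’}\big)
\qquad\text{or}\qquad
\Box\,\forall x\,\big(x \text{ is } A \lor B \leftrightarrow x \text{ causes ‘A’}\big).
\]

On the first, B-caused ‘A’s are errors but the law itself is false (since B’s do cause ‘A’s); on the second, the law is true but B-caused ‘A’s come out veridical, so no misrepresentation is possible. The CCT gives no principled way to choose.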
Solutions
(1) Dretske’s learning-period solution: after the learning period, tokenings of ‘F’ ‘can be triggered by signals that lack the appropriate piece of information’. This leads to meaning without truth. (p. 102)
Problem: ‘If Dretske insists upon the learning-period gambit, he thereby limits the applicability of his notion of misrepresentation to learned symbols.’ For Fodor this means there is no way for innate information to be false. For Dretske’s theory of information it indicates ‘the dichotomy between natural representation…and the intentionality of mental states’.
Learning-period gambit (the arrows in the diagram below are information arrows, rather than the conditional arrow):
Learning period – misrep.: A ← ‘A’; B → ‘A’; AvB → ‘A’
Learning period – not misrep.: A → ‘A’; AvB → ‘B’
(2) Teleological Solution: Assume causal paths A → ‘A’ and B → ‘A’; ‘A’ is caused by A and not by B in the ‘optimal’ circumstances. ‘…appeals to optimality should be buttressed by appeals to teleology: optimal circumstances are the ones in which mechanisms that mediate symbol tokenings are functioning ‘as they are supposed to’.’ By this Fodor means the mechanisms of belief fixation.
Problem: for the account to work, the mechanisms of belief fixation must be designed to deliver truths, but they could equally be designed to repress truths.
Problem: optimal conditions vary (see the psychophysics section – he seems to argue against optimality here but retains it later for psychophysics; does anyone have an explanation as to how or why Fodor does this?). Consequently, to say which conditions are optimal for the fixation of a belief, we would need to know the content of the belief. In other words, optimality requires a theory of content, which is an unstable result (content was supposed to come out of the account, not to be presupposed by it).
(Possible reply: yes, the optimal conditions do vary, but perhaps that is as it should be, since it seems impossible to quantify over types of objects; the content does not have to be fixed, only the method for determining the conditions which give rise to the content needs to be fixed.)
‘We need a way to break the symmetry between A-caused ‘A’ tokenings (which are, by hypothesis, true) and B-caused ‘A’ tokenings (which are, by hypothesis, false).’ Re: this must be expressed in nonintentional and nonsemantic terms. (p. 106)
Aside: Is what Fodor says about truth and falsity of belief correct? He says: ‘you can only have false beliefs about what you can have true beliefs about (whereas you can have true beliefs about anything you can have beliefs about at all).’ But surely it is possible to negate any belief, and ex hypothesi you can just negate true beliefs and end up with the different conclusion that you can have false beliefs about anything at all. Or is it something like this (belief/world mapping):
belief: A = true; world: A is in fact false, so it is not out there.
belief: A = false; world: A is in fact true, so it is part of the ontology.
For misrepresentation we need different semantic connections for AvB-caused ‘A’ tokenings. If Fodor can justify saying that in some way A → ‘A’ but not B → ‘A’, without compromising the causal connections which are central to the CCT, then he has solved the disjunction problem.
His solution: Asymmetric Dependence
Diagram: horses (= A) → ‘horse’ (= ‘A’) ← cows (= B); both horses and cows cause ‘horse’ tokenings.
Fodor’s way to deal with the disjunction problem, then, is to say: ‘misidentifying a cow as a horse wouldn’t have led me to say ‘horse’ except that there was independently a semantic relation between ‘horse’ tokenings and horses’.
What does this really say? Well, given the causal arrows we still have a disjunction – AvB can cause ‘A’ – but it is in virtue of the semantic relation that A to ‘A’ is the correct connection, rather than B to ‘A’. When B’s cause ‘A’s, Fodor calls these tokenings ‘wild’.
Asymmetric dependence must be synchronic: ‘my present disposition to apply ‘horse’ to horses does not depend on any corresponding current disposition to apply it to cows.’ (Re: synchronic – concerned with events existing in a limited time period and ignoring historical antecedents. Merriam Webster’s Collegiate Dictionary)
The asymmetric dependence is necessary for B → ‘A’ tokenings to be wild. With a genuine disjunction AvB → ‘A’ there is instead a symmetric dependence. Question: are asymmetric and symmetric dependencies necessary and sufficient conditions to solve the disjunction problem?
To sum up:
Wildness: B → ‘A’ tokenings are wild only if they are asymmetrically dependent on the causation of ‘A’ tokenings by non-B’s.
Disjunction: ‘A’ expresses AvB when the A-caused and B-caused ‘A’ tokenings are symmetrically dependent.
Ambiguity: ‘A’ means A and ‘A’ means B – symmetric independence.
Conclusion on the ‘only’ clause: we need to revise the clause: not ‘only A’s cause ‘A’s’, but ‘‘A’ tokenings caused by non-A’s depend asymmetrically upon ‘A’ tokenings caused by A’s’.
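A minimal counterfactual gloss on the revised ‘only’ clause (an interpretive sketch; Fodor states the condition as a dependence among laws, so treat the counterfactuals below as one way of cashing it out, not as his own formulation):

\[
\begin{aligned}
&(B \to \text{‘A’}) \text{ depends asymmetrically on } (A \to \text{‘A’}) \text{ iff:}\\
&\quad \text{(i) if A’s did not cause ‘A’s, then B’s would not cause ‘A’s; and}\\
&\quad \text{(ii) it is not the case that, if B’s did not cause ‘A’s, A’s would not cause ‘A’s.}
\end{aligned}
\]

If the counterfactual holds in both directions, the dependence is symmetric and ‘A’ expresses AvB (disjunction); if it holds in neither direction, the connections are independent and ‘A’ is ambiguous between A and B – matching the summary above.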
Remember, a naturalistic account cannot use intentional or semantic vocabulary. So in naturalistic terms it must be the case that:
(a) Instantiations of horse would cause ‘horse’ to be tokened in my belief box were the circumstances to obtain.
(b) ‘horse’ expresses the property horse (in my idiolect of Mentalese) in virtue of the truth of (a).
Main problem: specifying the circumstances referred to in (a). The rest of the chapter deals with this issue, and whether a satisfactory answer is reached is doubtful (Fodor himself seems to admit as much). It does seem that for most concepts the relevant circumstances can only be specified in semantic and intentional terms, which would violate one of the initial conditions for naturalizing content. Fodor aims to deal with this on the psychophysical basis.
Psychophysics: ‘the scientific study of relationships between physical stimuli and perceptual phenomena. For example, in the case of vision, one can quantify the influence of the physical intensity of a spot of light on its detectability, or the influence of its wavelength on its perceived hue.’ (The MIT Encyclopedia of the Cognitive Sciences).
‘Psychophysics is the science that tells us how the content of an organism’s belief box varies with the values of certain physical parameters in its local environment. And it does so in nonintentional, nonsemantical vocabulary: in the vocabulary of wavelengths, candlepowers…’ (p. 113)
In short: the idea is that only particular concepts (such as the colour red) actually fulfil the ‘all’ condition. If the lighting, positioning and hue are right, your eyes are open, and you are not colour blind, then you will (without fail) ‘receive’ the mental representation ‘RED’ from the red wall. The problem arises for the many other objects in the world, such as horses and protons.
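Schematically, the psychophysical claim for a concept like RED might be put as follows (my restatement of the ‘all’ condition for this special case; the listed conditions are illustrative, not Fodor’s official list):

\[
\forall x\,\big[\,\text{Red}(x) \;\wedge\; C(o,x) \;\rightarrow\; \text{‘RED’ is tokened in } o\text{’s belief box}\,\big]
\]

where C(o, x) collects psychophysically specifiable conditions on the observer o and the object x (lighting, position, eyes open and directed at x, functioning colour vision). The point is that C can be stated in nonintentional, nonsemantic vocabulary and that, given C, the tokening is exceptionless; nothing comparable is available for HORSE or PROTON.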
[Psychophysics need not fix beliefs: it enunciates sufficient conditions for the fixation of appearances (p. 113) – what does this mean?]
Possibilities:
(1) Reduction (to show that all concepts are logical constructions of a certain set of concepts).
Something like: any Mentalese formula that can express an intentional content at all is equivalent to some Mentalese formula all of whose nonlogical vocabulary is observational. In short: everything can be reduced to sensory concepts.
Problem: proton and horse are not concepts that can be reduced to sensory concepts; they are not ‘the concepts of a set of actual or possible experiences’ (p. 115).
(2) Psychophysical Imperialism
The idea is that horses must cause ‘horse’ tokenings whenever there is an observer on the spot.
But: psychophysics cannot guarantee intentional content – i.e. it does not guarantee that you’ll see it as a horse. Why is red a psychophysical concept then? The difference between seeing something red and seeing something as red vanishes when the point of view is psychophysically optimal. Only concepts for which this holds can be accounted for by psychophysics. Therefore psychophysics cannot be extended to provide a base for the rest of Mentalese.
(3) A Demure Foundationalism
We want circumstances such that:
(1) They are naturalistically specifiable;
(2) Horses (/protons) reliably cause ‘horses’ (/‘protons’) in those circumstances;
(3) It’s plausible that ‘horse’ (/‘proton’) expresses horse (/ proton) because it’s the case that (2).
Fodor’s strategy: first flout (1), in order to show that (2) and (3) hold; he then aims to establish (1).
Dobbin (not psychophysical) → horsey look. Horsey look → psychophysical properties (e.g. colour, shape, size, smell, etc.).
The psychophysical properties of proton are more difficult to establish. We do it through experimental means, i.e. on the basis of changes to the photographic plate, cloud chamber, etc. The main indicator of psychophysical properties in this case is covariation.
Diagram: proton → psychophysical properties → ‘proton’; the arrows are covariation arrows.
So now: we require only that instances of proton affect the belief box iff:
(a) They take place in an ‘experimental environment’, i.e. they are causally responsible for the instantiation of psychophysical properties; and
(b) The experimental environment is viewed by an observer who is in an optimal psychophysical position with respect to that environment.
Conclusion: Not all instantiations of proton have to be in the belief box.
Problem: this opens a gap between the concept and the psychophysical properties associated with it. Protons themselves are still not admitted to the belief box; only their covarying (psychophysical) properties are.
Benefit of this approach: it is nonintentional and nonsemantic. All a natural semantics needs is that the causal control should actually obtain, however it is mediated.
Reaching Foundationalism: ‘for purposes of semantic naturalization, it’s the existence of a reliable mind/world correlation that counts, not the mechanisms by which that correlation is effected.’
(a) Nomologically sufficient and semantically relevant conditions for tokening are specifiable ‘purely externally’.
(b) Reliable chain: horse in w → horsey look in w → ‘horse’ in belief box.
Random points:
(1) Theories or background beliefs function in ‘fixing the semantics of mental representations’ through computations.
(2) Conditions in psychophysics are sufficient, but not necessary.
(3) ‘What we want to say for ‘proton’ meaning proton is that there be at least one kind of environment in which there are psychophysical traces of protons which, when detected, cause the tokening of ‘proton’ in the belief box.’
(4) Could still have the concept PROTON by reliably connecting ‘protons’ with protons via psychophysical traces (accounts for false theories).
Even though in psychophysics a concept can at times be connected with other concepts (e.g. water/cat), this does not make it holistic: WATER does not mean water in virtue of its relation with CAT, but in virtue of its covariance with water in appropriate (optimal) circumstances.
Conclusion:
For (1) read: ‘All instances of A’s cause ‘A’s when (i) the A’s are causally responsible for psychophysical traces to which (ii) the organism stands in a psychophysically optimal relation.’
For (2) read: ‘If non-A’s cause ‘A’s, then their doing so is asymmetrically dependent upon A’s causing ‘A’s’
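Read together, the two revised clauses yield something like the following sufficient condition (a gloss of my own, with Trace and Opt as shorthand that is not Fodor’s):

\[
\text{‘A’ expresses } A \text{ if: } \;
\Box\,\forall x\,\big[\big(Ax \wedge \text{Trace}(x) \wedge \text{Opt}(o,x)\big) \rightarrow x \text{ causes ‘A’}\big]
\;\wedge\;
\forall B \neq A\,\big[(B \to \text{‘A’}) \text{ is asymmetrically dependent on } (A \to \text{‘A’})\big]
\]

where Trace(x) says that x is causally responsible for psychophysical traces and Opt(o, x) says that the organism o stands in a psychophysically optimal relation to those traces.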
Benefits of the SLCCTC (Slightly Less Crude Causal Theory of Content):
- It provides a sufficient condition for one part of the world to be semantically related to another part (e.g. a certain mental representation expresses a certain property).
- This is done nonintentionally, nonsemantically, nonteleologically and in ‘generally non-question-begging vocabulary’.
- It is reasonably plausible.
Open for discussion:
As mentioned above, Fodor uses the notion of optimality in objecting to the teleological solution of the disjunction problem, yet uses it to expound a (possible) solution under the heading of the psychophysical basis. Why does he do this, and is he correct in making the distinction?
Optimality in the teleological solution vs. optimality in psychophysics:
- Clause addressed: the teleological approach concerns the ‘only’ clause; psychophysics concerns the ‘all’ clause.
- Role: in the teleological solution optimality is used to fix beliefs (intentional); in psychophysics, computational properties arise in psychophysically optimal circumstances – they do not decide beliefs, but decide what to have beliefs about (computations are the intermediary between input and belief box).
- Truth: to suit a theory of mental representation, optimality should fix only truths, yet on the teleological account it may at times be beneficial to report falsehoods; in psychophysically optimal conditions, by contrast, the resulting belief tokenings will be true.
- Conditions: in the teleological solution the conditions are variable; in psychophysics they are not – if you detect a substance in an experiment by a particular means, you will always use that means.