Coherence: The Price is Right

Paul Thagard

Philosophy Department

University of Waterloo

Waterloo, Ontario, N2L 3G1

pthagard@watarts.uwaterloo.ca

First draft, Feb. 10, 2000. DO NOT QUOTE.

As Elijah Millgram reports, the idea of coherence has received widespread use in many areas of philosophy.(1) He argues that coherentist approaches have suffered from a woeful lack of specification of what coherence is and of how we can tell whether one theory in science, or ethics, or everyday life is more coherent than another. The exception he recognizes is the computational treatment of coherence problems that Karsten Verbeurgt and I have developed.(2) On this account, a coherence problem consists of a set of elements connected by positive and negative constraints, and a solution consists of partitioning the elements into two sets (accepted and rejected) in a way that maximizes satisfaction of the constraints. Algorithms have been developed that efficiently compute coherence by maximizing constraint satisfaction.

Millgram, however, doubts that this characterization of coherence is fully adequate for philosophical purposes. His main objection is that it is not appropriate for epistemology because it provides no guarantee that the most coherent theory will be true. He contends that if a philosopher wants to invoke coherence, the price of the ticket is to provide a specification as concrete and precise as the constraint-satisfaction account, but he surmises that alternative characterizations will have flaws similar to the ones he thinks he has identified.

I will argue that the constraint-satisfaction account of coherence is not at all flawed in the ways that Millgram describes, and that it in fact satisfies the philosophical, computational, and psychological prerequisites for the development of epistemological and ethical theories. Not only have Verbeurgt and I paid the price of the ticket; it is the right price.

First it is necessary to clear up some confusions about the relation between the general characterization of coherence and the more particular coherence theories that can be applied to philosophical problems. Verbeurgt and I gave the following definition of the general problem of coherence:

COHERENCE. Let E be a finite set of elements {ei} and C be a set of constraints on E understood as a set {(ei, ej)} of pairs of elements of E. C divides into C+, the positive constraints on E, and C-, the negative constraints on E. With each constraint is associated a number w, which is the weight (strength) of the constraint. The problem is to partition E into two sets, A (accepted) and R (rejected), in a way that maximizes compliance with the following two coherence conditions:

1. if (ei, ej) is in C+, then ei is in A if and only if ej is in A.

2. if (ei, ej) is in C-, then ei is in A if and only if ej is in R.

Let W be the weight of the partition, that is, the sum of the weights of the satisfied constraints. The coherence problem is then to partition E into A and R in a way that maximizes W.
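To make this definition concrete, here is a minimal sketch in Python (an illustration with hypothetical names such as partition_weight; it is not part of the published account). It computes W for a given partition and finds a maximizing partition by exhaustive search over all 2^n divisions of E into A and R:

from itertools import product

def partition_weight(accepted, pos, neg):
    # W: the sum of the weights of the satisfied constraints.
    # A positive constraint (ei, ej) is satisfied when ei and ej are
    # both accepted or both rejected (condition 1); a negative
    # constraint is satisfied when exactly one is accepted (condition 2).
    w = 0.0
    for (ei, ej), weight in pos.items():
        if (ei in accepted) == (ej in accepted):
            w += weight
    for (ei, ej), weight in neg.items():
        if (ei in accepted) != (ej in accepted):
            w += weight
    return w

def maximize_coherence(elements, pos, neg):
    # Exhaustive search over all 2^n partitions of E into A and R.
    # This exactness is what cannot scale: the intractability result
    # discussed below is the reason practical algorithms approximate.
    best_w, best_a = float("-inf"), None
    for bits in product([False, True], repeat=len(elements)):
        accepted = {e for e, b in zip(elements, bits) if b}
        w = partition_weight(accepted, pos, neg)
        if w > best_w:
            best_w, best_a = w, accepted
    return best_a, best_w

# A toy problem (the labels are suggestive only): h1 is positively
# constrained with e1 and e2, a rival h2 with e1 alone, h1 and h2 are
# negatively constrained, and a special unit OBS gives e1 and e2 a
# degree of priority.
elements = ["OBS", "h1", "h2", "e1", "e2"]
pos = {("h1", "e1"): 1.0, ("h1", "e2"): 1.0, ("h2", "e1"): 1.0,
       ("OBS", "e1"): 0.5, ("OBS", "e2"): 0.5}
neg = {("h1", "h2"): 2.0}
accepted, w = maximize_coherence(elements, pos, neg)
# accepted == {"OBS", "h1", "e1", "e2"}, w == 5.0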

By itself, this characterization has no philosophical or psychological applications, because it does not state the nature of the elements, the nature of the constraints, or the algorithms to be used to maximize satisfaction of the constraints. In my new book, I propose that there are six main kinds of coherence: explanatory, deductive, conceptual, analogical, perceptual, and deliberative, each with its own array of elements and constraints.(3) Once these elements and constraints are specified, the algorithms that solve the general coherence problem can be used to compute coherence in ways that apply to philosophical problems. Epistemic coherence is a combination of the first five kinds of coherence, and ethics involves deliberative coherence as well.

Millgram's main objection to coherence as constraint satisfaction concerns the acceptance of scientific theories, which is largely a matter of explanatory coherence.(4) The theory of explanatory coherence is stated informally by the following principles:

Principle E1. Symmetry. Explanatory coherence is a symmetric relation, unlike, say, conditional probability. That is, two propositions p and q cohere with each other equally.

Principle E2. Explanation. (a) A hypothesis coheres with what it explains, which can either be evidence or another hypothesis; (b) hypotheses that together explain some other proposition cohere with each other; and (c) the more hypotheses it takes to explain something, the lower the degree of coherence.

Principle E3. Analogy. Similar hypotheses that explain similar pieces of evidence cohere.

Principle E4. Data priority. Propositions that describe the results of observations have a degree of acceptability on their own.

Principle E5. Contradiction. Contradictory propositions are incoherent with each other.

Principle E6. Competition. If P and Q both explain a proposition, and if P and Q are not explanatorily connected, then P and Q are incoherent with each other. (P and Q are explanatorily connected if one explains the other or if together they explain something.)

Principle E7. Acceptance. The acceptability of a proposition in a system of propositions depends on its coherence with them.

I will not elucidate these principles here, but merely note how they add content to the abstract specification of coherence as constraint satisfaction. For explanatory coherence, the elements are propositions, and the negative constraints are between propositions that are incoherent with each other, either because they contradict each other or because they compete to explain some other proposition. Positive constraints involve propositions that cohere with each other, linking hypotheses with evidence and also tying together hypotheses that together explain some other proposition. In addition, there is a positive constraint encouraging but not demanding the acceptance of propositions describing the results of observation. Explanatory coherence can be computed using a variety of algorithms that maximize constraint satisfaction, including ones using artificial neural networks.
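To illustrate how these principles generate the input to such algorithms, the following sketch continues the illustrative Python above; the weights and the simplified connectedness test are my own choices, not canonical values (E3, analogy, is omitted for brevity, and E1, symmetry, is implicit in the unordered pairs):

def explanatory_constraints(explanations, contradictions, data,
                            evidence_unit="OBS"):
    # explanations: a list of (hypotheses, explained) pairs, where
    #   `hypotheses` is the tuple of propositions that together
    #   explain `explained`.
    # contradictions: pairs of contradictory propositions (E5).
    # data: observation propositions, each tied by a positive
    #   constraint to a special evidence unit, implementing data
    #   priority (E4).
    pos, neg = {}, {}

    def add(cons, pair, w):
        key = tuple(sorted(pair))
        cons[key] = cons.get(key, 0.0) + w

    for hyps, explained in explanations:
        w = 1.0 / len(hyps)               # E2(c): more hypotheses, less coherence
        for h in hyps:
            add(pos, (h, explained), w)   # E2(a)
        for i, h1 in enumerate(hyps):
            for h2 in hyps[i + 1:]:
                add(pos, (h1, h2), w)     # E2(b)
    for p, q in contradictions:
        add(neg, (p, q), 2.0)             # E5
    # E6: hypotheses that explain the same proposition without being
    # explanatorily connected compete. (Simplification: only
    # co-explanation counts as connectedness here.)
    explainers = {}
    for hyps, explained in explanations:
        explainers.setdefault(explained, []).append(set(hyps))
    connected = {tuple(sorted((h1, h2)))
                 for hyps, _ in explanations
                 for h1 in hyps for h2 in hyps if h1 != h2}
    for rivals in explainers.values():
        for i, s1 in enumerate(rivals):
            for s2 in rivals[i + 1:]:
                for h1 in s1:
                    for h2 in s2:
                        if h1 != h2 and tuple(sorted((h1, h2))) not in connected:
                            add(neg, (h1, h2), 1.0)
    for d in data:
        add(pos, (evidence_unit, d), 0.5)  # E4
    return pos, neg

The resulting dictionaries of positive and negative constraints can be handed directly to maximize_coherence above, or to any approximation algorithm.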

Now we can consider Millgram's argument that "coherentism does not make a whole lot of sense as an approach to science." (p. 90) To evaluate this claim, we need to look not merely at the abstract characterization of coherence as constraint satisfaction, but also at the theory of explanatory coherence, which shows how competing explanatory theories can be evaluated. Millgram's worry is this: Verbeurgt proved that the general problem of coherence is computationally intractable, so the algorithms for computing coherence are approximations; hence the algorithms do not guarantee that we will get the most coherent theory, and the theory that is really the most coherent one might be very different from the one chosen by the algorithms.

That this is not a serious objection to explanatory coherence is evident from the fact that it would apply to virtually any serious attempt to understand scientific inference. Consider the view that scientific theory choice is based, not on explanatory coherence, but on Bayesian reasoning.(5) On this view, the best theory is the one with the highest subjective probability given the evidence, as calculated by Bayes' theorem. It is well known that the general problem of Bayesian inference is computationally intractable, so the algorithms used for computing posterior probabilities have to be approximations.(6) Hence the Bayesian algorithms do not guarantee that we will get the most probable theory, and the theory that is really the most probable one might be very different from the one chosen by the algorithms. If Millgram's argument were effective against the explanatory coherence account of theory choice, it would also be effective against the Bayesian account, and against any other method based on approximation.

What kind of method would come with a guarantee that it produces true theories? Here are two options:

Rationalist: Start only with a priori true propositions, and deductively derive their consequences.

Empiricist: Start only with indubitable sense data, and only believe what follows directly from them.

Both of these methods would guarantee truth, but we know that neither can even begin to account for scientific knowledge. Millgram's argument is not a criticism of a particular theory of coherence, but rather an unsatisfiable demand to avoid the inherent uncertainty that accompanies non-deductive inference.

Truth is not the only aim of science, which also values explanatory unification and practical application. Consider the following two propositions:

1. Species evolve by natural selection.

2. Lemurs have toes.

Both of these propositions are true, but the first is far more important to science because, as Darwin himself stated, it explains a great number of previously unconnected facts. It is obvious that explanatory coherence furthers the adoption of theories that provide explanatory unification, because its principle of data priority, E4, encourages the acceptance of observation propositions, and its principle of explanation, E2, encourages the acceptance of hypotheses that explain many observation propositions.

Does explanatory coherence also further the adoption of true theories? I think the answer is yes, but the argument has to be indirect:

1. Scientists have achieved theories that are at least approximately true.

2. Scientists use explanatory coherence.

3. So explanatory coherence leads to theories that are at least approximately true.

This is not the place to defend premises 1 and 2; I have done so at length elsewhere.(7) We should not expect a direct defense of the capacity of any non-trivial inductive method to achieve truth, so the indirectness of the defense of explanatory coherence as a generator of scientific truths does not undermine coherentism.

Let me now address several of Millgram's minor objections to coherence construed as constraint satisfaction. He contends (p. 85) that COHERENCE is insensitive to the internal structure of scientific theories, but the principles of explanatory coherence show that this is false. According to principle E2b, hypotheses that together explain a proposition cohere with each other, so there is a positive constraint between them. Hence algorithms for maximizing constraint satisfaction will tend to accept them together or reject them together, showing that the coherence calculation is indeed sensitive to internal structure. Millgram's mistake is to misconstrue COHERENCE as a general theory of inference, rather than as the mathematical scaffolding that needs to be filled in with an account of the nature of the elements and constraints that are relevant to inference of a particular sort.

Millgram states (p. 89) that "approximating an index of coherence - the weight of the best partition - does not mean approximating the best partition." Mathematically he is correct, but practically we can acquire inductive evidence that coherence algorithms, including ones proven to approximate the index, do in fact approximate the best partition. This is what computational experiments are for. We can run different coherence algorithms on many different examples, large and small, and see whether they yield partitions that seem reasonable given the input. Explanatory coherence has been evaluated for many important cases in the history of science by seeing whether it captures the judgments of the scientists involved. Given the explanatory relations recognized by scientists such as Copernicus, Newton, Lavoisier, and Darwin, a simulation of explanatory coherence should yield the same judgment as the scientist, and it does.
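To illustrate what such an experiment can look like, here is a connectionist approximation, continuing the illustrative Python sketches above (the update rule and parameters are generic connectionist choices, not the exact published settings), together with a check of its output against the exhaustive search defined earlier:

def settle(elements, pos, neg, evidence_unit="OBS", iters=200, decay=0.05):
    # One unit per element, with activation in [-1, 1]. Positive
    # constraints become excitatory links, negative constraints
    # inhibitory ones; the evidence unit (assumed to be included in
    # `elements`) is clamped at 1.0 to implement data priority. After
    # the network settles, units with positive activation count as
    # accepted.
    act = {e: 0.01 for e in elements}
    act[evidence_unit] = 1.0
    links = {}
    for (i, j), w in pos.items():
        links.setdefault(i, []).append((j, w))
        links.setdefault(j, []).append((i, w))
    for (i, j), w in neg.items():
        links.setdefault(i, []).append((j, -w))
        links.setdefault(j, []).append((i, -w))
    for _ in range(iters):
        new = {}
        for e in elements:
            if e == evidence_unit:
                new[e] = 1.0
                continue
            net = sum(w * act[o] for o, w in links.get(e, []))
            grow = net * (1 - act[e]) if net > 0 else net * (act[e] + 1)
            new[e] = max(-1.0, min(1.0, act[e] * (1 - decay) + grow))
        act = new
    return {e for e in elements if act[e] > 0}

# A small computational experiment: on a problem tiny enough to solve
# exactly, the approximation recovers the same partition as brute force.
elements = ["OBS", "h1", "h2", "e1", "e2"]
pos, neg = explanatory_constraints(
    explanations=[(("h1",), "e1"), (("h1",), "e2"), (("h2",), "e1")],
    contradictions=[("h1", "h2")],
    data=["e1", "e2"])
assert settle(elements, pos, neg) == maximize_coherence(elements, pos, neg)[0]

On problems small enough for exhaustive search, experiments of this kind let us check how often an approximation finds the best partition; for historically interesting cases the networks are far larger, and only the approximations are feasible.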

For complex theories, the demand to simulate the reasoning of scientists is computationally non-trivial. Chris Eliasmith and I did an explanatory-coherence analysis of the acceptance of the wave theory of light, and challenged advocates of Bayesian inference to produce an analysis that is as historically detailed and computationally feasible.(8) As far as I know, this challenge has not been met, and there are serious limitations on Bayesian reasoning that make me doubt that it can be met.(9)

There are thus good reasons to believe that a coherentist approach that employs both the abstract characterization of coherence as constraint satisfaction and its concrete instantiation in terms of principles for explanatory coherence is adequate for understanding scientific theory choice. Millgram's arguments do not undermine coherentist epistemology: the price of the ticket for invoking coherence has already been paid.

Notes

(1) E. Millgram, "Coherence: The price of the ticket," Journal of Philosophy 97 (2000): 82-93.

(2) P. Thagard and K. Verbeurgt, "Coherence as constraint satisfaction," Cognitive Science 22 (1998): 1-24.

(3) P. Thagard, Coherence in thought and action (Cambridge, MA: MIT Press, forthcoming).

(4) P. Thagard, Conceptual revolutions (Princeton: Princeton University Press, 1992).

(5) P. Maher, Betting on theories (Cambridge: Cambridge University Press, 1993). C. Howson and P. Urbach, Scientific reasoning: The Bayesian approach (LaSalle, IL: Open Court, 1989).

(6) G. Cooper, "The computational complexity of probabilistic inference using Bayesian belief networks," Artificial Intelligence 42 (1990): 393-405. J. Pearl, Probabilistic reasoning in intelligent systems (San Mateo: Morgan Kaufmann, 1988).

(7) P. Thagard, Computational philosophy of science (Cambridge, MA: MIT Press/Bradford Books, 1988). P. Thagard, Conceptual revolutions (Princeton: Princeton University Press, 1992). P. Thagard, How scientists explain disease (Princeton: Princeton University Press, 1999).

(8) C. Eliasmith and P. Thagard, "Waves, particles, and explanatory coherence," British Journal for the Philosophy of Science 48 (1997): 1-19.

(9) Thagard, Coherence in thought and action, ch. 8.

 
