Societies of Minds:

Science as Distributed Computing

 

Paul Thagard

 

Philosophy Department

University of Waterloo

Waterloo, Ontario, N2L 3G1

pthagard@watarts.uwaterloo.ca

© Paul Thagard, 1993



 1. Current Approaches to Science Studies
 2. Distributed Artificial Intelligence
 3. Two Objections
 4. Reductionism and Methodological Individualism
 5. The Division of Cognitive Labor



Science is studied in very different ways by historians, philosophers, psychologists, and sociologists. Not only do researchers from different fields apply markedly different methods, they also tend to focus on apparently disparate aspects of science. At the farthest extremes, we find on one side some philosophers attempting logical analyses of scientific knowledge, and on the other some sociologists maintaining that all knowledge is socially constructed. This paper is an attempt to view history, philosophy, psychology, and sociology of science from a unified perspective.

Researchers in different fields have explicitly or implicitly operated with several models of the relations between different approaches to the study of science. For reasons described below, I am primarily concerned with the relation between the psychology and the sociology of science. Reductionist models contend either that sociology can be reduced to the more explanatorily fundamental field of psychology, or that psychology can be reduced to sociology. Slightly less extreme are "residue" models, according to which psychology or philosophy or sociology takes priority, with the other fields explaining what is left over. Less imperialistically, still other models see the different fields of science studies as cooperating or competing to explain aspects of the nature of science in relative autonomy from other fields.

I shall sketch an alternative view that rejects reduction, residue, and autonomy models of science studies. After reviewing these models and their proponents, I outline a new model that views scientific communities from the perspective of distributed artificial intelligence (DAI). DAI is a relatively new branch of the field of artificial intelligence that concerns how problems can be solved by networks of intelligent computers that communicate with each other. Although I assume the cognitivist view that individual scientists are information processors, I shall argue that the view of a scientific community as a network of information processors is not reductionist and does not eliminate or subordinate the role of sociologists or social historians in understanding science. I shall also show that a DAI approach provides a helpful perspective on the interesting social question of the cognitive division of labor.

 

CURRENT APPROACHES TO SCIENCE STUDIES

For several decades, philosophy of science was dominated by the logical empiricist approach. Exemplified by such philosophers as Carnap and Hempel, the logical empiricists used the techniques of modern formal logic to investigate how scientific knowledge could be tied to sense experience. (1, 2) They emphasized the logical structure of science rather than its psychological and historical development. In 1962, Kuhn published his influential Structure of Scientific Revolutions, which championed a more historical approach. (3) Along with historically inclined philosophers such as Hanson and Feyerabend, Kuhn charged the logical empiricists with historical irrelevance. Logical analyses of the structure of science continue, but there has also been much research in philosophy of science that takes an explicitly historical approach.

More recently, the philosophy of science has taken a cognitive turn, drawing on ideas from cognitive psychology and artificial intelligence to help understand how science develops. Kuhn and Hanson did use psychological ideas, particularly from the tradition of Gestalt psychology, but the intellectual resources of current cognitive science are much greater than what was available in the 1950s. The cognitive approach complements traditional history of science by providing an enriched view of how scientists generate and evaluate new ideas. Darden has discussed the cognitive strategies that contributed toward the development of Mendelian genetics. (4, 5) Nersessian has drawn on ideas from cognitive psychology to help understand the development of physics. (6, 7) Giere has used psychological and sociological ideas to increase understanding of recent developments in geology and physics. (8) Solomon has concluded from the case of plate tectonics that cognitive heuristics play a crucial role in scientific decision making. (9) Churchland has discussed the nature of theories and explanations from the perspective of computational neuroscience. (10) My own research has used computational models and cognitive theories to help understand the structure and growth of scientific knowledge. (11-14) Allied research has been conducted by psychologists and researchers in artificial intelligence. A new volume in Minnesota Studies in the Philosophy of Science presents a good sample of research by philosophers, psychologists, and historians who have employed cognitive ideas. (15)

While philosophers of science have increasingly employed historical and psychological ideas, the historiography of science has taken a more sociological turn, paying increasing attention to the social context of science. Historians have thus found common cause with sociologists such as Barnes, Bloor, Collins, and Latour. (16-20) Philosophers of science such as Brown, Giere, and Fuller have also begun to pay closer attention to sociological issues. (21, 22)

The question of the relation between philosophical, historical, psychological, and sociological accounts of science is thus live and important. According to the logical empiricist tradition, philosophical analyses that provide rational reconstructions of science should proceed without psychological taint, but I shall not here repeat arguments that a psychological approach to science and epistemology is compatible with sufficiently complex models of rationality. (23, 24) Principles of rationality are not to be derived a priori, but should co-evolve with increasing understanding of human cognitive processes. The cognitive approach, like the sociological one, is fully aware that science cannot be understood without attention to its history, so the fundamental issue that stands out is the relation between cognitive (psychological/philosophical) and sociological models of the development of scientific knowledge.

In current discussions, I see four implicit models of the relation between cognitive and sociological approaches. Reductionist models presume that the sociological can be reduced to the cognitive or vice versa. Research on scientific inference has been accused of attempting reduction from the cognitive direction and assuming that social aspects of science can be reduced to psychological aspects. But I do not know of any researcher who has advocated this reductionist position. Perhaps there are still economists who think that macroeconomics can be reduced to microeconomics, but I know of no current advocate of the view that sociology can or should be reduced to psychology. Not only are social phenomena too complex to be reduced to psychological ones, psychological explanations themselves must make reference to social factors. Slezak might be read as arguing that AI can explain scientific discovery so that sociological explanations are redundant, but his position does not entirely reject sociological explanations, only the claims by some sociologists of scientific knowledge to be able to explain everything. (25)

In contrast, sociological reductionism has advocates. It surfaced in the 1930s in Marxist history of science that overgeneralized Marx's statement that social existence determines consciousness. More recently, Collins takes a sociological reductionist position when he claims: "what we are as individuals is but a symptom of the groups in which the irreducible quantum of knowledge is located. Contrary to the usual reductionist model of the social sciences, it is the individual who is made of social groups." (26)

What I call residue approaches do not claim to reduce the cognitive to the social or vice versa, but nevertheless claim that one approach takes priority over the other. Laudan considers, but does not endorse, restricting the sociology of science by an arationality assumption: "The sociology of knowledge may step in to explain beliefs if and only if those beliefs cannot be explained in terms of their rational merits." (27) This principle says that sociology is relevant only to the residue of scientific practice that remains after models of rationality have been applied. But I see no reason in advance why, in trying to understand science, we should give special preference to explanations of belief change in terms of rationality. Instead, we should attempt to give the best explanation we can of particular scientific episodes, judging for each episode what factors were paramount.

Whereas the arationality principle sees the social as the residue of the cognitive, some sociologists see the cognitive as the residue of the social. Latour and Woolgar have repeatedly proposed a ten-year moratorium on cognitive explanations of science. (28, 29) Explanations in terms of cognitive capacities are to be resorted to only if there is anything left to explain at the end of the period of sociological investigation. In contrast, Bloor's most recent account of the "strong programme" in the sociology of science welcomes background theories of individual cognitive processes. (30)

It is tempting to give a sociological explanation for advocacy of reductionist and residue models. Arguments that one's own field is the main road to understanding of science can be viewed as tactics for increasing the influence and resources of researchers in that field. A more charitable interpretation of such arguments takes into account the frequent efficacy of single-mindedness as a research strategy. Sociologists can be seen as trying to push social explanations as far as possible to see how much they can accomplish, while psychologists and philosophers push a cognitive approach as far as possible. We could thus see sociological and psychological approaches as relatively autonomous from each other, overlapping occasionally to cooperate in explaining some developments in science, while sometimes making competing explanations. This autonomy model of the psychology and sociology of science provides a reasonably accurate picture of the current state of science studies. By focusing on different aspects of science, researchers in different fields have increased understanding of different contributors to the development of scientific knowledge. I want to go beyond an autonomy model, however, and develop one that provides an integrated perspective on the psychological and social elements of science, without attempting to reduce in either direction or to relegate one approach to being the residue of the other.

 

DISTRIBUTED ARTIFICIAL INTELLIGENCE

To accomplish the necessary integration, the cognitive framework needs to be expanded to fit it into a broader explanatory scheme that encompasses the social. According to cognitivism, thinking is computation. Hence individual scientists can be viewed as computers that communicate with each other through various means. Cognitivism does not, of course, assume that people are just like any of the computers we currently have. Human intelligence still far outstrips computer intelligence in most respects, and it is reasonable to expect that radically different hardware and software from what is currently available will be necessary before computers reach human levels of intelligence. In the past decade, much of the most interesting research in cognitive science has used models inspired in part by the sort of parallel computation among highly connected units that occurs in the brain. I view these connectionist approaches as cognitivist, even though they assume a very different view of computation than some approaches traditional to AI. If, as argued by various critics of artificial intelligence and cognitive science, computational ideas miss fundamental aspects of human intelligence, then cognitivism will ultimately fail as an approach to the psychology of thinking. But without reviewing the past thirty-five years of work in cognitive psychology and artificial intelligence, I contend that cognitivism has greatly enhanced our understanding of numerous kinds of thinking.

Recently, the proliferation of networks of computers has spawned a new subfield of artificial intelligence concerned with how problems can be solved cooperatively by means of interacting computers. (31-34) In a simple computer network, each computer is a node, and communication takes place between computers through rapid transmission of digitally encoded information. Distributed artificial intelligence (DAI) investigates principles by which computers that each possess some degree of intelligence can collectively have accomplishments that no individual computer could easily have on its own. Even among digital computers, communication is restricted because of bandwidth limitations or the high computational cost of sending information. Computer networks are becoming increasingly complex; for example, the Andrew network at Carnegie Mellon University includes about 5000 nodes distributed over 55 buildings. The computers communicate with each other by means of over 100 subnetworks including Ethernet and AppleTalk connections. Many AI applications are inherently distributed, for example controlling a set of intelligent robots working together or bringing together a number of expert systems with complementary areas of expertise. Distributed computing differs from parallel computing in that the latter typically involves simple nodes of similar kinds communicating with each other in straightforward ways. For example, in connectionist systems, each neuron-like node is an uncomplicated device that updates its activation based on the activation of the nodes to which it is linked and the weights on those links. (35) Intelligence is an emergent property of the operation of numerous interacting nodes, not of each individual node. In contrast, in distributed artificial intelligence, it is assumed that each node has much greater computational power than the simple units in connectionist systems, including the capacity to communicate in more complicated ways with other nodes.
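To make the connectionist half of this contrast concrete, here is a minimal sketch of the update performed by a single neuron-like unit. The logistic squashing function and the particular numbers are illustrative assumptions of mine, not features of any specific model cited above.

```python
import math

def update_activation(weights, neighbor_activations):
    """Update one neuron-like unit: take the weighted sum of the activations
    of the units it is linked to and squash it with a logistic function
    (one common choice; the text does not fix a particular rule)."""
    net_input = sum(w * a for w, a in zip(weights, neighbor_activations))
    return 1.0 / (1.0 + math.exp(-net_input))

# Each unit is an uncomplicated device; on the connectionist view,
# intelligence emerges only from many such units interacting.
print(update_activation([0.5, -0.3, 0.8], [0.9, 0.4, 0.7]))
```

A DAI node, by contrast, would replace this one-line arithmetic step with a full problem solver capable of composing and interpreting messages.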

Remarkably, researchers in DAI have been turning to sociology for ideas about how to describe the organization and functioning of computer networks. One early paper developed ideas about distributed computing by using scientific communities as a metaphor. (36) Recent papers by Hewitt and Gasser include respectful references to such sociologists as Garfinkel, Gerson, and Latour. The use of sociological concepts by researchers in DAI is consistent with the kind of unified model of science that I envision, since my intention is not to reduce the sociology of science to distributed computation, but to provide a unifying framework that ties together sociological insights with cognitive ones. But whereas Kornfeld and Hewitt used scientific communities as an analog to help understand parallel and distributed computing, I shall work the analogy in the opposite direction and construct a much more elaborate model of scientific communities. The bidirectional application of the analogy between societies and computer networks is similar to the bidirectional use of the analogy between minds and computers, which has been exploited in different ways at different times depending on the current state of knowledge in the two domains. Early ideas about computers were inspired in part by early models of neurons, and important ideas in artificial intelligence were inspired by studies of human problem solving. On the other hand, computational ideas have proven invaluable in developing and specifying models of human cognition. During the 1980s, the direction of the analogy was reversed again, as connectionists found computational inspiration in brain-like operations. Bidirectional analogies enable two different fields to progress together by exploiting advances in one field to bring the other forward in a process of mutual bootstrapping.

To make plausible a view of science as distributed computing, we need to identify the main nodes and communication channels that occur in scientific communities. To avoid a purely abstract characterization, I shall sketch a single example concerning the field of experimental cognitive psychology. A similar account could easily be given of many other fields; my choice is not intended to give my account any kind of cognitive bias, but only reflects my familiarity with the field.

Let us start with individual scientists, treating each as a node in a communication network. Communication between scientists is not so easy as the digital communication that can take place between conventional computers. We cannot simply transmit information from the brain of one scientist to another. Information does, however, get transmitted, through personal contact between scientists or more indirectly through journals and other publications. Communication requires extensive coding and decoding, as scientists attempt to put what they know into words or pictures that other scientists must attempt to understand. Obviously, each scientist must be viewed as a very complex computational system, capable not only of solving problems but also of producing and understanding speech, diagrams, and written text. Each scientist communicates directly with only a relatively small number of other scientists, although publication increases the possible lines of communication greatly.

Starting with scientists as nodes, we can try to draw graphs that identify the communication links between them. Nodes will form clusters, since there will be subgroups of scientists that are more tightly interconnected than the group as a whole. Among cognitive psychologists, for example, there are at least the following kinds of subgroups:

1. Collaborators. Scientists at the same or different institutions who are working on common projects will communicate frequently.

2. Students and teachers. Communication links exist between scientists and scientists-in-training. If, as in cognitive psychology and many other fields, the students function as collaborators on research projects, the links are particularly tight. Research methods and skills are communicated along with more easily described verbal information.

3. Colleagues. Scientists working in the same university department may see each other regularly and exchange ideas.

4. Acquaintances. Scientists who regularly attend the same conferences and workshops will get to know each other and may exchange information irregularly.

One powerful source of ongoing communication links involves scientists who were previously students together. A very high proportion of the most influential current practitioners in cognitive psychology received their graduate training at a small number of universities such as Stanford, Michigan, and Harvard. More intermittent direct communication can occur between scientists attending conferences or visiting campuses to give colloquia.

Indirect communication links between individual scientists can exist by virtue of publications. Although there is a very large number of psychology journals, some are far more influential than others. Cognitive psychologists know that a paper is far more likely to be read if it appears in Psychological Review or Cognitive Psychology than if it appears in some more obscure location. Figure 1 depicts a small part of a processing network in a scientific community. The scientists at the top form a cluster, perhaps because they are colleagues at the same institution. They have direct links to each other, but only one has a direct link with a member of another cluster. However, by publishing in and reading the journal, indirect communication is established between other scientists. Given the years it can take to get a paper written, published, and read, this is a much slower form of communication than direct exchange, but its importance is undeniable.

 

[Figure 1. Part of a processing network in a scientific community: a cluster of directly linked scientists connected indirectly, through a journal, to members of another cluster.]
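The situation Figure 1 depicts can be given a minimal computational sketch. The scientists (A1-A3, B1-B2) and the journal are hypothetical; the point is only that publication creates indirect paths between clusters that share few or no direct links.

```python
# Two hypothetical clusters: A1-A3 are directly linked colleagues; so are
# B1-B2. A3 and B1 also publish in and read the same (hypothetical) journal.
direct_links = {
    "A1": {"A2", "A3"}, "A2": {"A1", "A3"}, "A3": {"A1", "A2"},
    "B1": {"B2"}, "B2": {"B1"},
}
journal_readers = {"Journal J": {"A3", "B1"}}

def can_reach(source, target):
    """Search over direct links plus journal-mediated links, showing how
    publication connects clusters with no direct communication channel."""
    frontier, seen = [source], {source}
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        neighbors = set(direct_links.get(node, set()))
        for readers in journal_readers.values():
            if node in readers:
                neighbors |= readers - {node}
        for n in neighbors - seen:
            seen.add(n)
            frontier.append(n)
    return False

print(can_reach("A1", "B2"))  # True, but only via A3 -> Journal J -> B1
```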

 

Somewhat faster exchange takes place through communication by presentation at conferences. For experimental cognitive psychologists, the most important meeting place is the annual conference of the Psychonomic Society, but conferences of the Cognitive Science Society, the American Psychological Society, and ad hoc groups also provide occasions for dissemination of research results and personal interactions. The sort of network I am describing is much richer than that of Latour, who talks of networks of scientists and their inscriptions without allowing that each node is a highly complex processing system. My discussion is consistent with the view that scientific communities constitute "invisible colleges", (37) but integrates considerations of social organization with the obvious fact that the individual scientists are cognitive beings. In short, scientific communities are societies of minds. Other vehicles for indirect communication exist. Book chapters, especially ones providing authoritative reviews of work in particular fields, can be important. Monographs and textbooks can also become widely read and important for communicating with a broad group of scientists. In order to carry out research involving payments to subjects, computer equipment, and other expenses, scientists need to obtain grants. They must therefore send proposals to funding agencies such as the National Science Foundation, which send the proposals to other scientists for review; the reviewers communicate their opinions back to the agency, which makes a decision. Since over time new scientists enter each field and new lines of communication arise between established scientists, the system of computational nodes and links is constantly evolving.

Not all communication is best described as the sharing of information. In distributed artificial intelligence, it is not unusual for computers to be in conflict with each other, and DAI researchers have emphasized the importance of how computers can negotiate with each other to overcome conflicts. Negotiation among members of scientific communities includes discussions between journal editors and authors concerning whether articles will be published. Leading psychology journals rarely accept manuscripts as submitted, and more than one round of reviewing is common. An author will often be asked to revise and resubmit a manuscript and then to revise it further if it is accepted conditionally. Negotiations also take place in decisions concerning funding. As part of a theory of adversarial problem solving, I have sketched some of the cognitive mechanisms relevant to negotiation including developing a model of an opponent. (38) Latour has described sociological aspects of the adversarial side of science: scientists attempt to mobilize allies and resources to increase their strength relative to opposing groups. It would be a mistake, however, to emphasize the competitive nature of science to the neglect of the cooperative side. Hardwig has described the enormous extent to which science is based on trust, not just within particular research groups but across whole fields. (39) No researcher can check all experimental and theoretical conclusions alone, so there is enormous dependency both within and across groups.

Using the framework so far developed, we could in principle draw a graph specifying the connections among all members of a field such as cognitive psychology. Of course, cognitive psychology does not constitute a closed system, and there would have to be links to scientists and journals in allied areas. For example, many cognitive psychologists pay attention to work in related areas of psychology such as social and developmental psychology, and a smaller number read in or communicate with practitioners of other disciplines such as AI and philosophy of mind. In sum, my proposal is that we view a scientific community as a system for distributed computation.

 

TWO OBJECTIONS

The most obvious objection to using distributed computation as a social model of science is that it presupposes the use of computation as a model of individual psychology. Whether intelligence should be understood computationally and whether there is any hope for the development of artificial intelligence has been challenged by critics such as Collins and Dreyfus. (40) Such criticisms have often been useful for pointing out gaps and oversimplifications in computational approaches to cognition. But it must be appreciated that artificial intelligence and the computational modeling of human cognition are highly dynamic fields. Collins' recent critique is largely based on examination of rule-based expert systems, a 1970s technology that gained widespread industrial use in the 1980s. In the past decade, there have been various developments that show promise of helping AI overcome the inflexibility and poor performance that have characterized many past systems. I have already alluded to one development, distributed AI, which may help to overcome AI's traditional neglect of social questions. AI has also tended to neglect questions of the embodiment of intelligent systems and real-world interactions, but there has been increasing research on incorporating AI ideas in robotic systems to produce autonomous mechanical agents. (41) Simple AI programs have neglected the fact that human intelligence depends on vast amounts of common sense knowledge, but an attempt is being made to compile a huge and well-organized database of common sense knowledge that can provide background information for complex inferencing. (42) In 1980, there was only scattered work on how computer programs could learn, but machine learning is now a very active subfield within AI. (43, 44) The 1980s also saw several approaches to developing AI systems that differ substantially from rule-based expert systems. Connectionist systems that consist of numerous interconnected neuron-like nodes have proved powerful for a variety of tasks that are difficult for traditional AI. Another non-rule-based approach that has received growing attention is using analogies with particular cases to solve problems and provide explanations. (45, 46) In the area of expert systems, there have been important theoretical developments in using Bayesian belief networks for diagnostic reasoning. (47) Other AI researchers would probably point to additional developments as significant in AI work in the past decade.

This is not of course to say that the problems of developing artificial intelligence and computational models of mind have been solved. AI's most enthusiastic cheerleaders, such as Lenat and Moravec, expect that the fundamental problems of developing intelligent computational systems will be solved within a matter of decades. My own guess is that we are still missing a number of key conceptual ingredients for understanding natural and artificial intelligence, so I would project centuries of research rather than decades. It is also possible that the problem of understanding the mind is too complex for the mind to solve, or that the computational approach is fundamentally flawed, perhaps because thought is ineluctably tacit or situated in the world. But AI has at least the possibility of modeling tacit knowledge (through connectionist distributed representations or cases used analogically) and of situating cognition in the world (through distributed AI and robotics). Progress has been sufficient in the past few decades that the only reasonable collective scientific strategy is to wait and see how the ideas play themselves out.

One obvious requisite of a serious model of science as distributed computation is that each processor be highly complex and capable of performing at least a simplified version of the cognitive operations of scientists, including forming hypotheses, designing experiments, and evaluating theories. Not so far considered, however, are the cognitive processes involved in deciding when and how to communicate. When a scientist has completed a paper, for example, what determines the journal to which the paper is submitted for publication? Most likely, if the scientist is experienced and acute, that question will have already affected the writing of the paper, since journals vary in their subject matter, audience, and length of papers accepted. An astute publishing strategy requires the scientist to have a sophisticated view of the publication venues available, including their relative influence and accessibility.
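As a toy illustration, a scientist's representation of the available venues might be reduced to two numbers per journal, with submission targeted at the venue of highest expected influence. The figures below are invented for the example, and real publishing strategy is of course far richer than a single expected-value calculation.

```python
# Hypothetical venues: (estimated probability of acceptance, influence if published).
venues = {
    "Psychological Review": (0.10, 10.0),
    "Cognitive Psychology": (0.20, 7.0),
    "Obscure Journal":      (0.70, 1.0),
}

def best_venue(venues):
    """Choose the venue maximizing expected influence, a crude stand-in for
    the sophisticated view of venues that an astute strategy requires."""
    return max(venues, key=lambda v: venues[v][0] * venues[v][1])

print(best_venue(venues))  # 'Cognitive Psychology' under these invented numbers
```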

After the question of the possibility of individual computational intelligence, the second most likely objection to my distributed computation model is that it is not really social, since the interactions of processors in distributed networks are so primitive compared to the interactions of humans. DAI is normally concerned with computers that are linked to each other by efficient networks that allow virtually instantaneous transmission of digitized information in standard formats. Transmission between humans even via conversations is much slower and involves far more encoding and decoding of knowledge than is required in digital transmission. In practice, however, distributed AI faces real problems in communication too, since two computers may use different representational schemes and inference engines, in which case communication may also require considerable encoding and decoding even though all information is in electronic form. It might prove desirable, for example, to bring about collaboration between a rule-based expert system, a Bayesian network system, and a case-based reasoner, each with expertise in related domains. Such systems use very different ways of representing information, so producing communication and cooperation among them is highly non-trivial and would require considerable processing.

Collins specifies several ways in which science is social. (48) One is the "routine servicing of beliefs," in which a scientific group affects how an individual checks the validity of beliefs. It is easy to see how this could be modeled within DAI, for example when one processor generates a hypothesis that is communicated to another processor that may reply by communicating additional evidence, counterevidence, or alternative hypotheses. Such communication also shows how Collins' second way in which science is social can be understood computationally: conclusions to scientific debates will be matters of social consensus, with potential equilibria being reached by the passing of messages back and forth until either all processors agree or are undergoing no further change. More problematic for my DAI model of science is Collins' other way in which science is social. He claims that the transfer of new experimental skills requires social intercourse rather than simply reading journal articles. This claim is certainly true for experimental cognitive psychology, where students typically learn experimental methods not from articles, textbooks, or methods courses, but from apprenticing with an experienced researcher. Collins' point does not, however, undermine the DAI perspective, for it can be interpreted as showing the need for some kinds of communication to be particularly intense. Intensity can be a matter of the amount of information transmitted - articles typically report much less than the experimenter knows - and of the format of information. One advantage of face-to-face human interaction is the ease of using visual representations: one person can draw something or simply point to a key piece of experimental equipment. Artificial intelligence has been slow to exploit the power of visual representations, but this may be changing. (49-52) There is no reason in principle, however, that AI processors equipped with capacities for visual representation and interaction with the world could not communicate skills in the complex ways that Collins describes. Whether AI will accomplish such communication will depend on the success of the whole research program and cannot be decided a priori.
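The equilibrium-seeking message passing invoked above for Collins' second point can itself be sketched. The averaging rule and the numbers are illustrative assumptions of mine; the only point is that the conclusion emerges from repeated exchange rather than from any single processor.

```python
def reach_consensus(beliefs, links, tolerance=1e-6, max_rounds=10000):
    """Each processor repeatedly revises its degree of belief toward the
    average of the beliefs its neighbors communicate, halting when no
    processor is undergoing further change."""
    for _ in range(max_rounds):
        new = {node: (b + sum(beliefs[n] for n in links[node]))
                     / (1 + len(links[node]))
               for node, b in beliefs.items()}
        if all(abs(new[n] - beliefs[n]) < tolerance for n in beliefs):
            return new
        beliefs = new
    return beliefs

links = {"S1": ["S2"], "S2": ["S1", "S3"], "S3": ["S2"]}  # who talks to whom
beliefs = {"S1": 0.9, "S2": 0.5, "S3": 0.1}               # initial degrees of belief
print(reach_consensus(beliefs, links))                    # all three converge near 0.5
```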

A DAI model of science is also compatible with the view of Longino that scientific knowledge is inherently social. (53) She advocates an antireductionist social view of science on the grounds that scientific knowledge is constructed through interactions among individuals and that individual scientists' knowledge is made possible by their social and cultural contexts. Although DAI is more easily applicable to modeling interactions within scientific communities rather than entire societies, it is rich enough to allow the interactions and social effects that Longino identifies as ineliminable parts of science.

My discussion of how the social aspects of science can be interpreted within the framework of distributed computation will be viewed, I hope, as taking those aspects seriously rather than as explaining them away. To make this even clearer, I shall now show why it would be mistaken to view a DAI approach as demeaning the social by espousing reductionism or methodological individualism.

 

REDUCTIONISM AND METHODOLOGICAL INDIVIDUALISM

My attempt to relate some of the social dimensions of science to cognition and computation raises important questions in the philosophy of social science regarding explanation. Methodological individualism is the doctrine that all attempts to explain social and individual phenomena must refer exclusively to facts about individuals. (54) This doctrine has found most favor among conservative economists who contend that macroeconomic explanations involving nations and organizations must ultimately yield to microeconomic explanations involving decisions of individuals. According to methodological individualism, social explanations can and eventually should be reduced to psychological explanations. Viewing science as distributed computation does not, however, presuppose methodological individualism, for several reasons. First, in DAI as in the analyses of sociologists such as Durkheim, there are facts that are irreducibly social. Second, psychological explanations are dependent on sociological explanations just as sociological explanations are dependent on psychological ones. Third, even explanations of individual computational psychology may need to be couched in social terms. Fourth, social phenomena are far too complex for us to expect a reduction of sociology to psychology to be tractable. Let us consider these reasons in more detail.

1. Durkheim, one of the founders of modern sociology, tried to distinguish social facts from ones that are only psychological. He wrote: "When I perform my duties as a brother, a husband or a citizen and carry out the commitments I have entered into, I fulfill obligations which are defined in law and custom and which are external to myself and my actions." (55) He argued that the compellingness of certain kinds of behavior and thinking derives from their external, social character. "What constitutes social facts are the beliefs, tendencies, and practices of the group taken collectively." (56) Mandelbaum characterizes "societal facts" more generally as "any facts concerning the forms of organization present in a society." (57) Does the view of science as distributed computation involve social facts in this sense? One might think that the organization of a computer network could be described in terms concerned only with the relations between individual processors. We can define the whole network by simply noting for each processor what other processors it is connected to. Such a characterization might work for simple computer networks, but would clearly be inadequate for a scientific community viewed as a network of communicating computers. We can only understand why a cluster of scientists communicate intensely with each other by noting that the cluster is some particular kind of social organization: the scientists may be part of the same research group, teaching institution, or scientific organization. An account of why communication is asymmetric, with more information passing from one node to another than vice versa, will often require noting the differential social status of the scientists. Students usually learn more from professors than vice versa. Thus while a DAI analysis can go some way to characterizing the communication structure of a scientific community, it does not attempt to eliminate social facts about its organization. To understand why there is a communication link between two scientists, we need to know that they belong to the same group, institution, or scientific society, so the DAI approach does not pretend to reduce these collectives to the individuals and their links.

2. Much of the plausibility of methodological individualism comes from the ontological point that societies consist of their members, but this does not imply that we can explain the operation of societies by attending only to the behavior of their members. Consider the explanation of why a particular scientist thinks and behaves in certain ways. I presume that the explanation will take the form of a description of the computational apparatus producing the scientist's thinking, which includes a representation of what he or she knows. Some of this knowledge will be about the world, and representation of it may suffice to explain much of the scientist's thinking, for example when restricted to forming and evaluating hypotheses. But much of the scientist's thinking will require the representation of social organizations, for example when:

(a) the scientist decides what funding agency to approach or how to frame planned research so it will be appealing to the agencies;

(b) the scientist decides to pursue one project rather than another because it has greater potential for being appreciated by his or her academic department or research institution;

(c) the scientist decides to submit a paper to a particular journal because it is widely read and respected by the field; or

(d) the scientist decides to attend a particular conference because relevant new work is likely to be presented there.

Thus if we were building a simulation of the cognitive processes used by the scientist in decision making, we would have to represent information about social entities such as agencies, departments, fields, and conferences. Psychological explanations of scientists' thinking are thus in part sociological. My point here is supplemental to the one about social facts: a full model of science must take into account not only the ineliminable organization of science, but also the mental representation of that organization by scientists.
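A fragment of such a simulation might look as follows. The entities and attributes are hypothetical placeholders; the point is only that social organizations must appear inside the scientist's own knowledge representation.

```python
from dataclasses import dataclass, field

@dataclass
class SocialEntity:
    """A social organization as represented in a scientist's own knowledge."""
    name: str
    kind: str             # e.g. 'agency', 'department', 'journal', 'conference'
    attributes: dict = field(default_factory=dict)

@dataclass
class Scientist:
    """Even a purely psychological simulation of decision making, as in
    cases (a)-(d) above, must represent social entities."""
    beliefs_about_world: dict
    beliefs_about_society: list

s = Scientist(
    beliefs_about_world={"hypothesis H explains the data": 0.7},
    beliefs_about_society=[
        SocialEntity("Hypothetical Agency", "agency", {"favors": "experiments"}),
        SocialEntity("Psychological Review", "journal", {"widely_read": True}),
        SocialEntity("Annual Meeting", "conference", {"relevant": True}),
    ],
)
print(len(s.beliefs_about_society))
```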

3. Psychological explanations may turn out to be social in an additional sense. One of the leading figures in artificial intelligence, Marvin Minsky, has contended that individual minds should be analyzed in terms of many smaller processes that can be conceptualized as "agents" organized in "societies". (58) If Minsky's provocative but untested theory is right, concepts drawn from sociology and distributed artificial intelligence will prove crucial for understanding the operations of individual minds. The title I chose for this paper is an evocation of Minsky's "society of mind" theory of individual psychology.

4. Finally, even if social phenomena could in principle be reduced to psychological ones, there are several technical reasons for doubting that the reduction would ever be carried out. First, cognitive phenomena may be indeterministic with ineliminably random elements. For example, some connectionist models implement asynchronous updating of nodes by choosing randomly what node will next be updated. Hence characterization of the thinking of individuals can only be done statistically, making it difficult to envision how operations of whole societies could be accounted for on the basis of variable behavior of their members. Second, the combinatorics of describing the operation of societies in terms of their members are likely to make any reduction computationally intractable. If the operation and organization of a network of communicating scientists were made precise, I expect it would be easy to show that predicting the overall behavior of the network from the behavior of individual processors is a problem that belongs to a large class of computational problems believed to require more time and storage space than any computer that could ever be built. (59) My third and final reason for doubting that reduction of the social to the individual in computer networks will ever be practical derives from chaos theory. It is known that natural systems involving as few as three independent variables that are related nonlinearly can undergo chaotic behavior that is fully deterministic but utterly unpredictable, because minuscule differences in initial conditions of the system generate exponentially diverging behavior. Even a simple damped and driven pendulum turns out to be subject to chaotic behavior, and more complex systems such as the weather have shown bounds to predictability. Huberman and Hogg have detected phase transitions in simple parallel computing networks, and it would be amazing if much more complex systems of interacting intelligent processors were not also subject to chaos, making explanation of the operation of whole networks in terms of their parts impracticable. (60)
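The sensitive dependence that chaos theory describes is easy to exhibit computationally. This sketch uses the logistic map, a standard minimal example of deterministic chaos (my choice of illustration; the text's own examples are the pendulum and the weather).

```python
def logistic_trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    trajectory = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        trajectory.append(x)
    return trajectory

a = logistic_trajectory(0.4000000, 50)
b = logistic_trajectory(0.4000001, 50)  # minuscule difference in initial conditions
print(abs(a[0] - b[0]))    # still around 1e-7 after one step
print(abs(a[-1] - b[-1]))  # typically of order 1 after fifty steps
```

Two deterministic trajectories that begin a ten-millionth apart become uncorrelated within a few dozen steps, which is exactly the obstacle to prediction described above.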

In sum, the point of conceiving of scientific communities as systems of distributed computation is not to reduce the sociological to the psychological, but to increase coevolving understanding of societies and minds. I will now show that this perspective is useful for considering important questions concerning group rationality and the division of cognitive labor.

 

THE DIVISION OF COGNITIVE LABOR

Sarkar raised the provocative possibility that group rationality in science is not merely the sum of individual rationality. (61) If all scientists made identical judgments about the quality of available theories and the value of possible research programs, science would become homogeneous. Novel ideas and potentially acceptable new theories would never be developed to the point where they would in fact become rationally acceptable by all. Scientific revolutions are by no means instantaneous and often require a period of years for a new theory to be sufficiently developed that it can pose a strong challenge to an entrenched view. Some years ago I suggested that viewing scientific communities as heterogeneous processors operating in parallel could provide a model of the division of labor that would enhance scientific progress. (62) I conjectured that differences in motivation among scientists could lead them to work on different projects and thus provide better overall scientific progress than if everyone abided by identical standards of rationality. Kitcher used mathematics drawn from population biology to analyze how scientific progress may optimally require the division of cognitive labor and suggested that psychological and institutional factors often thought to be detrimental to cognitive progress might turn out to play a constructive role. (63) Similarly, Hull has considered how individual differences can contribute to the development of science viewed as a selection process. (64) Solomon has suggested how cognitive heuristics and biases can play an important role in decision making and promote the division of cognitive labor. (65)

The social perspective that goes with viewing a scientific community as a system for distributed computation provides a different way of seeing how the division of cognitive labor that seems necessary for rapid scientific progress can come about. Differences in motivation of individual scientists and their being subject to cognitive biases can undoubtedly lead to different scientists pursuing different projects, but the same result can also arise from the nature of distributed computation. Solomon suggests that in the development of geological theories "different beliefs are largely explained at the level of individual cognition, as due to the heuristics of availability, representativeness, and salience, which lead to different results with different individual experience and prior belief, even when all the data are public knowledge." (66) A DAI perspective makes it clear that we cannot expect all scientists in a field to be operating with the same information. There are several impediments to the universal spread of information in a computational network. First, such networks may be sparsely connected, with only circuitous routes between two given processors. Thus a theory or datum generated at one node will not immediately be communicated to all other nodes. Sparseness of connectivity can be a function of national and institutional factors, as well as the psychological fact that no one can read everything or talk to everyone. Second, communication is asynchronous, with no overall "clock" governing the processing that occurs at each node. Some scientists may sit on new data, theories, or explanations for long periods of time before sending them along to others. Third, the overall process of transmission is slow. It can take months or years to get results into a journal, and additional time may be required before even a habitual reader of the journal gets around to looking at it. Fourth, the overall process of transmission is incomplete. As we saw above in the discussion of tacit knowledge, not everything that a scientist knows gets communicated orally or in print to others, and natural language processing is such that scientists often fail to encode much of what they read, for reasons that are independent of motivation and biases.
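The first three impediments can be caricatured in a few lines of simulation. The network, the delay, and the schedule below are invented, and the model is far cruder than any real scientific community; it only illustrates how sparse links, asynchrony, and slow transmission keep information from spreading uniformly.

```python
import random

def spread(source, links, delay, rounds, seed=0):
    """Toy diffusion of a new result: each round one randomly chosen informed
    scientist (asynchrony) may pass the result to one randomly chosen
    neighbor (sparse connectivity), and only after a publication-and-reading
    lag (slow transmission)."""
    rng = random.Random(seed)
    informed = {source: 0}                  # node -> round at which it learned
    for t in range(1, rounds + 1):
        sender = rng.choice(sorted(informed))
        if t - informed[sender] >= delay:   # the result is not yet 'out'
            receiver = rng.choice(links[sender])
            informed.setdefault(receiver, t)
    return informed

links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}  # sparse chain
print(spread("A", links, delay=3, rounds=60))  # D, if reached at all, learns late
```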

All these impediments contribute to the heterogeneity of different nodes in the network of scientific computation. Not surprisingly, scientists who have received different information from the different groups with whom they communicate may tend to make different decisions. Individual scientists each start with a different stock of personal knowledge and experience, so even if they have exactly the same rational processes we should expect them to have knowledge bases that do not completely converge, because of the impossibility of perfect communication. One of the advantages of viewing science from the perspective of distributed AI is that it becomes possible to imagine doing experiments to test out the efficacy of various social strategies. Ideally, having a number of problem solvers working together should provide faster and better solutions. In a recent computational experiment, Clearwater, Huberman, and Hogg found that a group of cooperating agents engaged in problem solving can solve a task faster than either a single agent or the same group of agents working in isolation from each other. (67) Their methodology was to have a number of processors, each capable of simple heuristic search, work simultaneously on cryptarithmetic problems, i.e., problems in which digits must be substituted for letters such that an arithmetic operation is correctly instantiated. Clearwater et al. measured the search time required for each processor to find the solution. Solution times were faster by an order of magnitude when processors passed partial results as hints to each other than when each processor worked in isolation. I look forward to further computational experiments concerning the efficacy of viewing science as a cooperative and competitive process.
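Without attempting to reconstruct Clearwater, Huberman, and Hogg's actual experiment, the flavor of hint passing can be conveyed by a toy model in which agents guess the digits of a hidden code and optionally share each correctly guessed digit as a hint.

```python
import random

def solve(n_agents, code, share_hints, seed=0):
    """Agents take turns guessing the digits of a hidden code. With
    share_hints=True, a correctly guessed digit is posted for all agents,
    pruning everyone's remaining search; otherwise each agent must
    rediscover every digit alone. A toy in the spirit of the committee
    experiment described above, not a reconstruction of it."""
    rng = random.Random(seed)
    known = [{} for _ in range(n_agents)]   # each agent's partial results
    shared = {}                             # blackboard of shared hints
    for step in range(1, 1000000):
        agent = step % n_agents
        pool = {**known[agent], **shared} if share_hints else known[agent]
        for pos, digit in enumerate(code):
            if pos not in pool:             # guess the first unknown position
                if rng.randrange(10) == digit:
                    known[agent][pos] = digit
                    if share_hints:
                        shared[pos] = digit
                break
        solved = shared if share_hints else max(known, key=len)
        if len(solved) == len(code):
            return step
    return None

code = (3, 1, 4, 1, 5, 9)
print("isolated:   ", solve(5, code, share_hints=False))
print("cooperating:", solve(5, code, share_hints=True))  # typically far fewer steps
```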

Socially and cognitively, science involves a tension between cooperation and competition, and researchers are only beginning to understand how social organization can contribute to the overall goal of increasing scientific knowledge. But we can agree that science is social knowledge without neglecting the role of individual cognition in its development. By combining a computational understanding of individual cognition with an analysis of scientific communities in terms of distributed computation, we can start to see how sociological and psychological accounts of science can be integrated.

 

NOTES

This research is supported by the Social Sciences and Humanities Research Council of Canada. Thanks to Miriam Solomon and Daniel Hausman for comments.

(1) Carnap, R., Logical foundations of probability (Chicago: University of Chicago Press, 1950).

(2) Hempel, C. G., Aspects of scientific explanation (New York: The Free Press, 1965).

(3) Kuhn, T., The structure of scientific revolutions, 2nd ed. (Chicago: University of Chicago Press, 1970).

(4) Darden, L., "Artificial intelligence and philosophy of science: Reasoning by analogy in theory construction," PSA 1982, ed. P. Asquith and T. Nickles (East Lansing: Philosophy of Science Association, 1983) 2: 147-165.

(5) Darden, L., Theory change in science: Strategies from Mendelian genetics (Oxford: Oxford University Press, 1991).

(6) Nersessian, N., Faraday to Einstein: Constructing meaning in scientific theories (Dordrecht: Martinus Nijhoff, 1984).

(7) Nersessian, N., "How do scientists think? Capturing the dynamics of conceptual change in science," Cognitive models of science, ed. R. Giere, Minnesota Studies in the Philosophy of Science, vol. 15 (Minneapolis: University of Minnesota Press, 1992) 3-44.

(8) Giere, R., Explaining science: A cognitive approach (Chicago: University of Chicago Press, 1988).

(9) Solomon, M., "Scientific rationality and human reasoning," Philosophy of Science 59 (1992): 439-455.

(10) Churchland, P., A neurocomputational perspective (Cambridge, MA: MIT Press, 1989).

(11) Thagard, P., "Frames, knowledge, and inference," Synthese 61 (1984): 233-259.

(12) Thagard, P., Computational philosophy of science (Cambridge, MA: MIT Press/Bradford Books, 1988).

(13) Thagard, P., "Explanatory coherence," The Behavioral and Brain Sciences 12 (1989): 435-502.

(14) Thagard, P., Conceptual revolutions (Princeton: Princeton University Press, 1992).

(15) Giere, R., ed., Cognitive models of science, Minnesota Studies in the Philosophy of Science, vol. 15 (Minneapolis: University of Minnesota Press, 1992).

(16) Barnes, B., About science (Oxford: Blackwell, 1985).

(17) Bloor, D., Knowledge and social imagery, 2nd ed. (Chicago: University of Chicago Press, 1991).

(18) Collins, H., Changing order: Replication and induction in scientific practice (London: Sage Publications, 1985).

(19) Collins, H., Artificial experts: Social knowledge and intelligent machines (Cambridge, MA: MIT Press, 1990).

(20) Latour, B., Science in action: How to follow scientists and engineers through society (Cambridge, MA: Harvard University Press, 1987).

(21) Brown, J. R., The rational and the social (London: Routledge, 1989).

(22) Fuller, S., Social epistemology (Bloomington, IN: Indiana University Press, 1988).

(23) Goldman, A., Epistemology and cognition (Cambridge, MA: Harvard University Press, 1986).

(24) Harman, G., Change in view: Principles of reasoning (Cambridge, MA: MIT Press/Bradford Books, 1986). Thagard, Computational philosophy of science.

(25) Slezak, P., "Scientific discovery by computer as empirical refutation of the strong programme," Social Studies of Science 19 (1989): 563-600.

(26) Collins, Artificial experts, p. 6.

(27) Laudan, L., Progress and its problems (Berkeley: University of California Press, 1977).

(28) Latour, B., and S. Woolgar, Laboratory life: The construction of scientific facts (Princeton, NJ: Princeton University Press, 1986). Latour, Science in action, p. 256.

(29) Woolgar, S., "Representation, cognition, and self: What hope for an integration of psychology and philosophy?," The cognitive turn: Sociological and psychological perspectives on science, ed. S. Fuller et al. (Dordrecht: Kluwer, 1989) 210-223.

(30) Bloor, Knowledge and social imagery, p. 168.

(31) Bond, A., and L. Gasser, eds., Readings in distributed artificial intelligence (San Mateo: Morgan Kaufmann, 1988).

(32) Durfee, E., V. Lesser, and D. Corkill, "Cooperative distributed problem solving," The handbook of artificial intelligence, vol. IV, ed. A. Barr, P. Cohen, and E. Feigenbaum (Reading, MA: Addison-Wesley, 1989) 83-147.

(33) Gasser, L., "Social conceptions of knowledge and action: DAI and open systems semantics," Artificial Intelligence 47 (1991): 107-138.

(34) Hewitt, C., "Open information systems semantics for distributed artificial intelligence," Artificial Intelligence 47 (1991): 79-106.

(35) Rumelhart, D., J. McClelland, and the PDP Research Group, Parallel distributed processing: Explorations in the microstructure of cognition (Cambridge, MA: MIT Press/Bradford Books, 1986).

(36) Kornfeld, W., and C. Hewitt, "The scientific community metaphor," IEEE Transactions on Systems, Man, and Cybernetics SMC-11 (1981): 24-33.

(37) Crane, D., Invisible colleges: Diffusion of knowledge in scientific communities (Chicago: University of Chicago Press, 1972).

(38) Thagard, P., "Adversarial problem solving: Modelling an opponent using explanatory coherence," Cognitive Science 16 (1992): 123-149.

(39) Hardwig, J., "The role of trust in knowledge," Journal of Philosophy 88 (1991): 693-708.

(40) Dreyfus, H., What computers can't do, 2nd ed. (New York: Harper, 1979).

(41) Moravec, H., Mind children: The future of robot and human intelligence (Cambridge, MA: Harvard University Press, 1988).

(42) Lenat, D., and R. Guha, Building large knowledge-based systems (Reading, MA: Addison-Wesley, 1990).

(43) Thagard, P., "Philosophy and machine learning," Canadian Journal of Philosophy 20 (1990): 261-276.

(44) Carbonell, J., ed., Machine learning: Paradigms and methods (Cambridge, MA: MIT Press, 1990).

(45) Riesbeck, C., and R. Schank, Inside case-based reasoning (Hillsdale, NJ: Erlbaum, 1989).

(46) Thagard, P., et al., "Analog retrieval by constraint satisfaction," Artificial Intelligence 46 (1990): 259-310.

(47) Pearl, J., Probabilistic reasoning in intelligent systems (San Mateo: Morgan Kaufmann, 1988).

(48) Collins, Artificial experts, pp. 4-5.

(49) Larkin, J. H., and H. A. Simon, "Why a diagram is (sometimes) worth ten thousand words," Cognitive Science 11 (1987): 65-99.

(50) Glasgow, J., and D. Papadias, "Computational imagery," Cognitive Science (1992): in press.

(51) Chandrasekaran, B., and N. Narayanan, "Integrating imagery and visual representations," Proceedings of the Twelfth Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Erlbaum, 1990) 670-677.

(52) Thagard, P., D. Gochfeld, and S. Hardy, "Visual analogical mapping," Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Erlbaum, 1992) 522-527.

(53) Longino, H., Science as social knowledge: Values and objectivity in scientific inquiry (Princeton: Princeton University Press, 1990).

(54) Lukes, S., "Methodological individualism reconsidered," The philosophy of social explanation, ed. A. Ryan (Oxford: Oxford University Press, 1973) 119-129.

(55) Durkheim, E., The rules of sociological method, ed. S. Lukes (New York: Free Press, 1982).

(56) Ibid., p. 54.

(57) Mandelbaum, M., "Societal facts," The philosophy of social explanation, ed. A. Ryan (Oxford: Oxford University Press, 1973) 105-118.

(58) Minsky, M., The society of mind (New York: Simon and Schuster, 1986).

(59) Thagard, P., "Computational tractability and conceptual coherence: Why do computer scientists believe that P ≠ NP?," Canadian Journal of Philosophy (forthcoming).

(60) Huberman, B., and T. Hogg, "Phase transitions in artificial intelligence systems," Artificial Intelligence 33 (1987): 155-171.

(61) Sarkar, H., A theory of method (Berkeley: University of California Press, 1983). Perhaps the possibility of divergence of group and individual rationality should not be so surprising, since such divergences are familiar in decision theory. For example, in the prisoner's dilemma, what appears to be the dominant strategy for each prisoner leaves them both worse off. Similarly, in the "tragedy of the commons" the maximization of individual utility leads in the long run to everyone being worse off.

(62) Thagard, Computational philosophy of science, ch. 10.

(63) Kitcher, P., "The division of cognitive labor," Journal of Philosophy 87 (1990): 5-22.

(64) Hull, D., Science as a process (Chicago: University of Chicago Press, 1989).

(65) Solomon, "Scientific rationality and human reasoning."

(66) Ibid., p. 450.

(67) Clearwater, S., B. Huberman, and T. Hogg, "Problem solving by committee," Science 254 (1991): 1181-1183.