
The Unity of Science

First published Thu Aug 9, 2007

The topic of the unity of science includes the following questions: Is there one privileged, most basic kind of stuff, and if not, how are the different kinds of material in the universe related? Can the various sciences (physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics) be unified? Does the unification of these parts of science involve only matters of fact, or are matters of value involved as well? Moreover, what kinds of unity in the sciences are there: is unification a relation between concepts or terms (i.e., a matter of semantics), or between the theories they make up? And is the relation one of reduction, translation, explanation, or logical inference?

These are the kinds of questions that will be addressed in this article. In addressing them, I will consider the often-assumed preference for physics as a privileged locus of unity and ask whether anything follows about the unity of science from the fact that physics studies the most fundamental elements, such as matter and energy. We shall also consider biology, and the extent to which biological entities, from organisms to genes and processes, really are just chemical in nature. And the question of unity extends to explanatory concepts in psychology and the social sciences.

Finally, we consider a very different move, namely, to challenge the very hierarchy presupposed by the question of unity and to redraw the boundaries and lines of interaction and integration that best describe actual scientific practice. How should we evaluate the evidence for disunity and pluralism in science? To what extent should we supplement the attention to logic and language with an interest in practices, images and objects? It is also worth pointing out that positions about the unity of science have important consequences, affecting the way we formulate and solve problems in philosophy (e.g., questions of naturalism), science (e.g., design of education and research projects) and policy (e.g., allocation of resources).


1. Historical development in philosophy and science from Greek philosophy to Logical Empiricism in America

1.1 From Greek thought to Western science

The general questions should be carefully distinguished from any of the different specific theses addressing them and should be noted as the linking thread of a time-honored philosophical debate. The questions about unity belong to a tradition of thought that can be traced back to pre-Socratic Greek cosmology, in particular to the preoccupation with the question of the one and the many. In what senses are the world and, thereby, our knowledge of it, one? A number of representations of the world in terms of a few simple constituents considered fundamental emerged: Parmenides' static substance, Heraclitus' flux of becoming, Empedocles' four elements, Democritus' atoms, Pythagoras' numbers, Plato's forms, and Aristotle's categories. The underlying question of the unity of our types of knowledge was explicitly addressed by Plato in the Sophist as follows: “Knowledge also is surely one, but each part of it that commands a certain field is marked off and given a special name proper to itself. Hence language recognizes many arts and many forms of knowledge.” (Sophist, 257c) Aristotle asserted in On the Heavens that knowledge is of what is primary, and that different ‘sciences’ know different kinds of causes; it is metaphysics that comes to provide knowledge of the underlying kind.

With the advent and expansion of Christian monotheism, the organization of knowledge reflected the idea of a world governed by the laws dictated by God, creator and legislator. From this tradition emerged encyclopedic efforts such as the Etymologies, compiled early in the seventh century by Isidore, Bishop of Seville, and the works of the Catalan Ramon Llull in the Middle Ages and of the French Petrus Ramus in the Renaissance. Llull introduced iconic tree-diagrams and forest-encyclopedias representing the organization of different disciplines (including law, medicine, theology and logic). He also introduced more abstract diagrams—not unlike some found in Cabbalistic and esoteric traditions—in an attempt to encode combinatorially the knowledge of God's creation in a universal language of basic symbols; their combination would then generate knowledge of the secrets of creation. Ramus introduced diagrams representing dichotomies and gave prominence to the view that the starting point of all philosophy is the classification of the arts and sciences. The search for a universal language would continue to be a driving force behind the project of unifying knowledge.

The emergence of a distinctive tradition of scientific thought addressed the question of unity through science's designation of a privileged method, set of concepts and language. In the early 17th century Francis Bacon held that the unity of the sciences resulted from our organization of discovered material facts in the form of a pyramid with different levels of generality; these would be classified in turn according to disciplines linked to human faculties. In accordance with at least three traditions, the Pythagorean tradition, the Bible's dictum in the Book of Wisdom and the Italian commercial tradition of bookkeeping, Galileo proclaimed at the turn of the 17th century that the Book of Nature had been written by God in the language of mathematical symbols and geometrical truths, and that in it the story of Nature's laws was told in terms of a reduced set of objective, quantitative primary qualities: extension, quantity of matter and motion. In the 17th century, mechanical philosophy and Newton's systematization from basic concepts and first laws of mechanics became the most promising framework for the unification of natural philosophy. After the demise of Laplacian molecular physics in the first half of the 19th century, this role was taken over by ether mechanics and energy physics.

1.2 Rationalism and Enlightenment

Descartes and Leibniz gave this tradition a rationalist twist centered on the powers of human reason; it became the project of a universal framework of exact categories and ideas, a mathesis universalis. Like Llull's, their conception of unity was determined by rules for the analysis of ideas into elements and for their synthesis into combinations. According to Descartes, the science of geometry, with its demonstrative reasoning from the simplest and clearest thoughts, constitutes the paradigm for the goal of unifying all knowledge. Leibniz proposed a General Science in the form of a Demonstrative Encyclopedia. This would be based on a ‘catalogue of simple thoughts’ and an algebraic language of symbols, characteristica universalis, which would render all knowledge demonstrative and allow disputes to be resolved by precise calculation. Both defended the program of founding much of physics on metaphysics.

Belief in the unity of science or knowledge, along with the universality of rationality, was at its strongest during the European Enlightenment. The most important expression of the encyclopedic tradition came in the mid-eighteenth century from Diderot and D'Alembert, editors of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (1751-1772). Following earlier classifications by Nichols and Bacon, their diagram presenting the classification of intellectual disciplines was organized in terms of a classification of human faculties. Diderot stressed in his own entry, ‘Encyclopaedia’, that the word Encyclopedia signifies the unification of the sciences. The function of the encyclopedia was to exhibit the unity of human knowledge. Diderot and D'Alembert, in contrast with Leibniz, made classification by subject primary, and introduced cross-references instead of logical connections.

1.3 German tradition since Kant

For Kant the unity of science is not the reflection of a unity found in nature; rather, it has its foundations in the unifying character or function of concepts and of reason itself. Nature is precisely our experience of the world under the universal laws that include some such concepts. Kant saw one of the functions of philosophy as determining the precise unifying scope and value of each science. For instance, he contrasted the methods employed by the chemist, organized by empirical regularities, with those employed by the mathematician or physicist, organized by a priori laws, and held that biology is not reducible to mechanics, since biology involves explanations in terms of final causes (see Critique of Pure Reason, Critique of Judgment and Metaphysical Foundations of Natural Science). A devoted but not exclusive follower of Newton's achievements and insights, he maintained through most of his life that mathematization and a priori universal laws were preconditions for genuine scientific character (like Galileo and Descartes earlier, and Carnap later, Kant believed that mathematical exactness constituted the main condition for the possibility of objectivity). By the end of his life, after having become acquainted with Lavoisier's achievements in chemistry, Kant thought of the unification of physics and chemistry not so much in terms of mathematization but, rather, in terms of a priori principles regarding the properties of a universal ether (Friedman 1992). With regard to biology, insufficiently grounded in the fundamental forces of matter, its inclusion required the introduction of the idea of purposiveness. More generally, for Kant unity was a regulative principle of reason, namely, an ideal guiding the process of inquiry toward a complete empirical science, with its empirical concepts and principles grounded in the so-called concepts and principles of the understanding that constitute and objectify empirical phenomena.

Kant's ideas set the frame of reference for discussions of the unification of the sciences in German thought throughout the nineteenth century. He gave philosophical currency to the notion of world-view (Weltanschauung) and, indirectly, world-picture (Weltbild), thereby establishing unity of science as an intellectual ideal among philosophers and scientists. In Great Britain this idealist unifying spirit (and other notions from an idealist and romantic turn) took form in William Whewell's philosophy of science, e.g., his notion of the consilience of inductions.

This German intellectual current culminated in philosophers such as Windelband, Rickert and Dilthey. In these and similar views, a world-view often included elements of evaluation and life meaning. Kant had also distinguished between several types of judgments that, in turn, characterized different intellectual disciplines. In this way he established the basis for the famous distinction between the natural sciences (Naturwissenschaften) and the cultural, or social, sciences (Geisteswissenschaften) introduced by Wilhelm Dilthey. For Dilthey, according to his Life-philosophy (Lebensphilosophie), science, philosophy, art and religion are on a par as world-views expressing different attitudes in life. Followers of Dilthey's distinction, such as Wilhelm Windelband, Heinrich Rickert and Max Weber (although the first two preferred Kulturwissenschaften, which excluded psychology), claimed that the difference in subject matter between the two kinds of sciences forced a distinctive difference between their respective methods. Their preoccupation with the historical dimension of human phenomena, along with the Kantian emphasis on the conceptual basis of knowledge, led to the suggestion that the natural sciences aimed at generalizations about abstract types and properties, while the human sciences studied concrete individuals and complexes. The human case suggested a different approach based on valuation and personal understanding (Weber's Verstehen).

This approach stood in opposition to the more empiricist view, dominant since Hume, Comte and Mill, on which the moral or social sciences relied on conceptual and methodological analogies with the natural sciences: with Newtonian and statistical mechanics, as in Condorcet's ‘social mathematics’, and also with biology, as in Saint-Simon's ‘social physiology’. But then the question arose of how the sciences of man were themselves organized—how social sciences such as sociology and economics were related to, say, the psychology of individuals.

Empiricists assumed methodological individualism, but not without qualifications. Comte followed his Enlightenment predecessors in combining an analytical sense of conceptual order and a historical sense of progress. He emphasized a pyramidal hierarchy of disciplines in his ‘encyclopedic law’ or order, from the most general sciences about the simplest phenomena to the most specific sciences about the most complex, each depending on knowledge from its more general antecedent: from the inorganic physical sciences (arithmetic, geometry, mechanics, astronomy, physics and chemistry) to the organic physical ones, biology and the new ‘social physics’, soon to be renamed sociology (Comte 1830-1842). Mill's approach to the relation between the sciences was methodological. He followed Comte as well as Newton in his project to formulate the logic of the different sciences, natural and human. In particular he adopted for the human sciences these authors' atomism, law-orientedness and inductivism, as well as their interest in the history of the natural sciences. He found a number of different methods of inference, even within the natural sciences, and endorsed the a posteriori methodology of ‘inverse deduction’, or historical method, as the way to generalizations in the social sciences (Mill 1843, Book VI).

The generation of German social scientists in the second half of the 19th century recapitulated the British debates between a priori and a posteriori methods. Gustav Schmoller, representing the Historical School, defended an empirical, causal methodology, combining the empiricist predecessors' interest in the complexity of human phenomena with an interest in social and political reform (Tribe 2005). Besides criticizing Dilthey and the Weltbild or Weltanschauung tradition, he opposed Carl Menger's deductive, a priori, quasi-Platonic and ahistorical approach to economic concepts and truths.

The Weltbild tradition influenced the physicists Max Planck and Ernst Mach, who engaged in a heated debate about the precise character of the unified scientific world-picture; the tradition culminated in the first two decades of the twentieth century with the work of Albert Einstein (Holton 1998). Mach's view, the more influential, was phenomenological and Darwinian: the unification of knowledge took the form of an analysis of ideas into elementary sensations (neutral monism) and was ultimately a matter of adaptive economy of thought. Planck adopted a realist view that took science to approach gradually the complete truth about the world, and adopted as fundamental the thermodynamical principles of energy and entropy (on the Mach-Planck debate see Toulmin 1970). These world-pictures constituted some of the alternatives to a long-standing mechanistic view that since Newton had affected biology as well as most branches of physics.

In the same German tradition, amidst the proliferation of books on the unity of science, the German energeticist Wilhelm Ostwald declared the 20th century the ‘Monistic century’. During the 1904 World's Fair in St. Louis, the German psychologist and Harvard professor Hugo Munsterberg organized a congress under the title ‘Unity of Knowledge’; invited speakers were Ostwald, Ludwig Boltzmann, Ernest Rutherford, Edward Leamington Nichols, Paul Langevin and Henri Poincaré. In 1911 the International Committee of Monism held its first meeting in Hamburg, with Ostwald presiding.[1] Two years later it published Ostwald's monograph, Monism as the Goal of Civilization. In 1912, Mach, Felix Klein, David Hilbert, Einstein, and others signed a manifesto aiming at the development of a comprehensive world-view. Unification remained a driving scientific ideal. In the 1890s Gottlob Frege and Hilbert had aimed at setting the mathematical sciences on rigorous foundations. The ideal had the form of an axiomatic system. Frege aimed at founding arithmetic on axioms of logic, and Hilbert proposed foundations of geometry on formal axioms (which define the basic concepts of geometry purely formally and implicitly and, a point less heeded, must then be linked back to our intuitions of experience). Subsequently Hilbert applied the same approach to unifying physics: he hoped that Einstein's general theory of relativity could be synthesized with the theory of electromagnetism to form a foundation for all of physics. In 1920 Hilbert proposed his general formalist research project for the axiomatic formalization of mathematics, which he also extended to physics. Mathieu Leclerc du Sablon published his L'Unité de la Science (1919), exploring metaphysical foundations, and Johan Hjort published The Unity of Science (1921), sketching out a history of philosophical systems and unifying scientific hypotheses.

1.4 Unity and reductionism in logical empiricism

The question of unity engaged science and philosophy alike. In the 20th century the unity of science became a distinctive theme of the scientific philosophy of logical empiricism. Logical empiricists—known controversially also as logical positivists—and most notably the founding members of the Vienna Circle in their Manifesto, adopted the Machian banner of ‘unity of science without metaphysics’: a container-model of unity based on a demarcation between science and metaphysics, a unity of method and language that included all the sciences, natural and social. Notice that a common method does not imply a more substantive unity of content involving theories and their concepts. Around the same time, for instance, the emphasis on the rules and uses of language (logical, scientific and ordinary) was part of the more general so-called ‘linguistic turn’.

A stronger, reductive model within the Vienna Circle was recommended by Rudolf Carnap in his Logical Construction of the World (1928). Carrying the Kantian connotation of the term ‘constitutive system’, it was inspired by Hilbert's axiomatic approach to formulating theories in the exact sciences and by Frege's and Russell's logical constructions in mathematics: it was predicated on the formal values of simplicity, neutrality and objectivity, and was characterized by logical constructions out of basic concepts in axiomatic structures, with rigorous reductive logical connections between concepts at different levels.

The logical connections were provided by biconditional statements, or constitution sentences (these changed to conditionals, or reduction sentences, when Carnap encountered the problem of dispositional predicates). Different constitutive systems or logical constructions would serve different purposes. In one system of unified science the construction connects concepts and laws of the different sciences at different levels, with physics (with its genuine laws) as fundamental, lying at the base of the hierarchy. Because of the emphasis on the formal and structural properties of our representations, the individuality of concepts, like that of nodes in a railway network, was determined by their place in the whole structure, and hence presupposed connective unity. Objectivity and unity went hand in hand. The formal emphasis developed further in Logical Syntax of Language (1934).
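
Schematically, a reduction sentence replaces an untenable explicit definition of a dispositional predicate with a conditional test schema; the water-solubility illustration is Carnap's own, from ‘Testability and Meaning’:

    \forall x\, [\, Q(x) \rightarrow (P(x) \leftrightarrow R(x)) \,]

where Q is the test condition (x is placed in water), R the response (x dissolves) and P the dispositional predicate (x is soluble). Unlike a biconditional definition, the reduction sentence specifies the predicate only when the test condition obtains, leaving it partially interpreted elsewhere.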

Alternatively, all scientific concepts could be constituted or constructed in a different system, in the protocol language, out of classes of elementary experiential concepts. The basic experiences do not provide a reductive analysis of theoretical concepts; nor are the basic empirical concepts (red, etc.) the outcome of an analysis of experience. They are not atomic in the Machian sense, but derived from the field of experience as a complex whole in the manner proposed by Gestalt psychology. This construction of scientific knowledge took into account the possibility of empirical grounding of theoretical concepts and the testability of theoretical claims. Unity of science in this context was an epistemological project.

Carnap was influenced by the phenomenological tradition (through Husserl himself), by the empiricist tradition (especially Russell and Mach) and by the ideals of simplicity and reductive logical analysis in the early works of Russell and Wittgenstein. From the formalist point of view of the logicist and neo-Kantian traditions, Carnap's models of unity expressed his concern with the possibility of objectivity of scientific knowledge. The same concern was expressed in the subsequent idea of unity of science in the form of a physicalist language, the intersubjective language that translates the subjective language of experience into an objective and universal one. Carnap's pragmatic pluralism would extend to logic with his Principle of Tolerance—in Logical Syntax of Language (1934)—and subsequently—in ‘Empiricism, Semantics, and Ontology’ (1950)—to the plurality of possible ‘linguistic frameworks’.

Otto Neurath, by contrast, favored a less idealized and less reductive model of unity predicated on the complexity of empirical reality. He spoke of an ‘encyclopedia-model’, instead of the classic ideal of the pyramidal, reductive ‘system-model’. The encyclopedia-model took into account the presence within science of uneliminable and imprecise terms from ordinary language and the social sciences, and emphasized a unity of language and the local exchanges of scientific tools. Specifically, Neurath stressed the material-thing-language he called ‘physicalism’, not to be confused with an emphasis on the vocabulary of physics. His view was not constrained by Carnap's ideals of conceptual precision, deductive systematicity and logical rigor. Unified science, in addition, would not sit on secure and steady foundations; like a boat at sea, it would do without them. This weaker model of unity emphasized empiricism and the normative unity of the natural and the human sciences.

Like Carnap's unified reconstructions, Neurath's had pragmatic motivations. Neurath's rejection of reduction to physics was based on considerations of descriptive relevance, and of explanatory and predictive power, involving, for instance, social phenomena. By the same token, his emphasis on the importance of unity—without reduction—was also epistemic and pragmatic. Unity was meant as a tool for cooperation, and it was motivated by the need for successful treatment—prediction and control—of complex phenomena in the real world that involve properties studied by different theories or sciences (from real forest fires to social policy): unity of science at the point of action (Cat, Cartwright and Chang 1996). It is an argument from holism, a counterpart of Duhem's claim that only clusters of hypotheses are confronted with experience. Neurath spoke of a ‘boat’, a ‘mosaic’, an ‘orchestration’, a ‘universal jargon’. In the wake of institutions such as the International Committee on Monism and the International Council of Scientific Unions, in 1934 Neurath spearheaded a movement for Unity of Science that encouraged international cooperation among scientists and launched the project of an International Encyclopedia of Unity of Science. Neurath wrote repeatedly on the connections between the project of unity of science and the movements for economic socialization, educational reform and peaceful and cooperative internationalization and unification of mankind.

At the end of the Eighth International Congress of Philosophy, held in Prague in September of 1934, Neurath proposed a series of International Congresses for the Unity of Science. These took place in Paris, 1935; Copenhagen, 1936; Paris, 1937; Cambridge, England, 1938; Cambridge, Massachusetts, 1939; and Chicago, 1941. For the organization of the congresses and related activities, in 1936 Neurath founded the Unity of Science Institute, renamed in 1937 the International Institute for the Unity of Science, a special department of his Mundaneum Institute at The Hague. Neurath had founded the Mundaneum in 1934, after the fascist coup in Vienna found him away in Moscow, and it already included the International Foundation for Visual Education, founded in 1933. The Institute's executive committee was composed of Neurath, Philip Frank and Charles Morris; the Organization Committee for the International Congresses for the Unity of Science was composed of Neurath, Carnap, Frank, Joergen Joergensen, Morris, Louis Rougier and Susan Stebbing. Supporters of the movement were widely distributed: Carnap, Feigl, Malisoff, Morris, Dewey, Lewin, Hempel, Lenzen and others in the USA; Stebbing and Woodger in England; Ajdukiewicz, Lukasiewicz, Tarski and others in Poland; Rey, Rougier and Petzall in France; Bohr and Joergensen in Denmark; Naess in Norway; Kaufmann, Waismann and Zilsel in Austria; Mises and Reichenbach in Turkey; Enriques in Italy. In addition, Neurath contemplated establishing branches in Latin America and China. Both Carnap and Neurath took the ideal of unified science to have deep social and political significance against metaphysics. At the same time Karl Popper was defending a criterion to demarcate science from metaphysics based on the falsifiability of all genuinely scientific propositions.

After the Second World War a discussion of unity engaged philosophers and scientists in the ‘Inter-Scientific Discussion Group’ in Cambridge, Massachusetts, (founded by Philip Frank, himself one of the founders of the Vienna Circle) which would later become the Unity of Science Institute. The group was both an extension of the Vienna Circle and a reflection of local concerns in a culture of computers and nuclear power. The characteristic feature of the new view of unity was the idea of cross-fertilization, instantiated in the creation of war-boosted cross-disciplines such as cybernetics, computation, electro-acoustics, psycho-acoustics, neutronics, game theory, and biophysics.

1.5 Unity and reduction in logical empiricism in America: Postwar orthodoxy in philosophy of science

Philosophy of science consolidated itself in the 1950s around a positivist orthodoxy roughly characterized as follows: a syntactic, formal approach to theories, logical deductions and axiomatic systems, with a distinction between theoretical and observational vocabularies, and empirical generalizations. Unity and reduction may be introduced in terms of the following distinctions: epistemological and ontological, synchronic and diachronic. The specific elements of the dominant accounts stand or fall with the attitudes taken towards the elements of the orthodoxy just mentioned.

Two formulations by logical positivists in the United States again placed the question of unity of science at the core of philosophy of science: Carl Hempel's deductive-nomological model of explanation and Ernest Nagel's model of reduction. Both were fundamentally epistemological models; both, specifically, were explanatory. The emphasis on logical structure makes unity of explanation and reduction chiefly of the synchronic kind. Hempel's model characterizes the scientific explanation of events as a logical argument that expresses their expectability in terms of their subsumption under an empirically testable generalization. In the 1950s, when positivism was extending to the social sciences, the model was offered as a boundary-criterion of demarcation. Unity of method was cast as a normative ideal of science, in the form of a container-model of unification. Explanations in the historical sciences too had to fit the model if they were to count as scientific. The universal applicability of Hempel's model was soon challenged, notably by William Dray. This reversal of fortune opened a debate about the nature of the historical sciences that remains unresolved. In the process, some have claimed as historical some of the natural sciences, such as geology and biology. It has been argued that Hempel's model, especially the requirement of empirically testable strict universal laws, is satisfied neither by the physical sciences nor by the historical sciences, including biology.
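
Schematically, the D-N model casts an explanation as a deductive argument in which the explanandum follows from laws together with particular conditions (the familiar Hempel-Oppenheim schema):

    \begin{array}{ll}
    C_1, C_2, \ldots, C_k & \text{(statements of particular antecedent conditions)}\\
    L_1, L_2, \ldots, L_r & \text{(general laws)}\\
    \hline
    E & \text{(description of the empirical phenomenon to be explained)}
    \end{array}

Its demarcating force lies in the requirement that at least one empirically testable general law occur essentially among the premises.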

Nagel's model of reduction is a model of scientific structure and explanation as well as of scientific progress. It is based on the problem of relating different theories as different sets of theoretical predicates.

Reduction poses two requirements: connectability and derivability. Connectability of laws of different theories requires meaning invariance in the form of extensional equivalence between descriptions, with bridge principles between coextensive but distinct terms in different theories.

Nagel envisaged two kinds of reductions: homogeneous and heterogeneous. When the vocabulary of the reduced theory is contained in that of the reducing theory, the reduction is homogeneous. When the reduced theory contains terms absent from the reducing one, the reduction is heterogeneous and requires bridge principles. Derivability requires a deductive relation between the laws involved. In the quantitative sciences, the derivation often involves taking a limit. In this sense the reduced science is considered an approximation to the reducing, new one.
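
A standard textbook illustration of such a limit relation (offered here for illustration, not as one of Nagel's own examples) is the recovery of the Newtonian expression for momentum from the relativistic one as velocities become negligible relative to the speed of light:

    p \;=\; \frac{mv}{\sqrt{1 - v^2/c^2}} \;\longrightarrow\; mv \qquad \text{as } v/c \to 0

It is precisely the status of such limits, and of the terms they relate, that the criticisms below call into question.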

Since Nagel's influential model of reduction by derivation, most discussions of unity of science have been cast in terms of reductions between concepts and between theories. A distinctive ontological model is this: the hierarchy of levels of reduction is fixed by part-whole relations, with levels of aggregation of entities running all the way down to atomic particles, rendering microphysics the fundamental science.

A classic reference of this kind, away from the syntactic model, is Oppenheim and Putnam's ‘The Unity of Science as a Working Hypothesis’ (Oppenheim and Putnam 1958). Oppenheim and Hempel had worked in the 1930s on taxonomy and typology (a question of broad intellectual, social and political relevance in Germany at the time). Oppenheim and Putnam intended to articulate an idea of science as a reductive unity of concepts and laws to those of the most elementary elements. They also defended it as an empirical hypothesis about science—not an a priori ideal, project or precondition—claiming that the evolution of science manifested a trend in that unified direction, out of the smallest entities and lowest levels of aggregation. In an important sense the evolution of science recapitulates, in reverse, the evolution of matter, from aggregates of elementary particles to the formation of complex organisms and species. Unity, then, is manifested not just in mereological form, but also genealogically or historically.

2. Recent and contemporary debates in science and philosophy since the 1960s

The rejection of such models and of their emendations has occupied the last four decades of philosophical discussion about unity in and of the sciences, especially in connection with psychology and biology, and more recently chemistry. A valuable consequence has been the strengthening of philosophical projects and communities devoting more sustained and sophisticated attention to special sciences other than physics. The same spirit, and often the same rhetoric, guiding the reactionary and the more progressive camps has been put to use to serve professional and institutional interests, especially when considerations of funding and commercial interest are involved—the most socially visible cases have been the (terminated) construction of the Superconducting Supercollider (SSC) and the Human Genome Project (Cat 1998; Kevles and Hood 1992).

The focus since the 1930s had been on a syntactic approach: on physics as the paradigm of science, on deductive logical relations as the form of cognitive or epistemic goals such as explanation and prediction, and on theory and empirical laws as paradigmatic units of scientific knowledge (Suppe 1977; Grünbaum and Salmon 1988). The historicist turn in the 1960s, the semantic turn in philosophy of science in the 1970s and a renewed interest in special sciences changed all that. Debates over the demarcating and normative value of Hempel's D-N model of explanation, even over its descriptive power, have pushed philosophy of science away from the narrow positivist orthodoxy (Salmon 1989). The very centrality and universality, desirability and availability of laws and theories have become controversial. So have, as a consequence, the relevance and feasibility of global reductions based on such concepts. The status, form and interpretation of the bridge principles connecting different levels of conceptualization have been called into question. The very structure of the hierarchy of levels has lost its validity, even for those who believe in it as a model of autonomy of levels rather than an image of fundamentalism. The focus has shifted to alternative units, elements and modes of knowledge and scientific achievement. And different models of unity, emphasizing different kinds of relations between different areas of inquiry, have gained importance. More generally, then, many driving agendas and background assumptions have gradually been exposed, criticized and qualified, or replaced. The debate over unity has broadened along many new dimensions as a result.

2.1 Antireductionism in the 1960s

Feyerabend, for instance, promptly rebelled and rejected the adequacy of the conditions for Nagelian reductionism. In particular, he challenged the demand of extensional equivalence as an inadequate demand of ‘meaning invariance’ and approximation, and with it the possibility of deductive connections. Mocking the positivist legacy of progress through unity, empiricism and anti-dogmatism, he decried these constraints famously as intellectually dogmatic, conceptually weak and methodologically overly restrictive. He extolled, instead, the merits of the new theses of incommensurability and methodological pluralism.

Schaffner argued that the deductive connection would be guaranteed provided that the old, reduced theory was “corrected” beforehand (Schaffner 1964). The evolution and the structure of scientific knowledge could then be neatly captured, using Schaffner's expression, by “layer-cake reduction.”

But the likes of Feyerabend were not persuaded. The terms ‘length’ and ‘mass’, or the symbols l and m, for instance, may be the same in Newtonian and relativistic mechanics, the term ‘electron’ the same in classical physics and quantum mechanics, the term ‘atom’ the same in quantum mechanics and in chemistry, or ‘gene’ the same in Mendelian genetics and molecular genetics. But the corresponding concepts, they argued, are not. Concepts or words are to be understood as getting their content or meaning within a holistic or organic structure, even if the organized wholes are the theories that include them. From this point of view, different wholes, whether theories or Kuhnian paradigms, manifest conceptual incommensurability. Therefore, the derived, reducing theories typically are not the allegedly reduced, old ones; and their derivation sheds no relevant insight into the relation between the original old one and the new (Feyerabend 1964; Sklar 1967).

From a historical point of view, the positivist model collapsed the distinction between synchronic and diachronic reduction, or between reductive models of the structure and of the evolution or succession of science. The point of Kuhn and Feyerabend's historicism was to drive a wedge between the two dimensions and reject the linear model of scientific change in terms of accumulation and replacement. In addition, the other pillar of positivist orthodoxy was challenged as well: the reduction relation between theoretical concepts and empirical descriptions of phenomena. Kuhn and Feyerabend introduced the notion of theory-ladenness in their respective accounts (Hanson 1958; Kuhn 1962; Cat forthcoming a). The description of the very world that science describes and explains is theory-relative. For Kuhn replacement becomes, then, radical, non-cumulative change in which one world—or, less literally, one world-picture, one paradigm—replaces another (after a revolutionary episode of crisis and proliferation of alternative contenders).

This image constitutes a form of pluralism, and, like the reductionism it is meant to replace, it can be either synchronic or diachronic. Here is where Kuhn and Feyerabend parted ways. For Kuhn synchronic pluralism only describes the situation of crisis and revolution between paradigms. For Feyerabend history is less monistic, and pluralism is and should remain a synchronic and diachronic feature of science and culture (Feyerabend, here, thought science and society inseparable, and followed Mill's philosophy of individualism and democracy).

2.2 Neo-reductionism in philosophy of science

Neo-Nagelian accounts have attempted to solve Nagel's problem of reduction between putatively incompatible theories. Here are a few:

2.2.1 Analogical and Partial Reductions

Schaffner modified Nagel's two-term relation account by requiring it to be satisfied not necessarily by the two original theories, T1 and T2, new and old, more and less general, respectively, but by the modified theories T'1 and T'2. Explanatory reduction is strictly a four-term relation in which T'1 is “strongly analogous” to T1 and, with the insight that the more fundamental theory can offer, corrects the older theory, T2, changing it to T'2. He also required that the bridge laws be synthetic identities, in the sense that they be factual, empirically discoverable and testable, rather than conventions (Schaffner 1967; Sarkar 1998).[2] The difficulty remained especially with the task of specifying or giving a non-contextual, transitive account of the relations between T and T' (Wimsatt 1976).

Subsequently he has focused on the biomedical sciences and advocated the notion of partial reductions: localized (focused on parts of higher-level systems only) inter-level causal mechanical explanations (Schaffner 1993; Schaffner 2006).

2.2.2 Counterfactual Interpretation

Glymour offered a set of semantic and syntactic conditions of reduction, along with a counterfactual interpretation. For instance, syntactic conditions in the form of limit relations and ceteris paribus assumptions have the function of explaining why the reduced theory works where it does and fails where it does not (Glymour 1969).

2.2.3 Domain-relative Reductions

Nickles introduced the distinction between ‘domain preserving’ and ‘domain combining’ reductions. Domain-preserving reductions are intra-level reductions and occur between T1 and its predecessor T2. In Nickles' parlance, however, T2 “reduces” to T1. This notion of ‘reduction’ does not refer to any relation of explanation (Nickles 1973).

Other accounts have identified different and weaker sorts of reduction.

2.2.4 Type- vs. Token Physicalism and the Autonomy of the Special Sciences

Fodor countered Oppenheim and Putnam's hypothesis under the rubric ‘the disunity of science’. He defended the autonomy of the special sciences from physics arguing for a distinction between type-physicalism and token-physicalism (Fodor 1974). Type-physicalism is characterized by a type-type identity between the predicates/properties in the laws of the special sciences and those of physics. By contrast, token-physicalism is based on the token-token identity between the predicates/properties of the special sciences and those of physics; every event under a special law falls under a law of physics and bridge laws express contingent token-identities between events. Token-physicalism operates as a demarcation criterion for materialism.

Fodor argued that the predicates of the special sciences correspond to infinite or open-ended disjunctions of physical predicates, and these disjunctions do not constitute natural kinds identified by an associated law. Token-physicalism is the only alternative. All special kinds of events are physical but the special sciences are not physics.
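
Schematically, in the spirit of Fodor's own presentation: a special-science law connects special predicates whose physical realizers form open disjunctions, and the disjunctions themselves are not kinds figuring in any law of physics:

    \begin{array}{c}
    S_1 x \;\rightarrow\; S_2 x \qquad \text{(special-science law)}\\[4pt]
    S_1 x \;\leftrightarrow\; P_1 x \vee P_2 x \vee \cdots \qquad\quad
    S_2 x \;\leftrightarrow\; P^{*}_1 x \vee P^{*}_2 x \vee \cdots
    \end{array}

Each token event falling under S1 is identical with some token falling under one of the Pi, which secures token-physicalism without type-identity.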

2.2.5 Hierarchy-relative and Approximate Reductions

Sarkar distinguished different kinds of reduction in terms of four criteria, two epistemological and two ontological: fundamentalism, approximation, abstract hierarchy and spatial hierarchy. Fundamentalism implies that the features of a system can be explained only in terms of factors and rules from another realm. Abstract hierarchy is the assumption that the representation of a system involves a hierarchy of levels of organization, with the explanatory factors being located at the lower levels. Spatial hierarchy is a special case of abstract hierarchy in which the criterion of hierarchical relation is a spatial part-whole or containment relation. Strong reduction satisfies the three ‘substantive’ criteria, whereas weak reduction satisfies only fundamentalism. Approximate reductions—strong and hierarchical—are those which satisfy the criterion of fundamentalism only approximately (Sarkar 1998; also Ramsey 1992; Lange 1995; Cat 2005; Cat forthcoming c).

The merit of Sarkar's proposal resides in its systematic attention to hierarchical conditions and, more originally, to different conditions of approximation:

A1. Approximation relations may be explicit or implicit.

A2. Approximations may be corrigible, incorrigible in practice, or incorrigible in principle.

A3. The maximal effects of approximations may be estimable, not estimable in practice, or not estimable in principle.

A4. Approximations may involve: only mathematically justified procedures (such as taking limits), only procedures justified by rule from the fundamental realm, both, or neither.

A5. Approximations may be context-dependent or context-independent.

A6. Approximations may involve counterfactual assumptions (not permitted by the fundamental rules).

2.2.6 Semantic and model-theoretic approach

The shift in accounts of scientific theory from syntactic to semantic approaches has changed conceptual perspectives and, accordingly, formulations and evaluations of reductive relations and reductionism. Even so, examples of the semantic approach focusing on mathematical structures and the satisfaction of set-theoretic relations have retained syntactic features—including the axiomatic form of theory—in the discussion of reduction (Sarkar 1998). In this sense, the structuralist approach can be construed as a neo-Nagelian account. Balzer, Sneed, Moulines and, more recently, Ruttkamp, for instance, have championed the more traditional structuralist semantic approach (Balzer and Moulines 1996; Moulines 2006; Ruttkamp 2000; Ruttkamp and Heidema 2005).

The shift extends to the more recent notion of models that do not fall under the strict semantic or model-theoretic notion of mathematical structures (Giere 2001; Morgan and Morrison 1999; Cat 2005). This is a more flexible framework regarding the relevant formal relations and the scope of relevant empirical situations; it is implicitly or explicitly adopted by most of the accounts of unity without reduction discussed below.

2.2.7 Reductionism without laws or theories

Wimsatt rejected the claim that reduction, as a relation of explanation, needs to be a relation between theories, or even to involve any theory. Wimsatt's account focuses on ‘inter-level’ explanations in the form of ‘compositional redescription’ and causal mechanisms. He also defended the pursuit of biconditionals, or even Schaffner identities, as factual relations, in terms of their heuristic value (Wimsatt 1976). The heuristic value extends to the preservation of the higher-level, reduced concepts, especially for cognitive and pragmatic reasons, including reasons of empirical evidence.

Spector rejected, as Wimsatt did, the appeal to formal laws and deductive relations and offered, instead, a ‘replacement analysis’ for sets of concepts or vocabularies. This approach, he argued, allows for talk of entity reduction or branch reduction, even of direct theory replacement without the operation of laws, and circumvents vexing difficulties raised by bridge principles and the deductive derivability condition (self-reduction, infinite regress, etc.). Instead he spoke of replacement functions as meta-linguistic statements. As Sellars had argued in the case of explanation, Spector rejected the need for derivation in testing and reduction. Finally, replacement can be in practice or in theory. Replacement in practice does not involve elimination of the reduced or replaced entities or concepts and, following Wimsatt, Spector defended their heuristic value in evidential judgments (Spector 1978).

Note, however, the following: the compartmentalization of theories and their concepts or vocabularies into levels neglects the fact that there are empirically meaningful and causally explanatory relations between entities or properties at different levels. If these relations are neglected as theoretical knowledge and relegated to the status of mere bridge principles, the possibility of completeness of knowledge is seriously jeopardized. Maximizing completeness of knowledge here requires unity of all phenomena at all levels and of anything between these levels. Any bounded region or body of knowledge neglecting such cross-boundary interactions is radically incomplete, and not just confirmationally or evidentially so (Kincaid 1997; Cat 1998).

2.2.8 Eliminativism

The most radical form of reduction as replacement has made a considerable impact through work by Paul and Patricia Churchland, in philosophy of psychology and philosophy of mind (Churchland 1981; Churchland 1986). On their view the vocabulary of the reducing theories replaces that of the reduced ones.

2.3 Metaphysics

The main contemporary source of conceptual development for examining the question of unification and reduction has come from metaphysical discussions. At one end, we have ontological eliminativism (Rorty 1972). Headed in the opposite direction, arguments concerning new concepts such as multiple realizability, by Putnam, Kim, Fodor and others, led to functionalism, to a distinction between type-type and token-token reductions, and to the examination of their implications. The concepts of emergence, supervenience and downward causation are related metaphysical tools for generating and evaluating proposals about unity and reduction in the sciences. This literature has enjoyed its chief sources and developments in general metaphysics and in philosophy of mind and psychology (Davidson 1969; Putnam 1975; Fodor 1975; Kim 1993).

Supervenience, first introduced by Davidson in discussion of mental properties, is the notion that a system with properties at one level is composed of entities at a lower level and that its properties are determined by the properties of the lower-level entities or states. The relation of determination is such that no changes at the higher level occur without changes at the lower level. Like token-reductionism, supervenience has been adopted by many as the poor man's reductionism. A different case for the autonomy of the macrolevel, argued by Kincaid, is based on the notion of multiple supervenience (Kincaid 1997; Meyering 2000).
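
One common way of putting the determination claim formally (a generic statement, not any particular author's formulation): a family A of higher-level properties supervenes on a family B of lower-level properties just in case

    \forall x\, \forall y\; [\, x \sim_B y \;\rightarrow\; x \sim_A y \,]

that is, any two systems (or states) indiscernible with respect to their B-properties are indiscernible with respect to their A-properties; there can be no A-difference without a B-difference.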

The denial of such remedial, weaker forms of reductionism is the basis for the concept of emergence. This concept has been widely applied in discussions of complexity.

Connected to the concept of emergence is downward causation. It captures the autonomous, genuine causal power of higher-level entities or states, especially upon lower-level ones. The most extreme and most controversial version, suggested by Meehl and Sellars's discussion of emergence, includes a violation of the laws that regulate the lower level (Meehl and Sellars 1956; Campbell 1974). Weaker forms require compatibility with the microlaws (for a brief survey and discussion see Robinson 2005; on downward causation without top-down causes, see Craver and Bechtel 2007).

Another general argument for the autonomy of the macrolevel, in the form of non-reductive materialism, has been a cognitive type of functionalism, namely, cognitive pragmatism (Van Gulick 1992). This account links ontology to epistemology. It gives representations four pragmatic dimensions: the nature of the causal interaction between theory-user and theory, the nature of the goals to whose realization the theory can contribute, the role of indexical elements in fixing representational content, and differences in the individuating principles applied by the theory to its types. Wimsatt's and Spector's arguments above are of this kind. A more ontologically substantive account of functional reduction is Ramsey's bottom-up construction by reduction: transformation reductions streamline formulations of theories in such a way that they extend basic theories upwards by engineering their application to specific contexts or phenomena. As a consequence, they reveal, by construction, new relations and systems antecedently absent from a scientist's understanding of the theory—independently of a top or reduced theory (Ramsey 1995).

2.4 Epistemology informal and formal: unity, explanation and understanding

Other concepts of unity have focused on its epistemological dimension.

2.4.1 Unity, explanation and understanding

Unification has been defended, originally by Friedman and Kitcher, on cognitive grounds: unification, measured as the number of independent explanatory laws or phenomena conjoined in a theoretical structure, contributes understanding and confirmation from the fewest basic kinds of phenomena, regardless of issues of derivation (Friedman), or provides explanatory power in terms of a few types of derivation or argument patterns (Kitcher) (Friedman 1974; Kitcher 1981; Kitcher 1989).

Working out the scope and the conditions of validity of the epistemic project has triggered another line of literature.

The views on scientific explanation have evolved away from the Hempelian formal spirit. According to some, the source of understanding provided by scientific explanations has accordingly been misidentified: the genuine source often lies, instead, in causal explanation, or in causal mechanism. The standard reference here is Cartwright (Cartwright 1983; Cartwright 1989). Cartwright 1983 emphasizes the gap between explanation, on the one hand, and neat unity, truth and universality, on the other. Cartwright 1989 emphasizes the causal ontology underlying both the nomological and the general character of scientific laws and useful knowledge. A direct and explicit criticism of Friedman and Kitcher is in Barnes 1992. That unity and mechanisms might in some cases suffice to yield understanding but not explanation has been argued in Cat 2001 and 2010; that mechanisms, but not laws, bear the burden of unity and explanation, in Glennan 1996 and Cat 2005.

Physics, as a standard-bearer of unity, material and formal, has been consulted in further support of the Friedman and Kitcher line (for instance, in Wayne 1996). But it has also yielded criticism, extending to physicists' arguments along similar lines: Cat has argued that unification, even at the fundamental level, fails to yield explanation in the formal scheme based on laws and their symmetries (Cat 1998; Cat 2005). Morrison has argued, explicitly against Friedman and Kitcher, that unification and explanation conflict, on the grounds that in biology and physics only causal mechanical explanations answering why-questions yield understanding of the connections that contribute to “true unification” (Morrison 2000).[3] Her choice of standard for evaluating the epistemic accounts of unity and explanation has not been without critics (Wayne 2002; Plutynski 2005).[4]

Halonen and Hintikka have argued that unification is not explanation, on the grounds that unification is simply systematization of old beliefs and operates as a criterion of theory-choice (Halonen and Hintikka 1999). Schurz has adopted a detailed cognitive pragmatist approach in order to rescue the unification account of explanation. The key is to think of explanations as question-answer episodes involving four elements: the explanation-seeking question P?, the cognitive state C of the questioner/agent for whom P calls for explanation, the answer A, and the cognitive state C+A in which the need for explanation of P has disappeared. In addition, Schurz models unity in the cognitive state in terms of comparative increase of coherence and elimination of spurious unity—such as circularity or redundancy. Unification is also based on information-theoretic transfer or inference relations. Unification of hypotheses is only a virtue if it unifies data. The last two conditions imply that unification also yields empirical confirmation. Explanations, he argues, are global increases in unification in the cognitive state of the cognitive agent (Schurz 1999; Schurz and Lambert 2005).

Weber and Van Dyck defend the unification-explanation link, responding both to Halonen and Hintikka and to Schurz (Weber and Van Dyck 2002). Contra Halonen and Hintikka, especially, they argue that laws make unifying similarity expectable (hence Hempel-explanatory) and that this similarity becomes the content of a new belief; unification is not mere systematization of old beliefs. Contra Schurz, they argue that scientific explanation is provided by novel understanding of facts and the satisfaction of our curiosity. In this sense, causal explanations, for instance, are genuinely explanatory and do not require an increase of unification.

De Regt and Dieks have defended a contextualist and pluralist account on which understanding is a legitimate aim of science, pragmatic and not necessarily formal (nor, contra Trout, a mere subjective psychological by-product of explanation). In their view explanatory understanding is variable and can take diverse forms, for instance causal-mechanical and unificationist, without conflict (De Regt and Dieks 2005).

2.4.2 Unity, methodology and simplicity

Unity can be understood as a methodological principle (Wimsatt 1976 and Wimsatt 2006 for the case of biology and Cat 1998 for physics). One way of doing so is as a simplicity or parsimony condition. But this kind of condition can receive two different interpretations: epistemological and ontological. Sober has drawn this distinction to shed light on the methodological role of unity hypotheses. He formulates it as a distinction between Peirce's problem and Hempel's problem of deciding between competing hypotheses, one unified and the other disunified: Hempel's problem is how to choose between two descriptively true hypotheses in terms of explanatory value; Peirce's problem is how to choose between two explanations in the light of the data they explain in terms of their predictive accuracy or likelihood to be true.

Hempel's guiding principle is that the unified description is explanatory; Peirce's guiding principle is that the unified explanation is more likely to be true. Hempel's principle, Sober argues, is epistemological and the relevance of unity is contextual and pragmatic, relative to the kind of explanation we seek. Sober's epistemology, then, does not subscribe here to the unification-explanation link. By contrast, in Peirce's principle, as a principle of curve-fitting or average predictive accuracy, the relevance of unity is objective. Unity plays the role of an empirical background theory. With the example of the Akaike curve-fitting method, Sober argues that the connection between unity as parsimony and likelihood is not interest-relative, at least in the way the connection between unity and explanation is (Sober 2003; Forster and Sober 1994).
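
As a rough illustration of the Peircean, predictive-accuracy reading, here is a minimal sketch (with made-up data, not Forster and Sober's own example) in which AIC, which scores a model by its fit penalized by its number of adjustable parameters, compares a unified model, one line fitted to two pooled data sets, against a disunified model, a separate line for each:

    # A sketch of Akaike-style model selection: unified vs. disunified fits.
    # Data and numbers are hypothetical, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y1 = 2 * x + 1 + rng.normal(0, 0.1, x.size)  # two data sets generated by
    y2 = 2 * x + 1 + rng.normal(0, 0.1, x.size)  # the same underlying line

    def aic(y, yhat, k):
        # Gaussian log-likelihood up to an additive constant: n*log(RSS/n); 2k penalty.
        n, rss = y.size, np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    # Unified model: one line (2 parameters) for the pooled data.
    X, Y = np.concatenate([x, x]), np.concatenate([y1, y2])
    unified = aic(Y, np.polyval(np.polyfit(X, Y, 1), X), k=2)

    # Disunified model: a separate line (2 parameters each) for each data set;
    # its total AIC is the sum of the two components (4 parameters in all).
    disunified = (aic(y1, np.polyval(np.polyfit(x, y1, 1), x), k=2)
                  + aic(y2, np.polyval(np.polyfit(x, y2, 1), x), k=2))

    print(unified, disunified)  # the lower score estimates higher predictive accuracy

When the data really do come from one underlying curve, the unified model tends to earn the lower (better) score: its slightly worse fit is more than compensated by its smaller parameter penalty, which is the sense in which unity as parsimony tracks estimated predictive accuracy rather than interest-relative explanatory value.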

Wimsatt rejects the structural, formal approach to unity and reductionism. He notes, instead, that reductionism is another example of the functional, purposive nature of scientific practice. Within that framework, he argues—along Sober's line—that a particular non-eliminative reductionism is a powerful methodological heuristic. The metaphysical view that follows is a pragmatic and non-eliminative realism (Wimsatt 2006). As a heuristic, this kind of non-eliminative pragmatic reductionism is a complex stance: it is integrative and intransitive; compositional, mechanistic and functionally localized; approximative and abstractive (bound to false idealizations that focus on regularities and stable common behavior, circumstances and properties); and constrained in its rational calculations and methods (tool-bound, problem-relative).

Wimsatt's generalized, naturalized, contextualized, localized and constrained outlook does not lack critics, who have defended the reductive heuristic of elimination. Thus, Poirier argues that inter-level eliminative reductions have proven heuristically valuable (Poirier 2006).

2.4.3 Unity in formal epistemology

Sober's methodological defense of unity is an example of formal epistemology. His technical probabilistic model dovetails with recent discussions of unity and coherence within the formal framework of Bayesianism (Forster and Sober 1994, sect. 7; Schurz and Lambert 2005 offer another formal model, with an algebraic approach). In this approach (a formal calculus of evidential support for hypotheses), the rational comparison and acceptance of beliefs (in terms of probabilities) in the light of empirical data is constrained by Bayes' Theorem for conditional probabilities: P(h | data) = P(data | h) · P(h) / P(data).
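
For concreteness, a worked toy case with invented numbers: suppose a hypothesis h has prior probability P(h) = 0.3, the data are likely given h, P(data | h) = 0.8, and have overall probability P(data) = 0.5. Then

\[
P(h \mid \mathrm{data}) \;=\; \frac{P(\mathrm{data} \mid h)\,P(h)}{P(\mathrm{data})} \;=\; \frac{0.8 \times 0.3}{0.5} \;=\; 0.48,
\]

so the data raise the probability of h from 0.3 to 0.48; in the Bayesian idiom, they confirm h.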

A more recent attempt to provide an explicit Bayesian account of unification as an epistemic, methodological virtue is due to Myrvold. In his model the measure of unity is this: a hypothesis h unifies phenomena p and q to the degree that, given h, p is statistically/probabilistically relevant to (or correlated with) q (Myrvold 2003). McGrew gives a probabilistically equivalent measure of unity in Bayesian terms (McGrew 2003; on the equivalence, Schupbach 2005). Lange has subsequently argued that this measure of unity is neither necessary nor sufficient (Lange 2004). Lange's criticism assumes the unification-explanation link. In a rebuttal, Schupbach has rejected this and other assumptions behind Lange's criticism (Schupbach 2005).
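
Schematically, and hedging on the details of Myrvold's own formulation, the proposal can be rendered in terms of informational relevance. Writing

\[
I(p;q) \;=\; \log \frac{P(p \mid q)}{P(p)}
\]

for the mutual relevance of p and q, h unifies p and q to the degree

\[
U(p,q;h) \;=\; I(p;q \mid h) \;-\; I(p;q),
\]

where the first term is computed with all probabilities conditioned on h. On this reading, unification is the degree to which conditioning on the hypothesis increases the probabilistic relevance of the phenomena to one another.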

Finally, another kind of formal model, for a different kind of unity, straddles the boundary between formal epistemology and ontology: computational models of emergence or complexity. They are based on simulations of chaotic dynamical processes, for instance with cellular automata (Wolfram 1984; Wolfram 2002). Crutchfield, Humphreys, Huneman and others have argued for their superiority over combinatorial models based on aggregative functions of the parts of wholes (Crutchfield 1994; Crutchfield and Hanson 1997; Humphreys 2004; Humphreys 2007; and a special issue of Minds and Machines 2007).
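
To give a feel for the simulations at issue, here is a minimal sketch of an elementary (one-dimensional, two-state) cellular automaton of the kind Wolfram studied; the choice of rule 110 and of grid size is merely illustrative. The point such models dramatize is that global patterns arise from purely local update rules and resist description as aggregative functions of the parts.

# Minimal sketch of an elementary cellular automaton (rule 110 and the
# grid size are illustrative choices, not mandated by the literature).
import numpy as np

RULE = 110
WIDTH, STEPS = 64, 32
# Rule table: each 3-cell neighborhood, read as a binary number 0..7,
# maps to the corresponding bit of the rule number.
table = [(RULE >> i) & 1 for i in range(8)]

row = np.zeros(WIDTH, dtype=int)
row[WIDTH // 2] = 1                       # a single live cell to start
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)   # periodic boundaries
    row = np.array([table[4 * l + 2 * c + r]
                    for l, c, r in zip(left, row, right)])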

2.5 Non-reductive models of unity

Instead of boundary-based container-models, in terms of demarcation or reduction, unity is often more loosely and less narrowly conceived in terms of non-reductive connective models such as local overlap, interaction, cooperation and exchange. The physicist Maxwell himself remarked in the 19th century that the unity of science is the ‘cross-fertilization of the sciences’. As the sections below suggest, close attention to real scientific practice and knowledge has served well the pursuit of alternatives to the extreme idea and ideal of unification, and vice versa. Unification takes place, and takes time, through different kinds of elements and practices, beyond the formalist linguistic, syntactic (laws and theories) and methodological containers of the first six decades of the 20th century.

2.5.1 Interfield theories

In anti-reductionist quarters, models of unification without reduction have readily appeared: for instance, Darden and Maull's notion of ‘interfield theories’ (Darden and Maull 1977; Darden 2006). It is based on the idea that theories and disciplines do not match neat levels of organization within a hierarchy; rather, many cut across different such levels in their scope and development. The different levels correspond in this view to fields. Fields are not competing (unlike Kuhn's paradigms) and are individuated by a focal problem, a domain of facts related to the problem, explanatory goals, methods and a vocabulary. Fields import and transform terms and concepts from others. Reduction is a relation between theories within a field, not across fields. Examples of fields are genetics, biochemistry and cytology.

2.5.2 Cross-classification categories

The conceptual dimension of cross-cutting has been developed by Khalidi in connection with the possibility of cross-cutting natural kinds. Categories of taxonomy and domains of description are interest-relative, as are rationality and objectivity (Khalidi 1998; his view shares positions and attitudes with Longino 1989; Elgin 1996; Elgin 1997). Cross-cutting taxonomic boundaries, then, is not conceptually inconsistent. Moreover, cognitive theories of typification and language learning support the hierarchy models only for terms that are in principle applicable, not for ones that are actually applied. The same can be argued of science.

2.5.3 Downward developments

Kitcher pointed to non-reductive inter-theoretic connections: explanatory refinement and explanatory extension of the higher-level knowledge (for instance, Mendelian genetics) through elements of lower-level knowledge (for instance, molecular genetics) (Kitcher 1984).

2.5.4 Interdependence

For Kincaid, the higher-level (for instance, cell physiology) and the lower-level theories (for instance, biochemistry) are ontologically and epistemologically inter-dependent (over information and evidence) (Kincaid 1990; Kincaid 1997; Wimsatt 1976; Spector 1977). Unity is a matter of degree, a degree of interdependence.

2.5.5 Heuristic, pragmatic and categorial interrelations

Theoretical and practical unity without reductionism has been noticed, by Cat and others, through the circulation of models, methods, ideals and heuristics within physics, and between parts of different scientific disciplines, natural and human/social (Cat 1995; Cat et al. 1996 (with a discussion of Neurath's weak and pragmatic holism); Cat 1998 and forthcoming a). The heuristic and evidentiary value of anti-reductionism has been defended by Wimsatt and Kincaid, above.

Meta-theoretical categories of description and interpretation of mathematical formalisms (for instance, the use of the category of causality) block and replace full reduction, especially in the context of what Cat calls the problem of categorial reduction, with a corresponding type of non-reductive unification (Cat 2000; Cat 2006 and Cat forthcoming c; the categorial problem of individuality in quantum physics has been discussed in Healey 1991; Redhead and Teller 1991 and Auyang 1995). Another, more general, unifying element of this kind is Holton's notion of themata. Themata are conceptual values informing a priori yet contingent (both individual and social) presuppositions that factor centrally in the growth of the sciences that instantiate them: continuity/discontinuity, harmony, quantification, symmetry, conservation, mechanicism, hierarchy, etc. (Holton 1973). Unity of some kind is itself a thematic element.

2.5.6 Field Interconnection

Grantham has offered a generalization of Darden and Maull's interfield theories, bringing together the work above by Kitcher, Kincaid, Cat and others, against advocates of reduction-or-disunity such as Dupré, below (Grantham 2004). Unity is interconnection: fields are unified both theoretically and practically. Theoretical unification involves conceptual, ontological and explanatory relations. Practical unification involves heuristic dependence, confirmational dependence and methodological integration.

2.5.7 Historical and genealogical unification

Another model of non-reductive unification is historical. Cat has discussed the genealogical and historical identity of disciplines, which has become complex through interaction, and the methodological implications (under the rubric ‘transcendental historicism’). The interaction extends to relations between specific sciences, philosophy and philosophy of science (Cat forthcoming a). A more specific precedent is Hull's idea of science as a process, manifesting a historical unity with a Darwinian-style pattern of evolution.

2.5.8 Hybridization

The emergence and development of hybrid disciplines and theories is another instance of non-reductive cooperation or interaction between sciences. Batterman and Cat have pointed to cases in physics: semiclassical models in quantum physics, or models developed around phenomena where the limiting reduction relations are singular or catastrophic (caustic optics and quantum chaos) (Cat 1998; Batterman 2002; Belot 2005). Galison has noted, above, the post-war emergence of interdisciplinary areas of research such as neuro-acoustics. Yet talk of cooperation and coordination for the purpose of forming hybrid cross-disciplines or emergent disciplines cannot omit the problem of conflicts. By extension of the discussion of value conflict in moral and political philosophy, one must acknowledge the extent to which scientific practice is based on accepting conflict and making epistemic and/or non-epistemic compromises (Cat 2005 and 2010).

The interstitial formation of small-scale syntheses and cross-boundary ‘alliances’ is common in most sciences; indeed, it is crucial to development in model building and empirical scope: from biochemistry to cell ecology, from econophysics to thermodynamical cosmology.

2.5.9 Local integration or coordination and trading zones

Many examples of unity without reduction are local rather than global. Unification is a piecemeal description and strategy, with cases restricted to specific models, phenomena or situations. Hybridization and integration, above, are examples. A general model of local interconnection that has acquired widespread attention and application in different sciences is Galison's anthropological model of the trading zone, where hybrid languages and meanings are developed that allow for interaction without straightforward extension of any party's original language or framework (Galison 1998). Galison applies this anthropological analysis to the subcultures of experimentation. The strategy aims to explain the strength, coherence and continuity of science in terms of local coordinations of intercalated symbolic procedures and meanings, instruments and arguments, constituting trading zones.

2.5.10 Material unity

Galison's work has endorsed a material level of interaction through instruments and other material objects. Bertoloni Meli defends the unity of mechanics in the 16th and 17th centuries only in terms of the circulation, transformation and application of objects, concrete and abstract. The latter correspond to the imaginary systems—and their representations—we call models. Cat tracks the evolution of objects and images across different theories, experiments and their developments in Maxwell's natural philosophy; but the approach is not meant to illustrate reductive materialism, since the same objects and models work, and are perceived, as vehicles for abstract ideas, institutions, cultures, etc. (Bertoloni Meli 2006; Cat forthcoming b).

2.6 Disunity

A more radical departure is the recent criticism of the methodological values of reductionism and unification in science, and of science's position as embedded in and affecting society. This view argues for replacing the emphasis on global unity—including unity of method—with an emphasis on disunity and on epistemological and ontological pluralism.

2.6.1 The Stanford School

Another picture comes from the members of the so-called Stanford School, e.g., John Dupré, Ian Hacking, Peter Galison and Nancy Cartwright. For Galison disunity is simply the flip-side of his model of local unity, by contrast with globalist, formal models. Hacking follows the historian A. C. Crombie's distinction of a plurality of scientific styles to argue for a disunity of science in terms of a plurality of unities (Hacking 1996).

2.6.2 Disunity of Science?

Dupré has argued in an emblematic reference work that the disunity of science can be given adequate metaphysical foundations that make pluralism compatible with realism (Dupré 1993). He has criticized a mechanistic paradigm, which he opposes to his account of disunity and which is characterized by determinism, reductionism and essentialism. The paradigm spreads the values and methods of physics to other sciences, a spread he thinks scientifically and socially deleterious. According to Dupré, science depends on metaphysical assumptions, and scientific and non-scientific empirical inquiries suggest that science does not and cannot constitute a unified single project. This is supported, in turn, by three pluralistic theses: against essentialism, there is always a plurality of classifications of reality into kinds; against reductionism, systems at different levels of description are equally real and causally efficacious, that is, the microlevel is not causally complete, leaving room for downward causation; and against epistemological monism, there is no single methodology that supports a single criterion of scientificity, nor a universal domain of its applicability, only a plurality of epistemic and non-epistemic virtues. The concept of science should be understood, following the later Wittgenstein, as a family-resemblance concept (for a criticism of Dupré's ideas, see Mitchell 2003; Sklar 2003).

2.6.3 Laws Both Universal and True?

Cartwright has argued that laws cannot be both universal and true; there exist only patchworks of laws and local cooperation. Like Dupré, Cartwright adopts a kind of scientific realism but denies that there is a universal order, whether represented by a theory of everything or by a corresponding metaphysical principle (Cartwright 1983). The empirical evidence, she argues along the same lines as Wimsatt, suggests far more strongly the idea of a dappled world, best represented by a patchwork of laws, often in local cooperation (e.g., local identifications, causal interactions, joint actions and piecemeal corrections and correlations). Theories apply only where, and to the extent that, their interpretive models fit the phenomena studied (Cartwright 1999). This is far from their alleged universal factual scope: they hold only ceteris paribus. Cartwright's pluralism is directed not just against vertical reductionism but also against horizontal imperialism, that is, universalism and globalism.

She explains the more or less general domain of application of laws in terms of causal capacities and arrangements she calls nomological machines (Cartwright 1989; Cartwright 1999). The regularities they bring about depend on a shielded environment. As a matter of empiricism, this is why it is in the controlled environment of laboratories and experiments, where causal interference is shielded off, that factual regularities are manifested. The controlled, stable, regular world is an engineered world; representation rests on intervention (comp. Hacking 1983). On these grounds, as a matter of holism, she rejects strong distinctions between natural and social sciences and, like Neurath, between the natural and the social world. Whether unification is treated as a hypothesis or as an ideal, the debates continue over its form, scope and significance in the sciences.

Cartwright's theses and arguments are not without their critics (Winsberg et al. 2000; Hoefer 2003; Sklar 2003; Hohwy 2003; Teller 2004; McArthur 2006).

2.6.4 Disunity and metaphysics

The question of unity has been regularly linked to issues in metaphysics, realism included. It is a reductionist position to assume that only the fundamental description of the world describes real entities, states, properties or processes; the rest had better be composites or re-descriptions, or else phenomenal illusions (think of the distinction between primary and secondary qualities). The same parsimonious attitude extends to anti-reductionism. Disunity and the autonomy of levels have been associated, conversely, with antirealism—with instrumentalist or empiricist heuristics. This includes, for Fodor and Rosenberg, higher-level sciences such as biology and sociology (Fodor 1974; Rosenberg 1994). By contrast, Dupré's and Cartwright's attacks on uniformly global unity and reductionism, above, include an endorsement, in causal terms, of realism.[5] Rohrlich has defended a similar realist position about a weaker, conceptual (cognitive) antireductionism, on grounds of the mathematical success of derivational explanatory reductions (Rohrlich 2001). Ruphy, however, has argued that antireductionism stands for a methodological prescription and is too weak to yield uncontroversial metaphysical lessons (Ruphy 2005).

2.6.5 Pluralism

The question of the metaphysical significance of disunity and anti-reductionism takes one straight to the larger issue of the epistemology and metaphysics (and aesthetics, social culture and politics) of pluralism. And here one encounters the directly related issues of conceptual schemes, frameworks and worldviews, incommensurability, relativism, contextualism and perspectivalism (for a general discussion see Lynch 1998; on perspectivalism about scientific models see Giere 1999 and Rueger 2005). Pluralism applies widely: concepts, explanations, virtues, goals, methods, models, etc.

Consider at least three distinctions; they are formulated about concepts/facts/descriptions for simplicity's sake, though they apply also to values, virtues, methods, etc.

The preference for one kind of pluralism over another is typically motivated by epistemic virtues or constraints. Meta-pluralism, about pluralism, is obviously conceivable in similar terms. Neurath's pluralism is integrative of natural and social sciences, and it is in this holistic sense that it was a pragmatic and epistemic argument for unity of science without Carnapian reductions. Local and partial integrations, horizontal and vertical, capture ideas above by Wimsatt, Spector, Kincaid, Schaffner, Batterman, Cat, Cartwright and Mitchell. They constitute local forms of unification (see also Kellert, Longino and Waters 2006).

2.7 The Sciences

Debates on unification and reduction have taken place as part of discussions of specific sciences and also within the sciences themselves. Rather than applying or testing abstract and general philosophical models, the examination of unity in specific sciences has been motivated from science itself, insofar as unification or reduction has been a driving ideal or heuristic in the pursuit of science and in the scientists' own understanding of their practice. Some projects have aimed at articulating and defending reductive hypotheses or alternative forms of unity about theoretical concepts/predicates and laws. Other projects have focused on the value and variety of kinds of approximations, and the difficulties of extending to different theories specific methodological standards, models of explanation and even abstract categories such as rationality, lawlikeness, determinism, causality, locality or individuality and separability. Claims about synthetic or analytic reductions, antireductionism, emergence, etc. have been made and evaluated regarding phenomena or theories in specific sciences, to settle either issues in that science or more general philosophical issues, as above.

Below is just a quick survey of issues, claims and helpful references in the context of specific sciences. Many of their abstract generalizations have appeared above, but not all. One methodology that unifies the different sciences, and extends the notion of ‘scientific method’ beyond formal mathematical and experimental techniques, is the use of different kinds of visual representations (Lynch and Woolgar 1990; Baigrie 1996; Jones and Galison 1998; Galison 1998; Cat 2001; Cat forthcoming b; Kaiser 2005). With regard to experimental practice, disciplines present a variety of experimental traditions and styles. Cross-boundary similarities and intra-boundary uniformity are ultimately empirical issues.

2.7.1 Physics

Physics has for centuries been the foremost exemplar of science. By the same token, it has been perceived and pursued also as the paragon of unifying and unified science, in particular, the paragon of a science reducing, and reduced by, its own most fundamental theories. This is not surprising if we consider the descriptive and explanatory success of the concepts and methods of geometry and mechanics—including the paradigm of Newtonian and Laplacean atomisms—all the way until the very early 20th century. Maxwell's mechanical theory of electromagnetism not only ushered in the theoretical unification of electricity and magnetism but also that of electromagnetism and optics. And his planetary/molecular statistical theory of thermal phenomena furthered the cause of mechanical atomism and population-based thinking. Physics was born fundamental: first demonstrative and reductive and then, with Newton, also universal. It has since looked up to this standard, applying it to domains beyond the one claimed for itself as well as to its own. But this scientific-philosophical narrative raises new philosophical concerns about the actual sort and scope of the consensus and the success.

The empirical and theoretical impact of the relativistic and quantum frameworks cemented new, deeper boundaries. Relativity captured space, time and gravitational interaction; quantum mechanics, matter and the rest of the forces. Both frameworks were formulated on a continuum. There remained two jobs: to work out their reductive reach over the available data and theory, and to bring them together. The empirical and theoretical success in the 1970s of the unified theories of the electromagnetic and weak fundamental “forces”—modeled after the use of algebraic symmetries early in the century—was followed by the Grand Unified Theories that included the strong nuclear interaction, and it revived the ideal of reductionism in physics. Other algebraic techniques and concepts, such as the renormalization group, added to the specific mathematical framework for the task (Weinberg 1974; Weinberg 1993; ‘t Hooft 1997; Cat 1998; Vizgin 1994). Special relativity and quantum mechanics had been made effectively compatible and co-applicable in quantum field theory. What about quantum gravity? What about classical discrete and continuum mechanics of macroscopic bodies? What about thermodynamics? And then, suddenly, the mathematics of supersymmetry-thinking gave way to string theory and M-theory, the Theories of Everything—and quantum field theories became effective theories, empirically grounded but theoretically only approximate (Cao and Schweber 1993).

Optimistic reductionism has found its most vocal proponents among elementary-particle physicists such as Steven Weinberg, and has been contested and qualified by condensed-matter physicists such as Philip Anderson (Anderson 1972). What is fundamental? Can there be unity without fundamentality? The form that unity, especially in physics, takes and should take is a controversial matter that has led to pluralism within the physics community. At the same time, along with a common (unifying) concern, techniques and models such as renormalization and gauge symmetries are shared even where their significance is understood differently, in particular, in support of a reductive unity as well as in opposition to it (Cat 1998). The appeal to principles such as symmetries as unifying frameworks, when applied to laws of interactions (“forces”) at the subatomic level or to macroscopic phenomena such as phase transitions, lacks the epistemic, descriptive and explanatory power commonly attributed to it, at the expense of the unifying and more explanatory concept of symmetry breaking. This problem affects both the fundamental entities and higher-level phenomena (Cat 1998; comp. Brading and Castellani 2001; Cat 2005). It brings back the issues, introduced above, of the role of laws and of the unification-explanation link.

Conceptual issues about unification at this level include at least four aspects: 1) the methodological value of unity as guiding principle; 2) the very sort of internal unification involved at the fundamental level; 3) its broader unifying connection to other theories; and 4) matters of interpretation of quantum mechanics and quantum field theory themselves.

The debate within the physics community in the last decades of the 20th century has brought out the disciplinary positioning that associated ideas with the legitimacy of sub-disciplines along fundamental-non-fundamental divides (Cat 1998). Unity, in its different forms, clearly acts as a guiding principle, an ontological or epistemological presupposition of methodology, as the systematic unity of Nature was for Kant a regulative principle of science and the precondition for making objective knowledge possible (Cat 1998; for an analogy to Kant's case, Falkenburg 1988). A great deal of physics is based on structural, geometrical, topological and non-linear problems that cannot be addressed from a reductionist viewpoint (Scott 2003; Nelson 2002). Physicists and philosophers alike have differed as to what sort of unification is in place in fundamental unified field theories (Maudlin 1996; Wayne 1996).

A number of discussions of specific sciences have addressed the question of the inconsistencies and other problematic relations between some of their theories or models, and the conditions (or formulations) required for their synthesis: for instance, the problem of the relation between Newtonian mechanics and thermodynamics, between classical mechanics and quantum mechanics or special relativity, or between special and general relativity theory, on the one hand, and quantum physics, on the other, as the conceptual foundation of quantum gravity (on the relation between classical mechanics and special relativity or quantum mechanics, Strauss 1972; Bruer 1982; Havas 1968; Torretti 1996; comp. Spector 1978 and Yoshida 1977; on thermodynamics, Sklar 1993; Krieger 1998; on the conflict of quantum mechanics with special relativity, Maudlin 1995; on the conflicts and problems related to general relativity theory, Butterfield and Isham 2001 discusses the different strategies and approaches for achieving a theory of quantum gravity; Huggett and Callender 2001). Other discussions have focused on the search for unification in cosmology: for instance, between quantum field theory and cosmology in physics: black-hole dynamics, quantum gravity, again, and cosmology and thermodynamics.

Philosophers and physicists have argued in favor of reduction in a formal sense, whether in Nagelian theoretical relations or through relations between mathematical frameworks containing the relevant theories. The defenses of anti-reductionism often have, similarly, adopted this approach and attacked the reduction relations, as noted above, or the centrality and sufficiency of the laws in terms of which the relations are typically defined (Dresden 1974; Cat 2005; Darden 2006).[6]

Defenses of the weaker position of anti-fundamentalism have challenged the alleged difference between, on the one hand, the approximate and partial character of non-fundamental theories or models and, on the other, the truly universal and exact fundamental ones (Teller 2004; Rueger 2005; Cat 2005). Other approaches to unity in physics have adopted a material route and focused on successes and/or failures of part-whole relations and hierarchies and on arguments from compositionality (Shimony 1987; Sarkar 1998; Sklar 2003). An intermediate approach opposes to the formal, derivational, bottom-up method of micro-reductionism the method of synthetic microanalysis, which guides and qualifies the bottom-up deduction with the top-down view of composite systems as parts in interaction making up wholes (Auyang 1998). Another example involves causal arguments about part-whole relations and relations between macrostructures and microstructures (Rueger 2004; Huettemann 2003; Huettemann forthcoming). These differ from the categorial argument that targets the very category of causation at the fundamental microlevel (the microcausality condition) as a premise of category-reductionism and fundamentalism (Cat 2000).

The issues of part-whole relations and compositionality include the literature on holism, separability and individuality in quantum correlations and entanglement: the Einstein-Podolsky-Rosen argument, Bell's inequalities, the Kochen-Specker theorem, quantum superpositions in the Hilbert space of one and more systems in different possible states, superpositions in Fock space of states with different particle numbers, etc. (Cushing and McMullin 1989; Healey 1991; Teller 1995; Humphreys 1997; Kuhlmann, Lyre and Wayne 2002). This is directly related to interpretive issues in quantum mechanics and quantum field theory. The ontological problem of holism, supervenience and emergence extends from quantum mechanics and quantum field theory to condensed-matter physics and thermodynamics (Howard 2006).

With regard to experimentation, two considerations are worth mentioning. Physics, more than any other scientific discipline, has had a tradition of thought experiments linked to conceptual revolutions. In addition, at the end of the 19th century physics developed a division of labor between theoretical work and its communities and experimental work and corresponding communities (Darrigol 1999). This process has led to a disunity of one kind within physics, but also to a more complex set of interactions within physics and, as it has become so-called Big Science, with other disciplines such as engineering and management (Galison 1998).

2.7.2 Chemistry

The preoccupation with the questions of unity and reduction has recently focused on chemistry and become the basis for a new area in philosophy of science. The debates in chemistry have prominently featured antireductionist positions about chemical concepts similar to the ones noted above: for example, about the notions of chemical bond and chemical substance, central to chemical theory and experiment (Primas 1983; Psarros 2001; Needham 2005). Another sort of argument rejects the reduction of chemistry to quantum mechanics on the grounds of the failure of exact solutions and the cognitive realism implicit in the use of the notion of chemical orbitals (Scerri 1994 and 2000). The latter line of argument has endorsed a weaker position and defended the cognitive over the ontological autonomy of chemistry (Scerri 2000).

The debate has developed around the distinction between the ontological and the epistemological nature of the alleged autonomy of chemistry from physics. Arguments for epistemological autonomy acknowledge the relevance of quantum mechanical models and, more importantly, thermodynamic and electromagnetic descriptions and explanations (Psarros 2001). Arguments for ontological emergence and realism have been put forth on the grounds of geometrical structure, complexity and downwards causation (Luisi 2002; Ramsey 1998; Lombardi and Labarca 2006; Bishop 2006).

2.7.3 Biology

The case of biology allows for a shorter discussion, for two reasons. Much in the ideology of unity—especially reductionism—rests on analogies to physics and on a particular philosophical conception of physics, so it was important to give physics a more critical look first. In addition, much in the literature in the general discussion above—unitarian and anti-reductionist alike—is based on discussions of biological cases. This is not surprising. Unlike physics, biology grew out of a tug-of-war between a tradition of vitalism and, since Descartes, one of mechanicism.

Similar ideals and debates to the ones examined in physics have also regained force in biology, especially in molecular genetics and sociobiology, in the light of successes in evolutionary and genetic explanations and the interest in projects of genetic mapping such as the Human Genome Project (HGP). Beyond the attempts earlier in the century to synthesize genetics and evolutionary biology in the so-called Evolutionary Synthesis, the renewed reductionist trend has been explained in part by the personal and intellectual connection between biology and physics (Fox-Keller 1990; Kevles and Hood 1992). In fact, the connection between the trends in both fields can be traced to the institutional as well as intellectual links first established between the Manhattan Project and the Human Genome Project. The role of physical radiation in genetics research has a long history; after the war, the study of mutations and radiation acquired a governmental, institutional dimension with the study of Japanese radiation victims and their descendants.[7] Moreover, population-based statistical methods crossed the boundaries between biology, sociology and physics in the second half of the 19th century.

Proponents of reductionist views such as geneticist James Watson and sociobiologist E.O. Wilson have encountered the opposition of other biologists such as Ernst Mayr and R.C. Lewontin. Lewontin, in particular, has decried the atomistic philosophy in much genetic research and exposed the atomistic fallacy behind HGP research and rhetoric. He has emphasized, instead, the neglected importance of environmental causes and of a broader, more holistic approach to the study of genetic influences (Lewontin 1992; Lewontin 2000).

The locus classicus of the discussion of reductionism in biology is twofold: the question of functional organization in living organisms and the related question (from an evolutionary perspective) of the phenotypic expression of genes and their role in heredity and development. In the early 1970s Campbell discussed the notion of downward causation in adaptive functions, and Hull challenged the type-type, one-to-one reduction between population biology and molecular biology (Hull 1974; Campbell 1974; Sober 1984; Sarkar 1998). Rosenberg borrowed from Davidson the notion of supervenience and imported it into the ontological analysis of biological entities and properties (Rosenberg 1985). Of course, the bottom level of molecular reductionism involves a disciplinary alliance with chemistry, namely, the different areas of biochemistry (Bechtel 1986).

The notions of function and the central role of functional explanation in biology have concerned two kinds of fundamentalism in biology: vertical and horizontal reductionism, ontological and explanatory.

The general problem of vertical reduction concerns the connected relations between wholes and parts and between functions and biochemical structures and mechanisms (Richardson 1979; Kitcher 1984; Kitcher 1999; Sober and Wilson 1998).

The specific problem of parts and wholes is manifested distinctively in biology in terms of the hierarchy of levels of organization. This kind of hierarchy has been discussed by Wimsatt from the point of view of compositional reductionism, above, and most recently by Kron with a defense of emergence (Wimsatt 1976; Kron 2005). Bechtel and Richardson have addressed compositional strategies in the treatment of biological complex systems, distinguishing between individuating decomposition strategies and functional task-relative localization strategies (Bechtel and Richardson 2003). The focus on functions and functional explanation has led to a rich literature on the concept of function (most notably, the Cummins-Wright distinction) (Allen, Bekoff and Lauder 1998).

A second problem of vertical reduction is the problem of levels of organization and units of selection. This debate is in fact a tangle of four basic questions: 1) What is the interactor? That is, what units are being actively selected in a process of natural selection? 2) What is the replicator? That is, what part of the genome is the replicating unit? 3) Who are the proximate and the ultimate beneficiaries? That is, who benefits ultimately, in the long term, from the evolution-by-selection process? And 4) who gets the benefit of possessing adaptations as a result of a selection process? (Lloyd 2001).

Positions in this debate have ranged from Dawkins' gene-reductionism, Wade's group selectionism and Sterelny and Kitcher's gene isolationist pluralism, to Lloyd's more integrative pluralism (Dawkins 1979; Wade 1978; Wade and Griesemer 1988; Wimsatt 1980; Sterelny and Kitcher 1988; Lloyd 2005).

The related question of horizontal reductionism concerns adaptationism. This global horizontal kind engages a different, non-compositional, sort of unity. It is the programmatic assumption that all traits of organisms have an adaptive value that enables the explanation of their existence and heritability in evolutionary terms. The explanatory reductionism of molecular biology is here supplemented with the fundamentalism—or imperialism—of the Darwinian evolutionary explanation. Gould and Lewontin have been the most vocal critics of this research program (Gould and Lewontin 1979).

From a methodological point of view, a distinctive feature of the biological sciences is that, while they share with physics the descriptive and explanatory application of mathematical statistics—in population and probabilistic interpretations—they seem to lack proper, strict and universal laws of the sort “recognized” in physics. Since laws might be absent either in reducing or in reduced theories, this feature constrains the possible forms of reductionist programs and relations of reduction. One argument against the availability, in principle and in practice, of laws is Rosenberg's: universal generalizations would relate a functional term to another term that is either structural or itself also functional; the difficulty, within the framework of natural selection, is to find a coextensive term of either kind that is homogeneous for the same population (Rosenberg 2001). Other philosophers, such as Sober, Lange and Mitchell, have modified the notion of law and adopted more pragmatist approaches (Sober 1993; Lange 1995; Mitchell 2003).

The use of population-thinking in population genetics and the role of the environment extolled by Darwinian explanation by natural selection have been developed to study groups of animals in environments forming a new kind of complex dynamical system: these are the sciences of ecology and conservation biology. For Rosenberg and Ereshefsky, the situation of lawlessness, together with the central place of evolutionary explanations, exposes biological explanations as historical explanations, thereby locating biology on the side of the historical human sciences (Ereshefsky 2001; Rosenberg 2001).

The biological sciences present another somewhat distinctive feature with regard to experimentation: the importance of natural experiments and field observations. This extends the paradigm of controlled experiments in laboratory settings that characterizes most of physics (with the exception of astrophysics). It also sets the biological sciences apart from physics and closer to the social sciences.

The conceptual and epistemological hybrid form of biology makes it an area of non-reductive synthetic challenges. The best-known disciplinary synthesis is the so-called Evolutionary Synthesis of the 1930s and 1940s. It brought together two competing areas of research and kinds of explanation: Darwinian evolution, whose proponents believed in natural selection as the origin of species, and Mendelian genetics, whose proponents believed in the central role of genetic mutation. The synthesis, empirically supported, placed the Darwinian environmental and functional mechanism of selection on the shoulders of the Mendelian genetic mechanism of replication and variation (Fisher 1930; Wright 1932; Haldane 1932; Dobzhansky 1937; Mayr and Provine 1980; Bechtel 1986; Smocovitis 1996).[8] The theoretical and empirical development of the Synthesis involved the participation of a number of disciplines such as population biology, genetics, systematics, paleontology and morphology.

A more recent and equally ambitious synthesis, informally known as evo-devo, has been proposed between theories of evolution (of reproduction and speciation) and theories of development (of formation and growth, such as embryology) (Goodwin, Holder and Wylie 1983; Amundson 2005; Laubichler and Maienschein 2007).

These grand unified theories, or rather research projects, of biology do require detailed conceptual unifications of a more local kind. In between both magnitudes of synthesis a great variety of hybrid local subfields have emerged crossing inter- and intra-level domains: computational genetics, microbial ecology, toxicology, immunology, etc. It is through these small units of scientific inquiry and a patchwork of their partial and local kind of interconnections that biology is both unified inwards and linked outwards to lower-level and higher-level disciplines.

2.7.4 Human sciences, psychological and social

The human special sciences were previously known as the moral sciences and (in the case of economics) the dismal science. The epithets merely acknowledged perceived conceptual and methodological differences. The human sciences are now many: psychology, cognitive science, sociology, economics, political science, anthropology, etc. But the old perception has not gone away. Vico's paradox—that the kinds of things we are and make are the ones we can know, predict and explain the least—remains vividly painful in the way it affects our lives. Nor has the link, noted at the turn of the 20th century by German scholars, between the distinctive problem of ontology and the problem of methodology. At the same time, the human sciences have a tradition of drawing analogies with ideas from the natural world and the natural sciences (Mirowski 1989; Cohen 1994).

The human sciences show a lower degree of consensus around basic frameworks of ontology and methodology than the natural sciences do. The internal ontological and methodological differences are more radical, intellectually and institutionally. They are related to the proliferation of ontological or conceptual particularities that make up the external difference from the natural sciences, at least under some conception of the latter. Often, however, the degree of internal difference surpasses that of the external ones. Consider the diversity of schools of economic thought: Austrian economists, institutionalists, Marxists, social economists, behavioral economists, chaos theorists, Keynesians and post-Keynesians, neo-Ricardians, agency theorists, the Chicago school, constitutional political economists, public choice theory (rational choice theory is already at the center of the discipline, in neo-classical equilibrium microeconomics and, to many, also macroeconomics). In the light of this internal fragmentation, it is hardly surprising that the philosophies of psychology, of cognitive science and of mind have been so philosophically active and creative, becoming the source of philosophical resources which, as seen above, have been applied well outside their initially intended scope.

Ontological issues include, for instance, the following: biological reductionism on matters of cognition and of individual and social behavior; empiricism about acceptable concepts, for instance in behaviorism; holism about social groups and interactions; the ontological emergence of properties exhibited by human beings (consciousness, the symbolic dimension, cultural meaning, rationality individual and social, etc.); and the limited scope of concepts, descriptions and regularities (the extreme case is the uniqueness of historical individuals addressed by 19th-century German scholars in justifying the methodological chasm between the natural and the historical sciences) (Benton and Craib 2001; Hausman 1992; Kincaid 1997).

From the ontological issues arise methodological ones. Two issues stem from a naturalistic attitude that looks to the natural sciences for methodological guidance: formalism and empiricism.

Notable critics of rational choice, on both ethical and empirical grounds, have been Sen, Green and Shapiro (Sen 1987; Green and Shapiro 1994). The social dimensions of rationality, cognition and cultural or community values—for instance, context-dependence, many-many relations and multiple realization—are aspects of social knowledge that have been held against it (Kincaid 1997; Longino 2001; Hollis 1982). Another anti-reductionist trend integrates higher-level properties and concepts into the study of individual properties, for instance in the attention to social and cultural factors in the development of the mind and cognition (Deacon 1997; Johnson and Erneling 1997).

More generally, the social sciences display a variety of kinds of explanation: structural, functional, interpretive (Geertz-type cultural anthropology), causal, rational choice, critical rationality (Habermas and the Frankfurt School). Three attitudes are common. Pluralism and local normativity accept the diversity. Reductionist normativity such as strict positivism challenges it. Another kind of challenge stems from the causal reduction of explanatory concepts such as Davidson's defense of the causal nature of reasons or Little's argument for the causal equivalence of all these methods and kinds of explanation mentioned above (Davidson 1963; Little 1991).

2.7.5 Complexity

More recent proposals address complexity and the possibility of knowledge of complex systems in terms of their constituents within a non-additive common framework characterizing systems in physics, biology and the social sciences. I will mention three kinds of models: Strevens' enion probability analysis (EPA), Auyang's synthetic microanalysis and computational complexity. I will introduce the first in more detail (the last two I have mentioned above with references; see also the corresponding entry for more information on computational complexity).

Compositional strategies approach decomposable systems that satisfy three criteria: 1) the system is decomposable in the sense that the short-term behavior of each of its components can be understood independently of any other component's; 2) a characterization of the behavior or dynamics of the individual components is available that ignores the short-term dynamics in the surrounding region; and 3) a characterization is available of the interactions between the individual behaviors, which are weak and can be factored into a constant background field.

EPA, by contrast, is concerned with explaining the successful description of systems that are not decomposable in this sense: 1) they contain many components, enions, that are strongly interacting (interaction-events are frequent across the system and have significant consequences for the interacting parts) and yet fairly independent (actual interactions take place infrequently for each part); and 2) a probabilistic characterization of the microdynamics is available.

The foundation of EPA is the characterization of the probabilistic conditions: 1) the probabilities depend on macrofacts; 2) the stochastic independence of the dynamics of components is justified on the assumption not of ignorance but of microlevel chaos (a key precondition for the tractability of macrolevel behavior); and 3) the evolution function is microconstant.

Microlevel chaos is key, and it implies multiple realizability at both the macro- and microlevels. The fact that EPA nevertheless provides a systematic relation between macrodynamics and microdynamics is Strevens' ground for claiming that it is a model, and a defense, of systematic reductive physicalism.
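
A toy simulation (ours, not Strevens' own formalism) can convey what a microconstant evolution function does. Think of a wheel of fortune: the outcome alternates so rapidly as the initial spin speed varies that each outcome occupies roughly half of every small interval of initial conditions; consequently, very different smooth distributions over the microlevel yield nearly the same macrolevel statistics.

# Toy illustration (not Strevens' formalism) of microconstancy: the
# outcome flips rapidly as the initial condition varies, so any smooth
# input distribution yields a macrolevel frequency P(red) near 1/2.
import numpy as np

rng = np.random.default_rng(1)

def outcome(speed):
    # 'Red' iff the spin lands on an even micro-sector; the sectors are
    # so fine that each occupies half of every small interval of speeds.
    return np.floor(1000 * speed) % 2 == 0

for name, sample in [
    ("uniform", rng.uniform(1.0, 3.0, 100_000)),
    ("normal", rng.normal(2.0, 0.4, 100_000)),
    ("exponential", 1.0 + rng.exponential(0.7, 100_000)),
]:
    print(f"{name:12s} P(red) = {outcome(sample).mean():.3f}")

# All three macrolevel frequencies come out near 0.5: macrodynamics is
# systematically related to, yet screened off from, microlevel detail.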

3. Conclusion: Why unity? And, What difference does it really make?

Views on matters of unity and unification make more than a cardinality difference, and they do so in both science and philosophy. In science they provide strong heuristic or methodological guidance, justification for hypotheses and projects, and specific goals. They also provide legitimacy, even if rhetorically, in social contexts with sources of funding and profit. They become the standard of what carries the authority and legitimacy of being scientific. They make a difference, as a result, through scientific application and extension, often merely rhetorical, to other domains such as healthcare and economic policy. Last but not least is the influence that implicit assumptions about unification can and do have on science education.

Philosophically, assumptions about unification help choose what sorts of philosophical questions to pursue and what target areas to explore. For instance, fundamentalist assumptions typically lead to addressing epistemological and metaphysical issues only in terms of the results and interpretations of the fundamental levels of disciplines. Assumptions of this sort help define what counts as scientific and shape scientistic or naturalized philosophical projects. In this sense, they determine, or at least strongly suggest, which science carries authority in matters philosophical.

At the end of the day one should not lose sight of the larger context that sustains problems and projects in most disciplines and practices. We are as free to pursue them as Kant's dove is to fly, that is, not without the surrounding air resistance to flap its wings upon and against. Philosophy was once thought to stand for the systematic unity of the sciences; the foundational character of unity became the distinctive project of philosophy, in which conceptual unity played the role of standard of intelligibility. In addition, the ideal of unity, frequently under the guise of harmony, has long been a standard of aesthetic virtue (this image has been eloquently challenged by, for instance, John Bayley and Iris Murdoch; Bayley 1976; Murdoch 1992). Unities and unifications help us meet cognitive and practical demands upon our lives as well as cultural demands upon our self-images, cosmic and earthly. It is not surprising that talk of the many meanings of unity, namely, fundamental level, unification, system, organization, universality, simplicity, atomism, reduction, harmony, complexity or totality, can take such an urgent grip on our intellectual imagination.

Bibliography

Other Internet Resources

[Please contact the author with suggestions.]

Related Entries

adaptationism | Aristotle | atomism: 17th to 20th century | Bacon, Roger | biocomplexity | Carnap, Rudolf | chaos | Comte, Auguste | Condorcet, Marie-Jean-Antoine-Nicolas de Caritat, Marquis de | Democritus | Descartes, René | determinism: causal | Diderot, Denis | Dilthey, Wilhelm | economics, philosophy of | Einstein, Albert: philosophy of science | emergent properties | Empedocles | empiricism: logical | Feyerabend, Paul | Frege, Gottlob | Galileo Galilei | genetics: and genomics | Hempel, Carl | Heraclitus | Hilbert, David | Hume, David | Kant, Immanuel | Leibniz, Gottfried Wilhelm | logical positivism | Mach, Ernst | many, problem of | mereology | Mill, John Stuart | monism | multiple realizability | Neurath, Otto | Newton, Isaac | Parmenides | physicalism | physics: intertheory relations in | Plato | Pythagoras | quantum mechanics | quantum theory: quantum field theory | Ramus, Petrus | reduction, scientific: in biology | Rickert, Heinrich | supervenience | Weber, Max | Whewell, William | Wittgenstein, Ludwig