Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Quantum Theory: von Neumann vs. Dirac

First published Tue Jul 27, 2004

The purpose of this entry is to draw a comparison and contrast of the respective contributions of von Neumann and Dirac to the foundations of quantum theory. Though the title may suggest a competition of sorts, the upshot of what follows is somewhat contrary to this suggestion. In many ways their contributions are mutually complementary. For example, although von Neumann's contributions often emphasize mathematical rigor while Dirac's emphasize pragmatic concerns (such as utility and intuitiveness), it is not necessary to choose between rigor and pragmatism. Both approaches are legitimate and worthy of pursuit. The discussion below begins with an assessment of their contributions to the foundations of quantum mechanics. Their contributions to mathematical physics beyond quantum mechanics are then considered, and the focus will be on the influence that these contributions had on subsequent developments in quantum theorizing, particularly with regard to quantum field theory and its foundations. Since philosophers of physics have only recently turned to the foundations of quantum field theory, the hope is that the discussion below will provide a broader perspective that will serve to influence the direction of future research in this area. The term quantum theory is used here to denote a generic class of theories that includes quantum mechanics, quantum field theory, and quantum statistical mechanics.


1. Von Neumann and the Foundations of Quantum Theory

In the late 1920s, von Neumann developed the separable Hilbert space formulation of quantum mechanics, which later became the definitive one (from the standpoint of mathematical rigor, at least). In the mid-1930s, he worked extensively on lattice theory (see the entry on quantum logic), rings of operators, and continuous geometries. Part of his expressed motivation for developing these mathematical theories was to develop an appropriate framework for quantum field theory and a better foundation for quantum mechanics. During this time, he noted two closely related structures, modular lattices and finite type-II factors (a special type of ring of operators), that have what he regarded as desirable features for quantum theory. These observations led to his developing a more general framework, continuous geometries, for quantum theory. Matters did not work out as von Neumann had expected. He soon realized that such geometries must have a transition probability function, if they are to be used to describe quantum mechanical phenomena, and that the resulting structure is not a generalization at all beyond the operator rings that were already available. Moreover, it was determined much later that the type-III factors are the most important type of ring of operators for quantum theory. In addition, a similar verdict was delivered much later with regard to his expectations concerning lattice theory. The lattices that are appropriate for quantum theory are orthomodular — a lattice is orthomodular if it is modular, but the converse is false. Of the three mathematical theories, it is the rings of operators that have proven to be the most important framework for quantum theory. It is possible to use a ring of operators to model key features of physical systems in a purely abstract, algebraic setting; but, to fully exploit the power of this framework in doing physics it is necessary to choose a representation of the ring in a Hilbert space. Thus, the separable Hilbert space remains a crucial framework for quantum theory. The simplest examples of separable Hilbert spaces are the finite dimensional ones (the type-In factors, n an integer), normally used to describe internal degrees of freedom (such as spin) or “qubits” and their generalizations in quantum information. Readers wanting to familiarize themselves with this basic example should consult the entry on quantum mechanics.

1.1 The Separable Hilbert Space Formulation of Quantum Mechanics

Matrix mechanics and wave mechanics were formulated at roughly the same time, between 1925 and 1926. In July 1925, Heisenberg finished his seminal paper “On a Quantum Theoretical Interpretation of Kinematical and Mechanical Relations”. Two months later, Born and Jordan finished their paper, “On Quantum Mechanics”, which is the first rigorous formulation of matrix mechanics. Two months after this, Born, Heisenberg, and Jordan finished “On Quantum Mechanics II”, which is an elaboration of the earlier Born and Jordan paper; it was published in early 1926. These three papers are reprinted in (van der Waerden 1967). Meanwhile, Schrödinger was working on what eventually became his four famous papers on wave mechanics. The first was received by Annalen der Physik in January 1926, the second one month later, and then the third in May and the fourth in June. All four are reprinted in (Schrödinger 1928).

Schrödinger was the first to raise the question of the relationship between matrix mechanics and wave mechanics in (Schrödinger 1926), which was published in Annalen in spring 1926 between the publication of his second and third papers of the famous four. This paper is also reprinted in (Schrödinger 1928). It contains the germ of a mathematical equivalence proof, but it does not contain a rigorous proof of equivalence: the mathematical framework that Schrödinger associated with wave mechanics is a space of continuous and normalizable functions, which is too small to establish the appropriate relation with matrix mechanics. Shortly thereafter, Dirac and Jordan independently provided a unification of the two frameworks. But their respective approaches required essential use of delta functions, which were suspect from the standpoint of mathematical rigor. In 1927, von Neumann published three papers in Göttinger Nachrichten that placed quantum mechanics on a rigorous mathematical foundation and included a rigorous proof (i.e., without the use of delta functions) of the equivalence of matrix and wave mechanics. These papers are reprinted in (von Neumann 1961-1963, Volume I, Numbers 8-10). In the preface to his famous 1932 treatise on quantum mechanics (von Neumann 1955), which is an elegant summary of the separable Hilbert space formulation of quantum mechanics that he provided in the earlier papers, he acknowledges the simplicity and utility of Dirac's formulation of quantum mechanics, but finds it ultimately unacceptable. He indicates that he cannot endure the use of what could then only be regarded as mathematical fictions. Examples of these fictions include Dirac's assumption that every self-adjoint operator can be put in diagonal form and his use of delta functions, which von Neumann characterizes as “improper functions with self-contradictory properties”. His stated purpose is to formulate a framework for quantum mechanics that is mathematically rigorous. Finally, it is worth noting that Rédei and Stöltzner have made a good case that “In the absence of the Hilbert space concept, von Neumann would most probably not have objected to Dirac's pragmatic research strategy” (from page 16 of their paper “Intuition and the Axiomatic Method”, in Other Internet Resources).

What follows is a brief sketch of von Neumann's strategy. First, he recognized the mathematical framework of matrix mechanics as what would now be characterized as an infinite dimensional, separable Hilbert space. Here the term “Hilbert space” denotes a vector space with an inner product that is complete with respect to the norm induced by that inner product; von Neumann additionally included separability (having a countable basis) in his definition of a Hilbert space. He then attempted to specify a set of functions that would instantiate an (infinite-dimensional) separable Hilbert space and could be identified with Schrödinger's wave mechanics. He began with the space of square-integrable functions on the real line. To satisfy the completeness condition, that all Cauchy sequences of functions converge (in the mean) to some function in that space, he specified that integration must be defined in the manner of Lebesgue. To define an inner product operation, he specified that the set of Lebesgue square-integrable functions must be partitioned into equivalence classes modulo the relation of differing on a set of measure zero. That the elements of the space are equivalence classes of functions rather than functions is sometimes overlooked, and it has interesting ramifications for interpretive investigations. It has been argued in (Kronz 1999), for example, that separable Hilbert space is not a suitable framework for quantum mechanics under Bohm's ontological interpretation (also known as Bohmian mechanics).
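
As a concrete illustration of the construction just sketched (the notation here is chosen for convenience and is not von Neumann's), the resulting space is L2(R), whose elements are equivalence classes of Lebesgue square-integrable functions:

\[
\mathcal{H} = L^{2}(\mathbb{R}), \qquad
\langle \varphi, \psi \rangle = \int_{-\infty}^{\infty} \varphi(x)^{*}\, \psi(x)\, dx , \qquad
\varphi \sim \psi \;\Longleftrightarrow\; \mu\bigl(\{x : \varphi(x) \neq \psi(x)\}\bigr) = 0 ,
\]

where μ is Lebesgue measure. The passage to equivalence classes is needed because ⟨φ,φ⟩ = 0 only entails that φ vanishes almost everywhere, not that it is the zero function; the use of Lebesgue integration is what secures completeness (the Riesz-Fischer theorem).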

1.2 Rings of Operators, Quantum Logics, and Continuous Geometries

In a letter to Birkhoff from 1935, which is unpublished, von Neumann says: “I would like to make a confession which may seem immoral: I do not believe in Hilbert space anymore”. This fragment is from a more extended quotation that is published in (Birkhoff 1961). The confession is indeed startling since it comes from the champion of the separable Hilbert space formulation of quantum mechanics and it is issued just three years after the publication of his famous treatise, the definitive work on the subject. The irony is compounded by the fact that less than two years after his confession to Birkhoff, his mathematical theorizing about the abstract mathematical structure that was to supersede the separable Hilbert space, continuous geometries with a transition probability, turned out not to provide a generalization of the separable Hilbert space framework. It is compounded again with interest in that subsequent developments in mathematical physics initiated and developed by von Neumann ultimately served to strengthen the entrenchment of the separable Hilbert space framework in mathematical physics (especially with regards to quantum theory). These matters are explained in more detail in the next section.

Three theoretical developments come together for von Neumann in his theory of continuous geometries during the seven years following 1932: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. By 1934, von Neumann had already made substantial moves towards an algebraic approach to quantum mechanics with the help of Jordan and Wigner — their article, “On an Algebraic Generalization of the Quantum Mechanical Formalism”,  is reprinted in (von Neumann 1961-1963, Vol. II, No. 21). In 1936, he published a second paper on this topic, “On an Algebraic Generalization of the  Quantum Mechanical Formalism (Part I)”, which is reprinted in (von Neumann 1961-1963, Vol. III, No. 9). Neither work was particularly influential, as it turns out. A related paper by von Neumann and Birkhoff, “The Logic of Quantum Mechanics”, was also published in 1936, and it is reprinted in (von Neumann 1961-1963, Vol. IV, No. 7). It was seminal to the development of a sizeable body of literature on quantum logics. It should be noted, however, that this happens only after modularity, a key postulate for von Neumann, is replaced with orthomodularity (a weaker condition). The nature of the shift is clearly explained in (Holland 1970): modularity is in effect a weakening of the distributive laws (limiting their validity to certain selected triples of lattice elements), and orthomodularity is a weakening of modularity (limiting the validity of the distributive laws to an even smaller set of triples of lattice elements). The shift from modularity to orthomodularity was first made in (Loomis 1955). Rapid growth of literature on orthomodular lattices and the foundations of quantum mechanics soon followed. For example, see (Pavicic 1992) for a fairly exhaustive bibliography of quantum logic up to 1990, which has over 1800 entries.
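
For reference, the two lattice conditions at issue can be stated as follows (a standard formulation, given here for orientation rather than as a quotation from the papers cited), where a, b, c are lattice elements, ∨ is join, ∧ is meet, and ⊥ is orthocomplementation:

\[
\text{modularity:}\quad a \leq c \;\Longrightarrow\; a \vee (b \wedge c) = (a \vee b) \wedge c ,
\qquad
\text{orthomodularity:}\quad a \leq b \;\Longrightarrow\; b = a \vee (b \wedge a^{\perp}) .
\]

Every modular ortholattice satisfies the orthomodular law, but the converse fails; the lattice of projections of the bounded operators on an infinite-dimensional Hilbert space is the standard example of an orthomodular lattice that is not modular.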

Of substantially greater note for the foundations of quantum theory are six papers by von Neumann (three jointly published with Murray) on rings of operators, which are reprinted in (von Neumann 1961-1963, Vol. III, Nos 2-7). The first two, “On Rings of Operators” and a sequel “On Rings of Operators II”, were published in 1936 and 1937, and they were seminal to the development of the other four. The third, “On Rings of Operators: Reduction Theory”, was written during 1937-1938 but not published until 1949. The fourth, “On Infinite Direct Products”, was published in 1938. The remaining two, “On Rings of Operators III” and “On Rings of Operators IV” were published in 1941 and 1943, respectively. This massive work on rings of operators was very influential and continues to have an impact in pure mathematics, mathematical physics, and the foundations of physics. Rings of operators are now referred to as “von Neumann algebras” following Dixmier, who first referred to them by this name (stating that he did so following a suggestion made to him by Dieudonne) in the introduction to his 1957 treatise on operator algebras (Dixmier 1981).

A von Neumann algebra is a *-subalgebra of the set of bounded operators B(H) on a Hilbert space H that is closed in the weak operator topology and contains the identity operator — the “*” denotes the adjoint and serves as an allusion to the requirement that each element of the subalgebra must have an adjoint. There are special types of von Neumann algebras that are called “factors”. A von Neumann algebra is a factor if its center (which is the set of elements that commute with all elements of the algebra) is trivial, meaning that each of its elements is a scalar times the identity element. Moreover, von Neumann showed in his reduction-theory paper that all von Neumann algebras that are not factors can be decomposed as a direct sum of factors. There are three mutually exclusive and exhaustive factor types: type-I, type-II, and type-III. Each type has been classified into (mutually exclusive and exhaustive) sub-types: types In (n = 1,2,…,∞), IIn (n = 1,∞), IIIz (0 ≤ z ≤ 1). As mentioned above, the type-In factors (n finite) correspond to finite dimensional Hilbert spaces, while the type-I∞ factor corresponds to the infinite dimensional separable Hilbert space that provides the rigorous framework for wave and matrix mechanics. Von Neumann and Murray distinguished the subtypes for type-I and type-II, but were not able to do so for the type-III factors. Subtypes were not distinguished for these factors until the 1960s and 1970s — see Chapter 3 of (Sunder 1987) or Chapter 5 of (Connes 1994) for details.
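
These definitions can be put compactly in symbols (a standard formulation, stated here for orientation). Writing M′ for the commutant of a *-subalgebra M of B(H) that contains the identity, von Neumann's double commutant theorem shows that closure in the weak operator topology is equivalent to a purely algebraic condition, and the factor condition concerns the center:

\[
\mathcal{M}' = \{ B \in B(H) : BA = AB \ \text{for all } A \in \mathcal{M} \}, \qquad
\mathcal{M} \ \text{is a von Neumann algebra} \iff \mathcal{M} = \mathcal{M}'' ,
\]
\[
Z(\mathcal{M}) = \mathcal{M} \cap \mathcal{M}' , \qquad
\mathcal{M} \ \text{is a factor} \iff Z(\mathcal{M}) = \mathbb{C}\, I .
\]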

As a result of his earlier work on the foundations of quantum mechanics and his work on quantum logic with Birkhoff, von Neumann came to regard the type-II1 factors as likely to be the most relevant for physics. This is a substantial shift since the most important algebra of observables for quantum mechanics was thought at the time to be the set of bounded operators on an infinite-dimensional separable Hilbert space, which is the type-I∞ factor. A brief explanation for this shift is provided below. See the well-informed and lucid account presented in (Rédei 1998) for a much fuller discussion of von Neumann's views on fundamental connections between quantum logic, rings of operators (particularly type-II1 factors), foundations of probability theory, and quantum physics. It is worth noting that von Neumann regarded the type-III factors as a catch-all class for the “pathological” operator algebras; indeed, it took several years after the classificatory scheme was introduced to demonstrate the existence of such factors. It is ironic that the predominant view now seems to be that the type-III factors are the most relevant class for physics (particularly for quantum field theory and quantum statistical mechanics). This point is elaborated further in the next section after explaining below why von Neumann's program never came to fruition.

In the introduction to the first paper in the series of four entitled “On Rings of Operators”, Murray and von Neumann list two reasons why they are dissatisfied with the separable Hilbert space formulation of quantum mechanics. One has to do with a property of the trace operation, which is the operation appearing in the definition of the probabilities for measurement results (the Born rule), and the other with domain problems that arise for unbounded observable operators. The trace of the identity is infinite when the separable Hilbert space is infinite-dimensional, which means that it is not possible to define a correctly normalized a priori probability for the outcome of an experiment (i.e., a measurement of an observable). By definition, the a priori probability for an experiment is the one that assigns equal probability to any two distinct outcomes. Thus, the probability must be zero for each distinct outcome when there is an infinite number of such outcomes, which can occur if and only if the space is infinite dimensional. It is not clear why von Neumann believed that it is necessary to have an a priori probability for every experiment, especially since von Mises clearly believed that a priori probabilities ("uniform distributions" in his terminology) do not always exist (von Mises 1981, pp. 68 ff.) and von Neumann was influenced substantially by von Mises on the foundations of probability (von Neumann 1955, p. 198 fn.). Later, von Neumann's expressed reason for dissatisfaction with infinite dimensional Hilbert spaces changed from probabilistic to algebraic considerations (Birkhoff and von Neumann 1936, p. 118); namely, that it violates Hankel's principle of the preservation of formal law, which leads one to try to preserve modularity — a condition that holds in finite-dimensional Hilbert spaces but not in infinite-dimensional Hilbert spaces. The problem with unbounded observables arises from their only being defined on a merely dense subset of the elements of the space. This means that algebraic operations on unbounded observables (sums and products) cannot be generally defined; for example, it is possible that two unbounded observables A, B are such that the range of B and the domain of A are disjoint, in which case the product AB is meaningless.
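
The first of these difficulties, concerning the trace, can be made explicit as follows (a gloss on the point just made, not a quotation from Murray and von Neumann). On an n-dimensional Hilbert space the a priori probability is represented by the normalized density operator ρ = I/n, and the Born rule assigns to an outcome represented by a projection P the probability

\[
\operatorname{Tr}(\rho P) = \frac{\dim P}{n} ,
\]

where dim P is the dimension of the range of P. When the space is infinite dimensional, Tr I = ∞, so there is no normalization constant that turns the identity into a density operator, and hence no state assigning equal probability to every outcome of a maximal experiment.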

The problems mentioned above do not arise for type-In factors, if n < ∞, nor do they arise for type-II1. That is to say, these factor types have a finite trace operation and are not plagued with the domain problems of unbounded operators. Particularly noteworthy is that the lattice of projections of each of these factor types (type-In for n < +∞ and type-II1) is modular. By contrast, the set of bounded operators on an infinite-dimensional separable Hilbert space, the type-I∞ factor, is not modular; rather, it is only orthomodular. These considerations serve to explain why von Neumann regarded the type-II1 factor as the proper generalization of the type-In (n < +∞) for quantum physics rather than the type-I∞ factor. The shift in the literature from modular to orthomodular lattices that was characterized above is in effect a shift back to von Neumann's earlier position (prior to his confession). But, as was already mentioned, it now seems that this was not the best move either.
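
What makes the type-II1 factors attractive in this respect is that each carries a unique normalized trace, a linear functional tr satisfying (these are standard facts about II1 factors, stated here for orientation)

\[
\operatorname{tr}(I) = 1, \qquad \operatorname{tr}(AB) = \operatorname{tr}(BA), \qquad \operatorname{tr}(A^{*}A) \geq 0 ,
\]

and the trace takes every value in the interval [0, 1] on projections. So a correctly normalized a priori probability is available even though the algebra is not finite dimensional.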

It was von Neumann's hope that his program for generalizing quantum theory would emerge from a new mathematical structure known as “continuous geometry”. He wanted to use this structure to bring together the three key elements that were mentioned above: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. He sought to forge a strong conceptual link between these elements and thereby provide a proper foundation for generalizing quantum mechanics that does not make essential use of Hilbert space (unlike his theory of rings of operators). Unfortunately, it turns out that the class of continuous geometries is too broad for the purposes of axiomatizing quantum mechanics. The class must be suitably restricted  to those having a transition probability. It turns out that there is then no substantial move beyond the separable Hilbert space framework. An unpublished manuscript that was finished by von Neumann in 1937 was prepared and edited by Israel Halperin, and then published as (von Neumann 1981). A review of the manuscript by Halperin was published in (von Neumann 1961-1963, Vol. IV, No. 16) years before the manuscript itself was published. In that review, Halperin notes the following:

The final result, after 200 pages of deep reasoning is (essentially): every such geometry with transition probability can be identified with the projection geometry of a finite factor in some finite or infinite dimensional Hilbert space (Im or II1). This result indicates that continuous geometries do not provide new useful mathematical descriptions of quantum mechanical phenomena beyond that already available from rings of operators.

This unfortunate development does not, however, completely undermine von Neumann's efforts to generalize quantum mechanics. On the contrary, his work on rings of operators does shed significant light on the way forward. The upshot in light of subsequent developments is that von Neumann settled on the wrong factor type for the foundations of physics.

1.3 Algebraic Quantum Field Theory

In 1943, Gelfand and Neumark published an important paper on a class of normed rings, which are now known as abstract C* algebras. Their paper, (Gelfand & Neumark 1943), was influenced by Murray and von Neumann's work on rings of operators, which was discussed in the previous section. In their paper, Gelfand and Neumark focus attention on abstract normed *-rings. They show that any C* algebra can be given a concrete representation in a Hilbert space (which need not be separable). That is to say, there is an isomorphic mapping of the elements of a C* algebra into the set of bounded operators of the Hilbert space. Four years later, Segal published a paper (Segal 1947a) that served to complete the work of Gelfand and Neumark by specifying the definitive procedure for constructing concrete (Hilbert space) representations of an abstract C* algebra. It is called the GNS construction (after Gelfand, Neumark, and Segal). That same year, Segal published an algebraic formulation of quantum mechanics (Segal 1947b), which was substantially influenced by (though deviating somewhat from) von Neumann's algebraic formulation of quantum mechanics (von Neumann 1961-1963, Vol. III, No. 9), which is cited in the previous section. It is worth noting that although C* algebras satisfy Segal's postulates, the algebra that is specified by his postulates is a more general structure known as a Segal algebra. Every C* algebra is a Segal algebra, but the converse is false since Segal's postulates do not require an adjoint operation to be defined. If a Segal algebra is isomorphic to the set of all self-adjoint elements of a C* algebra, then it is said to be special; otherwise, it is said to be exceptional. Although the mathematical theory of Segal algebras has been fairly well developed, a C* algebra is the most important type of algebra that satisfies Segal's postulates.
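
A minimal sketch of the GNS construction may be helpful here (the notation is chosen for convenience). Given a state ω on a C* algebra A (a positive linear functional with ω(I) = 1), one sets

\[
\langle A, B \rangle_{\omega} = \omega(A^{*}B), \qquad
N_{\omega} = \{ A \in \mathcal{A} : \omega(A^{*}A) = 0 \}, \qquad
\mathcal{H}_{\omega} = \overline{\mathcal{A}/N_{\omega}},
\]
\[
\pi_{\omega}(A)\,[B] = [AB], \qquad \Omega_{\omega} = [I],
\]

so that π_ω is a representation of A by bounded operators on the Hilbert space H_ω, Ω_ω is a cyclic vector for that representation, and ω(A) = ⟨Ω_ω, π_ω(A)Ω_ω⟩ for every A in the algebra.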

The algebraic formulations of quantum mechanics that were developed by von Neumann and Segal did not change the way that quantum mechanics is done. Nevertheless, they did have a substantial impact in two related contexts: quantum field theory and quantum statistical mechanics. The key difference leading to the impact has to do with the domain of applicability. The domain of quantum mechanics consists of finite quantum systems, meaning quantum systems that have a finite number of degrees of freedom. In quantum field theory and quantum statistical mechanics, by contrast, the systems of special interest — i.e., quantum fields and particle systems in the thermodynamic limit, respectively — are infinite quantum systems, meaning quantum systems that have an infinite number of degrees of freedom. Dirac was the first to recognize the importance of infinite quantum systems for quantum field theory in (Dirac 1927), which is reprinted in (Schwinger 1958).

Segal was the first to suggest that the beauty and power of the algebraic approach becomes evident when working with an infinite quantum system (Segal 1959, p. 5). The advantage has to do with the existence of unitarily inequivalent representations of the algebra of observables that serves to define the infinite system. In field theory, representations of free fields are unitarily inequivalent to representations of interacting fields, and this is a serious problem. Haag brought this problem to the attention of physicists in (Haag 1955), though in doing so he notes that von Neumann first discovered ‘different’ (i.e., unitarily inequivalent) representations much earlier in (von Neumann 1938). The key advantage of the algebraic approach, according to Segal (1959, pp. 5-6), is that one may work in the abstract algebraic setting where it is possible to obtain interacting fields from free fields by an automorphism on the algebra, one that need not be unitarily implementable. Segal notes (1959, p. 6) that von Neumann had a similar idea (that field dynamics are to be expressed as an automorphism on the algebra) in an unpublished manuscript, (von Neumann 1937).
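
The relevant terminology can be stated as follows (a standard definition, given here for orientation). An automorphism α of the algebra of observables A is unitarily implementable in a representation π of A on a Hilbert space H just in case there is a unitary operator U on H such that

\[
\pi(\alpha(A)) = U\, \pi(A)\, U^{*} \quad \text{for all } A \in \mathcal{A} .
\]

On the algebraic approach the dynamics may be specified directly as a one-parameter family of automorphisms of A, whether or not a unitary of this sort exists in a given representation.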

After suggesting that there is this important similarity between his and von Neumann's approaches to infinite quantum systems, Segal draws an important contrast that serves to give the advantage to his approach over von Neumann's. The key mathematical difference, according to Segal, is that von Neumann is working with a weakly closed ring (meaning that it is closed with respect to the weak operator topology), whereas Segal is working with a uniformly closed ring (closed with respect to the uniform topology). This difference is crucial because it has the following interpretive significance, which rests on operational considerations:

The present intuitive idea is roughly that the only measurable field-theoretic variables are those that can be expressed in terms of a finite number of canonical operators, or uniformly approximated by such; the technical basis is a uniformly closed ring (more exactly, an abstract C*-algebra). The crucial difference between the two varieties of approximation arises from the fact that, in general, weak approximation has only analytical significance, while uniform approximation may be defined operationally, two observables being close if the maximum (spectral) value of the difference is small (Segal 1959, p. 7).

Initially, it appeared that Segal's assessment of the relative merits of von Neumann algebras and C* algebras with respect to physics was substantiated by a seminal paper, (Haag & Kastler 1964). Among other things, Haag and Kastler dealt with some of the problems having to do with inequivalent representations by introducing a notion of physical equivalence that is based on Fell's mathematical idea of weak equivalence (Fell 1960). Subsequent developments in both mathematics and mathematical physics, however, run counter to Segal's assessment. There is a complete classificatory scheme for von Neumann algebras, as indicated in the previous section, but there is no such scheme for C* algebras; thus, von Neumann algebras are much more mathematically convenient. More importantly, von Neumann algebras are substantially more relevant to physics than C* algebras: type-III factors are the most relevant class for physics within the algebraic approach to quantum statistical mechanics and quantum field theory.

In algebraic quantum statistical mechanics, an infinite quantum system is defined by specifying an abstract algebra of observables. A particular state may then be used to specify a concrete representation of the algebra as a set of bounded operators in a Hilbert space. Among the most important types of states that are considered in algebraic statistical mechanics are the equilibrium states, which are often referred to as “KMS states” (since they were first introduced by the physicists Kubo, Martin, and Schwinger). There is a continuum of KMS states since there is at least one KMS state for each possible temperature value τ of the system, for 0 ≤ τ ≤ +∞. Each KMS state corresponds to a representation of the algebra of observables that defines the system, and each of these representations is unitarily inequivalent to any other. It turns out that each representation that corresponds to a KMS state is a factor: if τ = 0 then it is a type-I factor, if τ = +∞ then it is a type-II factor, and if 0 < τ < +∞ then it is a type-III factor. Thus, type-III factors play a predominant role in algebraic quantum statistical mechanics. The algebraic approach has proven most effective in quantum statistical mechanics. It is extremely useful for characterizing many important macroscopic quantum effects including crystallization, ferromagnetism, superfluidity, structural phase transition, Bose-Einstein condensation, and superconductivity. A good introductory presentation is (Sewell 1986), and for a more advanced discussion see (Bratteli & Robinson 1979-1981).
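
For orientation, the defining condition on equilibrium states can be stated as follows (one standard formulation; analyticity requirements are suppressed, and β is the inverse temperature corresponding to the temperature τ). A state ω satisfies the KMS condition at inverse temperature β with respect to the time-evolution automorphisms αt just in case

\[
\omega\bigl(A\, \alpha_{t + i\beta}(B)\bigr) = \omega\bigl(\alpha_{t}(B)\, A\bigr)
\]

for all real t and all observables A, B in a suitable dense subalgebra. Each such state determines, via the GNS construction sketched earlier in this section, a representation of the algebra, and it is the factor generated by that representation whose type is indicated above.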

In algebraic quantum field theory, an algebra of observables is associated with bounded regions of Minkowski spacetime (and unbounded regions including all of spacetime by way of certain limiting operations) that are required to satisfy standard axioms of local structure: isotony, locality, covariance, additivity, positive spectrum, and a unique invariant vacuum state. The resulting set of algebras on Minkowski spacetime that satisfy these axioms is referred to as the net of local algebras. It has been shown that special subsets of the net of local algebras — those corresponding to various types of unbounded spacetime regions such as tubes, monotones (a tube that extends infinitely in one direction only), and wedges — are type-III factors. Of particular interest for the foundations of physics are the algebras that are associated with bounded spacetime regions, such as a double cone (the finite region of intersection of a forward and a backward light cone). Here the results are suggestive. There are at least three special cases that occur in algebraic quantum field theory. It may be shown that the algebra associated with a double cone is a type-III factor, if the field is free or satisfies the condition known as “asymptotic dilation invariance”, or satisfies the condition known as “duality” as well as enhanced versions of isotony, locality, and additivity. The predominant view seems to be that these cases suggest that the algebra associated with a bounded spacetime region will typically be a type-III factor — see (Horuzhy 1990, p. 35) and (Haag 1996, p. 268).
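
Two of the axioms just listed admit a particularly compact statement (given here for orientation; O1 and O2 are bounded open regions of Minkowski spacetime and A(O) is the algebra associated with the region O):

\[
\text{isotony:}\quad \mathcal{O}_1 \subseteq \mathcal{O}_2 \;\Longrightarrow\; \mathcal{A}(\mathcal{O}_1) \subseteq \mathcal{A}(\mathcal{O}_2),
\qquad
\text{locality:}\quad \mathcal{O}_1,\ \mathcal{O}_2 \ \text{spacelike separated} \;\Longrightarrow\; [\mathcal{A}(\mathcal{O}_1), \mathcal{A}(\mathcal{O}_2)] = \{0\}.
\]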

One important area for interpretive investigation is the existence of a continuum of unitarily inequivalent representations of an algebra of observables. Attitudes towards inequivalent representations differ drastically along lines of research within algebraic quantum theory. The predominant view in algebraic quantum statistical mechanics is that inequivalent representations play a crucial physical role in the theory; they clearly have physical significance. In algebraic quantum field theory the predominant view is that a continuum of inequivalent representations constitutes an embarrassment; it is sometimes mitigated by appending a pragmatic twist to the effect that one should simply choose the most convenient representation for the purpose at hand. This divergence of opinion clearly deserves to be understood. It may be that each side is correct with regards to its respective domain of application, or that one side is seriously mistaken. This is an issue that merits further interpretive investigation.

Further topics of interest that are not addressed here, though they are closely related to the issues discussed in this section, include the interpretive significance of Haag's theorem, inequivalent representations, physical equivalence, superselection, and developments in the 1990s involving the Tomita-Takesaki modular theory. In subsequent versions of this entry, I hope to address at least some of these topics in this section.

2. Dirac and the Foundations of Quantum Theory

Dirac's formal framework for quantum mechanics was very useful and influential despite its lack of rigor. It was used extensively by physicists and it inspired some powerful mathematical developments in functional analysis. Eventually, mathematicians developed a suitable framework for placing Dirac's formal framework on a firm mathematical foundation, which is known as a rigged Hilbert space (also referred to as a Gelfand triplet). This came about as follows. A rigorous definition of the δ-function became possible in distribution theory, which was developed by Schwartz from the mid-1940s to the early 1950s. Distribution theory inspired Gelfand and collaborators during the mid-to-late 1950s to formulate the notion of a rigged Hilbert space, the firm foundation for Dirac's formal framework. This development was facilitated by Grothendieck's notion of a nuclear space, which he introduced in the mid-1950s. The rigged Hilbert space formulation of quantum mechanics was then developed by Böhm and Roberts in 1966. Since then, it has been extended to a variety of different contexts in the quantum domain including decay phenomena and the arrow of time. The mathematical developments of Schwartz, Gelfand, and others had a substantial effect on quantum field theory as well. Distribution theory was taken forward by Wightman and his co-workers (the Princeton school) in developing the axiomatic approach to quantum field theory from the mid-1950s to the mid-1960s. In the late 1960s, the axiomatic approach was explicitly put into the rigged Hilbert space framework by Bogoliubov and co-workers (the Moscow school).

Although these developments were only indirectly influenced by Dirac, by way of the mathematical developments that are associated with his formal approach to quantum mechanics, there are other elements of his work that had a more direct and very substantial impact on the development of quantum field theory. In the 1930s, Dirac developed a Lagrangian formulation of quantum mechanics and applied it to quantum fields (Dirac 1933), and the latter inspired Feynman to develop the path-integral approach to quantum field theory (Feynman 1948). The mathematical foundation for path-integral functionals is still lacking (Rivers 1987, pp. 109-134), though substantial progress has been made (DeWitt-Morette et al. 1979). Despite this shortcoming, it remains the most useful and influential approach to quantum field theory to date. In the 1940s Dirac developed a form of quantum electrodynamics that involved an indefinite metric (Dirac 1943) — see also (Pauli 1943) in that connection. This had a substantial influence on later developments, first in quantum electrodynamics in the early 1950s with the Gupta-Bleuler formalism, and in a variety of quantum field theory models such as vector meson fields and quantum gravity fields by the late 1950s — see Chapter 2 of (Nagy 1966) for examples and references.

2.1 Dirac's δ-function, Principles, and Bra-Ket Notation

Dirac's attempt to prove the equivalence of matrix mechanics and wave mechanics made essential use of the δ-function, as indicated above. The δ-function was used by physicists before Dirac, but it became a standard tool in many areas of physics only after Dirac very effectively put it to use in quantum mechanics. It then became widely known by way of his textbook (Dirac 1930), which was based on a series of lectures on quantum mechanics given by Dirac at Cambridge University. This textbook saw three later editions: the second in 1935, the third in 1947, and the fourth in 1958. The fourth edition has been reprinted many times, and it is still being used at many universities. Its staying power is due, in part, to another innovation that was introduced by Dirac in the third edition, his bra-ket formalism. He first published this formalism in (Dirac 1939), but the formalism did not become widely used until after the publication of the third edition of his textbook. There is no question that these tools, first the δ-function and then the bra-ket notation, were extremely effective for physicists practising and teaching quantum mechanics both with regards to setting up equations and to the performance of calculations. Most quantum mechanics textbooks use δ-functions and plane waves, which are key elements of Dirac's formal framework but are not included in von Neumann's rigorous mathematical framework for quantum mechanics. Working physicists as well as teachers and students of quantum mechanics often use Dirac's framework because of its simplicity, elegance, power, and relative ease of use. Thus, from the standpoint of pragmatics, Dirac's framework is much preferred over von Neumann's. Despite its utility and corresponding popularity, the lack of a rigorous mathematical definition of the delta function was regarded as a serious shortcoming (as already noted). This shortcoming was remedied in the 1960s, soon after the notion of a rigged Hilbert space was introduced.
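
The formal properties that troubled von Neumann are easy to state (this is the textbook characterization, not a quotation from Dirac): the δ-function is supposed to vanish everywhere except at a single point, to integrate to one, and thereby to sift out the value of a test function at that point,

\[
\delta(x) = 0 \ \ (x \neq 0), \qquad
\int_{-\infty}^{\infty} \delta(x)\, dx = 1, \qquad
\int_{-\infty}^{\infty} \delta(x - a)\, f(x)\, dx = f(a) .
\]

No Lebesgue-integrable function has the first two properties at once, which is why a rigorous treatment had to await distribution theory.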

2.2 The Rigged Hilbert Space Formulation of Quantum Mechanics

It is to Dirac's credit that mathematicians worked very hard to provide a rigorous foundation for his formal framework. One key element was Schwartz's theory of distributions, which was developed between the mid-1940s and the early 1950s (Schwartz 1945; 1950-1951). Another key element, the notion of a nuclear space, was developed by Grothendieck in the mid-1950s (Grothendieck 1955). This notion made possible the generalized-eigenvector decomposition theorem for self-adjoint operators in rigged Hilbert space — for the theorem see (Gelfand and Vilenken 1964, pp. 119-127), and for a brief historical account of the convoluted path leading to it see (Berezanskii 1968, pp. 756-760). The decomposition principle provides a rigorous way to handle observables such as position and momentum in the manner in which they are presented in Dirac's formal framework. These mathematical developments culminated in the early 1960s with Gelfand and Vilenkin's characterization of a structure that they referred to as a rigged Hilbert space (Gelfand and Vilenkin 1964, pp. 103-127). It is unfortunate that their chosen name for this mathematical structure is doubly misleading. First, there is a natural inclination to regard it as denoting a type of Hilbert space, one that is rigged in some sense, but this inclination must be resisted. Second, the term rigged has an unfortunate connotation of illegitimacy, as in the terms rigged election (such as the Florida election in November 2000) or rigged roulette table, and this connotation must be dismissed as prejudiced. There is nothing illegitimate about a rigged Hilbert space from the standpoint of mathematical rigor (or any other relevant standpoint). A more appropriate analogy may be drawn with the notion of a rigged ship: the term rigged in this context means fully equipped. But this analogy has its limitations since a rigged ship is a fully equipped ship, but (as the first point indicates) a rigged Hilbert space is not a Hilbert space, though it is generated from a Hilbert space in the manner now to be described.

A rigged Hilbert space is a dual pair of spaces (Φ, Φx ) that can be generated from a separable Hilbert space Η using a sequence of norms (or semi-norms); the sequence of norms is generated using a nuclear operator (a good approximate meaning is an operator of trace-class, meaning that the trace of the modulus of the operator is finite). In the mathematical theory of topological vector spaces, the space Φ is characterized in technical terms as a nuclear Fréchet space. To say that Φ is a Fréchet space means that it is a complete metric space, and to say that it is nuclear means that it is the projective limit of a sequence of Hilbert spaces in which the associated topologies get rapidly finer with increasing n (i.e., the convergence conditions are increasingly strict); the term nuclear is used because the Hilbert-space topologies are generated using a nuclear operator. In distribution theory, the space Φ is characterized as a test-function space, where a test-function is thought of as a very well-behaved function (being continuous, n-times differentiable, having a bounded domain or at least dropping off exponentially beyond some finite range, etc). Φx is a space of distributions, and it is the topological dual of Φ, meaning that it corresponds to the complete space of continuous linear functionals on Φ. It is also the inductive limit of a sequence of Hilbert spaces in which the topologies get rapidly coarser with increasing n. Because the elements of Φ are so well-behaved, Φx may contain elements that are not so well-behaved, some being singular or improper functions (such as Dirac's δ-function). Φ is the topological anti-dual of Φx , meaning that it is the complete set of continuous anti-linear functionals on Φx; it is anti-linear rather than linear because multiplication by a scalar is defined in terms of the scalar's complex conjugate.

It is worth noting that neither Φ nor Φx is a Hilbert space in that each lacks an inner product that induces a metric with respect to which the space is complete, though for each space there is a topology with respect to which the space is complete. Nevertheless, each of them is closely related to the Hilbert space Η from which they are generated: Φ is densely embedded in Η, which in turn is densely embedded in Φx. Two other points are worth noting. First, dual pairs of this sort can also be generated from a pre-Hilbert space, which is a space that has all the features of a Hilbert space except that it is not complete, and doing so has the distinct advantage of avoiding the partitioning of functions into equivalence classes (in the case of function spaces). The term rigged Hilbert space is typically used broadly to include dual pairs generated from either a Hilbert space or a pre-Hilbert space. Second, the term Gelfand triplet is sometimes used instead of the term rigged Hilbert space, though it refers to the ordered set (Φ, Η, Φx ), where Η is the Hilbert space used to generate Φ and Φx. The appropriate connotation for rigged, as noted earlier, is fully equipped.
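
The standard example may help fix ideas (given here for illustration; it is the case most directly relevant to Dirac's formalism). Taking Φ to be the Schwartz space S(R) of rapidly decreasing smooth functions on the real line yields the triplet

\[
\mathcal{S}(\mathbb{R}) \;\subset\; L^{2}(\mathbb{R}) \;\subset\; \mathcal{S}^{\times}(\mathbb{R}),
\]

with both inclusions dense and continuous. Dirac's δ-functions and the plane waves e^{ipx} are not elements of L2(R), but each defines a continuous functional on S(R) and so belongs to Sx(R).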

The dual pair (Φ, Φx ) is fully equipped in the sense that it possesses the means to represent important operators for quantum mechanics that are problematic in a separable Hilbert space, particularly the unbounded operators that correspond to the observables position and momentum, and it does so in a particularly effective and unproblematic manner. As already noted, these operators have no eigenvalues or eigenvectors in a separable Hilbert space; moreover, they are only defined on a dense subset of the elements of the space and this leads to domain problems. These undesirable features also motivated von Neumann to seek an alternative to the separable Hilbert space framework for quantum mechanics, as noted above. In a rigged Hilbert space, the operators corresponding to position and momentum can have a complete set of eigenfunctionals (i.e., generalized eigenfunctions). The key result is known as the nuclear spectral theorem (and it is also known as the Gelfand-Maurin theorem). One version of the theorem says that if A is a symmetric linear operator defined on the space Φ and it admits a self-adjoint extension to the Hilbert space H, then A possesses a complete system of eigenfunctionals belonging to the dual space Φx (Gelfand and Shilov 1967, chapter 4). That is to say, provided that the stated condition is satisfied, A can be extended by duality to Φx , its extension Ax is continuous on Φx (in the operator topology in Φx), and Ax satisfies a completeness relation (meaning that it can be decomposed in terms of its eigenfunctionals and their associated eigenvalues). The duality formula for extending A to Φx is <φ|Axκ> = <Aφ|κ>, for all φ∈Φ and for all κ∈Φx. The completeness relation says that for all φ,θ∈Φ:

<Aφ|θ> = ∫v(A) λ<φ|λ><λ|θ>* dμ(λ),

where v(A) is the set of all generalized eigenvalues of Ax (i.e., the set of all scalars λ for which there is a corresponding nonzero element of Φx, also denoted λ, such that <φ|Axλ> = λ<φ|λ> for all φ∈Φ).
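
A simple instance (stated here for illustration, using the Schwartz-space rigging mentioned above and writing <φ|κ> for the value of the functional κ∈Φx at φ∈Φ) is the position operator X, (Xφ)(x) = xφ(x). For each real number x0, the functional δx0 defined by δx0(φ) = φ(x0) belongs to Φx and is a generalized eigenfunctional of X with eigenvalue x0:

\[
\langle \varphi \,|\, X^{\times}\delta_{x_0} \rangle
= \langle X\varphi \,|\, \delta_{x_0} \rangle
= x_0\,\varphi(x_0)
= x_0\,\langle \varphi \,|\, \delta_{x_0} \rangle
\quad \text{for all } \varphi \in \Phi .
\]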

The rigged Hilbert space representation of these observables is about as close as one can get to Dirac's elegant and extremely useful formal representation of them with the added feature of being placed within a rigorous, well-founded mathematical framework. It should be noted, however, that there is a sense in which it is a proper generalization of Dirac's framework. The rigging (choice of nuclear operator that determines the test function space) can result in different sets of generalized eigenvalues being associated with an operator. For example, the set of (generalized) eigenvalues for the momentum operator (in one dimension) corresponds to the real line, if the space of test functions is the set S of infinitely differentiable functions of x which together with all derivatives vanish faster than any inverse power of x as x goes to infinity, whereas its associated set of eigenvalues is the complex plane, if the space of test functions is the set D of infinitely differentiable functions with compact support (i.e., vanishing outside of a bounded region of the real line). If complex eigenvalues are not desired, then S would be a more appropriate choice than D — see (Nagel 1989) for a brief discussion. But there are situations in which it is desirable for an operator to have complex eigenvalues. This is so, for example, when a system exhibits resonance scattering (a type of decay phenomenon), in which case one would like the Hamiltonian to have complex eigenvalues — see (Böhm & Gadella 1989). (Of course, it is impossible for a self-adjoint operator to have complex eigenvalues in Hilbert space.)

Soon after the development of the theory of rigged Hilbert spaces by Gelfand and his associates, the theory was used to develop a new formulation of quantum mechanics. This was done independently in (Böhm 1966) and (Roberts 1966). This innovation ultimately proved more than just a curiosity. It was later demonstrated that the rigged Hilbert space formulation of quantum mechanics can handle a broader range of phenomena than the separable Hilbert space formulation. That broader range includes scattering resonances and decay phenomena (Böhm and Gadella 1989), as already noted. More recently, Böhm has extended this range to include a quantum mechanical characterization of the arrow of time (Böhm et al. 1997). The Prigogine school has developed an alternative characterization of the arrow of time using the rigged Hilbert space formulation of quantum mechanics (Antoniou and Prigogine 1993). Kronz has used this formulation to characterize quantum chaos in open quantum systems (Kronz 1998, 2000). More recently, Castagnino and Gadella have used it to characterize decoherence in closed quantum systems (Castagnino & Gadella 2003).

2.3 Axiomatic Quantum Field Theory

In the early 1950s, theoretical physicists were inspired to axiomatize quantum field theory. One motivation for axiomatizing a theory, not the one for the case now under discussion, is to express the theory in a completely rigorous form in order to standardize the expression of the theory as a mature conceptual edifice. Another motivation, more akin to the case in point, is to embrace a strategic withdrawal to the foundations to determine how renovation should proceed on a structure that is threatening to collapse due to internal inconsistencies. One then looks for existing piles (fundamental postulates) that penetrate through the quagmire to solid rock, and attempts to drive home others at advantageous locations. Properly supported elements of the superstructure (such as the characterization of free fields, dispersion relations, etc.) may then be distinguished from those that are untrustworthy. The latter need not be razed immediately, and may ultimately glean supportive rigging from components not yet constructed. In short, the theoretician hopes that the axiomatization will effectively separate sense from nonsense, and that this will serve to make possible substantial progress towards the development of a mature theory. Grounding in a rigorous mathematical framework can be an important part of the exercise, and that was a key aspect of the axiomatization of the Princeton school (Wightman and co-workers) and the Moscow school (Bogoliubov and co-workers). The mathematical framework that was chosen by both groups was Schwartz's theory of distributions. As already noted, distribution theory was later formulated in the theory of topological vector spaces by Gelfand and co-workers by means of the notion of a rigged Hilbert space. As axiomatic quantum field theory matured, theoreticians (particularly the Moscow school) adopted the rigged Hilbert space language — more on this follows.

In the mid-1950s, Schwartz's theory of distributions was used by Wightman to develop an abstract formulation of quantum field theory (Wightman 1956), which later came to be known as axiomatic quantum field theory. Mature statements of this formulation are presented in (Wightman & Gårding 1964) and in (Streater & Wightman 1964). It was further refined in the late 1960s by the Moscow school, who explicitly place axiomatic quantum field theory in the rigged Hilbert space framework (Bogoliubov et al. 1975, p. 256). This is not the place to rehearse the key postulates of the axiomatic approach. It will suffice to mention the usual names of these axioms followed by a brief parenthetical description to provide a sense for their character. It is by now standard within the axiomatic approach to put forth the following six postulates: spectral condition (there are no negative energies or imaginary masses), vacuum state (it exists and is unique), domain axiom for fields (quantum fields correspond to operator-valued distributions), transformation law (unitary representation in the field-operator (and state) space of the restricted inhomogeneous Lorentz group — “restricted” means inversions are excluded, and “inhomogeneous” means that translations are included), local commutativity (field measurements at spacelike separated regions do not disturb one another), asymptotic completeness (the scattering matrix is unitary — this assumption is sometimes weakened to cyclicity of the vacuum state with respect to the polynomial algebra of free fields). Rigged Hilbert space entered the axiomatic framework by way of the domain axiom, so this axiom will be discussed in more detail below.

In classical physics, a field is characterized as a scalar- (or vector- or tensor-) valued function φ(x) on a domain that corresponds to some subset of spacetime points. In quantum field theory a field is characterized by means of an operator rather than a function. A field operator may be obtained from a classical field function by quantizing the function in the canonical manner — cf. (Mandl 1959, pp. 1-17). For convenience, the field operator associated with φ(x) is denoted below by the same expression (since the discussion below only concerns field operators). Field operators that are relevant for quantum field theory are too singular to be regarded as realistic, so they are smoothed out over their respective domains using elements of a space of well-behaved functions known as test functions. There are many different test-function spaces (Gelfand & Shilov 1968, Chapter 4). At first, the test-function space of choice for axiomatic quantum field theory was the Schwartz space Σ, the space whose elements have partial derivatives of all orders at each point and are such that each function and its derivatives decrease faster than |x|^-n for any n∈Ν as |x|→∞. It was later determined that some realistic models require the use of other test-function spaces. The smoothed field operators φ[f ] for f ∈Σ are known as quantum field operators, and they are defined as follows

φ[f ] = ∫ d⁴x f (x)φ(x).

The integral (over the domain of the field operator) of the product of the test function f (x) and the field operator φ(x) serves to "smooth out" the field operator over its domain. It is postulated within the axiomatic approach that a quantum field operator φ[f ] may be represented as an unbounded operator on a separable Hilbert space Η, and that {φ[f ]: f ∈Σ} (the set of smoothed field operators associated with φ(x)) has a dense domain Ω in Η. The smoothed field operators are often referred to as operator-valued distributions, and this means that for every Φ,Ψ∈Ω there is an element of the space of distributions Σx, the topological dual of Σ, that may be equated to the expression <Φ|φ[ ]|Ψ>. If Ω’ denotes the set of vectors obtained by applying all polynomials of elements of {φ[f ]: f ∈Σ} onto Χ (the unique vacuum state), then the axioms mentioned above entail that Ω’ is dense in Η (asymptotic completeness) and that Ω’⊂Ω (domain axiom). The elements of Ω correspond to possible states of the elements of {φ[f ]: f ∈Σ}. Though only one field has been considered thus far, the formalism is easily generalizable to a countable number of fields with an associated set of countably indexed field operators φk(x) — cf. (Streater and Wightman 1964).
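
In other words (a restatement of the requirement in the notation just introduced), the assignment f ↦ φ[f ] is linear in the test function, and for each fixed pair of vectors in the domain it determines a distribution:

\[
\varphi[\alpha f + \beta g] = \alpha\, \varphi[f] + \beta\, \varphi[g],
\qquad
f \;\longmapsto\; \langle \Phi \,|\, \varphi[f]\, \Psi \rangle \ \in\ \Sigma^{\times}
\quad \text{for fixed } \Phi, \Psi \in \Omega .
\]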

As noted earlier, the appropriateness of the rigged Hilbert space framework enters by way of the domain axiom. Concerning that axiom, Wightman says the following (in the notation introduced above, which differs slightly from that used by Wightman).

At a more advanced stage in the theory it is likely that one would want to introduce a topology into Ω such that φ[f ] becomes a continuous mapping of Ω into Ω. It is likely that this topology has to be rather strong. We want to emphasize that so far we have only required that <Φ|φ[f ]|Ψ> be continuous in f for Φ,Ψ fixed; continuity in the pair Φ,Ψ cannot be expected before we put a suitable strong topology on Ω (Wightman and Gårding 1964, p. 137).

In (Bogoliubov et al. 1975, p. 256), a topology is introduced to serve this role, though it is introduced on Ω’ rather than on Ω. Shortly thereafter, they assert that it is not hard to show that Ω’ is a complete nuclear space with respect to this topology. This serves to justify a claim they make earlier in their treatise:

… it is precisely the consideration of the triplet of spaces Ω⊂Η⊂Ω* which give a natural basis for both the construction of a general theory of linear operators and the correct statement of certain problems of quantum field theory (Bogoliubov et al. 1975, p. 34).

Note that they refer to the triplet Ω⊂Η⊂Ω* as a rigged Hilbert space. In the terminology introduced above, they refer in effect to the Gelfand triplet (Ω, Η, Ωx ) or (equivalently) the associated rigged Hilbert space (Ω, Ωx ) .

Finally, it is worth mentioning that although algebraic quantum field theory is presented axiomatically and seems to be equally deserving of the name “axiomatic quantum field theory”, the preference here is to restrict this term in the manner indicated just above. This restriction has more to do with the term “field” than it does with the term “axiomatic”. In quantum field theory, a field is an abstract system having an infinite number of degrees of freedom. Sub-atomic quantum particles are field effects that appear in special circumstances. In algebraic quantum field theory, there is a further abstraction: the most fundamental entities are the elements of the algebra of local (and quasi-local) observables, and the field is a derived notion. The term local means bounded within a finite spacetime region, and an observable is not regarded as a property belonging to an entity other than the spacetime region itself. The term quasi-local is used to indicate that we take the union of all bounded spacetime regions. In short, the algebraic approach focuses on local (or quasi-local) observables and treats the notion of a field as a derivative notion; whereas the axiomatic approach (as characterized just above) regards the field concept as the fundamental notion. Indeed, it is common practice for proponents of the algebraic approach to distance themselves from the field notion by referring to their theory as “local quantum physics”. The two approaches are mutually complementary — they have developed in parallel and have influenced each other by analogy (Wightman 1976). The hope has been and continues to be that it will eventually be possible to form a more robust connection between the two approaches (Horuzhy 1990).

One topic of interest that is not addressed here (though it is closely related to the issues discussed in this section) is the interpretive significance of negative probabilities, which arise in connection with quantum field theory in indefinite metric spaces. In subsequent versions of this entry, I hope to address this topic in this section. The same goes for a discussion of the extent to which connections have been forged between the axiomatic and the algebraic approaches, and the prospects for a realistic interpretation of the rigged Hilbert space formulations of quantum mechanics and quantum field theory.

Bibliography

Other Internet Resources

Algebraic Quantum Field Theory

Axiomatic Quantum Field Theory

Philosophical Discussions Related to Axiomatic or Algebraic Quantum Field Theory

Articles

Related Entries

quantum mechanics | quantum mechanics: Bohmian mechanics | quantum mechanics: the role of decoherence in | quantum theory: quantum entanglement and information | quantum theory: quantum field theory | quantum theory: quantum logic and probability theory