Stanford Encyclopedia of Philosophy

Notes to The Role of Decoherence in Quantum Mechanics

1. As long as decoherence yields only ‘effective superselection rules’ (see Section 2.2), this standard framework will be enough. If, however, decoherence can yield exact superselection rules (as suggested perhaps by the discussion of charge in Section 5 below), the required framework may change. The framework for strict superselection rules is in general that of so-called algebraic quantum mechanics. Even if this is required, however, an assessment of the role of decoherence in this framework would have to wait for a systematic discussion of the interpretational implications of algebraic quantum mechanics. (My thanks to Hans Primas for discussion of this point.)

2. A version of this article was given at the Exploratory Workshop on Quantum Mechanics on the Large Scale, The Peter Wall Institute for Advanced Studies, The University of British Columbia, 17-27 April 2003, on whose website electronic versions of this and several of the other talks are linked (see under Other Internet Resources).

3. Notice that these probabilities are well-defined in quantum mechanics, but only in the context of a different experiment, namely one with detection at the slits.

4. Realistically, in each single scattering the electron will couple to non-orthogonal states of the environment, thus experiencing only a partial suppression of interference. However, repeated scatterings will suppress interference very effectively.
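To see why repeated scatterings are so effective (a minimal sketch, in notation not used in the original note): if each scattering correlates the two components ψ₁ and ψ₂ of the electron's state with non-orthogonal environmental states, the interference term is multiplied by their overlap, and independent scatterings compound these factors,

\[
(\psi_1 + \psi_2)\,|E_0\rangle \;\longrightarrow\; \psi_1\,|E_1\rangle + \psi_2\,|E_2\rangle ,
\qquad 0 < |\langle E_1 | E_2 \rangle| < 1 ,
\]
\[
\bigl|\text{interference term after } N \text{ scatterings}\bigr| \;\propto\; \prod_{k=1}^{N} \bigl|\langle E_1^{(k)} | E_2^{(k)} \rangle\bigr| ,
\]

which goes to zero exponentially in N as long as the overlaps stay bounded away from 1, even if each individual scattering suppresses interference only slightly.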

5. Unfortunately, this distinction between ‘true’ collapse (whether or not it is a process that in fact happens in nature) and ‘as if’ collapse is sometimes overlooked, muddling conceptual discussions: for further discussion of this point, see e.g. Pearle (1997), and also Zeh (1995, pp. 28-29).

6. Trajectories are meant here in the sense of the theory of classical stochastic processes. For more details of the decoherent histories approach see the overview article by Halliwell (1995), and for a short discussion of some of its conceptual aspects, see Section 7 in the entry on Everett's relative-state formulation of quantum mechanics.

7. For a numerical example, see the next footnote.

8. These values are calculated on the basis of the classic model by Joos and Zeh (1985). The same and similar calculations also reveal that the time scales for the process are extremely short: the above coherence length is reached after a microsecond of exposure to air, and suppression of interference on the length scale of 10⁻¹² cm is achieved already after a nanosecond. Length and time scales for more massive objects are smaller still. For a less technical partial summary of Joos and Zeh's results, see also Bacciagaluppi (2000).
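For readers who want the scaling behind these figures, the long-wavelength (recoil-free) limit of the Joos–Zeh model gives an exponential decay of spatial coherences and a shrinking coherence length (the localisation rate Λ is the model- and environment-dependent parameter tabulated by Joos and Zeh; its specific values are not reproduced here):

\[
\rho(x, x', t) \;\approx\; \rho(x, x', 0)\, e^{-\Lambda t (x - x')^{2}} ,
\qquad
\ell(t) \;\sim\; (\Lambda t)^{-1/2} ,
\]

where ℓ(t) is the length scale over which interference survives after time t. The microsecond and nanosecond figures above correspond to evaluating these expressions with the value of Λ appropriate to a dust particle scattering air molecules.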

9. For a review of more rigorous arguments, see e.g. Zurek (2003, pp. 28-30). In particular, such results can be obtained from the Wigner function formalism; see e.g. Zurek (1991) and, in more detail, Zurek and Paz (1994), who then apply these results to derive chaotic trajectories in quantum mechanics (see Section 5 below).

10. Indeed, one should expect slight deviations from Newtonian behaviour. These are due both to the tendency of the individual components to spread and to the detection-like nature of the interaction with the environment, which further enhances the collective spreading of the components (this detail will be of importance in Section 3.2). These deviations appear as ‘noise’, i.e. particles being kicked slightly off course. For a very accessible discussion of alpha-particle tracks roughly along these lines, see Barbour (1999, Chap. 20). Depending on the type of system and the details of the interaction, the noise component might actually dominate the motion, and one obtains (classical) Brownian-motion-type behaviour.

11. As a numerical example, take a macroscopic particle of radius 1 cm (mass 10 g) interacting with air under normal conditions. After an hour the overall spread of its state is of the order of 1 m. (This estimate uses equations [3.107] and [3.73] in Joos and Zeh (1985).)

12. Von Neumann's justification for espousing a collapse approach in the first place arguably relies: (a) on his ‘insolubility theorem’, showing that the phenomenological indeterminism in the measurement cannot be explained in terms of ignorance of the exact state of the apparatus (later generalised by several authors; see the discussion and references in the entry on collapse theories); (b) on his ‘no-go’ theorem for hidden variables, which in his opinion excluded this option (and which was later famously criticised by Bell, 1987, Chap. 2); (c) on his wish to uphold a one-to-one correspondence between mental states and physical states of the observer (see the discussion in Section 4.3 below).

13. The collapse consists in multiplying the wave function ψ(r) by a Gaussian of fixed width a, call it aₓ(r), with a probability distribution for the centre x of the Gaussian given by ∫|aₓ(r)ψ(r)|² dr. In other words, if we denote by Aₓ the operator corresponding to multiplication by the Gaussian aₓ(r), the state |ψ> goes over to one of the continuously many possible states (1/√<ψ|Aₓ*Aₓ|ψ>)Aₓ|ψ>, with probability density <ψ|Aₓ*Aₓ|ψ>. In technical language, this is a measurement associated with a POVM (positive operator valued measure). In the original model, a = 10⁻⁵ cm, and collapse occurs with probability 1/τ per second, with τ = 10¹⁶ s.
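For concreteness, here is a minimal numerical sketch of a single such collapse (a ‘hit’) on a discretised one-dimensional wave function. The grid, units, example state and variable names are all illustrative choices made here, not part of the GRW prescription; only the recipe itself — multiply by a Gaussian aₓ(r), with x sampled from ∫|aₓ(r)ψ(r)|² dr, then renormalise — follows the note above.

```python
import numpy as np

# Illustrative units: the collapse width is set to 1 (GRW's value is a = 10^-5 cm).
a = 1.0
r = np.linspace(-20.0, 20.0, 4001)
dr = r[1] - r[0]

# Example wave function: a superposition of two well-separated wave packets.
psi = np.exp(-(r - 8.0)**2 / 2) + np.exp(-(r + 8.0)**2 / 2)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dr)

def a_x(x, r, a):
    """Gaussian multiplier a_x(r), normalised so that the x-probabilities integrate to 1."""
    return (np.pi * a**2) ** (-0.25) * np.exp(-(r - x)**2 / (2 * a**2))

# Probability density for the centre of the hit: p(x) = integral |a_x(r) psi(r)|^2 dr.
p = np.array([np.sum(np.abs(a_x(x, r, a) * psi)**2) * dr for x in r])

# Sample a centre x and apply the collapse: psi -> a_x psi / ||a_x psi||.
rng = np.random.default_rng(seed=1)
weights = p * dr
weights = weights / weights.sum()   # remove small discretisation error
x_hit = rng.choice(r, p=weights)
psi_new = a_x(x_hit, r, a) * psi
psi_new = psi_new / np.sqrt(np.sum(np.abs(psi_new)**2) * dr)

# The hit centre lands near +8 or -8 (one of the packets), almost never in between.
print(f"hit centred at x = {x_hit:.2f}")
```

Running this repeatedly with different seeds gives hits near one packet or the other with roughly equal frequency, reproducing the Born-rule weights of the two packets.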

14. This modification was introduced because the original model would have had (unobserved) consequences for the predicted lifetime of the proton (Pearle and Squires, 1994) due to the production of energy associated with the collapse. In fact this modification is made within the related theory of continuous spontaneous localisation (CSL), a formalism that uses stochastic differential equations (Pearle, 1989, also sketched in the entry on collapse theories), but we shall oversimplify in this respect.

15. For N particles, in the case in which the frequencies are independent of mass, it is easy to contrive examples in which the theory gives results very different from decoherence. A state of a macroscopic pointer localised in a region A superposed with a state of the pointer localised in B will almost instantaneously trigger a collapse onto one of the localised states, which is analogous to what we expect from decoherence. However, a (contrived) state of the pointer in which all its protons are localised in region A and all its neutrons in region B, superposed with a state in which the protons are in B and the neutrons are in A, would also trigger a collapse, but onto one of these very non-classical states. In the ‘mass density’ version of the theory this difference disappears.
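The ‘almost instantaneously’ in both cases rests on simple arithmetic (a rough sketch, using the per-particle hitting rate from note 13 and a generic particle number of order 10²³ for a macroscopic pointer, a figure assumed here for illustration only):

\[
\text{total hitting rate} \;\approx\; \frac{N}{\tau} \;\approx\; 10^{23} \times 10^{-16}\,\mathrm{s}^{-1} \;=\; 10^{7}\,\mathrm{s}^{-1} ,
\]

so the first hit occurs after about 10⁻⁷ s, and a single hit on any one particle suffices to localise the whole entangled superposition.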

16. My thanks to Bill Unruh for raising this issue.

17. By the same token one can dismiss proposed variants of de Broglie-Bohm theory that are not based on the position representation, e.g. Epstein's (1953) momentum-based theory, which would utterly fail to exhibit the correct ‘collapse’ behaviour and classical limit, precisely because decoherence interactions are clearly not momentum-based! Depending on which version of the pilot-wave programme one adopts (positions, fermion number, fields), one will have to carry through the discussion in the most appropriate decoherence models. Saunders (1999) has expressed some doubts as to whether the configuration variables adopted by Valentini for field theory are appropriate to explain measurement results. Whether or not this specific criticism is correct, a pilot-wave theory using the ‘wrong’ representation will exhibit the wrong collapse behaviour (if any!). Decoherence considerations should thus be important in choosing the configuration space on which to base the theory.

18. I would reinterpret Appleby's assumption as one about the effective wavefunction of the heat bath with which the system interacts, not as an arbitrary choice one can make. That is, I suggest that one should try to justify it from a pilot-wave treatment of the bath.

19. Some authors might even deny that there is a single wave function of the universe, and maintain instead that the universe regularly splits into several universes, each described by a separate wave function. This reading of Everett, often associated with DeWitt (1971), would combine the disadvantages of a von Neumann-like collapse with an extravagant metaphysics.

20. This of course fits well into recent approaches to understanding quantum mechanics from an informational perspective, which is arguably independent of the Everett interpretation. See the special issue of Studies in History and Philosophy of Modern Physics in which Wallace (2003b) was published (referenced in the Other Internet Resources). For a related derivation of the quantum probabilities that does not depend on the Everett interpretation, see Saunders (2004).

21. Such a solution to the preferred basis problem appears to be only partial in the sense that there are many inequivalent ways of selecting sets of decoherent histories. See Dowker and Kent (1995) for details.

22. Notice that in Albert and Loewer's (1988) many-minds interpretation, the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own. This is postulated in order to define a stochastic dynamics for the minds and not have to introduce a novel concept of probability. Even in this case, however, decoherence is of crucial importance, since the dynamical evolution of the minds will have a physical correlate only if the corresponding physical components are decohered. (My thanks to Martin Jones for discussion of this point.)

23. For a state-of-the-art model, see Halliwell and Thorwart (2002). An analogy from standard quantum mechanics may be helpful here. Take a harmonic oscillator in equilibrium with its environment. An equilibrium state is by definition a stationary state under the dynamics, i.e. it is itself time-independent. However, one can decompose the equilibrium state of the oscillator as a mixture of localised components each carrying out one of the oscillator's possible classical motions (time-dependent!). Such a decomposition can be found e.g. in Donald (1998, Section 2).
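For concreteness, here is a standard textbook instance of such a decomposition (not necessarily the one given by Donald): the thermal equilibrium state of the oscillator, with mean excitation number n̄, can be written as a mixture of coherent states via the Glauber–Sudarshan P representation,

\[
\rho_{\mathrm{th}} \;=\; \int \frac{d^{2}\alpha}{\pi \bar{n}}\; e^{-|\alpha|^{2}/\bar{n}}\; |\alpha\rangle\langle\alpha| ,
\qquad
e^{-i\omega t\, a^{\dagger} a}\, |\alpha\rangle \;=\; |\alpha\, e^{-i\omega t}\rangle .
\]

Each coherent state |α⟩ is a localised wave packet following the classical phase-space motion α ↦ αe^{−iωt}, while the mixture as a whole is stationary, since the weight depends only on |α|, which is conserved along each trajectory.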