Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Finitism in Geometry

First published Wed Apr 3, 2002; substantive revision Wed Mar 17, 2010

In our representations of the world, especially in physics, (mathematical) infinities play a crucial role. The continuum of the real numbers, ℜ, as a representation of time or of one-dimensional space is surely the best known example and, by extension, the n-fold cartesian product, ℜⁿ, for n-dimensional space. However, these same infinities also cause problems. One just has to think about Zeno's paradoxes or the present-day continuation of that discussion, namely the discussion about supertasks, to see the difficulties (see the entry on supertasks in this encyclopedia for a full treatment). Hence, it is a very tempting idea to investigate whether it is possible to eliminate these infinities and still be able to do physics. The first step towards an answer to that question is to examine whether or not a discrete geometry is possible that can approximate classical continuous geometry as closely as possible. For, if such is the case, the latter geometry can easily be replaced by a discrete version in any physical theory that makes use of this particular mathematical background.

Straightforward as the task may seem, there are however at least two ways in which the concept of approximation can be understood. Suppose that T is a physical theory that is based on classical geometry. Then an approximation to T can mean two different things:

  1. For all concepts in T, including the geometrical concepts, a discrete analog is proposed (if such a thing exists), or
  2. An underlying theory T′ is formulated using possibly different concepts in such a way that the classical concepts can be derived from T′.

In the sections that follow an overview will be presented of (some of) the various attempts that fall under (1) or (2). However, before embarking on this journey, several caveats have to be mentioned.


1. Some general considerations

The most important thing to take into account, given a particular proposal for a discrete geometry, is the scientific and/or philosophical background of the author(s) and, related to that, their intentions. Are they logicians, mathematicians, computer scientists, physicists or philosophers (to list the five most frequently occurring cases)? Do they want to solve a merely technical, a physical or a philosophical problem? Are they worried about foundational aspects, or is the object of their research to further develop existing theories? It is worthwhile to go into some more detail for each of the five types of author(s) mentioned to illustrate these questions.

1.1 Logicians

Logicians are often interested in displaying the underlying logical structure of a theory, physical or mathematical, and in exploring whether or not there are alternatives, usually by changing the underlying logical principles. One could imagine a geometry based not on classical logic but, e.g., on intuitionistic logic, where principles such as the excluded middle, i.e., p or not-p, for any statement p, or double negation elimination, i.e., if not-not-p then p, no longer hold. Often the goal is to find a complete classification of all possibilities. This approach implies that the logician working on and developing discrete models does not necessarily believe that these models are correct or true in some sense. They merely help us to understand better what classical geometry is.

A perfect illustration of such an approach is the work on spatial logic; see Aiello et al. 2007 for an excellent overview. The authors compare their approach to the work done in temporal logic (see the entry on temporal logic in this encyclopedia). There are plenty of ways to model time: with a starting and/or final point, discrete or continuous, linear, cyclic or branching, … The logical task is to construct a language that allows one to “talk” about all these structures and to be able to differentiate between them. In temporal logic such a language uses the operators Fp (“It will be the case that p”) and Pp (“It has been the case that p”). One example: if time is linear in the future, then this property can be expressed as follows. Suppose that Fp and Fq are given; then only three things are possible: either F(p & q), i.e., p and q will be the case at the same moment, or F(p & Fq), i.e., p will happen first and then q, or F(Fp & q), i.e., the other way round. In one formula: (Fp & Fq) → (F(p & q) or F(p & Fq) or F(Fp & q)). Constructing such a language for geometry is what spatial logic wants to achieve; it is thus related to the proposals that we will discuss in section 3.
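The linearity formula can be verified mechanically on a small finite model. The following sketch (ours, not from the source) checks the formula on a six-moment linear order, for every possible valuation of p and q:

```python
from itertools import product

T = range(6)  # a small finite linear time order: 0 < 1 < ... < 5

def F(prop, t):
    # "it will be the case that prop": prop holds at some strictly later moment
    return any(prop(u) for u in T if u > t)

# Check (Fp & Fq) -> (F(p & q) or F(p & Fq) or F(Fp & q)) at every moment,
# for every valuation of p and q over the six moments.
ok = True
for pv in product([False, True], repeat=len(T)):
    for qv in product([False, True], repeat=len(T)):
        p = lambda t: pv[t]
        q = lambda t: qv[t]
        for t in T:
            if F(p, t) and F(q, t):
                ok = ok and (F(lambda u: p(u) and q(u), t)
                             or F(lambda u: p(u) and F(q, u), t)
                             or F(lambda u: F(p, u) and q(u), t))
print(ok)  # True: the axiom holds on a linear order
```

On a branching order the same check would fail, which is precisely why the formula can serve to single out linear time.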

1.2 Mathematicians

A mathematician might be looking at or studying a discrete or finite counterpart of an existing theory just to see, e.g., what theorems remain provable in both cases. This in itself is interesting from the perspective of so-called reverse mathematics. The core question there is to find out what is minimally required to prove certain theorems. See, e.g., Simpson 2005. Proofs that also hold in a discrete geometry are thus independent of any assumption about discreteness or continuity. One might however go deeper into the foundations of mathematics and study finite geometries from a foundational perspective. One such approach is strict finitism (although sometimes the terms ultra-finitism or ultra-intuitionism are used as well), which is not meant as a subtheory of other foundational theories but as an alternative in its own right. It shares with the many forms of constructivism the fundamental view that mathematical objects and concepts have to be accessible to the mathematician in terms of constructions that can be executed or performed. The various forms are distinguished from one another by how the concepts of ‘execution’ or ‘performance’ are to be understood. Most constructivists allow for the potentially infinite, i.e., if a procedure or algorithm will (provably) terminate at some moment in the future, then the outcome is accepted as constructible. See Bridges & Richman 1997 for an overview and the corresponding entry in this encyclopedia. Strict finitism goes one step further and argues that an indefinite outcome is not to be accepted as an outcome, since, as all computational resources are finite, it could very well be that these resources are used up before the outcome has been reached.
The qualification ‘strict’ serves to distinguish the position from Hilbert's finitism, which, roughly speaking, can be seen as a form of finitism on the meta-level (e.g., although mathematical theories can talk about infinite structures, the proofs in such theories must still have a finite length). As might be expected, strict finitism is not a popular view in the philosophy of mathematics. Nevertheless a number of proposals have been put forward. A history and an account of the then-current (though now somewhat dated) state of affairs can be found in Welti 1987. In section 2 more will be said about such proposals.

1.3 Computer scientists

In the computer sciences the theories and proposals that have been put forward are of a quite different nature than the logical and mathematical ones, although they do inspire one another. The problem one faces here is precisely to set up a translation from a classical, continuous geometrical model to a model whose domain (usually) consists of the finite set of pixels or cells that make up the (computer) screen. The obvious drawback (from the perspective of this entry) is that nearly all these models assume the classical (infinite) model in the background and, hence, do not have a proper foundation of their own—a situation quite analogous to numerical analysis, which relies on classical analysis for proving the correctness of its procedures. Most attention is paid to the problem of proving correspondences between the original and the discrete model, to make sure that the image obtained is, in certain respects, faithful to the original. A simple mathematical example concerns the number of holes in a surface in 3-dimensional Euclidean space. One wants to be sure that every hole that shows up in the digital picture does indeed correspond to a hole in the original mathematical object. See Borwein & Devlin 2009 for more examples.

Note also that these theories should not be confused with computer programs that have the ability to reason about geometrical objects. This is part of the research area of automated reasoning—see Chou et al. 1994 for a nice introduction—and its basic objects are proofs, not necessarily the mathematical objects the proofs are about.

1.4 Physicists

As is commonly known, one of the hot topics in physics is the search for the unification of quantum (field) theory and general relativity theory. If successful, the famous “Theory of Everything” would result. As is equally well known, the hardest problem to solve is how to deal with space-time. Quantum (field) theory requires space and time as a background, whereas in general relativity the structure of space-time is largely determined by the masses and the energy present. One way out—and most of section 3 deals with such examples—is to find a “deeper” structure that underlies both theories and that, in a sense, generates space and time from more fundamental concepts. Clearly, if such a theory were to be found, it would not merely produce “just” a model, but would actually be considered a genuine representation of reality. Most of these models, speculative as they may be at the present moment, turn out to be discrete, and hence these proposals, in contradistinction to, e.g., those of the logicians, do claim to be a correct description.

From the historical perspective, it must be added that off and on some physicists have tried to find out what discrete counterparts of existing classical physical theories could look like. Usually the philosophical underpinnings of such attempts tend to be rather idiosyncratic. In section 2 one such example will be presented. Typically these attempts did not create a major stir and quickly disappeared into the background, but they nevertheless contain some interesting and relevant ideas.

1.5 Philosophers

In a rather straightforward sense, all of the above involves philosophers as well. Discussions about logical systems, about foundational mathematical theories, about Zeno's paradoxes, about supertasks, about what a model and a representation are, …, are typically topics that belong to the domain of the philosophers. In addition, philosophers bring in arguments from other philosophical and/or scientific domains. Suppose that there are excellent arguments from an epistemological or ontological perspective for considering the world to be discrete; then these arguments can support the search for such a discrete worldview, including the elaboration of a discrete geometry. Even if, from the mathematical point of view, the theory looks rather clumsy or difficult to work with, the philosophical considerations can nevertheless justify it. Without such supporting arguments, one's position would be much weaker. Finally, philosophers also pay attention to the historical side of the matter. It is rather striking, though this will not be presented here, to see how many proposals have been put forward in the course of our history to demonstrate that space, time and man should be considered finite and/or discrete. See, e.g., Sorabji 1983 and Moore 1993 for excellent historical overviews and White 1992 for twentieth-century developments.

As said, these five groups are the most important ones; the list is not meant to be exhaustive, nor are the groups mutually exclusive. This short overview is only meant to indicate the different intentions, motivations, purposes and methodologies of the parties involved.

2. Discrete geometries as direct analogs

The first question to settle is what the classical theory will be. As most of the work that has been done is limited to the plane, this presentation will also be restricted to that particular case (in most proposals the extension to higher-dimensional geometries is considered to be completely straightforward). But that is not sufficient, for there are different routes to follow in presenting plane geometry. One possibility is to take any axiomatisation of the (Euclidean) plane—say, Hilbert's formulation of 1899 in his Grundlagen der Geometrie—and show what changes are required to have (a) finite models of the axiomatic theory, and (b) finite models that approximate the classical (infinite, Euclidean) models as closely as possible. One of the very first attempts dates back to the late 1940s and early 1950s and will therefore be presented here as an exemplar (in the sense that it has both all the positive qualities required and the oddities that seem to go together with such attempts). More specifically, it concerns the work of Paul Kustaanheimo, in partial collaboration with G. Järnefelt, in the period between 1949 and 1957. Next, a recent proposal by Patrick Suppes, along totally different lines, will be discussed, as well as a somewhat older proposal by Ludwik Silberstein, in which the geometry is directly embedded in a physical theory, special relativity theory to be precise. The concluding section of this part deals with some specific problems and tentative solutions.

2.1 A standard axiomatisation for Euclidean plane geometry

What does a Hilbert-type axiomatisation look like? The first thing one has to do is to fix a (formal) language. Usually one chooses first-order predicate logic with identity, i.e., a language containing names for variables (and, possibly, for constants), names for functions (if necessary), names for predicates including the identity predicate, logical connectives and quantifiers, and a set of grammatical rules to form sentences. The restriction to first-order logic means that only variables can be quantified over. Without going into details, it should be remarked that a more expressive language can be chosen, e.g., one in which quantification over predicates is allowed as well.

Once a language has been chosen, the next problem is to determine the primitive terms of the language. For plane Euclidean geometry, these are points and lines, although sometimes lines are defined as particular sets of points. Next the basic predicates have to be selected. A number of different axiomatisations exist at present. The most frequently used predicates are: the incidence relation (“a point a lies on a line A”), the betweenness relation (“point a lies between points b and c”), the equidistance relation (“the distance from point a to b is the same as the distance from point c to d”), and the congruence relation (“a part of a line, determined by two points a and b, is congruent to a part of a line, determined by two points c and d”). Note that it is not necessary that all of them occur in an axiomatisation. As an example, if lines are not introduced as primitive terms, then usually there is no incidence relation.

The next step is the introduction of a set of axioms to determine certain properties of the above mentioned relations. As an example, if the axiomatisation uses the incidence relation, then the typical axioms for that relation are: (i) through any two distinct points there is exactly one line, (ii) every line is incident with at least two points, and (iii) there exist three points that are not incident with one and the same line.

Finally, one looks for an interpretation or a model of the axiomatisation. This means that we search for a meaning of the primitive terms, such as points and lines, of the functions (if any) and of the predicates in such a way that the axioms become true statements relative to the interpretation. Although we often have a particular interpretation in mind when we develop an axiomatisation, it does not exclude the possibility of the existence of rather unexpected models. In a sense finitist models rely on this very possibility as the next paragraph shows.

2.2 Natural geometry

Paul Kustaanheimo was a member of a group of mathematicians based in Helsinki who were all interested in some form of finite geometry. The most prominent members were G. Järnefelt, P. Kustaanheimo, and R. Lehti. The origin of their inspiration is to be found in the work of J.T. Hjelmslev, who developed so-called “natural” geometry (“Die natürliche Geometrie”, see his 1923), sometimes also referred to as “physical” geometry. There is a curious connection with Suppes' approach, to be discussed later, in the sense that geometry is primarily seen as (almost) an experimental science, i.e., the geometer deals with rulers and compasses, creates flat surfaces to measure, and so on. Of course, since we humans can only manipulate finite objects in finite ways, a discrete geometry must result.

Kustaanheimo's proposal—we reproduce here in rough outline the excellent presentation of his proposal in Welti 1987, pp. 487–521, which is far more accessible than the original work—is based on the following line of reasoning. A standard model for the classical axiomatic theory of Euclidean geometry consists of the cartesian product of the real numbers with itself. Or, as it is usually formulated, a point in the plane is mapped onto a couple of real numbers, its coordinates. The real numbers have the mathematical structure of an infinite field. But finite fields exist as well. So why not replace the infinite real number field with a finite field, a so-called Galois field?
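The basic idea is easy to make concrete. The following sketch (ours, not from Kustaanheimo's work) builds the affine plane over the Galois field GF(p) for a small prime p: points are coordinate pairs modulo p, lines are the solution sets of linear equations modulo p, and the incidence axiom that two distinct points determine exactly one line can be checked exhaustively:

```python
from itertools import combinations

# The affine plane AG(2, p) over the Galois field GF(p), p prime.
p = 7
points = [(x, y) for x in range(p) for y in range(p)]

# Lines: y = m*x + b (mod p) for each slope m and intercept b,
# plus the p vertical lines x = c.
lines = [frozenset((x, (m * x + b) % p) for x in range(p))
         for m in range(p) for b in range(p)]
lines += [frozenset((c, y) for y in range(p)) for c in range(p)]

# Incidence axiom: any two distinct points lie on exactly one common line.
assert all(sum(1 for L in lines if a in L and b in L) == 1
           for a, b in combinations(points, 2))
print(len(points), len(lines))  # 49 56: p*p points, p*p + p lines
```

This only shows that the incidence structure survives the move to a finite field; the metric notions, as discussed below, are where the complications arise.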

The best result one could obtain would be that every finite Galois field satisfies most of the axioms of Euclidean geometry. That, however, is not the case. The outcome of Kustaanheimo's research is slightly more complicated.

In short, one cannot claim that any finite field will do, but only some, and, for that matter, only part of it.

This approach raises some important philosophical questions.

In defense of Kustaanheimo's approach it must be said that the connections between infinite and finite models are usually far more complex than one expects. A finite model is not merely a scaled-down version of an infinite model. Very often a different structure appears. As an analogy take the (infinite set of) natural numbers. Take a finite part, say the numbers 1 up to L. In the finite case it makes sense to talk about small and large numbers compared to L. This is classically not possible. So one finds additional structure. Metaphorically speaking, by making things finite, a more detailed or “fine-grained” structure appears, that is wiped out in the presence of infinities. Perhaps the distinction between “good” and “bad” geometrical objects is such an additional feature that disappears in the classical Euclidean model. Thus perhaps the prime numbers do have a significance. But still the question remains: is this a new sort of Pythagoreanism? More details about Kustaanheimo's approach are to be found in the following supplementary document.

Finite Fields as Models for Euclidean Plane Geometry

2.3 A constructive approach

The originality of Suppes' approach resides in the fact that he proposes to formulate geometry as a practice of constructions, comparable to Hjelmslev's work yet quite distinct. A construction is here to be understood in the elementary sense of producing drawings or diagrams using certain instruments, such as a ruler and/or a compass, and not in the modern foundational sense of a constructive, axiomatic foundation for geometry.

Two elements are important from the (strict) finitist perspective. Firstly, constructions can be formulated in a quantifier-free way; the expression “draw a line” does not imply that we need to speak about the complete set of lines in a plane. “Draw a line” will result in a specific finite object, namely a line fragment on, e.g., a piece of paper. Secondly, all models considered will be finite, because no matter what constructions are performed, the starting point will always be a finite set of points.

Suppes considers two basic operations: the operation B, that corresponds with bisecting a line ab and the operation D, that corresponds with doubling a line ab. A step Ci in a construction consists of three elements: the first element is the (new) point to be constructed, the second element is a pair of points that are already present, and the third element is either B or D, according to what operation is selected. The starting position consists of three given points, a, b, and c.

Example: consider the construction ((d, ac, B), (e, bd, D)) consisting of two steps. The first step says to start with ac and construct the mid-point d, and in the second step we take the segment bd and double it. A diagrammatic representation makes clear what is happening:

Figure 1

Starting from the triple a, b, and c, we have constructed the parallelogram abce.
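The two operations are easy to simulate. The sketch below (ours; the rational starting coordinates are hypothetical, not from the source) replays the construction above and checks that the result is a parallelogram:

```python
from fractions import Fraction as Fr

# Hypothetical rational coordinates for the three starting points.
pts = {'a': (Fr(0), Fr(0)), 'b': (Fr(2), Fr(0)), 'c': (Fr(0), Fr(2))}

def B(p, q):
    # bisect: construct the mid-point of segment pq
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def D(p, q):
    # double: extend segment pq beyond q to twice its length
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

# The construction ((d, ac, B), (e, bd, D)) from the example:
for new, (p, q), op in [('d', ('a', 'c'), B), ('e', ('b', 'd'), D)]:
    pts[new] = op(pts[p], pts[q])

# abce is a parallelogram: the side vectors ab and ec are equal.
a, b, c, e = pts['a'], pts['b'], pts['c'], pts['e']
assert (b[0] - a[0], b[1] - a[1]) == (c[0] - e[0], c[1] - e[1])
```

Note that every constructed point has rational coordinates, in line with the representation theorem mentioned below.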

Of course, merely listing a set of constructions is not sufficient to talk about a geometrical theory, so it has to be shown, as Suppes indeed does, that a formal-axiomatic treatment is possible. It suffices to list a set of necessary axioms about the B and D operations, so that one can prove that the figure drawn in the example above is indeed a parallelogram. In addition, a representation theorem is proven such that points are attributed rational coordinates.

Two important comments must be made. First, it remains to be shown that this elementary geometrical theory can be expanded all the way into a full-fledged geometrical theory that can be considered a plausible alternative to classical geometry. Suppes himself seems quite confident, as he writes: “my own conviction is that one can go the entire distance, or certainly almost the entire distance, in a purely finitistic way, …” (p. 136). Second, the focus on constructions opens up a novel way to deal with the problem of the distance function. We do not need a general distance function; rather, for each separate case, we have to be able to attribute coordinates to the points present in the diagram and nothing more. It remains to be seen, however, whether the basic operations B and D can be extended without losing this important property.

In section 2.5 we return to the distance problem and present some other solutions that have been put forward. First, however, we turn to a quite different approach from the physical side.

2.4 A direct physical example

In 1936 Silberstein proposed a fairly straightforward discrete theory. The only things we use in physics are labels, x, y, z, t, and, when discrete, these can always be labelled with integers. In the short booklet that brings together the five lectures on this theme, Silberstein restricts himself to one spatial and one time parameter. Although he acknowledges the problem (p. 15) of higher dimensions, he does not deal with it. So the distance problem becomes rather trivial, since on a line the discrete distance function and the Euclidean distance function coincide. His proposal is elementary in the sense that the smallest distance, viz. the distance between two adjacent points x_i and x_{i+1}, is equal to 1, and likewise for the time coordinate, so that 1 becomes the maximum velocity, put equal to c, so c = 1. Analogs for derivatives are defined, differential equations are replaced by difference equations, an analog of the Taylor series is derived in terms of finite differences, and most of classical physics can be imitated. It is worth mentioning that the lectures include a rough calculation of the size of the chronons, i.e., the smallest units of time, and hodons, i.e., the smallest units of (one-dimensional) space. Suppose that a is the number of hodons in one centimetre and b the number of chronons in one second; then (1/a)/(1/b) = b/a = c = 3·10¹⁰ cm/s, or b = 3·10¹⁰·a. If we fix an upper limit for the hodon size, say 10⁻⁸ cm (this is actually Silberstein's suggestion!), then a ≥ 10⁸ and hence b = 3·10¹⁰·a ≥ 3·10¹⁸, the number of chronons in one second. He further applies the discrete spacetime framework to special relativity and here too an analog is found. Quite interesting in this approach is the fact that additional conditions appear that are not needed in the classical case. Here is one illustration.
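Silberstein's estimate is a one-line computation; the sketch below merely replays the arithmetic (the variable names are ours):

```python
c = 3 * 10**10   # speed of light in cm/s
a = 10**8        # hodons per centimetre, from a hodon size of 10^-8 cm
b = c * a        # chronons per second, since b/a = c
print(b)         # 3000000000000000000, i.e., 3 * 10**18
```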

Special relativity theory relies on the expression, here restricted to one spatial dimension, x² − c²t². Thus any change to new coordinates x′, t′ has to satisfy x² − c²t² = x′² − c²t′². Suppose that we write x = ax′ + bt′ and t = cx′ + dt′; then the inverse relations will be x′ = (dx − bt)/(ad − bc) and t′ = (at − cx)/(ad − bc). If, however, x, x′, t and t′ all have to be integers, then necessarily ad − bc = ±1. This last condition is a pure consequence of the fact that we are thinking in a discrete way, using integers.
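The integrality argument can be checked with a small sketch (ours; the coefficient values are hypothetical, chosen only to satisfy the determinant condition, and the matrix below is not claimed to preserve x² − c²t²):

```python
# Coordinate change x = a*x' + b*t', t = c*x' + d*t' with integer
# coefficients and determinant ad - bc = 1 (hypothetical values).
a, b, c, d = 2, 1, 1, 1
det = a * d - b * c   # = 1

def inverse(x, t):
    # x' = (d*x - b*t)/det,  t' = (a*t - c*x)/det
    return ((d * x - b * t) // det, (a * t - c * x) // det)

# Because det = +/-1, integer (x, t) always yields integer (x', t'),
# and the forward transform recovers the original pair exactly.
for x in range(-3, 4):
    for t in range(-3, 4):
        xp, tp = inverse(x, t)
        assert (a * xp + b * tp, c * xp + d * tp) == (x, t)
```

With any determinant other than ±1 the inverse coordinates would in general fail to be integers, which is the point of the condition in the text.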

2.5 Some partial solutions and problems to deal with

In this section four specific problems will be discussed that need to be solved if any proposal for a discrete geometry is to be taken seriously: the distance function problem, the dimension problem, the isotropy problem, and the identification problem.

The distance function problem. There is a rather devastating argument that shows the impossibility of a genuine distance function for a discrete geometry. It dates back to 1949 and was first formulated by Hermann Weyl: “If a square is built up of miniature tiles, then there are as many tiles along the diagonal as there are along the sides; thus the diagonal should be equal in length to the side” (Weyl 1949, p. 43). At least two solutions to this problem have been formulated.

Van Bendegem 1987 argued that in a finite geometry it should be a basic fact that lines and points have extension. In particular, lines are supposed to have a constant width (independent of the orientation of the line) N_D. Thus N_D represents a large (finite) number, namely the number of hodons (squares) that make up the width. Given a line, the width is always measured perpendicular to that line. Now suppose that the line has an orientation corresponding to an angle α between the line and the x-axis. Then the width N_D of that line, when projected onto the x-axis, will be [N_D/sin α], where the expression [x] indicates the greatest integer less than or equal to x.

Figure 2

The distance d between two points p and q is then defined as the number of squares in the rectangle formed by the line from p to q and the width N_D, divided by N_D. The idea is that, although in a discrete geometry lines must necessarily have a width, this is not an essential feature, so it can be divided out. Hence:

d(p,q) = (N_L · [N_D/sin α]) div N_D.

N_L here corresponds to the number of layers parallel to the x-axis between p and q, and n div m is the quotient of the division of n by m.

As an illustration, consider the Weyl problem.

Figure 3

We have a right-angled triangle pqr such that, for simplicity, the right sides pq and qr are equal to one another and aligned with the axes of the grid. Suppose that the number of squares in the right sides is N_L. Then

d(p,q) = d(q,r)
= (N_L · [N_D]) div N_D
= N_L,

since, of course, [N_D] = N_D. However, the hypotenuse has an angle α such that sin α = √2/2. Thus,

d(p,r) = (N_L · [N_D/sin α]) div N_D
  = (N_L · [√2 · N_D]) div N_D
  = N_L · [√2]_n,

where [r]_n means the number r up to n decimals (taking N_D = 10ⁿ). No calculations are needed to show that (a close approximation of) the Pythagorean theorem holds, i.e., d²(p,q) + d²(q,r) = d²(p,r). Finally, there is an easy explanation why the Weyl problem occurs: it corresponds to the limiting case N_D = 1. When N_D = 1, then [√2·N_D] = [√2] = 1, hence d(p,r) = N_L·1 = N_L and Pythagoras' theorem fails.
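A quick numerical check of this example is possible (our sketch; the values of N_D and N_L are illustrative, not taken from the source):

```python
import math

ND = 10**6   # line width, counted in hodons
NL = 1000    # number of layers between the two endpoints

def d(NL, ND, alpha):
    # d(p,q) = (N_L * [N_D / sin(alpha)]) div N_D
    return (NL * math.floor(ND / math.sin(alpha))) // ND

right_side = d(NL, ND, math.pi / 2)   # alpha = 90 degrees, sin = 1
hypotenuse = d(NL, ND, math.pi / 4)   # alpha = 45 degrees, sin = sqrt(2)/2

print(right_side, hypotenuse)          # 1000 1414
# Pythagoras holds up to the discretisation error:
print(right_side**2 + right_side**2)   # 2000000
print(hypotenuse**2)                   # 1999396
```

Setting ND = 1 in the same function reproduces Weyl's situation: the hypotenuse then gets the same length as the sides.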

Although the introduction of a width N_D apparently solves the problem, it is equally clear what the drawbacks are. Without classical Euclidean geometry in the background, there is really no way to get the construction going. There is no definition of a line in terms of the discrete geometry itself, and, above all, the projected width on the x-axis of a line L is calculated according to a Euclidean distance function that is not explicitly mentioned. In short, there is a mixture of two distance functions.

Forrest (1995) presents another solution. He starts by introducing a family of discrete spaces E_{n,m}, where n corresponds to the “classical” dimension of the space and m is a scale factor, to be understood as follows: m is the parameter that decides when two points are or are not adjacent, adjacency being the basic (and sole) concept of his geometry. Thus in the case n = 2, points are labeled by couples of integers (i, j), and two points (i, j) and (i′, j′) are adjacent if they are distinct and (i − i′)² + (j − j′)² ≤ m².

Once adjacency has been stipulated, a distance function is easily derived: the distance between p and q, d(p,q), is the smallest number of “links” in a chain of points connecting p and q such that each one is adjacent to the previous one. It is then straightforward to show that a straight line passing through two points is the chain of points that realises the shortest distance. If the parameter m has a small value, then the resulting distance function is not Euclidean. More specifically, if m = 1, then we have, once again, the situation presented by Weyl. But if, say, m = 10³⁰ (the figure proposed by Forrest himself), then the situation changes. It is then possible to show that the distance function on the discrete space approximates the Euclidean distance function as closely as one wants. Without presenting all the details, one can show that a Euclidean distance function dE and the discrete distance function d are related by a scale factor, i.e., dE(p,q)/d(p,q) = constant(m), where the constant is determined by the value of m. Once again no calculations are needed to show that the original distance function d satisfies (a close approximation of) the Pythagorean theorem.
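The chain distance can be computed by breadth-first search on the adjacency graph. The sketch below is ours, with a small illustrative m (nothing like Forrest's 10³⁰):

```python
from collections import deque

def chain_distance(p, q, m):
    # least number of links in a chain of successively adjacent points,
    # where (i, j) and (i', j') are adjacent iff they are distinct and
    # (i - i')**2 + (j - j')**2 <= m**2
    seen = {p}
    frontier = deque([(p, 0)])
    while frontier:
        (i, j), steps = frontier.popleft()
        if (i, j) == q:
            return steps
        for di in range(-m, m + 1):
            for dj in range(-m, m + 1):
                if 0 < di * di + dj * dj <= m * m:
                    n = (i + di, j + dj)
                    if n not in seen:
                        seen.add(n)
                        frontier.append((n, steps + 1))

# m = 1 recovers Weyl's situation: only axis steps, i.e., taxicab distance.
print(chain_distance((0, 0), (10, 10), 1))   # 20
# A larger m approximates Euclidean behaviour, up to the scale factor m.
print(chain_distance((0, 0), (20, 0), 5))    # 4, i.e., 20/5
print(chain_distance((0, 0), (20, 20), 5))   # 6, close to 20*sqrt(2)/5
```

With m = 1 the diagonal of a square costs twice its side; with larger m the ratio tends to √2, which is the scale-factor claim in the text.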

If one is looking for a weak point in this approach, then inevitably one must end up with the basic notion of adjacency. What is the reason for defining adjacency in Euclidean terms? For, after all, a condition such as (i − i′)² + (j − j′)² ≤ m² looks as Euclidean as can be. A possible way out is suggested in Van Bendegem 1995. One of the advantages of a discrete approach—and, as a matter of fact, this seems to hold in general for strict finitist proposals—is that definitions that are classically equivalent turn out to be distinct in a strict finitist framework. Thus, more specifically, a circle can be defined in (at least) two ways: (a) as the locus of all points lying at a fixed distance from a fixed centre, or (b) by a second, classically equivalent characterisation.

Classically speaking, these two definitions are equivalent. However, they are not in a discrete geometry. If, e.g., the distance function is defined as the lowest number of hodons that connect two given points, then the two definitions come apart. Using definition (a), the circle will have the shape of a square (a well-known fact in so-called taxicab geometry) and is thus useless for defining adjacency as done above. Definition (b), on the other hand, produces a figure that can approximate a Euclidean circle as closely as one likes. In that way Forrest's definition of adjacency is acceptable within a discrete framework, as no reference is made to a Euclidean distance function.
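The behaviour of definition (a) is easy to visualise. The sketch below (our own illustration) lists the points at a fixed taxicab distance from the origin and confirms that they form the four sides of a tilted square rather than anything circle-like:

```python
r = 5
# Definition (a): all points whose least number of axis steps from the
# origin (the taxicab distance) equals r.
circle = [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)
          if abs(i) + abs(j) == r]
# The locus is the tilted square |i| + |j| = r, carrying 4r points.
print(len(circle))   # 20
```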

The dimension problem. Not much attention has been paid to this problem although it is fundamental. If the plane consists of a discrete set of elements, hodons or atoms, then this set must have dimension zero. For, in order to determine the dimension, this set must be equipped with a topology and the only possible candidate is the discrete topology. This entails that the dimension is zero. Either one takes the option to simply drop the notion of dimension on the basis of the argument that the concept of dimension presupposes a concept of continuity and topology and hence has no finitist meaning. Or one searches for an analog, but it is not clear at all what that could be. Something one should not try to do is to derive a notion of dimension from an ordering relation. Suppose that the hodons are labeled by integers (i, j) in some appropriate coordinate system, such that −L ≤ i, j ≤ L, where L is some upper bound. Then quite different ordering relations are possible. One possibility is to define (i, j) < (k, l) if and only if i + j < k + l. But another possibility is to define (i, j) < (k, l) if and only if either i < k or, if i = k, then j < l. It therefore requires additional arguments to claim that among all possible order relations on a given set, one and only one has a special status. However in section 3 we will see that, using the tools of graph theory, a definition of dimension can indeed be given.
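The two candidate orderings mentioned above behave quite differently, as a short sketch (ours, not from the source) makes plain:

```python
def sum_lt(p, q):
    # (i, j) < (k, l) iff i + j < k + l
    return p[0] + p[1] < q[0] + q[1]

def lex_lt(p, q):
    # lexicographic order: i < k, or i = k and j < l
    # (Python compares tuples lexicographically)
    return p < q

p, q = (1, 2), (2, 1)
# Under the sum order the two hodons are incomparable distinct points ...
print(sum_lt(p, q), sum_lt(q, p))   # False False
# ... while the lexicographic order decides between them.
print(lex_lt(p, q), lex_lt(q, p))   # True False
```

Neither order is privileged by the set of hodons itself, which is the point made in the text.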

The isotropy problem. If the plane is built up from square hodons arranged so that every hodon touches four others, i.e., if the plane is modeled as a square grid, then there are obviously two preferred directions. If hexagons are taken as hodons instead, there are three preferred directions. No matter what the shape of the hodon, there will be preferred directions, which implies that the space is anisotropic. For physical applications, however, one would like to have isotropy (or at least come as close to it as possible).

Two approaches are possible. Either the hodons have a definite shape or they do not. In the first case it has been suggested that instead of a regular periodic tiling of the plane, one should look for an irregular aperiodic tiling, such as the Penrose tiling.

Figure 4

Although no worked-out examples are available, this seems a promising line of attack. It is interesting to see that in the Penrose tiling there are no classically straight lines anymore, precisely because of the aperiodicity. In the second case, vagueness is a possible way out. As Forrest defends in his 1995, the whole idea of a specific representation of a discrete space, e.g., as built up from tiny squares, is fundamentally mistaken. If a hodon has a specific form, then one cannot avoid asking questions about parts of a hodon, such as its border, but such questions make no sense if hodons are the smallest spatial entities possible. An intermediate position, defended in Van Bendegem 1995, is to consider a series of discrete geometries Gi, each with a hodon of a particular size, hi, such that hi ≠ hj for i ≠ j and, in addition, there are M and N such that M < hi < N, for all i. One can then apply a supervaluation technique to the series. This means that a statement will be True (False) if it is true (false) in every geometry Gi. In all other cases it is Undecided, i.e., true in some geometries and false in others. Now if A is the statement ‘hodons have size α’ (where α is a specific number), this will be Undecided if α corresponds to at least one of the hi. Such an approach, however, introduces all the problems connected with vagueness into the discussion, which is not necessarily an encouraging situation. For this problem too, an original answer can be given from within the framework of graph theory.

The identification problem. Suppose that we do have a full-fledged discrete geometry and that we replace the classical geometry of a physical theory with the discrete version. We will now be talking about hodons and chronons. The “natural” question that arises is: what is to be identified with what? Imagine that, in line with Silberstein, we are a bit naïve and tempted to identify the hodon with the Planck length, lp = 10⁻³⁵ m, and the chronon with the Planck time, tp = 10⁻⁴³ s. If one now accepts that the maximum speed is one hodon per chronon, then this identification yields precisely the maximum speed c = 3·10⁸ m/s. (Note: nothing amazing is happening here, since in classical physics lp is defined as √(ℏG/c³) and tp as √(ℏG/c⁵), so that it is immediately obvious that lp/tp = c.) Now ask the simple question: what is the next speed, just below c? The answer must be one hodon per two chronons, but that is a velocity of c/2. We seem to have missed the whole range between c/2 and c. There is a way out, but it supposes that ‘jerky’ motion is considered possible, an aesthetically quite ugly idea: an object moves two hodons in two chronons, waits for one chronon, and then repeats the same motion. The average velocity is then 2c/3. Another possible way out, briefly mentioned in section 3.2, is to introduce an element of randomness into the structure.
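The arithmetic behind the identification problem can be made concrete. A minimal sketch (my own, assuming as in the text a maximum speed of one hodon per chronon): uniform motion realises only the speeds c, c/2, c/3, …, so the range between c/2 and c is accessible only as an average over a jerky trajectory.

```python
# Speeds in a discrete spacetime, measured as fractions of c,
# assuming the maximum speed is one hodon per chronon.
from fractions import Fraction

def uniform_speeds(max_chronons):
    """Speeds of uniform motion: one hodon every q chronons,
    for q = 1 .. max_chronons."""
    return [Fraction(1, q) for q in range(1, max_chronons + 1)]

def average_speed(hodons_moved, chronons_elapsed):
    """Average speed of a possibly jerky trajectory."""
    return Fraction(hodons_moved, chronons_elapsed)

speeds = uniform_speeds(10)
# Uniform motion skips the whole range between c/2 and c ...
assert Fraction(2, 3) not in speeds
# ... but jerky motion (2 hodons in 2 chronons, then wait 1 chronon)
# covers 2 hodons in 3 chronons, i.e., an average speed of 2c/3:
assert average_speed(2, 3) == Fraction(2, 3)
```

Exact rational arithmetic (`Fraction`) is used deliberately: the point is that every achievable average speed is a ratio of hodons to chronons.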

3. Discrete geometries as generators of classical geometry

3.1 The general framework

As stated in section 1, here we will discuss proposals that search for a theory or model underlying a geometrical theory, such that the classical geometrical concepts can be derived from it. Obviously one needs to be extremely careful, as the “danger” is constantly present that infinities enter the picture somewhere unseen or unnoticed. Suppose, to give a simple example, that only a finite set of points is allowed, but also an operation that generates between any pair of points a third point different from all points present, and that there are no restrictions on the number of times the operation can be applied; then clearly we have an infinite number of points “in disguise”. To call such a model a discrete geometrical model seems quite inappropriate.

One also needs to be very careful about, e.g., claims that quantum mechanics deals with discrete values, usually in relation to Heisenberg's uncertainty principle, and that hence physics at the basic level is a discrete theory. That is extremely misleading. It is sufficient to consult any handbook on quantum mechanics to observe that the mathematics used requires the full use of infinities. Whether one uses Heisenberg's matrix approach, the Hilbert-space operator formalism, Schrödinger's wave equation or some other formalism, the mathematics involves integrals, derivatives, infinite (convergent) sums, spaces of infinite dimension, and so on (see the entry on quantum mechanics). Not much discreteness is to be found here. This implies that for quantum mechanics as well it is a genuine problem to find a discrete counterpart.

From the historical point of view, the work of Tullio Regge can no doubt be seen as a first attempt to develop a model out of which geometrical concepts could be derived. The original paper dates from 1961. More specifically, we are concerned here with general relativity theory (GRT). The original intention of Regge was to construct techniques for solving the equations of GRT in the “difficult” cases, i.e., where there is no symmetry present and perturbation theory is not applicable. Rather than transcribing the differential equations of GRT into difference equations, Regge looked for a technique that leads to different equations altogether. Without presenting the full details, the core concept of his approach is the “deficit angle”. In GRT we are dealing with curved spaces. Take a two-dimensional surface. If it is flat, then it can be covered with triangles. If it is curved, it can be approximated with triangles, but with an important difference. Suppose that the triangles meet at vertices, and look at one particular vertex and all the triangles that meet in that point. If that part of the surface is flattened out, there will be a gap somewhere. To this gap corresponds an angle, and that is precisely the deficit angle. The larger the curvature, the larger the deficit angle. The same technique works for the four-dimensional case, where simplexes are used instead of triangles. The beauty of this approach is that the equations of GRT can be rewritten in terms of deficit angles and lengths of the edges of the simplexes, and solved in terms of these concepts. Misner et al. 1973 contains a chapter (42: ‘The Regge Calculus’) that explains Regge's approach in a compact and perfectly accessible way.
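The two-dimensional deficit angle is easy to compute. A minimal sketch (the icosahedron vertex is my illustration, not Regge's): flattening the triangles that meet at a vertex leaves a gap of 2π minus the sum of the triangle angles at that vertex.

```python
import math

def deficit_angle(vertex_angles):
    """Deficit angle at a vertex of a triangulated surface: 2*pi
    minus the sum of the triangle angles meeting at that vertex.
    Zero means the surface is flat there; a positive deficit
    indicates positive curvature."""
    return 2 * math.pi - sum(vertex_angles)

# Six equilateral triangles tile the flat plane: no gap, no curvature.
flat = deficit_angle([math.pi / 3] * 6)
assert abs(flat) < 1e-12

# Five equilateral triangles, as at a vertex of an icosahedron,
# leave a gap of pi/3: positive curvature concentrated at the vertex.
curved = deficit_angle([math.pi / 3] * 5)
assert abs(curved - math.pi / 3) < 1e-12
```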

Today a rather impressive number of attempts are being made. Most of them are to be considered highly speculative, as they truly reflect a state of affairs in full development. Fancy names such as “loop quantum gravity”, “causal dynamical triangulations”, and “quantum graphity” circulate. But it should be noted that the speculative nature has to do first and foremost with the claim that this or that model will serve as a common foundation for quantum (field) theory and general relativity theory, hence for the whole of physics, so to speak. Our concern here is more modest: do these models tell us anything about how discrete geometries can be formulated such that they generate classical geometry? So even if physicists were to reject such a model, it might still be of interest to the question of the possibility of discrete geometry.

A good survey of the present-day situation is Meschini et al. 2005. In that paper the work of Manfred Requardt is briefly presented, and it will serve here as a prototypical example. Although approaches of this type are not mentioned in the literature on spatial logics, the connections between the two are deep and definitely need to be explored further.

3.2 A prototypical example

The starting point is a discrete graph G = ⟨N, C⟩, consisting of a set N of nodes ni and a set C of connections cij, such that no node is connected to itself and nodes ni and nj have at most one connection. The most obvious first step is to define a distance function, and in nearly all proposals this is indeed the strategy followed (similar to the definitions proposed in section 2.5):

D(ni, nj) = the smallest number of connections that lead from ni to nj.

It is easy to see that the classical properties of a distance function are satisfied: D(ni, nj) ≥ 0, with equality if and only if ni = nj; D(ni, nj) = D(nj, ni); and D(ni, nj) ≤ D(ni, nk) + D(nk, nj) (the triangle inequality).
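This distance function is exactly what breadth-first search computes. A minimal sketch (the four-node example graph is my own) that also checks the metric properties on that graph:

```python
from collections import deque
from itertools import permutations

def graph_distance(edges, a, b):
    """Smallest number of connections leading from a to b (BFS);
    returns None if a and b are disconnected."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

# A small example graph: a path 1-2-3-4 with an extra chord 1-3.
edges = [(1, 2), (2, 3), (3, 4), (1, 3)]
nodes = [1, 2, 3, 4]

assert graph_distance(edges, 1, 4) == 2   # shortest route via the chord 1-3
# Metric properties: symmetry and the triangle inequality.
for i, j, k in permutations(nodes, 3):
    dij = graph_distance(edges, i, j)
    assert dij == graph_distance(edges, j, i)
    assert dij <= graph_distance(edges, i, k) + graph_distance(edges, k, j)
```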

At first sight it is not at all obvious how one should proceed further, but if one reads the ni and the cij as a kind of vector, then linear combinations can be formed, where the fi and gik are, e.g., natural or rational numbers:

f = Σi fini         g = Σik gikcik

These two expressions can be read as functions over ni and cij. What one needs now is a relation between nodes and connections, so introduce a special function d:

d: ni ↦ Σk cik

It is quite interesting to see what happens if we extend the function d in a linear way so that it can be applied to arbitrary functions f:

d f = Σi fi Σk cik

If we now stipulate that cik = −cki (as a kind of vector equation, stating that the connections have a direction) then the above expression can be rewritten as follows (take into account that, since no loops are allowed, cii = 0):

d f = ½ Σik (fkfi)cik

Although it is still a long way off, this expression df already has some nice properties that remind one of the derivative of a function f: it vanishes when f is constant, and d is linear, i.e., d(f + g) = df + dg.

However, as is easy to check with the above definitions, the product rule fails, i.e., d(f·g) is not equal to df·g + f·dg.
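These definitions can be tested on a small graph. A minimal sketch (the three-node complete graph and the functions f and g are my own choices), representing df by its coefficients ½(fk − fi) of cik, with the antisymmetry cik = −cki built in; one natural reading of df·g, namely gi·(df)ik, is assumed for the product-rule check:

```python
def d(f, nodes):
    """Coefficient of c_ik in df = (1/2) * sum_ik (f[k] - f[i]) * c_ik,
    on a complete graph over the given nodes."""
    return {(i, k): 0.5 * (f[k] - f[i])
            for i in nodes for k in nodes if i != k}

nodes = ['a', 'b', 'c']                      # a triangle graph
f = {'a': 1.0, 'b': 2.0, 'c': 4.0}
g = {'a': 3.0, 'b': 5.0, 'c': 7.0}

# df vanishes for a constant function ...
const = {n: 7.0 for n in nodes}
assert all(v == 0.0 for v in d(const, nodes).values())

# ... and d is linear: d(f + g) = df + dg.
fplusg = {n: f[n] + g[n] for n in nodes}
assert all(d(fplusg, nodes)[e] == d(f, nodes)[e] + d(g, nodes)[e]
           for e in d(f, nodes))

# But the product rule fails: d(f*g) differs from df*g + f*dg
# (reading df*g as g_i * (df)_ik -- an assumed convention).
ftimesg = {n: f[n] * g[n] for n in nodes}
lhs = d(ftimesg, nodes)
rhs = {(i, k): g[i] * d(f, nodes)[(i, k)] + f[i] * d(g, nodes)[(i, k)]
       for (i, k) in lhs}
assert any(abs(lhs[e] - rhs[e]) > 1e-9 for e in lhs)
```

The residue is ½(fk − fi)(gk − gi) per connection, which is exactly the second-order term that vanishes in the continuum limit but not on a discrete graph.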

So, to a certain extent, it is possible to construct a basic form of calculus on discrete graphs. It requires some ingenuity and creative thought to find the “right” counterparts, but this simple example shows that quite a lot of structure can be derived from a discrete graph. There is actually more: discrete graphs allow for a nice solution to the dimension problem mentioned in section 2.5. This is the rough outline of the idea:

Consider a node ni. Let U1 be the set of nodes nj such that D(ni, nj) is at most 1, i.e., U1 brings together the closest neighbours of ni. Likewise, define U2 as the set of nodes nk such that D(ni, nk) is at most 2. It follows that Un ⊆ Un+1, and so we obtain a nested series of neighbourhoods of ni. If one understands dimension as a measure of the “growth” of these neighbourhoods, then it can be defined as:

Dim = limm→∞ ln |Um| / ln m

One of the interesting features of this definition is that it need not be uniform over the whole graph, for everything depends on the choice of the initial node ni. But in case the graph is sufficiently uniform, the dimension will be a constant. Furthermore, if we take a classical case, such as 3-dimensional Euclidean space, the dimensions match. Suppose we have a regular lattice as the underlying graph; then a particular node has a cube as its set of nearest neighbours U1, consisting of 3³ = 27 points, and a neighbourhood Um will count (m+2)³ nodes. Therefore

Dim = limm→∞ ln (m+2)³ / ln m         or         Dim = limm→∞ 3·ln(m+2) / ln m

Since ln(m+2)/ln m approaches 1 for sufficiently large m, it follows that Dim = 3. What this shows is that, starting from discrete graphs, we have obtained an extension of the concept of dimension. One may have noticed that this definition is quite similar to some of those used to define the dimension of fractal images.
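The growth definition can also be tested numerically. A minimal sketch (my own, using the infinite square lattice with graph distance, so that |Um| = 2m² + 2m + 1, rather than the cube neighbourhoods of the text): the estimate ln|Um|/ln m converges, slowly, to the classical dimension 2 of the plane.

```python
import math

def ball_size_2d(m):
    """Number of lattice points within graph (taxicab) distance m
    of a node on the infinite square grid: 2*m^2 + 2*m + 1."""
    return 2 * m * m + 2 * m + 1

def dimension_estimate(m):
    """Growth-rate estimate of the dimension: ln|U_m| / ln m."""
    return math.log(ball_size_2d(m)) / math.log(m)

# The estimate decreases towards the classical value 2; the residual
# error shrinks only like ln(2)/ln(m), hence the slow convergence.
estimates = [dimension_estimate(m) for m in (10, 10**3, 10**6)]
assert estimates[0] > estimates[1] > estimates[2]
assert abs(estimates[-1] - 2) < 0.06
```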

In addition, discrete graphs make it possible to handle the anisotropy problem as well. It is sufficient to introduce an element of randomness into the network, e.g., by taking averages over a connected set of nodes, to avoid any privileged directions. There are clearly similarities here with the irregular tiling scheme and with the introduction of vagueness, but the important difference is that statistical and probabilistic concepts are (fairly) well understood, whereas the tiling approach remains, as mentioned, without worked-out examples, and vagueness remains a notoriously difficult concept to grasp (see the entry on vagueness).

3.3 A special case: the combinatorial hierarchy

It would be a mistake to believe that the different attempts listed above somehow form a complete catalogue that allows one to classify all possible approaches. In this section one such exotic example, viz. the combinatorial hierarchy, is briefly presented. In this approach the focus is not on the equations of physics themselves, but on the physical constants occurring in them, such as the velocity of light c, the Planck constant h, the mass of the electron me, and so on. As these values are necessarily finite, it seems worthwhile to investigate whether a finitist approach can explain why these constants have the values they happen to have. Such approaches are sometimes referred to as “number games”.

Let me give one very simple example. Start from a universe consisting of a finite number of bits, i.e., either 0 or 1, and introduce a basic operation, viz., making a “discrimination”. To express this operation, another operation is needed: addition modulo 2, where 0 + 0 = 1 + 1 = 0 and 0 + 1 = 1 + 0 = 1. If the outcome is 0, then the elements of the sum are not distinguished; otherwise they are. Look now at sets that contain 0 and/or 1 and are such that, if two elements are distinguishable, their discrimination belongs to the set as well. There are exactly 3 (= 2² − 1) such sets: {0}, {1} and {0,1}. If these 3 elements are taken as a new basis instead of 0 and 1, a clever construction shows that 7 (= 2³ − 1) such sets exist and, in a next step, 127 (= 2⁷ − 1) pops up. Now 3 + 7 + 127 = 137, and that number is close to the reciprocal of the fine-structure constant, i.e., the electromagnetic coupling constant.
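The counting in this construction can be reproduced mechanically. A minimal sketch (my own encoding, using bit strings with XOR as “discrimination”, and counting only the closed sets generated by nonempty subsets of an independent basis, not the full hierarchy construction): n basis elements generate 2ⁿ − 1 distinct discriminately closed subsets, which yields the sequence 3, 7, 127 and the sum 137.

```python
from itertools import combinations

def closure(seed):
    """Smallest set containing seed that is closed under discrimination:
    if a and b are distinguishable (a ^ b != 0), then a ^ b is in the set."""
    s = set(seed)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(sorted(s), 2):
            if a ^ b and a ^ b not in s:
                s.add(a ^ b)
                changed = True
    return frozenset(s)

def count_dcs(basis):
    """Number of distinct discriminately closed subsets generated by
    the nonempty subsets of an independent basis of bit strings."""
    closures = set()
    for r in range(1, len(basis) + 1):
        for seed in combinations(basis, r):
            closures.add(closure(seed))
    return len(closures)

assert count_dcs([0b01, 0b10]) == 3            # 2^2 - 1
assert count_dcs([0b001, 0b010, 0b100]) == 7   # 2^3 - 1
# The next level gives 2^7 - 1 = 127, and 3 + 7 + 127 == 137.
assert 3 + 7 + 127 == 137
```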

The success of this program has been rather modest as these models do not connect easily to existing physical theories. A self-contained presentation of this program is to be found in Bastin & Kilmister 1995. There is a very strong similarity with the work of A. S. Eddington. Not very surprisingly, a presentation of the work of Eddington on his fundamental theory has been written by Kilmister 1994.

3.4 Can it be an empirical problem?

So far we have explored several theoretical possibilities of a discrete geometry as a counterpart of classical geometry. Given the examples discussed in the preceding sections, the relevance for physics seems obvious. Although one might be tempted to believe that the discreteness of space and/or time is a purely theoretical matter, it is an intriguing question whether the matter could also be of an empirical nature. More concretely, is it imaginable that we could somehow design an experiment such that the outcome is either that space is discrete or that space is continuous? This might sound far-fetched, but the matter has nevertheless drawn the attention of philosophers, and a specific experiment has indeed been proposed, though at present it is unfortunately not feasible to carry it out.

It is quite interesting to see that already in 1961 Paul Feyerabend suggested such a possibility. However, not much more is said, as “the difficulty of the present situation seems to lie in the fact that discrete alternatives for the mathematics, which at the present moment is being used in physics, are missing” (p. 160). Equally interesting is the fact that Feyerabend too mentions the standard argument that the lack of a Pythagorean theorem is a genuine problem. His proposal is that “we need only assume that measurements in different directions do not commute; and then perhaps we can retain the theorem as an operator equation” (p. 161). Here too, unfortunately, nothing more is said. Forrest (1995) maintains that such an experiment is possible. The fundamental reason is that classical mathematics uses continuous variables whereas strict finitist mathematics uses discrete variables. Thus for differentiation and integration finite analogs must be found, and these will approximate the classical case but never coincide with it. Hence there will always be small differences, and it cannot be excluded that these could be detectable.

One such possibility for detection concerns the following curious phenomenon. Take the logistic differential equation df/dt = a·f·(1 − f). It is a straightforward exercise to solve it, and one finds a very neat continuous solution, whereas the corresponding difference equation, Δf/Δt = a·f·(1 − f), produces, depending on the value of the parameter a, chaotic behaviour that is absent in the continuous case. See Van Bendegem 2000 and Welti 1987, pp. 516–518. The outcome of such an experiment would not be as clear-cut as one would wish: to observe chaotic effects means that space is discrete, whereas to observe no chaotic effects means either that space is continuous or that the hodons are far smaller than we imagined. No further progress is to be reported at the present moment.
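The contrast can be simulated directly. A minimal sketch (the parameter values are my own choices): the exact solution of the continuous logistic equation df/dt = a·f·(1 − f) rises monotonically towards 1, while the difference equation with step Δt = 1, i.e., the map f → f + a·f·(1 − f), overshoots and oscillates for a = 3 (where the map is conjugate to the fully chaotic logistic map).

```python
import math

def continuous_logistic(a, f0, t):
    """Exact solution of df/dt = a*f*(1-f): monotone for 0 < f0 < 1."""
    return 1.0 / (1.0 + (1.0 / f0 - 1.0) * math.exp(-a * t))

def discrete_logistic(a, f0, steps):
    """Iterates of the difference equation f -> f + a*f*(1-f)."""
    f, out = f0, [f0]
    for _ in range(steps):
        f = f + a * f * (1.0 - f)
        out.append(f)
    return out

a, f0 = 3.0, 0.5
cont = [continuous_logistic(a, f0, t) for t in range(8)]
disc = discrete_logistic(a, f0, 7)

# The continuous solution rises monotonically towards 1 ...
assert all(x < y for x, y in zip(cont, cont[1:]))
# ... while the discrete iterates overshoot 1 and oscillate.
assert any(x > y for x, y in zip(disc, disc[1:]))
assert max(disc) > 1.0
```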

It has been indicated several times in this entry that scientists with different intentions, aims, and backgrounds have proposed quite different ideas about discrete geometry as an alternative to classical geometry. Many authors do not present more or less complete theories, but rather limit themselves to making suggestions and exploring some particular idea. These papers are to be seen as sources of inspiration in the search for a full-fledged theory. A few examples are: Hahn 1934, Biser 1941, Coish 1959, Ahmavaara 1965a, 1965b, Finkelstein 1969 (the first of a five-paper series with the same title in the same journal), Dadić & Pisk 1979, Finkelstein & Rodriguez 1986, Meessen 1989, Buot 1989, to name but a very few. For the period 1925–1936, Kragh and Carazza 1994 is an excellent overview showing that many physicists were playing around with finitist ideas.

4. What needs to be done next?

The first task seems rather straightforward: take any of the proposals presented here and elaborate it into a full-fledged geometry. Then it will be possible to make a comparison with, e.g., Hilbert's axiomatisation mentioned earlier. The second task seems rather forbidding: using this discrete geometry, show how to do physics. In general terms this is indeed a huge undertaking, but there is a possible way in. If it could be demonstrated that the approach works, say, for classical mechanics, this would count as a major argument in favour of discrete proposals. As it happens, some very important work has already been done, for what we will need is a fully formalized version of classical mechanics, not the textbook versions that leave unmentioned many things that might prove crucial for the underlying geometry. Such versions exist at the present moment; see, e.g., Ax 1978, Andréka et al. 2008, Benda 2008, for just a few examples. One of the earliest versions involves Patrick Suppes; see McKinsey, Sugar & Suppes 1953. Thus the enterprise seems a genuine possibility. But all things considered, an enormous amount of work remains to be done, on the mathematical as well as the physical level. Probably the best way to characterize the present-day situation is that some ‘famous’ objections to a discrete or finitist approach in geometry have been (partially) answered, a host of mathematical, physical and philosophical suggestions and ideas have been presented, and partial models have been developed. In other words, the conditions are satisfied that make it interesting to continue this research program.

Bibliography

Other Internet Resources

Related Entries

geometry: in the 19th century | logic: temporal | mathematics: constructive | quantum mechanics | space and time: supertasks | vagueness | Zeno of Elea: Zeno's paradoxes