
Independence Friendly Logic

First published Mon Feb 9, 2009

Independence friendly logic (IF logic, IF first-order logic) is an extension of first-order logic. In it, more quantifier dependencies and independencies can be expressed than in first-order logic. Its quantifiers range over individuals only; semantically IF first-order logic, however, has the same expressive power as existential second-order logic. IF logic lacks certain metaproperties that first-order logic has (axiomatizability, Tarski-type semantics). On the other hand, IF logic admits a self-applied truth-predicate—a property that first-order logic notoriously does not enjoy. Philosophical issues discussed in connection with IF logic include reformulating the logicist program, the question of truth in axiomatic set theory, and the nature of negation in logic and in natural languages. There have been attempts to apply the IF logical framework beyond IF first-order logic itself, notably to propositional logic and modal logic.


1. Introduction: Quantifier Dependence

In mathematical prose, one can say things such as ‘for all real numbers a and for all positive real numbers ε, there exists a positive real number δ depending on ε but not on a, such that…’ What is important here is quantifier dependence. The existential quantifier ‘there exists δ’ is said to depend on the universal quantifier ‘for all ε’ but not on the universal quantifier ‘for all a.’ It was an essential part of Karl Weierstrass's (1815–1897) work in the foundations of analysis that he defined the notions of limit, continuity, and derivative in terms of quantifier dependence.[1] For a concrete example, a function f : D → R is continuous, if for all a in the set D and for all ε > 0 there exists δ > 0 such that for all x in D, if |x – a| < δ, then |f(x) – f(a)| < ε. The definition of uniform continuity is obtained from that of continuity by specifying that the quantifier ‘there exists δ’ depends only on the quantifier ‘for all ε,’ not on the quantifier ‘for all a.’[2]

Independence friendly first-order logic (a.k.a. IF first-order logic, IF logic) was introduced by Jaakko Hintikka and Gabriel Sandu in their article ‘Informational Independence as a Semantical Phenomenon’ (1989); other early sources are Hintikka's booklet Defining Truth, the Whole Truth, and Nothing but the Truth (1991) and Sandu's Ph.D. thesis (1991).[3] IF first-order logic is an extension of first-order logic, involving a specific syntactic device ‘/’ (slash, independence indicator), which has at the object language level the same effect as the meta-level modifier ‘but does not depend on’ has in the example just considered. In the notation of IF logic, the logical form of the statement that a function f is uniformly continuous would be ∀a∀ε(∃δ/∀a)∀x R, to be contrasted with the form of the statement that f merely is continuous, ∀a∀ε∃δ∀x R.

If in a first-order sentence an existential quantifier ∃y lies in the syntactic scope of a universal quantifier ∀x, then by the semantics ∃y automatically depends on ∀x. This is so e.g. with the sentence ∀x∃y R(x, y). The dependence of ∃y on ∀x means that the witness of ∃y may vary with the value interpreting ∀x. For the sentence to be true in a model M, it suffices that there be a function f such that R(a, f(a)) holds in M, for any a interpreting ∀x. Such functions, spelling out the dependencies of witnesses for existential quantifiers on interpretations of universal quantifiers, are known in logical literature as Skolem functions.[4] For comparison, in the sentence ∃y∀x R(x, y) the quantifier ∃y does not depend on the quantifier ∀x. One and the same witness c for ∃y must be good against any interpretation a of ∀x, so that R(a, c) holds in M. The corresponding Skolem function is a constant.[5]

In IF first-order logic, syntactic scopes no longer determine the semantic relation of dependence. In the formula ∀x∀y(∃z/∀y) R(x, y, z), for instance, ∃z is syntactically subordinate to both ∀x and ∀y, but is marked as independent of ∀y, and is hence dependent only on ∀x. Semantically this means that the witness of ∃z must be given by a function taking as an argument the interpretation of ∀x, but not that of ∀y. In order for ∀x∀y(∃z/∀y) R(x, y, z) to be true in M, there must be a function f of one argument such that R(a, b, f(a)) holds in M, for any a interpreting ∀x and any b interpreting ∀y.
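
On a finite model this truth-condition can be checked by exhaustive search over candidate Skolem functions. The following sketch is merely illustrative; the domain and the relation R are arbitrary choices made for the sake of the example.

    # Minimal sketch: brute-force the truth-condition of AxAy(Ez/Ay) R(x,y,z)
    # on a small finite model; the witness for Ez may depend on x only.
    from itertools import product

    domain = [0, 1, 2]                    # example domain (assumption)

    def R(x, y, z):                       # example relation (assumption)
        return z == (x + 1) % 3           # z must track x, independently of y

    def is_true():
        # candidate one-argument Skolem functions f: domain -> domain
        for values in product(domain, repeat=len(domain)):
            f = dict(zip(domain, values))
            if all(R(a, b, f[a]) for a in domain for b in domain):
                return True, f
        return False, None

    print(is_true())   # (True, {0: 1, 1: 2, 2: 0}), i.e. f(a) = a + 1 mod 3

If no such one-argument function exists, the slashed sentence is not true in the model, even when a witness allowed to depend on both x and y would be available.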

What is gained with the slash notation? After all, clearly e.g. ∀x∀y(∃z/∀y) R(x, y, z) is true in a model M iff[6] the first-order sentence ∀x∃z∀y R(x, y, z) is true therein. As a matter of fact, the expressive power of IF logic exceeds that of first-order logic. Consider the sentence ∀x∃y∀z(∃w/∀x) R(x, y, z, w). Its truth-condition is of the following form: there are one-argument functions f and g such that R(a, f(a), b, g(b)) holds in M, for any a interpreting ∀x and b interpreting ∀z.[7] So the sentence is true iff the following sentence (*) containing a finite partially ordered quantifier is true:

∀x ∃y
(*)   R(x, y, z, w)
∀z ∃w

For, by definition (*) is true in a model M iff the second-order sentence ∃f∃g∀x∀z R(x, f(x), z, g(z)) is true therein. The latter may be termed the Skolem normal form of (*). It says that Skolem functions providing witnesses for the quantifiers ∃y and ∃w in (*) exist. Finite partially ordered quantifiers—a.k.a. Henkin quantifiers or branching quantifiers—were proposed by Leon Henkin (1961) and have been subsequently studied extensively.[8] They are two-dimensional objects

Q11 … Q1n
⋮     ⋮
Qm1 … Qmn

where each Qij is either ∃xij or ∀xij. They are naturally interpreted by making systematic use of Skolem functions.

Let us denote by FPO the logic of finite partially ordered quantifiers obtained from first-order logic as follows: if φ is a first-order formula and Q is a finite partially ordered quantifier, then Qφ is a formula of FPO.[9] With FPO it is possible to express properties that are not definable in first-order logic. The first example was provided by Andrzej Ehrenfeucht (cf. Henkin 1961): a sentence that is true in a model iff the size of its domain is infinite. It turns out that FPO can be translated into IF logic (see Subsect. 4.1). Therefore IF logic is more expressive than first-order logic.

The deepest reason for IF logic, as Hintikka sees it, is that the relations of dependence and independence between quantifiers are the only way of expressing relations of dependence and independence between variables on the first-order level (Hintikka 1996: 34–35, 73–74; 2002a: 404–405; 2006: 71, 515). To understand this remark properly, recall that relations of quantifier (in)dependence are semantic relations, but syntactically expressed. More precisely, in IF logic the (in)dependence relations are syntactically expressed by the interplay of two factors: syntactic scope and the independence indicator ‘/’. In a given sentence, an existential quantifier ∃x, say, depends on precisely those universal quantifiers within whose scope ∃x lies, but of which ∃x is not marked as independent using the slash sign. What Hintikka means when he speaks of (in)dependence relations between variables are functional dependencies between quantities in a model. The kinetic energy of a material body depends on its mass and its speed, but does not depend on the particular material body being considered. This fact can be expressed in IF logic by the sentence ∀b∀m∀v(∃e/∀b) R(b, m, v, e), where ‘b’ ranges over material bodies, ‘m’ over real numbers measuring mass, ‘v’ over real numbers measuring speed, ‘e’ over real numbers measuring energy and ‘R(b, m, v, e)’ is read ‘e is the kinetic energy of b having mass m and speed v.’ That this sentence states the existence of a functional dependence of kinetic energy on mass and speed is particularly clearly seen from the fact that if the sentence is true, any Skolem function for the quantifier ∃e actually is the physical law connecting masses, speeds and kinetic energies (cf. Hintikka 1996: 34–35).

While first-order logic can only express some relations between variables, IF first-order logic with its greater expressive power can express more. Actually, IF logic is calculated to capture all such relations (Hintikka 1996: 75–77; 2002a: 404–405; 2002b: 197; 2006: 72). This idea must, in its full generality, be seen as programmatic. It has been Hintikka's explicit goal to study irreducible mutual dependencies between quantifiers in terms of IF logic (cf. Hintikka 2002a,b; 2006: 72). However, Hintikka has come to realize that a more general framework is needed to cope with mutual dependencies (cf. Hintikka 2006: 515, 536, 752; 2008).

Among philosophical issues that have been addressed in connection with IF logic are the reconstruction of mathematical reasoning on the first-order level (Hintikka 1996, 1997), definability of a self-applied truth predicate (Hintikka 1991, 1996, 2001; Sandu 1998), truth and axiomatic set theory (Hintikka 1996, 2004a), and insights into the nature of negation in logic and in natural languages (Hintikka 1991, 1996, 2002; Hintikka & Sandu 1989; Sandu 1994). These issues will be discussed in Sections 4 and 5.

2. The Background of IF Logic: Game-theoretical Semantics

2.1. Semantic games

Inspired by Ludwig Wittgenstein's idea of a language-game, Hintikka (1968) introduced the basic framework of what came to be known as game-theoretical semantics (a.k.a. GTS). The basic lesson Hintikka adopts from Wittgenstein is that words – in particular quantifiers – are associated with activities that render them meaningful: words often have meaning only in the context of certain types of action (Hintikka 1968: 55–56). Wittgenstein said that by ‘language-game’ he means ‘the whole, consisting of language and the actions into which it is woven’ (Wittgenstein 1953: I, Sect. 7).

Considering first-order logic, it becomes natural to ask which are the activities that go together with quantifiers. As Hintikka explains (see Hintikka 2006: 41, 67), Wittgenstein had taken it as a criterion for something to be an object that it can be looked for and found. Applying this idea to quantifiers with such objects as values, Hintikka was led to formulate what he called semantic games for quantifiers. Crucially, these semantic games can be formulated as games in the strict sense of game theory; but at the same time they are exact codifications of language-games in Wittgenstein's sense, at least if one accepts that the activities associated with quantifiers are ‘looking for’ and ‘finding.’[10]

Semantic games G(φ, M) for first-order sentences φ are two-player zero-sum games of perfect information, played relative to a given model M. Let us call the two players simply player 1 and player 2.[11] The games are most easily explained for formulas in prenex normal form: a string of quantifiers followed by a quantifier-free first-order formula. Universal quantifiers mark a move of player 1, while existential quantifiers prompt a move by player 2. In both cases, the relevant player must choose an individual from the domain M of M.[12] For example, if φ = ∀x∃y∀z∃w R(x, y, z, w), the game is played as follows. First player 1 picks out an individual a, whereafter player 2 chooses an individual b. Then player 1 proceeds to choose a further individual c, to which player 2 responds by picking out an individual d. Thereby a play of the game has come to an end. The tuple (a, b, c, d) of individuals chosen determines the winner of the play. If the quantifier-free formula R(a, b, c, d) holds in M, player 2 wins, if not, player 1 wins.

The fact that one of the players wins a single play of game G(φ, M) does not yet tell us anything about the truth-value of the sentence φ. Truth and falsity are characterized in terms of the notion of winning strategy. The sentence ∀x∃y∀z∃w R(x, y, z, w) is true in M if there is a winning strategy for player 2 in the game just described: a recipe telling player 2 what to do, when (a specified amount of) information about the opponent's earlier moves is given. Technically, the requirement is that there be strategy functions f and g, such that for any choices a and c by player 1, R(a, f(a), c, g(a, c)) holds in M. Observe that strategy function f is a Skolem function for the quantifier ∃y in φ, and similarly g is a Skolem function for the quantifier ∃w. The sentence φ is false in M if there is a winning strategy (set of strategy functions) for player 1 in the corresponding game: a constant c and a function h such that for any choices b and d by player 2, R(c, b, h(b), d) fails to hold in M.[13]
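
On a finite model the existence of a winning strategy for either player can likewise be tested by exhaustive search over strategy functions. A minimal sketch; the two-element domain and the relation R are arbitrary choices made for illustration.

    # Sketch: winning strategies in the game for AxEyAzEw R(x,y,z,w) on a two-element model.
    # Player 2's strategy: Skolem functions f(x) and g(x, z);
    # player 1's strategy: a constant c for Ax and a function h(y) for Az.
    from itertools import product

    domain = [0, 1]

    def R(x, y, z, w):                    # example relation (assumption)
        return y == x and w == (x + z) % 2

    def player2_has_winning_strategy():
        pairs = list(product(domain, repeat=2))
        for f_vals in product(domain, repeat=2):
            f = dict(zip(domain, f_vals))
            for g_vals in product(domain, repeat=len(pairs)):
                g = dict(zip(pairs, g_vals))
                if all(R(a, f[a], c, g[(a, c)]) for a in domain for c in domain):
                    return True
        return False

    def player1_has_winning_strategy():
        for c in domain:
            for h_vals in product(domain, repeat=2):
                h = dict(zip(domain, h_vals))
                if all(not R(c, b, h[b], d) for b in domain for d in domain):
                    return True
        return False

    print(player2_has_winning_strategy(), player1_has_winning_strategy())   # True False

Here player 2 has a winning strategy (f(x) = x, g(x, z) = (x + z) mod 2), so the sentence is true in this model, and player 1 has none.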

Game-theoretical interpretation of quantifiers was actually already suggested by Henkin (1961; cf. Hintikka 1968: 64). Henkin also pointed out, in effect, the connection between a full set of Skolem functions and a winning strategy for player 2. When formulating game-theoretical semantics, Hintikka (1968) pointed out that conjunctions are naturally interpreted by a choice between the two conjuncts, made by player 1; similarly, disjunctions can be interpreted by a choice by player 2 between the two disjuncts.[14] Further, Hintikka proposed to interpret negation in terms of role shift. When semantic games are formulated for arbitrary first-order formulas, in addition to the two players there are two roles to be considered: ‘verifier’ and ‘falsifier.’ In the beginning, player 1 occupies the role of ‘falsifier’ and player 2 that of ‘verifier.’ Negation is then interpreted by transposing the roles: the player whose current role is ‘verifier’ assumes the role of ‘falsifier,’ and vice versa.

The game-theoretical description of semantic games does not mention the activities of searching and finding; for such an abstract description it suffices to speak of the players making a move. Also, the characterization of truth and falsity as the existence of a winning strategy for player 2 and player 1, respectively, does not refer to efforts on the part of the players – say efforts to establish truth or to find witnesses. The truth or falsity of a sentence is a matter of ‘combinatorics’: it is an issue of the existence of a set of functions with certain properties (cf. Hintikka 1968, 1996, Hodges 2006a). So what happened to the original philosophical conception according to which the meanings of quantifiers are tied to the activities of searching and finding? Hintikka's idea is that asserting a sentence involving quantifiers is to make a claim about what can and what cannot happen when a certain language-game is played; using language involving quantifiers requires mastering the rules of the corresponding semantic games (Hintikka 1968: Sect. 8, 1996: 128, 2006: 538). What is the content of such a claim in connection with the sentence ∀xy R(x, y)? That whichever individual player 1 chooses from the domain for ∀x, player 2 can find a witness individual for ∃y. In other words, given a value for ∀x (which itself can be seen as the result of the search by player 1), if player 2 is allowed to search free from any practical constraints, she will find a value for ∃y so that player 2 wins the resulting play. Even though semantic games themselves can be defined without recourse to activities such as looking for or finding, these activities play an important conceptual role when the language user reasons about these games.

Hintikka (1973a) initiated the application of GTS to the study of natural language. This work was continued notably by Hintikka & Kulas (1983, 1985), where game-theoretical rules for such items as negation, anaphoric pronouns, genitives, tenses, intensional verbs, certain prepositional constructions and proper names were given, and the distinction between abstract meaning and strategic meaning drawn.[15]

2.2. Imperfect information

The framework of GTS, once formulated, enabled asking questions of a game-theoretical nature about semantic games. Hintikka (1973a) observed that semantic games with imperfect information are devised without any difficulty. As logical examples Hintikka used FPO sentences. (For examples related to natural languages, see Subsect. 5.4.)

From the vantage point of GTS, independence friendly first-order logic differs from first-order logic in that semantic games correlated with formulas of the former are, in general, games of imperfect information, while any game associated with a first-order formula is a game of perfect information. Consider the game for ∀x∃y∀z(∃w/∀x) R(x, y, z, w), played on a model M. A play of this game proceeds exactly as plays of the game corresponding to ∀x∃y∀z∃w R(x, y, z, w). First player 1 chooses an individual a, whereafter player 2 chooses an individual b. Then player 1 proceeds to pick out a further individual c, to which player 2 responds by choosing an individual d. So a play of the game has come to an end. The play is won by player 2, if indeed R(a, b, c, d) holds in M; otherwise it is won by player 1. But the quantifier ∃w was marked as independent of the quantifier ∀x – why does this fact not show in any way in the course of a play?

One might be tempted to add to the description of a play: ‘player 2 chooses a value for ∃z in ignorance of the value for ∀x.’ However, such a paraphrase would not clarify the conceptual situation. It makes no sense to speak of the independence of a move from other given moves by reference to a single play; this can only be done by reference to a multitude of plays. Quantifier independence can be conceptualized in game-theoretical terms making use of the notion of strategy. In the example, a strategy of player 2 is a set {f, g} of strategy functions, function f providing a value for ∃y depending on the value of ∀x, and function g providing a value for ∃w depending on the value chosen for ∀z but not on the value chosen for ∀x. The strategy {f, g} is, then, a winning strategy for player 2, iff R(a, f(a), c, g(c)) holds in M, for all values a chosen for ∀x and c chosen for ∀z. One precise way of implementing the idea that player 2 is ‘ignorant’ of the move player 1 made for ∀x is to say that (a) strategy functions always only take as arguments the opponent's moves, and (b) the strategy function corresponding to ∃w may not take as its argument the move player 1 made for ∀x.
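
The effect of restriction (b) can be made concrete on a finite model: with the restricted signature g(c), player 2 may lack a winning strategy even though she would have one with the unrestricted signature g(a, c). A minimal sketch; the domain and the relation R are arbitrary choices made for illustration.

    # Sketch: AxEyAz(Ew/Ax) R(x,y,z,w) versus AxEyAzEw R(x,y,z,w) on a two-element model.
    # With the slash, player 2's function g for Ew may take only the value chosen for Az.
    from itertools import product

    domain = [0, 1]

    def R(x, y, z, w):                    # example relation (assumption): w must copy x
        return y == 0 and w == x

    def player2_wins(g_sees_x):
        for f_vals in product(domain, repeat=2):              # f interprets Ey, argument x
            f = dict(zip(domain, f_vals))
            args = list(product(domain, repeat=2)) if g_sees_x else domain
            for g_vals in product(domain, repeat=len(args)):
                g = dict(zip(args, g_vals))
                if all(R(a, f[a], c, g[(a, c)] if g_sees_x else g[c])
                       for a in domain for c in domain):
                    return True
        return False

    print(player2_wins(True), player2_wins(False))   # True False

With this R the unslashed sentence ∀x∃y∀z∃w R(x, y, z, w) is true in the model, but ∀x∃y∀z(∃w/∀x) R(x, y, z, w) is not, since no single value for ∃w is good against both possible values of ∀x.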

A sentence of IF first-order logic is by definition true in a model M if there is a winning strategy for player 2 in the correlated game, and false if there is a winning strategy for player 1 in the correlated game. There are sentences which under these criteria are neither true nor false; they are called non-determined (see Subsect. 3.3).

In Hintikka's judgement, there is a way of looking at game-theoretical semantics of quantifiers that gives it the same rationale that was mentioned as the deepest reason for IF first-order logic at the end of Section 1: GTS is a method of representing, on the first-order level, the (in)dependence relations between variables by means of informational (in)dependence in the sense of game theory (Hintikka 1991: 12–13, 2006: 535).

3. The Syntax and Semantics of IF First-order Logic

The simple task of defining the syntax and semantics of IF first-order logic is rendered somewhat difficult by the fact that in the literature one can find essentially different formulations, all of which are referred to as ‘IF first-order logic.’ The differences are not restricted to syntax – examples of applying different semantic ideas can be found as well.

As noted in Subsection 2.1, there is a systematic connection between the Skolem functions and strategy functions of player 2. A Skolem function for an existential quantifier is a function of the values assigned to the preceding universal quantifiers, but not a function of the values assigned to the preceding existential quantifiers.[16] In game-theoretical terms, Skolem functions are strategy functions taking as arguments exclusively moves made by player 1. On the other hand, generally a strategy for a player in a two-player game can perfectly well make use of the previous choices of either player. Hodges (2007) stresses that it makes a difference on which notion – Skolem function or strategy function – the semantics of imperfect information is based. Hodges (1997a) adopted the notational convention of writing, say, (∀y/x) where Hintikka writes (∀y/∃x), hence marking the difference between semantic games formulated in terms of arbitrary strategy functions and those whose strategy functions are in effect Skolem functions. Hodges proposed to employ the former formulation, while in Hintikka (1991, 1995, 1996, 2002) and Sandu (1993, 1994) the latter formulation is employed. Hodges (2007: 119) writes:

[W]e refer to the logic with my notation and the general game semantics as slash logic. During recent years many writers in this area (but never Hintikka himself) have transferred the name ‘IF logic’ to slash logic, often without realising the difference. Until the terminology settles down, we have to beware of examples and proofs that don’t make clear which semantics they intend.

The distinction that Hodges makes between slash logic and IF logic serves to bring order to the mishmash of different formulations of IF first-order logic to be found in the literature.[17]

3.1. Syntax

IF first-order logic is an extension of first-order logic.[18] The first-order logic of vocabulary τ will be referred to as FO[τ]. A formula of FO[τ] is in negation normal form, if all occurrences of the negation symbol ~ immediately precede an atomic formula. The set of formulas of IF first-order logic of vocabulary τ can, then, be defined as the smallest set such that:

  1. If φ is a formula of FO[τ] in negation normal form, φ is a formula.
  2. If φ is a formula and in φ a token of (∃x) occurs in the syntactic scope of a number of universal quantifiers which include (∀y1),…,(∀yn), the result of replacing in φ that token of (∃x) by (∃x/∀y1,…,∀yn) is a formula.
  3. If φ is a formula and in φ a token of ∨ occurs in the syntactic scope of a number of universal quantifiers which include (∀y1),…,(∀yn), the result of replacing in φ that token of ∨ by (∨/∀y1,…,∀yn) is a formula.
  4. If φ is a formula and in φ a token of (∀x) occurs in the syntactic scope of a number of existential quantifiers which include (∃y1),…,(∃yn), the result of replacing in φ that token of (∀x) by (∀x/∃y1,…,∃yn) is a formula.
  5. If φ is a formula and in φ a token of ∧ occurs in the syntactic scope of a number of existential quantifiers which include (∃y1),…,(∃yn), the result of replacing in φ that token of ∧ by (∧/∃y1,…,∃yn) is a formula.

The syntax given by these clauses will be referred to as the official syntax.[19] Clauses (2) and (3) allow the degenerate case that the list of universal quantifiers is empty (n = 0). The resulting expressions (∃x/) and (∨/) are identified with the usual existential quantifier (∃x) and the usual disjunction ∨, respectively. Similar stipulations are made about clauses (4) and (5). Formulas of IF first-order logic will be referred to as IF first-order formulas.

In a suitable vocabulary, for instance,

(∀x)(∃y)(∀z/∃y)(∃v/∀x) R(x, y, z, v)

is a formula.

If φ is an IF first-order formula, generated by the above clauses from some FO formula φ*, the free variables of φ are simply the free variables of φ*. IF first-order formulas without free variables are IF first-order sentences.[20]

Also a simplified syntax is considered in the literature: one obtained by erasing the clauses (4) and (5); the resulting syntax may be termed restricted syntax.[21] Skipping these clauses is not a restriction from the viewpoint of expressive power (see Subsect. 3.3). Even further variants of the syntax have been proposed. Some of these have turned out to be misguided; others will be discussed subsequently.[22]

3.2. Semantics

Defining the semantics of a logic using GTS is a two-step process. The first step is to define the relevant semantic games. The second step is to define the notions of ‘true’ and ‘false’ in terms of the semantic games; this happens by reference to the notion of winning strategy. In the literature, semantic games have typically been defined by specifying recursively the alternative ways in which a game associated with a given formula φ can be begun.[23] Let us define game-theoretical semantics for IF first-order logic in this way.[24]

For every vocabulary τ, IF first-order formula φ of vocabulary τ, model (τ structure) M, and variable assignment g, a two-player zero-sum game G(φ, M, g) between player 1 and player 2 is associated.[25]

  1. If φ = R(t1,…, tn) and M, g ⊨ R(t1,…, tn), player 2 wins (and player 1 loses); otherwise player 1 wins (and player 2 loses).
  2. If φ = ~R(t1,…, tn) and M, g ⊭ R(t1,…, tn), player 2 wins (and player 1 loses); otherwise player 1 wins (and player 2 loses).
  3. If φ = (ψ (∧/∃y1,…,∃yn) χ), player 1 chooses θ ∈ {ψ,χ} and the rest of the game is as in G(θ, M, g).
  4. If φ = (ψ (∨/∀y1,…,∀yn) χ), player 2 chooses θ ∈ {ψ,χ} and the rest of the game is as in G(θ, M, g).
  5. If φ = (∀x/∃y1,…,∃yn)ψ, player 1 chooses an element a from M, and the rest of the game is as in G(ψ, M, g[x/a]).
  6. If φ = (∃x/∀y1,…,∀yn)ψ, player 2 chooses an element a from M, and the rest of the game is as in G(ψ, M, g[x/a]).

Observe that the independence indications play no role in the game rules. Indeed, quantifier independence will be implemented on the level of strategies.

If a token of (∨/∀y1,…,∀yn) or (∃x/∀y1,…,∀yn) appears in a formula φ in the scope of the universal quantifiers ∀y1,…,∀yn,∀z1,…,∀zm (and only these universal quantifiers), a strategy function of player 2 for this token is any function f satisfying the following:

The arguments of f are the elements a1,…, am that player 1 has chosen to interpret the quantifiers ∀z1,…,∀zm. The value f(a1,…, am) is the left or the right disjunct when the token is a disjunction; and an element of the domain when the token is an existential quantifier.

The notion of the strategy function of player 1 for tokens of (∧/∃y1,…,∃yn) and (∀x/∃y1,…,∃yn) can be defined dually. Strategy functions are construed as Skolem functions—the more general notion of strategy function operative in slash logic is not considered here (cf. the beginning of Sect. 3). Note that quantifier independence will be implemented directly in terms of the arguments of the strategy functions.

A strategy of player 2 in game G(φ, M, g) is a set F of her strategy functions, one function for each token of (∨/∀y1,…,∀yn) and (∃x/∀y1,…,∀yn) appearing in φ. Player 2 is said to follow the strategy F, if for each token of (∨/∀y1,…,∀yn) and (∃x/∀y1,…,∀yn) for which she must make a move when game G(φ, M, g) is played, she makes the move determined by the corresponding strategy function. A winning strategy for player 2 in G(φ, M, g) is a strategy F such that against any sequence of moves by player 1, following strategy F yields a win for player 2. The notions of strategy and winning strategy can be similarly defined for player 1.[26]

The satisfaction and dissatisfaction of the IF first-order formula φ in the model M under the assignment g are then defined as follows:[27] φ is satisfied in M under g if there is a winning strategy for player 2 in the game G(φ, M, g), and φ is dissatisfied in M under g if there is a winning strategy for player 1 in G(φ, M, g).

As with FO, variable assignments do not affect the (dis)satisfaction of sentences, i.e., formulas containing no free variables. Indeed, we may define:[28] an IF first-order sentence φ is true in M if it is satisfied in M under some (equivalently, every) assignment, and false in M if it is dissatisfied in M under some (equivalently, every) assignment.

The fact that φ is true in M will be denoted by ‘M ⊨ φ.’ Writing M ⊭ φ indicates, then, that φ is not true in M. This does not mean that φ would in the above-defined sense be false in M. There are semantic games which are non-determined: games in which neither player has a winning strategy (see Subsect. 3.3).

3.3. Basic properties and notions

IF first-order logic has various prominent features that distinguish it from FO.

Failure of bivalence. There are sentences φ of IF first-order logic and models M such that φ is neither true nor false in M. A case in point is the sentence (∀x)(∃y/∀x) x = y evaluated relative to a model whose domain has exactly two elements, a and b. Player 1 has no winning strategy. For, if he chooses a to interpret (∀x), player 2 may choose a to interpret (∃y/∀x), and hence win. Similarly, player 2 may choose b if player 1 chooses b. Neither does player 2 have a winning strategy. Namely, her strategy functions for (∃y/∀x) are constants (zero-place functions). There are two such constants available: a and b. Whichever one of these strategy functions player 2 assumes, there is a move by player 1 defeating it. If player 2 opts for a, player 1 may choose b; and if player 2 applies b, player 1 can choose a. Game G(φ, M) is said to be non-determined: neither player has a winning strategy.[29] The notion of non-determinacy may be extended to formulas as well: the formula φ is non-determined in M under the assignment g, if neither player has a winning strategy in the game G(φ, M, g).
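
On the two-element model the non-determinacy can also be verified mechanically by exhausting both players' strategies; a small illustrative sketch:

    # Sketch: (Ax)(Ey/Ax) x = y on the two-element domain {a, b} is non-determined.
    domain = ['a', 'b']

    # Player 2's strategies are the constants c (no arguments, because of the slash):
    # she would need a single c equal to every possible choice for (Ax).
    player2_wins = any(all(c == x for x in domain) for c in domain)

    # Player 1's strategies are his choices a for (Ax): he would need a choice that
    # differs from every possible reply y -- impossible, since y may be chosen equal to it.
    player1_wins = any(all(y != a for y in domain) for a in domain)

    print(player2_wins, player1_wins)   # False False: neither player has a winning strategy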

While every sentence of IF first-order logic either is or is not true in a given model, this does not reduce to bivalence. Falsity does not ensue from non-truth. Observe that bivalence does not fail in IF first-order logic due to the postulation of a third truth-value or a truth-value gap (cf. Hintikka 1991: 20, 55).[30] Rather, it is a consequence of the basic assumptions of the entire semantic theory (GTS). Non-determinacy corresponds to a structural property: the fact that certain kinds of functions do not exist on the model considered.

Logical equivalence and logical consequence. In FO[τ], sentences ψ and χ are logically equivalent when they are true in precisely the same models: ψ is true in M iff χ is true in M, for all τ structures M. So if ψ and χ are logically equivalent FO sentences, ψ is not true in M iff χ is not true in M – which by bivalence of FO means that ψ is false in M iff χ is false in M. The failure of bivalence makes things more complicated in IF first-order logic: three notions may be defined in place of the one notion of logical equivalence. Let us say that sentences ψ and χ of IF first-order logic are truth equivalent, if they are true in precisely the same models. This is the notion of equivalence that is almost exclusively studied in the literature. Sentences ψ and χ are falsity equivalent, if they are false in precisely the same models; and ψ and χ equivalent, if they are both truth equivalent and falsity equivalent.[31]

Furthermore, two notions of consequence may be distinguished. The IF first-order sentence χ can be said to be a logical consequence of the sentence ψ, if all models in which ψ is true are models in which χ is true. This is the notion of logical consequence used in the literature. A stronger notion would be obtained by defining χ to be a strong consequence of ψ, if all models in which ψ is not false are models in which χ is true.[32]

Negation. By clause (1) of the syntax, the negation sign ~ may only appear as prefixed to an atomic formula. Conceptually there is no reason for this restriction; clause (1) may be replaced by a clause laying it down that any formula φ of FO[τ] is a formula of IF first-order logic; while the rest of the clauses are kept intact (cf. Sandu 1994, 1996). Let us refer to the syntax with arbitrary occurrences of the negation sign ~ as liberalized syntax.

It remains to be told how such negation signs are interpreted in GTS. Two roles must be added as a new ingredient in the specification of the games: those of ‘verifier’ and ‘falsifier.’ Initially player 1 has the role of ‘falsifier’ and player 2 that of ‘verifier.’ The roles may get switched, but only for one reason: when negation is encountered. Due to the introduction of the roles, all clauses defining semantic games must be rephrased in terms of roles instead of players. More specifically, it is the player whose role is ‘verifier’ who makes a move for tokens of (∨/∀y1,…,∀yn) and (∃x/∀y1,…,∀yn), and similarly the player whose role is ‘falsifier’ who moves for the tokens of (∧/∃y1,…,∃yn) and (∀x/∃y1,…,∃yn). When a formula ~ψ is encountered, the players change roles and the game continues with ψ. Finally, if the encountered atomic formula is true, ‘verifier’ wins and ‘falsifier’ loses, otherwise the payoffs are reversed. For a more detailed description, see the supplementary document. The negation ~ works as one would expect: the sentence φ is false in M iff its negation ~φ is true therein. Similarly, a sentence φ is true in a model M iff its negation ~φ is false in M (cf. Sandu 1993).

The negation ~ is variously referred to as strong negation, dual negation, or game-theoretical negation.[33] Due to the failure of bivalence, the logical law of excluded middle fails as well: if φ is a sentence non-determined in a model M, then M ⊭ (φ ∨ ~φ). In what follows, the original syntax and semantics given in Subsections 3.1 and 3.2 will be applied, unless otherwise stated.

Truth, falsity and independence indications. The syntax of IF first-order logic allows formulas in which both universal and existential quantifiers appear slashed, e.g.,

φ: (∀x)(∃y/∀x)(∀z/∃y) R(x, y, z).

On the other hand, quantifier independence is implemented at the level of strategies. A moment's thought reveals that consequently independence indications following a universal quantifier are vacuous, when the satisfaction of a formula (truth of a sentence) is considered. Similarly, independence indications following an existential quantifier are vacuous when the dissatisfaction of a formula (falsity of a sentence) is at stake. The sentence φ is true in the model M iff player 2 has a winning strategy F={c} in game G(φ, M). This, again, means that whichever elements a and b player 1 chooses to interpret (∀x) and (∀z/∃y), respectively, the constant interpretation c of (∃y/∀x) given by the (zero-place) strategy function c satisfies R(a, c, b) in M. But this amounts to the same as requiring that whichever elements a and b player 1 chooses to interpret (∀x) and (∀z), respectively, the constant interpretation c of (∃y/∀x) satisfies R(a, c, b) in M. Indeed, φ is truth equivalent to a sentence containing no slashed universal quantifiers:

φ is true in a model M iff the sentence (∀x)(∃y/∀x)(∀z) R(x, y, z) is true in M.

Similarly, φ is falsity equivalent to a sentence containing no slashed existential quantifiers: φ is false in a model M iff the sentence (∀x)(∃y)(∀z/∃y) R(x, y, z) is false therein.

For each sentence φ, further sentences φ+ and φ− may be defined as follows: φ+ is obtained from φ by replacing each token of (∧/∃y1,…,∃yn) by ∧, and each token of (∀x/∃y1,…,∃yn) by (∀x). Similarly, φ− is obtained from φ by replacing each token of (∨/∀y1,…,∀yn) by ∨, and each token of (∃x/∀y1,…,∀yn) by (∃x). It can then be shown that φ is true in a model M iff φ+ is true in M; and φ is false in a model M iff φ− is false in M. Whenever we are interested in truth (falsity) alone, we may suppose that our IF first-order sentences are of the form φ+ (resp. φ−).[34] In the literature the interest almost always pertains to the truth of a sentence (cf., e.g., Hintikka 1991: 44, 1996: 150).

3.4. Extended IF first-order logic

If φ is a sentence of FO, ~φ is true in a model iff φ is not true therein. By contrast, in IF first-order logic falsity and non-truth do not coincide. An extension of IF first-order logic can be introduced, where it is possible to speak of the non-truth of sentences. To this end, let us introduce a new negation symbol, ¬, referred to as weak negation, contradictory negation or classical negation.[35] The set of formulas of extended IF first-order logic is obtained from the set of formulas of IF first-order logic by closing it under the operations ¬, ∧, and ∨:[36] every IF first-order formula is a formula of extended IF first-order logic, and if φ and ψ are formulas of extended IF first-order logic, then so are ¬φ, (φ ∧ ψ) and (φ ∨ ψ).

So if φ and ψ are IF first-order formulas, e.g. ¬φ and (¬φ ∨ ψ) are formulas of extended IF first-order logic; by contrast (∀x)¬φ is not. The semantics of a formula formed by contradictory negation is simply this:

M, g ⊨ ¬φ iff M, g ⊭ φ.

From the viewpoint of GTS, the connective ¬ behaves, however, in an unusual way. For all connectives of IF first-order logic, there is a game rule (which can be seen as specifying the meaning of the connective). For contradictory negation there is no game rule, and its semantics is not explained in terms of plays of semantic games. By contrast, a formula ¬φ serves to say, globally, something about an entire game G(φ, M, g). If φ is a sentence, to say that ¬φ is true in M is to say that player 2 does not have a winning strategy in game G(φ, M); otherwise ¬φ is by stipulation false in M.[37]

Not only is ¬φ not itself a formula of IF first-order logic, but it cannot in general even be expressed in IF first-order logic (see Subsect. 4.3). The law of excluded middle holds for the contradictory negation: for all sentences φ and all models M, indeed M ⊨ (φ ∨ ¬φ). In Section 5 it will be seen how Hintikka has proposed to make use of extended IF first-order logic when discussing issues in the philosophy of mathematics.

4. Metalogical Properties of IF First-order Logic

The metalogical properties of IF first-order logic have been discussed in several publications, by Hintikka as well as Sandu.[38] When presenting them, reference will be made to existential second-order logic (ESO);[39] further important notions are those of the skolemization and Skolem normal form of an IF first-order formula. For precise definitions of these notions, the supplementary document can be consulted. In brief, ESO is obtained from FO by allowing existential quantification over relation and function symbols in a first-order formula. The skolemization sk[φ] of an IF first-order formula φ is a formula of FO. It results from φ by introducing for every existential quantifier and disjunction symbol a new function symbol, eliminating all existential quantifiers of φ, replacing variables bound by the existential quantifiers in φ by function terms so that the arguments of these function terms are precisely those universally quantified variables on which the value of the corresponding existentially quantified variable depends in φ, and finally, explicating the dependence of a choice of a disjunct on the preceding universally quantified variables likewise using function terms. If φ is a formula of IF first-order logic of vocabulary τ, and sk[φ] is its skolemization of vocabulary τ′, with τ ⊆ τ′, the Skolem normal form SK[φ] of φ is the ESO[τ] formula ∃f1…∃fn sk[φ], where {f1,…, fn} = τ′ \ τ. The second-order formula SK[φ] (referred to when using capital letters) must not be confused with the first-order formula sk[φ] (referred to when using lower case letters).
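
For instance, for the sentence φ = ∀x∀y(∃z/∀y) R(x, y, z) of Section 1, the skolemization introduces a single new function symbol f whose only argument is the universally quantified variable on which z depends, giving sk[φ] = ∀x∀y R(x, y, f(x)) and hence SK[φ] = ∃f∀x∀y R(x, y, f(x)).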

4.1. First-order logic and existential second-order logic

Game-theoretical vs. Tarskian semantics of FO. The set of formulas of FO is a proper subset of the set of IF first-order formulas. The standard semantics of FO is not the one provided by GTS, but the Tarskian semantics specifying recursively the satisfaction relation M, g ⊨ φ. It turns out that if the Axiom of Choice is assumed,[40] the two semantics of FO coincide:

Theorem (assuming AC). (Hintikka & Kulas 1985: 6–7, Hodges 1983: 94) Let τ be any vocabulary, M any τ structure, g any variable assignment and φ any FO[τ] formula. Then M, g ⊨ φ holds in the standard sense iff there is a winning strategy for player 2 in game G(φ, M, g).[41]
Relation to ESO. IF first-order logic and ESO are intertranslatable:[42]
Theorem (assuming AC). ESO and IF first-order logic have the same expressive power.

That is, (1) for every IF first-order formula φ of vocabulary τ there is an ESO[τ] formula φ′ such that for all τ structures M and variable assignments g, we have: M, g ⊨ φ iff M, g ⊨ φ′. And (2) for every ESO[τ] formula ψ there is an IF first-order formula ψ′ of vocabulary τ such that M, g ⊨ ψ iff M, g ⊨ ψ′, for all τ structures M and variable assignments g. Let us consider the two directions one by one.

(1) Let φ be an IF first-order formula, and suppose M, g ⊨ φ. Then player 2 has a winning strategy F = {f1,…, fn} in game G(φ, M, g). Intuitively, a formula may be written down which says that a winning strategy for player 2 exists. Such a formula states the existence of suitable strategy functions, and so it will be an ESO formula. More precisely, consider a skolemization sk[φ] of φ. Interpreting each function symbol of sk[φ] that has been introduced by the skolemization procedure by the corresponding strategy function among f1,…, fn, and interpreting the constant symbol 0 (if it appears in sk[φ]) by an arbitrarily chosen element 0 of M,[43] yields a model satisfying sk[φ]: M,0, f1,…, fn, g ⊨ sk[φ]. Consequently the ESO formula SK[φ] = ∃0∃f1…∃fn sk[φ] is satisfied in M by g. Conversely, suppose a Skolem normal form ∃0∃f1…∃fn sk[φ] of an IF first-order formula φ is satisfied in M by g. Then there is an element 0 and functions f1,…, fn such that M,0, f1,…, fn, g ⊨ sk[φ]. Define a strategy for player 2 in game G(φ, M, g) by setting F = {f1,…, fn}. Clearly F is a winning strategy for player 2 in G(φ, M, g).

(2) That ESO can be translated into IF first-order logic follows from well-known results concerning the logic FPO. Krynicki (1993) has proven that for every finite partially ordered quantifier Q and first-order formula χ there is a natural number n and a first-order formula χ′ such that the formula Qχ is satisfied in M by g iff the formula

∀x1…∀xn ∃y
(*)   χ′
∀z1…∀zn ∃t

is satisfied in M by g. That (*) is satisfied in M by g means by definition that there are functions f and h such that

M, f, h, g ⊨ (∀x1)…(∀xn)(∀z1)…(∀zn)χ′[y/f(x1,…, xn), t/h(z1,…, zn)].[44]

It was proven by Enderton (1970) and Walkoe (1970), independently of each other, that there is an effective procedure assigning to every ESO formula φ a finite partially ordered quantifier Q and a quantifier-free first-order formula ψ such that φ is satisfied in a model M by an assignment g iff M, g ⊨ Qψ.[45] By Krynicki's result, then, any ESO formula φ is satisfied in M under g iff an FPO formula of the form (*) is satisfied in M by g. But a formula of that form is trivially translatable into IF first-order logic, the translation of (*) being (∀x1)…(∀xn)(∃y)(∀z1)…(∀zn)(∃t/∀x1,…,∀xn)χ′.

Hintikka suggests that IF first-order logic is, substantially speaking, a first-order logic: the entities its quantified variables range over are individuals (see Subsect. 5.1). A part of the interest of the intertranslatability theorem lies in the fact that if Hintikka's controversial claim is accepted, this would mean that the expressive power of ESO can actually be achieved on the first-order level.

IF first-order logic is more expressive than FO. The following are examples of properties expressible in ESO but not in FO: infinity of the domain, non-completeness of a linear order, ill-foundedness of a binary relation, disconnectedness of a graph, equicardinality of the extensions of two first-order formulas φ(x) and ψ(x), infinity of the extension of a first-order formula φ(x), and the topological notion of open set (see, e.g., Hintikka 1996, Väänänen 2007). When attention is restricted to finite models, Ronald Fagin's famous theorem (1974) connects ESO and the complexity class NP: a computational problem is solvable by an algorithm running in non-deterministic polynomial time iff it is definable in ESO relative to the class of all finite structures. The following properties are in NP (3-colorability and the existence of a Hamiltonian path being in fact NP-complete) and hence expressible in ESO, over the class of all finite models: evenness of the domain, oddness of the domain, 3-colorability of a graph, and the existence of a Hamiltonian path on a graph.[46] Because IF first-order logic has the same expressive power as ESO, it can express all these properties.

As an example the Dedekind-infinity of the domain may be considered. A given set S is Dedekind-infinite precisely when there exists an injective function from S to a proper subset of S. Let φinf be the following sentence of IF first-order logic:[47]

(∃t)(∀x)(∃z)(∀y)(∃v/∀x)((x = y ↔ z = v) ∧ z ≠ t).

The Skolem normal form of φinf is ∃f∃g∃t∀x∀y((x = y ↔ f(x) = g(y)) ∧ f(x) ≠ t). This ESO sentence asserts the existence of functions f and g and an element t such that f = g, the domain of this function is the whole domain of the model (implication from left to right), and the function is injective (implication from right to left), while the element t does not appear in the range of this function, the range being therefore a proper subset of the domain. In other words, the sentence φinf is true in a model M iff the domain of M is infinite.
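
Since no finite set admits an injection into a proper subset of itself, φinf is not true in any finite model; on small domains this can be confirmed by a brute-force search for witnesses of the Skolem normal form. A minimal illustrative sketch:

    # Sketch: search for witnesses t, f, g of EfEgEt AxAy((x = y <-> f(x) = g(y)) & f(x) != t)
    # on finite domains of size 1 to 4; none exist, so phi_inf is not true in any finite model.
    from itertools import product

    def has_witnesses(n):
        domain = range(n)
        for t in domain:
            for f_vals in product(domain, repeat=n):
                f = dict(zip(domain, f_vals))
                for g_vals in product(domain, repeat=n):
                    g = dict(zip(domain, g_vals))
                    if all(((x == y) == (f[x] == g[y])) and f[x] != t
                           for x in domain for y in domain):
                        return True
        return False

    print([has_witnesses(n) for n in range(1, 5)])   # [False, False, False, False]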

Properties in common with FO. IF first-order logic shares many metalogical properties with first-order logic.[48]

Compactness. A set of IF first-order sentences has a model iff all its finite subsets have a model.
Löwenheim-Skolem property. Suppose φ is an IF first-order sentence that has an infinite model, or arbitrarily large finite models. Then φ has models of all infinite cardinalities.

The separation theorem holds in IF first-order logic in a strengthened form; the ‘separation sentence’ θ is in particular a sentence of FO.

Separation theorem. Suppose φ is an IF first-order sentence of vocabulary τ, and ψ an IF first-order sentence of vocabulary τ′. Suppose further that φ and ψ have no models in common. Then there is a first-order sentence θ of vocabulary τ ∩ τ′ such that every model of φ is a model of θ, but θ and ψ have no models in common.

Let τ be a vocabulary, and P a fresh unary relation symbol. Let T be a set of IF first-order sentences of vocabulary τ ∪ {P}.[49] It is said that T implicitly defines P in terms of τ, if the following holds: If P′ is any fresh unary relation symbol, and T′ is the result of replacing P by P′ everywhere in T, then the sentence (∀x)(P(x) ↔ P′(x)) is a logical consequence of the set T ∪ T′ of IF first-order sentences. It is said that T explicitly defines P in terms of τ, if there exists an IF first-order formula of one free variable, φ(x), such that the sentence (∀x)(P(x) ↔ φ(x)) is a logical consequence of the set T. These notions are generalizable in a straightforward manner to arbitrary relation symbols. In FO the notions of implicit and explicit definability are connected by Beth's definability theorem. In IF first-order logic this result can be proven in the following strengthened form; the definiens φ(x1,…, xn) is a formula of FO.[50]

Beth's definability theorem. Let τ be a vocabulary, and let R be a fresh n-ary relation symbol. Let T be a set of IF first-order sentences of vocabulary τ ∪ {R}. If R is implicitly defined by T in terms of τ, then R is explicitly defined by T in terms of τ in the strong sense that there is a first-order formula φ(x1,…, xn) of vocabulary τ such that the sentence (∀x1)…(∀xn)(R(x1,…, xn) ↔ φ(x1,…, xn)) is a logical consequence of T.

It is well known that for FO there is a sound and complete proof procedure. Because a first-order sentence φ is inconsistent (non-satisfiable) iff its negation ~φ is valid (true in all models), it trivially follows that FO also has a sound and complete disproof procedure.[51] The latter property extends to IF first-order logic (while the former does not, see Subsect. 4.3):

Existence of a complete disproof procedure. The set of inconsistent IF first-order sentences is recursively enumerable.

An IF sentence φ of vocabulary τ is inconsistent iff there is no τ structure M in which the Skolem normal form SK[φ] = ∃f1…∃fn sk[φ] of φ is true iff there is no structure N of vocabulary τ ∪ {f1,…, fn} in which the skolemization sk[φ] of φ is true. Recall that while SK[φ] is a second-order sentence of vocabulary τ, the skolemization sk[φ] of φ is a first-order sentence of vocabulary τ ∪ {f1,…, fn}. By the equivalences just observed, together with the fact that there is a recursive enumeration of inconsistent sentences of FO, it follows that there is a recursive enumeration of inconsistent sentences of IF first-order logic.

4.2. Intricacies of negation

In Subsection 3.4, the contradictory negation ¬ was distinguished from the strong negation ~. In FO the two coincide: for any first-order sentence φ, we have M ⊨ ~φ iff M ⊨ ¬φ. But in IF first-order logic there are sentences φ and models M for which the law of excluded middle fails: M ⊭ (φ ∨ ~φ). This fact by itself still leaves open the possibility that the contradictory negation of every sentence φ of IF first-order logic could be defined in IF first-order logic, i.e., that there is a sentence neg(φ) of IF first-order logic such that M ⊨ neg(φ) iff M ⊭ φ, for all models M. All we know by the failure of the law of excluded middle is that not in all cases can neg(φ) be chosen to be ~φ. However, it turns out that generally, contradictory negation is inexpressible in IF first-order logic.

Inexpressibility of contradictory negation. IF first-order logic is not closed under contradictory negation. There are sentences φ of IF first-order logic such that ¬φ (which is a sentence of extended IF first-order logic) is not truth equivalent to any sentence of IF first-order logic. This follows from the well-known fact that ESO is not closed under negation and IF first-order logic has the same expressive power as ESO.[52]

Strong inexpressibility of contradictory negation. As a corollary to the separation theorem, the result holds in a much stronger form. If φ and ψ are sentences of IF first-order logic such that M ⊨ φ iff M ⊭ ψ, then each of φ and ψ is truth equivalent to a sentence of FO. Hence the contradictory negation ¬φ is only expressible in IF first-order logic for those IF first-order sentences φ that are truth equivalent to an FO sentence.[53]

As stressed by Väänänen (2002: 352), the fact that a sentence of IF first-order logic is truth equivalent to a first-order sentence does not mean that the former otherwise behaves like a first-order sentence. The sentence (∀y)(∃x/∀y) x = y, say, is truth equivalent to the FO sentence (∃x)(∀y) x = y. So its contradictory negation is definable in IF first-order logic as (∀x)(∃y) x ≠ y. It must be noted, however, that the contradictory negation is here found via semantic considerations.[54] There is no uniform syntactic operation that produces a contradictory negation from any sentence of IF first-order logic that in fact is truth equivalent to a sentence of FO (cf. Burgess 2003).

Determined fragment. Let us say that an IF first-order sentence φ is determined iff the semantic game G(φ, M) is determined for all models M. So φ is determined iff it satisfies: M ⊨ (φ ∨ ~φ), for all models M. The determined fragment of IF first-order logic is the set of determined IF first-order sentences. If φ is determined, then ~φ is true in M iff φ is not true in M: in the determined fragment contradictory negation is syntactically expressible by the strong negation. By the strong inexpressibility of contradictory negation, the determined fragment of IF first-order logic has the same expressive power as FO. Observe that membership in the determined fragment is a sufficient but not necessary condition for an IF first-order sentence to have its contradictory negation definable in IF first-order logic. The sentence (∀y)(∃x/∀y) x = y is not determined;[55] yet its contradictory negation (∀x)(∃y) x ≠ y is expressible in IF first-order logic.

Contradictory negation and GTS. The truth-conditions that GTS yields are of the form ‘there are strategy functions f1,…, fn such that—,’ i.e., it gives rise to truth-conditions expressible in ESO. On the other hand, by the strong inexpressibility of contradictory negation there is no single IF first-order sentence, not translatable into FO, whose contradictory negation has a truth-condition of that form. This fact makes it understandable why contradictory negation should not be expected to admit of a game-theoretical interpretation, at least not along the same lines in which the other logical operators are interpreted.

Contradictory negation and finite models. Certain major open questions in logic and theoretical computer science can be formulated in terms of IF first-order logic. It is an open question in complexity theory whether NP = coNP, that is, whether the class of NP-solvable problems is the same as the class of problems whose complement is solvable in NP. By Fagin's theorem (1974) this is equivalent to the question whether the set of all IF first-order sentences is closed under negation over finite models. It is, then, an open question whether for every IF sentence φ there is another IF sentence neg(φ) such that for any finite model M, neg(φ) is true in M iff φ is not true in M. Proving that the answer is negative would settle the notorious P = NP problem, that is, establish that not all problems solvable in non-deterministic polynomial time are solvable by algorithms running in deterministic polynomial time.[56]

Negation in natural languages. In English, negation appears in two shapes, verbal (constituent) and sentential. This distinction can be compared with the distinction between strong negation (~) and contradictory negation (¬), though the two dichotomies must not be intermingled.[57] Prefixed to an atomic logical formula there is no difference between the two negations (on standard two-valued models): player 1 has a winning strategy for the game corresponding to an atomic formula R(t1,…, tn) iff player 2 does not have a winning strategy in that game. Similarly, verbal negation is relatively unproblematic.

On the other hand, there are systematic problems with the possibility of providing formation rules for sentential negation in English; an argument can even be put forward to the effect that no such recursive rules exist (Hintikka & Kulas 1983). If so, contradictory sentential negation in English can, in general, only be formed by means of artificial prefixes such as ‘it is not the case that.’ Analogously, if φ is a sentence of IF first-order logic, its contradictory negation ¬φ cannot in general be expressed in IF first-order logic. By contrast, ¬φ can be expressed in extended IF first-order logic, but there the negation sign ¬ cannot be pushed deeper into φ from its sentence-initial position, so as to appear in the scope of a quantifier. This is because there are no game rules for the contradictory negation. It can be meaningfully used in GTS only to effect a statement concerning an entire game.[58]

4.3. Failure of axiomatizability

As is well known, FO admits of a sound and complete proof procedure: there is a mechanical way of generating precisely those first-order sentences that are valid (true in all models). This fact can also be expressed by saying that FO is axiomatizable, or that the set of valid sentences of FO is recursively enumerable.[59] Due to its greater expressive power, axiomatizability fails for IF first-order logic. This may also be expressed by saying that IF first-order logic is semantically incomplete.[60]

One way to show this is as follows. Suppose for the sake of contradiction that the set of valid IF first-order sentences is recursively enumerable. Recall that the sentence φinf discussed in Subsection 4.1 is true in all and only infinite models. Note, then, that an FO sentence χ is true in all finite models iff the IF first-order sentence (φinf ∨ χ) is valid. Given a valid IF first-order sentence, it can be effectively checked whether the sentence is syntactically of the form (φinf ∨ χ), where χ is a first-order sentence. Hence the recursive enumeration of all valid IF first-order sentences yields a recursive enumeration of first-order sentences χ true in all finite models. But this contradicts Trakhtenbrot's theorem, according to which the set of FO sentences true in all finite models is not recursively enumerable.[61]

What is the relevance of the failure of axiomatizability of IF first-order logic? Discussing finite partially ordered quantifiers, Quine (1970: 89–91) suggests that we should refuse to give the status of logic to any generalization of FO which does not have a sound and complete proof procedure both for validity and for inconsistency. For Quine, any such generalization belongs to mathematics rather than logic. Since FPO is not axiomatizable, it falls outside the realm of logic, thus delineated.[62]

Hintikka finds this type of allegation unfounded. First, IF first-order logic shares many of the important metalogical results with FO (cf. Subsect. 4.1). Second, just like IF first-order logic, FO can also be translated into second-order logic. The only difference is that in the former case a larger variety of quantifier (in)dependencies must be encoded by Skolem functions than in the latter. Why would the former translation render IF first-order logic a part of mathematics while the latter would allow FO to remain a logic (Hintikka 1991: 26–27)? Third, one must make a distinction between what is needed to understand an IF first-order sentence, and what is needed to mechanically deal with the validities (logical truths) of IF first-order logic. Due to its non-axiomatizability, there are no mechanical rules for generating the set of all validities of IF first-order logic. On the other hand, to understand a sentence is to know what things are like when it is true, not to know what things are like when it is logically true (Hintikka 1995: 13–14). Fourth, because of non-axiomatizability, the valid inference patterns in IF first-order logic cannot be exhausted by any recursive enumeration. Insofar as important mathematical problems can be reduced to questions about the validity of IF first-order formulas (Subsect. 5.3), progress in mathematics can be seen to consist (not of the discovery of stronger set-theoretical axioms but) of ever more powerful rules for establishing validity in IF first-order logic (Hintikka 1996: 100; 2000: 135–136).

4.4 Compositionality and the failure of Tarski-type semantics

The principle of compositionality (a.k.a. Frege principle) states, in its most general form, that semantic attributes of a complex expression E are determined by the semantic attributes of its constituent expressions and the structure of E. In particular, the semantic attribute of interest (e.g., truth) may be determined in terms of one or more auxiliary semantic attributes (e.g., satisfaction).[63] Hintikka has argued that compositionality amounts to semantic context-independence: semantic attributes of a complex expression depend only on the semantic attributes of its constituent expressions, plus its structure – they do not depend on the sentential context in which the expression is embedded. Semantic context-independence makes it possible to carry out semantic analysis from the inside out – from simpler expressions to more complex ones.[64] This is what is needed for recursive definitions of semantic attributes – such as Tarski-type definitions of truth and satisfaction – to be possible.[65] By contrast, the GTS analysis of sentences is an outside-in process: a semantic game starts with an entire sentence, and stepwise analyzes the sentence into simpler and simpler components, eventually reaching an atomic formula (together with an appropriate variable assignment). Therefore GTS allows accounting for semantic context-dependencies that violate the principle of compositionality.[66]

In IF first-order logic an existential quantifier may depend only on some of the universal quantifiers in whose scope it lies. Accordingly, its interpretation depends on its relation to quantifiers outside its own scope. Such an existential quantifier is context-dependent.[67] On the face of it, then, IF first-order logic cannot but violate the principle of compositionality and does not admit of a Tarski-type truth-definition.

Hodges (1997a,b) showed, however, that IF first-order logic can be given a compositional semantics.[68] The semantics is given by recursively defining the satisfaction relation ‘M ⊨X φ’ (read: φ is satisfied in M by X), where X is a set of variable assignments. While the Tarskian semantics for FO is in terms of single variable assignments, Hodges's semantics employs sets of variable assignments. The game-theoretical semantics of IF first-order logic is captured by this compositional semantics: for every formula φ of IF first-order logic, player 2 has a winning strategy in G(φ, M, g) iff the condition M ⊨{g} φ holds.
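
To convey how a semantics based on sets of assignments works, here is a minimal Python sketch for finite models, restricted to formulas in negation normal form built from atoms, ∧, ∨, ∀x and slashed ∃. The tuple representation of formulas and the brute-force search over choice functions are illustrative assumptions of the sketch, not part of Hodges's own presentation, and only the ‘truth’ half of the semantics (the existence of a winning strategy for player 2) is covered.

  from itertools import product

  def sat(dom, interp, phi, team):
      """Satisfaction of an IF formula phi by a set of assignments (a 'team').
      An assignment is represented as a frozenset of (variable, value) pairs."""
      op = phi[0]
      if op == 'atom':                      # ('atom', P, (x1, ..., xn))
          pred, args = phi[1], phi[2]
          return all(tuple(dict(s)[x] for x in args) in interp[pred] for s in team)
      if op == 'and':                       # ('and', phi1, phi2)
          return sat(dom, interp, phi[1], team) and sat(dom, interp, phi[2], team)
      if op == 'or':                        # ('or', phi1, phi2): split the team in two
          members = list(team)
          for bits in product([0, 1], repeat=len(members)):
              left  = {s for s, b in zip(members, bits) if b == 0}
              right = {s for s, b in zip(members, bits) if b == 1}
              if sat(dom, interp, phi[1], left) and sat(dom, interp, phi[2], right):
                  return True
          return False
      if op == 'forall':                    # ('forall', x, phi1): extend with every value
          x, phi1 = phi[1], phi[2]
          new_team = {frozenset(dict(s, **{x: a}).items()) for s in team for a in dom}
          return sat(dom, interp, phi1, new_team)
      if op == 'exists':                    # ('exists', x, slashed_vars, phi1)
          x, slashed, phi1 = phi[1], set(phi[2]), phi[3]
          members = list(team)
          # the value chosen for x may depend only on the variables NOT slashed
          def visible(s):
              return frozenset(p for p in s if p[0] not in slashed)
          keys = list({visible(s) for s in members})
          for values in product(dom, repeat=len(keys)):
              F = dict(zip(keys, values))
              new_team = {frozenset(dict(s, **{x: F[visible(s)]}).items()) for s in members}
              if sat(dom, interp, phi1, new_team):
                  return True
          return False
      raise ValueError('unknown operator: %r' % op)

  # Example: ∀x(∃y/∀x) x = y is satisfied by the empty assignment exactly in
  # one-element domains, since the value chosen for y may not vary with x.
  phi = ('forall', 'x', ('exists', 'y', ('x',), ('atom', 'eq', ('x', 'y'))))
  for dom in ([0], [0, 1]):
      interp = {'eq': {(a, a) for a in dom}}
      print(len(dom), sat(dom, interp, phi, {frozenset()}))   # 1 True, then 2 False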

While IF first-order logic can, then, be given a compositional semantics, it is methodologically worth pointing out that compositionality is not needed for defining IF first-order logic. The very existence of IF first-order logic proves that rejecting compositionality is no obstacle for the formulation of a powerful logic (Hintikka 1995). It should also be noted that what makes Hodges's result work is its type-theoretical ascent. Let us say that a Tarski-type compositional semantics is a compositional semantics which interprets each formula φ(x1,…, xn) of n free variables in terms of an n-tuple of elements of the domain. Hence the standard semantics of FO is Tarski-type, but Hodges's semantics employing the satisfaction relation ‘M ⊨X φ’ is not, because the latter evaluates a formula φ(x1,…, xn) of n free variables relative to an entire set of n-tuples of elements. Cameron & Hodges (2001) proved that actually there is no Tarski-type compositional semantics for IF first-order logic.

The notion of compositionality can be refined by imposing constraints on the auxiliary semantic attributes. Sandu and Hintikka (2001: 60) suggested, by analogy with FO, that ‘satisfaction by a single variable assignment’ would be a natural auxiliary attribute in connection with IF first-order logic. By the result of Cameron and Hodges, no semantics for IF first-order logic exists that would be compositional in this restricted sense.

Making use of the ideas of Hodges's compositional semantics, Väänänen (2002) showed how to define a two-player game of perfect information which captures the same semantics as the two-player game of imperfect information explained in Subsection 3.2.[69] The definition turns on invoking higher-order entities: positions in Väänänen's game of perfect information are in terms of sets of variable assignments (not single variable assignments), and existential quantifiers and disjunctions are interpreted by the players' choices of entire functions (not individuals). Hintikka remarks (2006: 65) that if one is sufficiently ruthless, one can always save compositionality by building the laws of semantic interaction of different expressions into the respective meanings of those expressions.[70]

4.5 Defining truth

The definability of truth can only be discussed in connection with languages capable of speaking of themselves. Let us consider an arithmetical vocabulary τ and restrict attention to the standard model N of Peano's axioms. Each sentence φ of vocabulary τ can then be represented by a natural number ⌜φ⌝, its Gödel number. It is assumed that τ contains a numeral for each natural number, so that ⌜φ⌝ may also be read as the corresponding numeral. If L and L′ are abstract logics of vocabulary τ, such as FO or IF first-order logic, and TRUE(x) is a formula of L′ such that every sentence φ of L satisfies:

N ⊨ φ iff N ⊨ TRUE(⌜φ⌝),

then TRUE(x) is said to be a truth-predicate (explicit truth-definition) of logic L in logic L′ for the model N. By Alfred Tarski's famous theorem of the undefinability of truth (Tarski 1933), there is no truth-predicate for FO in FO itself for the model N. More generally, Tarski showed that under certain assumptions, a truth-definition for a logic L can only be given in a metalanguage which is essentially stronger than L. One of the assumptions is that the negation used behaves like contradictory negation. On the other hand, Tarski also pointed out that an implicit truth-definition for FO in FO itself is possible. Let τ be an arithmetical vocabulary, and let us stay with the standard model N of Peano's axioms. Let ‘TRUE’ be a unary predicate not appearing in τ. An FO formula ψ(x) of vocabulary τ ∪ {TRUE} is an implicit truth-definition for FO[τ] in FO[τ ∪ {TRUE}] for N, if for every FO[τ] sentence φ, the following holds:

N ⊨ φ iff there is an interpretation TRUEN of the unary predicate TRUE such that (N, TRUEN) ⊨ ψ(⌜φ⌝).[71]

Intuitively, TRUEN is the set of those natural numbers that are Gödel numbers of true arithmetic sentences of vocabulary τ; and ψ(x) says that x is a Gödel number in the extension of the predicate TRUE. The formula ψ(x) is a conjunction including clauses that mimic, on the object-language level, the metalogical recursive clauses of the Tarski-type truth-definition for FO. E.g., one of the conjuncts is (∀y)(∀z)((y = ⌜χ⌝ ∧ z = ⌜θ⌝ ∧ x = ⌜χ ∧ θ⌝) → [TRUE(x) ↔ TRUE(y) ∧ TRUE(z)]). The implicit nature of the truth-definition is best seen from the fact that the predicate TRUE appears on both sides of the equivalence sign in these clauses.[72] The above implicit truth-definition of FO for N is of the form ‘there is a set S interpreting the predicate TRUE such that ψ(x).’ Thus the implicit truth-definition of FO in FO gives rise to the explicit truth-definition ∃TRUE ψ(x) of FO[τ] in ESO[τ] for N. Since ESO and IF first-order logic have the same expressive power, a truth-predicate of FO for N can be formulated in IF first-order logic.
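
The implicit character of the definition can be made concrete with a toy computation. In the following Python sketch (whose string representation of sentences and helper conj are illustrative assumptions, not notation from the literature), a candidate extension for TRUE is checked against the object-level analogue of the Tarski clause for conjunction; note that the candidate occurs on both sides of the equivalence being checked.

  def conj(a, b):
      # stands in for the Goedel number of the conjunction of the sentences a and b
      return '(' + a + ' & ' + b + ')'

  def respects_conjunction_clause(candidate_true, sentences):
      # object-level analogue of the Tarski clause for conjunction:
      # for all y, z: TRUE(conj(y, z)) iff TRUE(y) and TRUE(z)
      return all(
          (conj(y, z) in candidate_true) == ((y in candidate_true) and (z in candidate_true))
          for y in sentences for z in sentences
      )

  base = ['0=0', '0=1']
  print(respects_conjunction_clause({'0=0', conj('0=0', '0=0')}, base))   # True
  print(respects_conjunction_clause({'0=0', '0=1'}, base))                # False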

The same reasoning can be applied to ESO itself, and thereby to IF first-order logic (Hintikka 1991, 1996; Hyttinen & Sandu 2000; Sandu 1996, 1998). Namely, an ESO formula χ(x) of vocabulary τ ∪ {TRUE} can be formulated which is an implicit truth-definition of ESO[τ] for N. Therefore the ESO[τ] formula ∃TRUE χ(x) is an explicit truth-definition of ESO[τ] for N. Here the truth-predicate is formulated in the very same language whose notion of truth is being defined: ESO. Hence ESO, and thereby IF first-order logic, is capable of explicitly defining its own truth-predicate relative to N.[73] This result does not contradict Tarski's undefinability result, because here non-determined sentences are possible; the negation used is not contradictory negation.[74]

Tarski (1983) adopted a view according to which truth cannot be defined for natural languages. It has been argued in Hintikka & Sandu (1999) that this was due to Tarski's belief that compositionality fails in natural languages.[75] IF first-order logic does not have a Tarski-type compositional semantics, but it admits of formulating a self-applied truth-predicate. Hence the reason Tarski supposedly had for holding that the notion of truth cannot be discussed in the natural language itself loses its force from the viewpoint of IF first-order logic.[76]

Model theory can be seen to depend on truth-definitions: to specify when M is a model of a sentence φ, one must specify the conditions under which φ is true in M. So the possibility and presuppositions of truth-definitions are directly related to the philosophical viability of model theory. By Tarski's result a truth-definition for a first-order language can only be formulated in a second-order language. According to Hintikka, IF first-order logic is, substantially speaking, a first-order logic (see Subsect. 5.1). If so, IF first-order logic helps to define the truth-predicate on the first-order level. This would lead to a ‘declaration of independence of model theory’: model theory could be developed without set existence problems that beset second-order logic.[77] Sandu & Hyttinen (2001) call into question Hintikka's suggestion (Hintikka 1996: 97) that the truth-predicate might be useful in searching for stronger and stronger axiom systems for elementary arithmetic, while de Rouilhan & Bozon (2006: 690) argue that Hintikka's result does not suffice for removing the dependence of model theory on problems of set existence, because the model-theoretical criterion of the adequacy of the truth-predicate is not itself expressible in terms of IF first-order logic.[78]

4.6 Properties of extended IF first-order logic

Expressive power. Since IF first-order logic is not closed under contradictory negation, extended IF first-order logic is strictly more expressive than IF first-order logic (cf. Subsect. 4.2).[79] The following properties are expressible in extended IF first-order logic but not in IF first-order logic (Hintikka 1996: 188–190): finiteness of the domain, well-foundedness of a binary relation, connectedness of a graph, the principle of mathematical induction, the Bolzano-Weierstrass theorem, and the topological notion of continuity. The notion of power set can also be expressed, in the following sense: extended IF first-order logic can express, of predicates A(z) and B(z), that the cardinality of the extension of B(z) is 2^κ, given that the cardinality of the extension of A(z) is κ.

Metalogical properties. The nice metatheorems that IF first-order logic shares with FO are lost: Compactness, the Löwenheim-Skolem property, the Separation theorem, Beth's definability theorem and the existence of a complete disproof procedure all fail for extended IF first-order logic (Hintikka 1991: 49, 1996: 189). Furthermore, no self-applied truth-predicate is possible for extended IF first-order logic. The definition of such a truth-predicate would have to contain the clause (∀y)((y = ⌜θ⌝ ∧ x = ⌜¬θ⌝) → [TRUE(x) ↔ ¬TRUE(y)]). But this clause is not a well-formed formula of extended IF first-order logic, since ¬ appears in the scope of the universal quantifier (∀y) (cf. Hintikka 1996: 151).

There are also positive results. First, there is a complete proof procedure for sentences of the form ¬φ, where φ is a sentence of (unextended) IF first-order logic. This is because ¬φ is valid iff the IF first-order sentence φ is inconsistent, and there is a complete disproof procedure for IF first-order sentences (Hintikka 1996: 152; cf. Subsect. 4.1). Second, the notion of logical consequence can be dealt with: ψ is a logical consequence of φ iff the sentence (¬φ ∨ ψ) is valid (Hintikka 1996: 149). Third, the validity problem of full second-order logic can be effectively reduced to the validity problem of extended IF first-order logic. Namely, why can a second-order sentence not simply be thought of as a two-sorted first-order sentence? Because in order to capture the standard interpretation of second-order logic,[80] it must be said that for every extensionally possible set of n-tuples of elements of sort 1 there exists a member of sort 2 having those and only those elements as members, for all arities n such that the second-order sentence contains a quantifier (∃R) where R is n-ary.[81] Such additional conditions can be expressed by a finite conjunction X of contradictory negations of ESO sentences. So there is an ESO sentence Y such that X itself is logically equivalent to ¬Y. Therefore, if φ is a second-order sentence and φ* its reconstruction in two-sorted first-order logic, we have: φ is valid iff (¬X ∨ φ*) is valid iff (Y ∨ φ*) is valid. Here the validity of (¬X ∨ φ*) means that φ* is a logical consequence of X. Because Y is an ESO sentence, it can be expressed in (unextended) IF first-order logic. It follows that the validity of any second-order sentence can be expressed as the validity of a sentence of (unextended) IF first-order logic.[82]

Algebraic structure. The two negations available in extended IF first-order logic, ¬ and ~, agree on true sentences, as well as on false ones: if φ is true (false) in M, then both ~φ and ¬φ are false (true) in M. By contrast, if φ is non-determined in M, then ~φ is non-determined as well, but ¬φ is true. The combination ¬~ of the two negations applied to a sentence φ asserts that φ is not false.[83]

The propositional part of extended IF first-order logic involves the four operators ¬, ~, ∧, and ∨. Hintikka (2004b) raises the question of the algebraic structure induced by these operators, when any two truth equivalent sentences are identified. The operators ¬, ∧, and ∨ give rise to a Boolean algebra – but what does the strong negation ~ add to this structure?

Restricting attention to truth equivalence, ~ is definable from the operators ¬ and ¬~. For, ~φ is true in M iff ¬(¬~φ) is true in M. Instead of ~, the operator ¬~ may then be considered. Hintikka points out that the propositional part of extended IF logic (formulated in terms of the operators ∨, ∧, ¬ and ¬~) is a Boolean algebra with an operator in the sense of Jónsson & Tarski (1951). The additional operator ¬~ is actually a closure operator.

Jónsson and Tarski (1951, Thm. 3.14) showed that any closure algebra is isomorphic to an algebraic system formed by a set equipped with a reflexive and transitive relation.[84] As a matter of fact, the relevant algebraic structure is precisely that of the propositional modal logic S4. Hence the propositional part of extended IF first-order logic has the same algebraic structure as S4. By a well-known result due to Gödel (1933) and McKinsey & Tarski (1948), intuitionistic propositional logic can be interpreted in S4, via a translation t such that φ is intuitionistically provable iff t(φ) is a valid S4 formula. Thus intuitionistic propositional logic is interpretable in extended IF first-order logic.[85]

5. Philosophical Consequences

Hintikka (2006: 73–77) lists the following as consequences of the novel insights made possible by (extended) IF first-order logic: reconstruction of normal mathematical reasoning on the first-order level, a novel perspective on the notion of truth in axiomatic set theory, insights into the nature of negation in logic and in natural languages, the formulation of a self-applied truth-predicate, and the conceptual problem of mutually dependent variables. A related topic of general interest is the phenomenon of informational independence in natural languages.

The ideas related to negation and definability of truth have been discussed in Subsections 4.2 and 4.5, respectively. Hintikka (2002b) has suggested that noncommuting variables in quantum theory are irreducibly interdependent, and proposed IF logic – broadly conceived – as a natural framework for studying certain quantum mechanical phenomena. Mutual dependencies between variables would be represented by mutual dependencies of quantifiers to which these variables are bound. Such mutually dependent quantifiers are not, however, expressible in IF first-order logic, but call for a generalized framework. A framework of this kind is gradually emerging (cf. Hintikka 2006: 74, 515, 752–753; 2008). Let us consider here the remaining issues.

5.1 Place in type hierarchy

Hintikka maintains that the only reasonable way of making a distinction between first-order logic and higher-order logic is by reference to the entities that one's quantified variables range over. A first-order logic is, then, a logic in which all quantifiers range over individuals, in contrast to higher-order entities (e.g., subsets of the domain). On this basis Hintikka holds that substantially speaking, IF first-order logic and even extended IF first-order logic are first-order logics.[86] Solomon Feferman (2006: 457–461) criticizes the criterion that Hintikka employs for judging the first-order status of a logic. Feferman makes use of generalized quantifiers in his argument.[87] The formulas

Q[z1]…[zk] (φ1,…,φk)

involving generalized quantifiers are syntactically first-order, insofar as the quantified variables zi1,…, zini (jointly written [zi]) are first-order (1 ≤ i ≤ k). The semantics of a generalized quantifier Q is formulated by associating with each domain M a k-ary relation QM among relations on M, with QM ⊆ ℘(Mn1) × … × ℘(Mnk), where ℘ denotes the power set operation. E.g., for any infinite cardinal κ, there is a generalized quantifier Q≥κ such that Q≥κz P(z) is true in a model M iff there are at least κ elements that satisfy the predicate P. Hence generalized quantifiers can be semantically higher-order. The fact that the variables in a formula range over individuals only does not offer a reliable criterion for the logic's first-order status.
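
For a concrete finite analogue of Q≥κ, consider ‘there are at least k elements satisfying P.’ The following minimal Python sketch (an illustration, not Feferman's own formulation) makes the point visible: the bound variable ranges over individuals, but the quantifier itself is in effect a property of subsets of the domain, that is, a higher-order object.

  def at_least(k):
      # the type <1> quantifier 'there are at least k x such that P(x)'
      def Q_M(domain, extension_of_P):
          # semantically, Q_M holds of exactly those subsets of the domain
          # that contain at least k elements
          return len(extension_of_P & set(domain)) >= k
      return Q_M

  domain = range(10)
  evens = {x for x in domain if x % 2 == 0}
  print(at_least(3)(domain, evens))   # True
  print(at_least(6)(domain, evens))   # False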

Hintikka's criterion could be reformulated by saying that a logic is of first order, if any play of a semantic game associated with a formula of this logic only involves (in addition to choices interpreting conjunctions and disjunctions) choices of individuals, as opposed to choices of higher-order entities. By this criterion IF first-order logic (and even extended IF first-order logic) are first-order logics, but the logic of generalized quantifiers such as Q≥κ is not.[88] Feferman (2006: 461) anticipates the possibility of such a reply, but finds it unconvincing.

By a result of Hintikka (1955), the problem of deciding whether a sentence of second-order logic is valid can be effectively reduced to the validity problem of IF first-order logic.[89] Väänänen (2001) has shown that the set of valid sentences of IF first-order logic has the same very high complexity as the set of validities of the full second-order logic.[90] Väänänen (2001) and Feferman (2006) conclude that speaking of validity in IF first-order logic leads to a strong commitment to full second-order logic. Hintikka (2006: 476–477) looks at these results from the opposite direction: for him they mean that indeed one can speak of validity in full second-order logic in terms of validity in IF first-order logic. What is more, Hintikka (1997) affirms that even extended IF first-order logic is a first-order logic. If so, any mathematical theory that can be expressed by the truth of an extended IF first-order sentence is likewise free from problems of set existence.

Hintikka's position remains puzzling: the validity of a sentence means the existence of a winning strategy for player 2 in the game correlated with this sentence and any model. How can commitment to entities other than individuals be said to have been avoided, when considering, say, the validity of an extended IF first-order sentence ¬∀xyz(∃w/∀x) R(x, y, z, w)? This sentence is valid iff for all models M and functions f : M → M and g : M → M, the condition R(a, f(a), b, g(b)) fails, for some elements a and b. Can the meaning of the sentence be well understood without presupposing the genuinely second-order idea of all subsets (or all functions)?[91]
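
The worry can be illustrated even for a small finite model: checking the truth of the sentence in question already involves a quantification over all functions on the domain. The following Python sketch, with illustrative choices of R, spells this out.

  from itertools import product

  # ¬∀x∃y∀z(∃w/∀x) R(x, y, z, w) is true in a finite model M iff for ALL functions
  # f, g : M -> M there are elements a, b with R(a, f(a), b, g(b)) false.
  def negated_sentence_true(M, R):
      functions = [dict(zip(M, values)) for values in product(M, repeat=len(M))]
      return all(
          any(not R(a, f[a], b, g[b]) for a in M for b in M)
          for f in functions for g in functions
      )

  M = [0, 1, 2]
  # False: f = g = identity verifies the unnegated sentence
  print(negated_sentence_true(M, lambda x, y, z, w: y == x and w == z))
  # True: no pair of functions f, g works for the unnegated sentence
  print(negated_sentence_true(M, lambda x, y, z, w: y == z and w == x))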

5.2 Philosophy of set theory

According to Hintikka, our pretheoretical idea of the truth of a quantified sentence φ (in negation normal form) is that there exist ‘witness individuals’ for the existential quantifiers, usually depending on values corresponding to the preceding universal quantifiers.[92] It is the existence of such witnesses that constitutes the truth of φ. Providing witnesses is precisely what Skolem functions for φ do.[93] The truth of a quantified sentence φ amounts to the existence of a full set of Skolem functions for φ. In Hintikka's view, then, our ordinary notion of first-order truth is conceptualized in terms of (existential) second-order logic. What happens when this idea is applied to axiomatic set theory, say Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC)? It should be borne in mind that the very idea of axiomatic set theory is to dispense with higher-order logic; its underlying logic is taken to be FO. Hintikka argues as follows.[94]
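
In a finite model the idea can be checked directly: the truth of, say, ∀x∃y R(x, y) amounts to the existence of a Skolem function f with R(a, f(a)) for every a. A minimal Python sketch, with illustrative relations, follows.

  from itertools import product

  def skolem_function(M, R):
      # search for a function f : M -> M such that R(a, f(a)) holds for every a in M
      for values in product(M, repeat=len(M)):
          f = dict(zip(M, values))
          if all(R(a, f[a]) for a in M):
              return f          # a witnessing Skolem function: ∀x∃y R(x, y) is true
      return None               # no Skolem function: the sentence is false in M

  M = [0, 1, 2, 3]
  print(skolem_function(M, lambda x, y: y == (x + 1) % 4))   # {0: 1, 1: 2, 2: 3, 3: 0}
  print(skolem_function(M, lambda x, y: x < y))              # None: 3 has no witness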

For each sentence φ of ZFC, there is another sentence φ* = ∃f1…∃fnψ* of ZFC which says, intuitively, that ‘Skolem functions’ for φ exist. These ‘Skolem functions’ are certain individuals of the domain of a model of ZFC. Here both φ* and φ are first-order sentences. But if, for every sentence φ of ZFC, φ and φ* were logically equivalent, why could the sentences φ* not be used to formulate a truth-predicate for ZFC in ZFC itself? However, by Tarski's undefinability result, no such truth-predicate exists.[95] So there must be a model of ZFC and a sentence φ true in that model such that φ* is false: not all ‘Skolem functions’ asserted to exist by φ* actually exist in the model.

This reasoning shows that ZFC does not fully capture the idea of truth according to which the truth of a sentence φ means that the Skolem functions for φ exist. Furthermore, it also shows that the standard interpretation of higher-order logic is not fully captured by ZFC. To see this, observe that for every sentence φ there is a logically equivalent second-order sentence φ** = ∃F1…∃Fnψ** actually asserting the existence of Skolem functions for φ. The first-order sentence φ* = ∃f1…∃fnψ* must not be confused with the second-order sentence φ** = ∃F1…∃Fnψ**. The Skolem functions Fi of which the sentence φ** speaks are sets built out of individuals of the set-theoretical universe, while the ‘Skolem functions’ fi spoken of by φ* are individuals.[96]

The conclusion of Hintikka's argument is that our ordinary notion of truth is misrepresented by ZFC. Furthermore, by Tarski's undefinability result the situation cannot be improved by adding further axioms to ZFC. In Hintikka's judgment, axiomatic set theory is a systematic but futile attempt to capture on the first-order level truths of standardly interpreted second-order logic. Like Gödel (1947), Hintikka holds that the concepts needed to state, say, the continuum hypothesis are sufficiently well-defined to determine the truth-value of this conjecture. The continuum hypothesis does not receive its meaning from phrasing it in ZFC. Gödel and Hintikka agree that the independence results due to Gödel himself and Paul Cohen do not by themselves show anything about the truth or falsity of the continuum hypothesis. But unlike Gödel, Hintikka finds the derivability of any conjecture whatever in ZFC (or in any of its extensions) simply irrelevant for the truth of the conjecture. For Hintikka, it is a ‘combinatorial’ question whether every infinite subset of the reals is either countable or else has the cardinality of the set of all reals – a question properly conceptualized within second-order logic. This is what Hintikka takes to be the pretheoretical sense of the truth of the continuum hypothesis, and that is not captured by ZFC.[97]

5.3 Extended IF first-order logic and mathematical theorizing

Hintikka sees extended IF first-order logic as making it possible to reconstruct all normal mathematical reasoning on the first-order level. This result is essentially dependent on the acceptability of Hintikka's claim that extended IF first-order logic is ontologically committed to individuals only (Subsect. 5.1). But how would extended IF first-order logic serve to reconstruct an important part of all mathematical reasoning?

Hintikka (1996: 194–210) discusses mathematical theories (or mathematical axiomatizations) and mathematical problems (or questions of logical consequence) separately.

Any higher-order mathematical theory T gives rise to a many-sorted first-order theory T*. If the theory is finite, there is a finite conjunction J formulated in extended IF first-order logic (equivalent to a contradictory negation of an ESO sentence) expressing the requirement that the standard interpretation of higher-order logic is respected. The question of the truth of the higher-order theory T is thus reduced to the truth of the sentence (T* ∧ J) of extended IF first-order logic (cf. Subsect. 4.6).

The mathematical problem of whether a given sentence C is a logical consequence of a finite higher-order theory T coincides with the problem of whether the higher-order sentence (¬T* ∨ C) is valid. But then there is a sentence of (unextended) IF first-order logic which is valid iff the sentence (¬T* ∨ C) is valid (cf. Subsect. 4.6). Mathematical problems can be understood as questions of the validity of an unextended IF first-order sentence. Among mathematical problems thus reconstructible using IF first-order logic are the continuum hypothesis, Goldbach's conjecture, Souslin's conjecture, the existence of an inaccessible cardinal, and the existence of a measurable cardinal.[98]

As a conceptualization apparently transcending the proposed framework – not expressible in a higher-order logic – Hintikka considers the maximality assumption expressed by David Hilbert's so-called axiom of completeness. The axiom says that no mathematical objects can be added to the intended models without violating the other axioms.[99]

If indeed problems related to the idea of all subsets are avoided in extended IF first-order logic (Hintikka 1997), it offers a way of defending a certain form of logicism. Unlike in historical logicism, however, the idea is not to consider logic as an axiom system on the same level as mathematical axiom systems (Hintikka 1996: 183),[100] and to attempt to reduce mathematics to logic. Rather, Hintikka (1996: 184) proposes to ask: (a) Can the crucial mathematical concepts be defined in logical terms? (b) Can the modes of semantically valid logical inference used in mathematics be expressed in logical terms? The idea is not to concentrate on deductive rules of logic: no complete set of deductive rules exists anyway for IF first-order logic. Because the status of higher-order logic is potentially dubious – due to the problems associated with the notion of power set – a positive solution to questions (a) and (b) calls for a first-order logic more powerful than FO.

The suggested reduction of all mathematics expressible in higher-order logic to the first-order level would be philosophically significant in showing that mathematics is not a study of general concepts, but of structures consisting of particulars (Hintikka 1996: 207). This is not to say that actual mathematics would be best carried out in terms of IF first-order logic, only that it could in principle be so carried out (Hintikka 1996: 205, 2006: 477). For a critique of Hintikka's conclusions, see (Väänänen 2001, Feferman 2006).

5.4 Informational independence in natural languages

When Hintikka began to apply GTS to the study of natural language (Hintikka 1973a), he took up the question of whether branching quantifiers occur in natural languages. By the same token, he was led to ask whether there are semantic games with imperfect information. He detected various types of grammatical constructions in English that involve informational independence.[101] An often cited example is the sentence

Some relative of each villager and some relative of each townsman hate each other,

true under its relevant reading when the choice of a relative of each townsman can be made independently of the individual chosen for ‘each villager.’[102] Hintikka (1973a) sketched an argument to the effect that actually every FPO sentence can be reproduced as a representation of an English sentence. From this it would follow that the logic of idiomatic English quantifiers is much stronger than FO, and that no effective procedure exists for classifying sentences as analytical or nonanalytical, synonymous or nonsynonymous.[103] This would be methodologically a very important result, showing that syntactic methods are even in principle insufficient in linguistic theorizing. Jon Barwise (1979) suggested that particularly convincing examples supporting Hintikka's thesis can be given in terms of generalized quantifiers.

Lauri Carlson and Alice ter Meulen (1979) were the first to observe cases of informational independence between quantifiers and intensional operators. Consider the question[104]

Who does everybody admire?

Under one of its readings, the presupposition of this question is (∀x)(∃y) admires(x, y). The desideratum of this question is

I know who everybody admires.

Writing ‘KI’ for ‘I know,’[105] the desideratum has a reading whose ‘logical form’ is

KI (∀x)(∃y/KI) admires(x, y).

This desideratum is satisfied by an answer pointing out a function f which yields, for each person, a suitable admired person; such a function could, for instance, map each person to his or her father. Importantly, the value f(b) of this function depends only on the person b interpreting ‘everybody,’ and not on the scenario w, compatible with the questioner's knowledge, that interprets the construction ‘I know.’ Interestingly, the desideratum KI (∀x)(∃y/KI) admires(x, y) is not expressible without an explicit independence indicator. This case exhibits an intransitive dependence relation, and cannot be represented in the notation of FPO.[106]

Hintikka's ideas on desiderata of wh-questions were influenced by his exchange with the linguist Elisabet Engdahl.[107] These wh-questions, again, functioned as important test cases for the appearance of informational independence in natural languages.

Hintikka and Sandu (1989) took up the task of formulating an explicit unified formal treatment for the different varieties of informational independence in natural language semantics. They posed the question of which mechanisms allow English to exceed the expressive power of FO; natural languages, after all, typically do not resort to higher-order quantifiers. Hintikka and Sandu suggested that informational independence plays a key role in increasing the expressive power of natural languages.[108]

In GTS as developed for English in Hintikka & Kulas (1983, 1985), game rules are associated with a great variety of linguistic expressions (cf. Subsect. 2.1). As Hintikka (1990) has stressed, informational independence is a cross-categorical phenomenon: it can occur in connection with expressions of widely different grammatical categories. Hintikka and Sandu (1989) propose several examples from English calculated to show that there is an abundance of instances of informational independence in natural languages. Among examples are wh-questions of the kind discussed above (questions with an outside quantifier), and the distinction between the de dicto and de re readings of certain English sentences. Hintikka and Sandu argue that representing such readings in terms of IF logic is more truthful to the syntax of English than the alternative, non-IF representations are.

As a rule, informational independence is not indicated syntactically in English.[109] Methodological consequences of this fact are discussed in Hintikka (1990), where he tentatively puts forward the Syntactic Silence Thesis, according to which sufficiently radical cross-categorical phenomena are not likely to be marked syntactically in natural languages. A case in point would, then, be informational independence. Evidence for this thesis would, for its part, be evidence against the sufficiency of a syntax-oriented approach to semantics.

6. Related Propositional and Modal Logics

IF first-order logic is obtained from FO by allowing the expression of more dependencies between quantifiers. In the literature several authors have attempted to import this idea from FO to other logics: notably to propositional logic and different modal logics.

IF propositional logic. In the early publications on IF first-order logic sentences such as

φ: ((∃x/∧1)P(x) ∧1 (∃x/∧1)Q(x))

were considered (see Sandu 1993). Hodges (1997a: 546–548) pointed out that independence indications such as (∃x/∧1,…,∧n) are actually vacuous. Because φ contains two tokens of (∃x/∧1), a strategy of player 2 in game G(φ, M) will consist of two strategy functions fL and fR, one for the left and the other for the right token of (∃x/∧1). What about the arguments of these strategy functions? One may propose to generalize the notion of strategy function (Sandu 1993), and allow a strategy function for a token of (∃x/∧1,…,∧n) to take as arguments any previous moves by player 1, including moves for preceding conjunctions, except for moves made for conjunctions identified as ∧1,…,∧n.

In game G(φ, M), there is only one move by player 1 before it is player 2's turn to move: the choice of a conjunct. Due to the independence indications, however, the strategy functions of player 2 may not employ player 1's choice of a conjunct as an argument. Hence the strategy functions fL and fR are both constants (zero-place functions). The intended informational independence nevertheless evaporates. When player 2 is to make a move, say, for the left token of (∃x/∧1), she can infer that player 1 must have chosen the left conjunct. After all, she is now making a choice for the left conjunct! Strategy functions are defined for tokens of operators, and the syntactic position of such a token uniquely determines which choices must have been made for conjunctions and disjunctions to reach it.

Independence pertaining to propositional connectives as construed above is simply a vacuous idea. The sentence φ is truth equivalent to the FO sentence ((∃x)P(x) ∧ (∃x)Q(x)).

Hodges observes that a different formulation of the syntax would allow expressing non-vacuous independencies of conjunctions: construing conjunctions as restricted quantifiers. E.g., the sentence

ψ: ∧i∈{L, R} (∃x / i)Qi(x)

contains only one token of (∃x / i). A strategy function for this token is allowed to take as its arguments any moves made by player 1, including moves for conjunctions, as long as they are not explicitly forbidden. There is only one earlier move, and that is a move for a conjunction which is forbidden as an argument of the strategy function: the index i identifying the conjunction is written after the slash sign in the expression (∃x / i). Clearly F={c} is a winning strategy for player 2 in G(ψ, M) iff the constant (zero-place function) c satisfies: both QL(c) and QR(c) hold in M. As it happens, ψ is truth equivalent to the FO sentence (∃x)(QL(x) ∧ QR(x)).
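
The difference between the two formulations can be checked mechanically on a finite model. In the following Python sketch (with illustrative predicates), φ calls for one constant per token, so a winning strategy exists iff each conjunct has its own witness, whereas ψ has a single token and hence a single constant that must be good against either choice by player 1.

  def phi_true(M, P, Q):
      # φ: two tokens of (∃x/∧1), hence two constant strategy functions c_left, c_right
      return any(P(c) for c in M) and any(Q(c) for c in M)

  def psi_true(M, Q_L, Q_R):
      # ψ: a single token of (∃x / i), hence one constant c good against both conjuncts
      return any(Q_L(c) and Q_R(c) for c in M)

  M = range(4)
  print(phi_true(M, lambda x: x == 0, lambda x: x == 3))   # True: c_left = 0, c_right = 3
  print(psi_true(M, lambda x: x == 0, lambda x: x == 3))   # False: no single c works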

Viewing disjunctions and conjunctions as restricted quantifiers has led to variants of slash logic rather than IF logic; cases in point can be found in Sandu & Pietarinen (2001) and Sevenster (2006b).[110] In Sandu & Pietarinen (2003) a formulation of IF propositional logic is proposed that attempts to overcome the problems pointed out by Hodges. In that paper a connection is made to partial logic.[111]

Hintikka (2004b) expresses doubts as to the general theoretical interest of IF propositional logic. As explained in Section 1, he sees as the deepest reason for IF logic the possibility of expressing, on the first-order level, real dependencies between variables by means of dependencies between quantifiers. He notes that dependencies between conjunctions and disjunctions do not have an equally interesting interpretation.

IF modal logics. Among what are formally classified as modal operators, there are – besides alethic modalities (necessarily, possibly) – various intensional (knows that, believes that), temporal (sometimes in the future, always in the past, until, next), and deontic (is obligatory that, is permitted that) operators. As well, provability logic and propositional dynamic logic are branches of modal logic. Mathematically, modal languages are simple but expressive languages for talking about relational structures.[112]

As noted in Subsection 5.4, informational independence between quantifiers and modal operators came to play an important role in Hintikka's general theory of questions and answers.[113] In Hintikka & Sandu (1989) the possibility is considered that a singular term is independent of a modal operator. If ‘k’ is an abbreviation of the definite description ‘the king of France,’ an example would be

BAlice tall(k / BAlice),

formed from the sentence BAlice tall(k) by indicating the constant ‘k’ as independent of the belief-operator ‘BAlice’.[114] The intended interpretation of BAlice tall(k / BAlice) is that BAlice tall(k) holds with the extra specification that there is an individual c which exists in all the scenarios compatible with all that Alice believes, and c is in each scenario the denotation of ‘k’.[115]

But what about independencies between modal operators in propositional modal logic? Semantically speaking modal operators are quantifiers. Hence it makes sense to study (in)dependencies among them. The first steps in this direction were taken by Bradfield (2000) and Bradfield & Fröschle (2002), where the independence indications applied to modal operators were given a semantics using a combination of transition systems with concurrency and games of imperfect information.

In publications by Hyttinen, Sevenster and Tulenheimo, versions of IF modal logic have been defined in a simpler framework: making use of the game-theoretical notion of informational independence. Enriching standard modal structures by taking concurrency as a separate, primitive component is not necessary.[116] An example of an IF modal formula would be

□1(◊2/□1)p,

obtained from the basic modal formula □◊p by (labeling the tokens of the operators to facilitate reference to them and) marking the diamond as independent of the box. Its semantics is the same as that of □◊p, except that the scenario witnessing the diamond must be chosen independently of the scenario interpreting the box. So the formula states, when evaluated at w, that there is a scenario c accessible from all scenarios v that are accessible from w. The formula expresses a kind of confluence property.
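
On a finite Kripke model the property can be checked directly. The following Python sketch (with an illustrative frame) tests whether, at a world w, some single world at which p holds is accessible from every world accessible from w; the degenerate case in which w has no successors is left aside.

  def confluent(worlds, R, p_worlds, w):
      # □1(◊2/□1)p at w: one world c at which p holds, accessible from every successor of w
      successors = [v for v in worlds if (w, v) in R]
      return any(
          c in p_worlds and all((v, c) in R for v in successors)
          for c in worlds
      )

  worlds = ['w', 'v1', 'v2', 'c']
  R = {('w', 'v1'), ('w', 'v2'), ('v1', 'c'), ('v2', 'c')}
  print(confluent(worlds, R, {'c'}, 'w'))                   # True: c is a common successor
  print(confluent(worlds, R - {('v2', 'c')}, {'c'}, 'w'))   # False: v2 cannot reach c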

All formulations of IF modal logic presented in the literature have either used semantic ideas that go beyond the IF logical framework (concurrency), or else are, in Hodges's classification, slash logics rather than IF logics.[117] As to the motivation of IF modal logics, the approach initiated by Bradfield ties together two threads of general interest in computer science: modal logic and concurrent systems. The approach followed by Hyttinen, Sevenster and Tulenheimo, again, has not led to any direct applications. Its interest has been logic-internal. Just like various standard modal logics can be seen as well-behaved expressive fragments of FO, this approach has led to the discovery of expressive and rather well-behaved fragments of IF first-order logic. In some cases these fragments are actually translatable into FO.

7. Conclusion

In this article IF first-order logic and extended IF first-order logic have been surveyed. Their metalogical properties have been explained and the philosophical relevance of these properties has been discussed. The suggested consequences of these logics for philosophical issues such as the existence of a self-applied truth-predicate, the logicist program, the philosophical relevance of axiomatic set theory, and informational independence in natural languages have been covered as well. The attempts to adapt the framework of IF first-order logic to propositional and modal logics were also briefly considered.

Bibliography

Other Internet Resources

Related Entries

compositionality | game theory | generalized quantifiers | logic: and games | logic: classical | logic: deontic | logic: dialogical | logic: epistemic | logic: hybrid | logic: intensional | logic: intuitionistic | logic: modal | logic: propositional dynamic | logic: provability | logic: second-order and higher-order | logic: temporal | model theory | Prior, Arthur | reasoning: automated | set theory | Tarski, Alfred: truth definitions

Acknowledgments

The work for this entry was carried out within the project “Modalities, Games and Independence in Logic” (Project No. 207188) funded by the Academy of Finland.