Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

The Logic of Action

First published Tue Mar 31, 2009

In this article we provide a brief overview of the logic of action in philosophy, linguistics, computer science and artificial intelligence. The logic of action is the formal study of action in which formal languages are the main tool of analysis.

The concept of action is of central interest to many disciplines: the social sciences including economics, the humanities including history and literature, psychology, linguistics, law, computer science, artificial intelligence, and probably others. In philosophy it has been studied since the beginning because of its importance for epistemology and, particularly, ethics; and for the past few decades it has even been studied for its own sake. But it is in the logic of action that action is studied in the most abstract way.

The logic of action began in philosophy. But it has also played a certain role in linguistics. And currently it is of great importance in computer science and artificial intelligence. For our purposes it is natural to separate the accounts of these developments.


1. The Logic of Action in Philosophy

1.1 Historical overview

Already St. Anselm studied the concept of action in a way that must be classified as logical; had he known symbolic logic, he would certainly have made use of it (cf. [Henry 1967] and [Walton 1976]). In modern times the subject was introduced by, among others, Alan Ross Anderson, Frederick B. Fitch, Stig Kanger, and Georg Henrik von Wright; Kanger's work was further developed by his students Ingmar Pörn and Lars Lindahl. The first clearly semantic account was given by Brian F. Chellas in [Chellas 1969]. (For a more detailed account, see [Segerberg 1992] or the mini-history in [Belnap 2001].)

Today there are two rather different groups of theories that may be described as falling under the term logic of action. One, the creation of Nuel Belnap and his many collaborators, may be called stit theory (a term that will be explained in the next paragraph). The other is dynamic logic. Both are connected with modal logic, but in different ways. Stit theory grew out of the philosophical tradition of modal logic. Dynamic logic, on the other hand, was invented by computer scientists in order to analyse computer action; only after the fact was it realized that it could be viewed as modal logic of a very general kind. One important difference between the two is that (for the most part) actions are not directly studied in stit theory: the ontology does not (usually) recognize a category of actions or events. But dynamic logic does. Among philosophers such ontological permissiveness has been unusual. Hector-Neri Castañeda, with his distinction between propositions and practitions, provides one notable exception.

The stit tradition is treated in this section, the dynamic logic one in the next.

1.2 The stit saga

The term “stit” is an acronym based on “sees to it that”. The idea is to add, to an ordinary classical propositional language, a new propositional operator stit, interpreting stit_i φ, where i stands for an agent and φ for a proposition, as “i sees to it that φ”. (The official notation of the Belnap school is more laborious: [i stit: φ].) Note that φ is allowed to contain nestings of the new operator.

In order to develop formal meaning conditions for the stit operator, a semantics is defined. A stit frame has four components: a set T, the nodes of which are called moments; an irreflexive tree ordering < of T; a set of agents; and a choice function C. A maximal branch through the tree is called a history.

The tree (T, <) seems to correspond to a naïve picture familiar to us all: a moment m is a temporary present; the set {n : n < m} corresponds to the past of m, which is unique; while the set {n : m < n} corresponds to the open future of m, each particular maximal linear subset of which corresponds to a particular possible future.

To formalize the notion of action, begin with two general observations:

  1. usually an agent is not able to select one possible future to become the unique actual future, but
  2. by his action he can make sure that certain futures, which before his action are possible, are no longer possible after his action.

This is where the choice function C comes in: for each moment m and agent i, C yields a partitioning C_i^m of the set H_m of all histories through m. An equivalence class in C_i^m is called a choice cell. (Note that two histories belonging to the same choice cell agree up to the moment in question, but not necessarily later on.) If h is a history running through m we write C_i^m(h) for the choice cell of which h is a member. It is natural to associate C_i^m with the set of actions open to the agent i at m, and to think of the choice cell C_i^m(h) as representing the action associated with h.

A stit model has an additional component: a valuation. A valuation in a frame, it turns out, is a function that assigns to each atomic formula and each index either 1 (truth) or 0 (falsity), where an index is an ordered pair of a history and a moment on the history. The notion of truth or falsity of a formula with respect to an index can now be defined. Let us write [[φ]]_m for the set {h ∈ H_m : (h,m) ⊨ φ}, that is, the set of histories through m such that φ is true with respect to the index consisting of that history and m. If V is the valuation we have the following basic truth-condition for atomic φ:

  1. (h,m) ⊨ φ iff V(φ, (h,m)) = 1.

The truth-conditions for the Boolean connectives are as expected; for example,

  1. (h,m) ⊨ ¬φ iff (h,m) ⊭ φ,
  2. (h,m) ⊨ φ ∧ ψ iff (h,m) ⊨ φ and (h,m) ⊨ ψ.

In defining formal truth-conditions for the stit operator, there are at least two possibilities to be considered:

  1. (h,m) ⊨ stit_i φ iff C_i^m(h) ⊆ [[φ]]_m.
  2. (h,m) ⊨ stit_i φ iff C_i^m(h) ⊆ [[φ]]_m and [[φ]]_m ≠ H_m.

To distinguish the two different operators that those conditions define, the former operator is called the Chellas stit, written cstit, while the latter operator is called the deliberative stit, written dstit.

In words, cstit_i φ is true at an index (h,m) if φ is true with respect to h′ and m, for all histories h′ in the same choice cell at m as h; this is called the positive condition. The truth-condition for dstit_i φ is more exacting: not only does the positive condition have to be satisfied, but also what is called the negative condition: there must be some history through m such that φ fails to be true with respect to that history and m.

Both cstit and dstit are studied; it is claimed that they capture important aspects of the concept “sees to it that”. The two operators become interdefinable if one also introduces the concept “it is historically necessary that”. Using □ for historical necessity, define

  1. (h,m) ⊨ □φ iff, for all h′ ∈ H_m, (h′,m) ⊨ φ.

Then the formulas dstit_i φ ↔ (cstit_i φ ∧ ¬□φ) and cstit_i φ ↔ (dstit_i φ ∨ □φ) are true with respect to all indices.
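These truth-conditions lend themselves to direct implementation. The following minimal sketch (in Python) evaluates cstit and dstit at a single fixed moment m; the encoding and all names in it (H_m, choice_i, phi_ext) are illustrative assumptions of ours, not notation from the stit literature.

    from typing import Dict, FrozenSet, Set

    # Histories through the moment m.
    H_m: Set[str] = {"h1", "h2", "h3", "h4"}

    # The choice function at m for agent i, as a map from each history to
    # its choice cell; h1,h2 constitute one possible action, h3,h4 another.
    choice_i: Dict[str, FrozenSet[str]] = {
        "h1": frozenset({"h1", "h2"}),
        "h2": frozenset({"h1", "h2"}),
        "h3": frozenset({"h3", "h4"}),
        "h4": frozenset({"h3", "h4"}),
    }

    # [[phi]]_m: the histories h with (h, m) |= phi.
    phi_ext: Set[str] = {"h1", "h2", "h3"}

    def cstit(h: str) -> bool:
        # Chellas stit: the positive condition C_i^m(h) included in [[phi]]_m.
        return choice_i[h] <= phi_ext

    def dstit(h: str) -> bool:
        # Deliberative stit: positive condition plus the negative condition
        # [[phi]]_m != H_m (phi is not settled true at m).
        return cstit(h) and phi_ext != H_m

    box = phi_ext == H_m  # historical necessity of phi at m

    for h in sorted(H_m):
        # Interdefinability noted above: dstit iff (cstit and not box).
        assert dstit(h) == (cstit(h) and not box)
        print(h, "cstit:", cstit(h), "dstit:", dstit(h))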

One advantage of stit theory is that the stit analysis of individual action can be extended in natural ways to cover group action.

A number of the initial papers defining the stit tradition are collected in the volume [Belnap 2001]. One important later work is John F. Horty's book [Horty 2007]. The logic of stit was axiomatized by Ming Xu [Xu 1998].

1.3 Intentions

Michael Bratman's philosophical analysis of the notion of intention has had a significant influence on the development of the logic of action within computer science. It will be discussed below.

1.4 Logics of special kinds of action

In a series of papers Carlos Alchourrón, Peter Gärdenfors and David Makinson created what they called a logic of theory change, later known as the AGM paradigm. Two particular kinds of change inspired their work: change due to deontic actions (Alchourrón) and change due to doxastic actions (Gärdenfors and before him Isaac Levi). Examples of deontic actions are derogation and amendment (laws can be annulled or amended), while contraction and expansion are analogous doxastic actions (beliefs can be given up, new beliefs can be added). Later, the modal logic of such actions was explored under the names dynamic deontic logic, dynamic doxastic logic and dynamic epistemic logic. (For the classic paper on AGM, see [AGM 1985]. For an introduction to dynamic deontic logic and dynamic doxastic logic, see [Lindström and Segerberg 2006].) We will return to this topic in Section 4, where it is viewed from the perspective of the field of artificial intelligence.

2. The Logic of Action in Linguistics

In linguistics, there are two ways in which actions play a role: on the one hand, utterances are actions, and on the other hand they can be used to talk about actions. The first leads to the study of speech acts, a branch of pragmatics; the second leads to the study of the semantics of action reports, and hence is of a distinctly semantic nature. In addition to this, there is a special type of semantics, dynamic semantics, in which meanings are considered not as state descriptions but as changes in the state of a listener.

2.1 Speech acts

The study of speech acts goes back to Austin [Austin 1957] and Searle [Searle 1969]. Both emphasise that using language is to perform certain acts. Moreover, there is not just one act but a whole gamut of them (Austin himself puts the number in the magnitude of 10³). The classification he himself gives involves acts that are nowadays not considered as part of a separate science: the mere act of uttering a word (the phatic act) or sentence is part of phonetics (or phonology) and only of marginal concern outside. By contrast, the illocutionary and perlocutionary acts have been the subject of intense study. An illocutionary act is the linguistic act performed by using a sentence; it is inherently communicative in nature. By contrast, the perlocutionary act is an act that needs surrounding social contexts to be successful. The acts of naming a ship or christening a baby, for example, are perlocutionary. The sentence “I hereby pronounce you husband and wife” has the effect of marrying two people only under certain well-defined circumstances. By definition, perlocutionary acts take us outside the domain of language and communication.

[Searle and Vanderveken 1985] develop a logic of speech acts which they call illocutionary logic. This was refined in [Vanderveken 1990, Vanderveken 1991]. Already Frege used in his Begriffsschrift the notation “⊦ φ”, where φ denotes a proposition and “⊦” the judgment sign. So, “⊦ φ” says that φ is provable, but other interpretations of ⊦ are possible (accompanied by different notation; for example, “⊨ φ” says that φ is true (in the model), “⊣ φ” says that φ is refutable, and so on). An elementary speech act is of the form F(φ), where F denotes an illocutionary force and φ a proposition. In turn, an illocutionary force is identified by exactly seven elements:

  1. a point,
  2. the mode of achievement of the illocutionary point,
  3. the degree of strength of the illocutionary point,
  4. the propositional content conditions,
  5. the preparatory conditions,
  6. the sincerity conditions,
  7. the degree of strength of the sincerity conditions.

There are exactly five points according to [Searle and Vanderveken 1985]:

  1. the assertive point (committing the speaker to the truth of the propositional content),
  2. the commissive point (committing the speaker to a future course of action),
  3. the directive point (attempting to get the hearer to do something),
  4. the declarative point (bringing about a state of affairs by the very utterance), and
  5. the expressive point (expressing the speaker's attitude toward a state of affairs).

Later treatments of this matter tend to disregard much of the complexity of this earlier approach, for the reason that it fails to have any predictive power. The various “strengths”, for example, are especially difficult to handle. Modern treatments try to use update models instead (see Section 2.3 below). [Van der Sandt 1991] uses a discourse model with three different slates (one for each speaker, and one common slate). While each speaker is responsible for maintaining his own slate, changes to the common slate can only be made through communication. [Merin 1994] seeks to reduce the manipulations to a sequential combination of so-called elementary social acts: claim, concession, denial, and retraction.

2.2 Action sentences

In [Davidson 1967], Davidson gave an account of action sentences in terms of what are now widely known as events. The basic idea is that an action sentence has the form (∃e)(⋯), where e is a variable over acts. For example, “Brutus violently stabbed Caesar” is translated (ignoring tense) as (∃e)(stab(e, Brutus, Caesar) ∧ Violent(e)). This makes it possible to capture the fact that this sentence logically entails that Brutus stabbed Caesar. This idea has been widely adopted in linguistics; moreover, it is now assumed that basically all verbs denote events [Parsons 1990]. Thus action sentences are those that speak about special types of events, called eventualities.
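The entailment just mentioned can be checked mechanically in a toy model. The following sketch (in Python) encodes events as records of their participants and properties; the encoding and all names in it are illustrative assumptions of ours, not part of Davidson's proposal.

    # Toy event domain: one stabbing event with its participants and manner.
    events = [
        {"type": "stab", "agent": "Brutus", "patient": "Caesar", "violent": True},
    ]

    def stab_violently() -> bool:
        # "Brutus violently stabbed Caesar":
        # (exists e)(stab(e, Brutus, Caesar) AND Violent(e))
        return any(e["type"] == "stab" and e["agent"] == "Brutus"
                   and e["patient"] == "Caesar" and e.get("violent", False)
                   for e in events)

    def stab() -> bool:
        # "Brutus stabbed Caesar": the same formula with the conjunct
        # Violent(e) dropped, so any witness for the first formula is a
        # witness here too (conjunction elimination under the quantifier).
        return any(e["type"] == "stab" and e["agent"] == "Brutus"
                   and e["patient"] == "Caesar"
                   for e in events)

    assert stab_violently() and stab()
    print("entailment holds in this model")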

Vendler in [Vendler 1957] classified verbs into four groups:

  a. states (“know”, “sit”),
  b. activities (“run”, “eat”),
  c. accomplishments (“write a letter”, “build a house”), and
  d. achievements (“reach”, “arrive”).

[Moens and Steedman 1988] add a fifth category:

  e. points (“flash”, “burst”).

The main dividing line is between states and the others: the types (b)–(e) all refer to change. This division has been heavily influential in linguistic theory; mostly, however, research has concentrated on its relation to aspect. It is to be noted, for example, that verbs of type (c) can be used with the progressive while verbs of type (d) cannot. In an attempt to explain this, Krifka [Krifka 1986, Krifka 1992] introduced the notion of an incremental theme. The idea is that any eventuality has an underlying activity whose progress can be measured using some underlying participant of the event. If, for example, I write a letter, then the progress is measured in amounts of words. The letter is therefore the incremental theme in “I write a letter”, since it defines the progress. One implementation of the idea is the theory of aspect by Verkuyl [Verkuyl 1993]. Another way to implement the idea of change is by translation into propositional dynamic logic (see [Naumann 2001]). Recently, [Van Lambalgen and Hamm 2005] have applied the event calculus of [Shanahan 1990] to the description of events.

2.3 Dynamic semantics

The idea that propositions can be viewed not only as state descriptions but also as updates has been advocated independently by many people. Consider the possible states of an agent to be (in the simplest case) a theory (that is, a deductively closed set of sentences). Then the update of a theory T by a proposition φ is the deductive closure of T ∪ {φ}.

[Gärdenfors 1988] advocates this perspective with particular attention to belief revision. [Veltman 1985] develops the update view for the treatment of conditionals. One advantage of the idea is that it is possible to show why the mini-discourse “It rains. It may not be raining.” is infelicitous, in contrast to “It may not be raining. It rains.”. Given that an update is felicitous only if it results in a consistent theory, and that “may φ” (with epistemic “may”) simply means that φ is consistent with the current theory (written ◊φ), the first discourse is the sequence of updates with φ and ◊¬φ. The second step leads to inconsistency, since φ has already been added. It is vital in this approach that the context is constantly changing.
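A minimal sketch of such an update semantics, in Python: states are sets of possible worlds, updating with an atomic sentence eliminates worlds, and epistemic “may” acts as a consistency test on the current state. The encoding and all names in it are illustrative assumptions.

    from typing import FrozenSet, Set

    World = FrozenSet[str]  # a world is the set of atoms true in it

    def update_atom(state: Set[World], atom: str) -> Set[World]:
        # Updating with an atomic sentence eliminates the worlds where it fails.
        return {w for w in state if atom in w}

    def update_may_not(state: Set[World], atom: str) -> Set[World]:
        # "may not-atom" is a test: the state survives intact iff not-atom is
        # consistent with it; otherwise the update yields the absurd state.
        return state if any(atom not in w for w in state) else set()

    initial: Set[World] = {frozenset({"rain"}), frozenset()}

    # "It rains. It may not be raining." -- infelicitous: the second step crashes.
    s = update_may_not(update_atom(initial, "rain"), "rain")
    print(s)  # set(): the absurd state

    # "It may not be raining. It rains." -- felicitous.
    s = update_atom(update_may_not(initial, "rain"), "rain")
    print(s)  # {frozenset({'rain'})}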

[Heim 1983] contains an attempt to make this idea fruitful for the treatment of presuppositions. In Heim's proposal, a sentence has the potential to change the context, and this is why, for example, the sentence “If John is married his wife will be happy.” does not presuppose that John is married. Namely, the second part of the conditional (“his wife will be happy”) is evaluated against the context incremented by the antecedent (“John is married”). This of course is the standard way conditions are evaluated in computer languages. This parallel is exploited in [Van Eijck 1994]; see also [Kracht 1993].

The idea of going dynamic was further developed in Dynamic Predicate Logic (DPL, see [Groenendijk and Stokhof 1991]), where all expressions are interpreted dynamically. The specific insight in this grammar is that existential quantifiers have a dynamically growing scope. This was first noted in [Kamp 1981], where a semantics was given in terms of intermediate representations, so-called Discourse Representation Structures. Groenendijk and Stokhof replace these structures by introducing a dynamics into the evaluation of a formula, as proposed in Dynamic Logic (DL). An existential quantifier is translated as a random assignment x ← ? of DL, whose interpretation is a relation between assignments: it is the set of pairs ⟨β,γ⟩ such that β(y) = γ(y) for all y ≠ x (in symbols, β ∼_x γ). The translation of the sentence “A man walks.” is

(1) ⟨x ← ?⟩(man′(x) ∧ walk′(x))

This is a proposition, hence interpreted as a set. One can, however, push the dynamicity even lower, and make all meanings relational. Then “A man walks.” is interpreted by the ‘program’

(2) x ← ?; man′(x)?; walk′(x)?

Here, man′(x)? uses the test constructor “?”: φ? is the set of all ⟨β,β⟩ such that β satisfies φ. The meaning of the entire program (2) therefore also is a relation between assignments. Namely, it is the set R of all pairs ⟨β,γ⟩ where β ∼_x γ, and γ(x) walks and is a man. The meaning of (1) by contrast is the set of all β such that some ⟨β,γ⟩ ∈ R. Existential quantifiers thus have ‘side effects’: the change in assignment is never undone by a quantifier over a different variable. Hence the open-endedness to the right of the existential. For an overview of dynamic semantics see [Muskens et al. 1997].
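The relational interpretation sketched above can be spelled out over a small finite domain. The following Python sketch is illustrative throughout: the names (random_assign, test, compose) and the toy domain are our assumptions, not DPL's official notation.

    from itertools import product
    from typing import Callable, Dict, List, Tuple

    DOMAIN = ["john", "mary", "rex"]
    MAN = {"john"}
    WALK = {"john", "rex"}
    VARS = ["x"]  # the variables in play; an assignment maps each to an individual

    Assignment = Dict[str, str]
    Rel = List[Tuple[Assignment, Assignment]]

    def all_assignments() -> List[Assignment]:
        return [dict(zip(VARS, vals)) for vals in product(DOMAIN, repeat=len(VARS))]

    def random_assign(x: str) -> Rel:
        # x <- ? : all pairs (beta, gamma) with beta ~x gamma, i.e., beta and
        # gamma agree on every variable except possibly x.
        return [(b, g) for b in all_assignments() for g in all_assignments()
                if all(b[y] == g[y] for y in VARS if y != x)]

    def test(pred: Callable[[Assignment], bool]) -> Rel:
        # phi? : the pairs (beta, beta) with beta satisfying phi.
        return [(b, b) for b in all_assignments() if pred(b)]

    def compose(r: Rel, s: Rel) -> Rel:
        # Sequential composition ; as relation composition.
        return [(b, g2) for (b, g1) in r for (g1b, g2) in s if g1 == g1b]

    # (2): x <- ? ; man'(x)? ; walk'(x)?
    prog = compose(compose(random_assign("x"),
                           test(lambda a: a["x"] in MAN)),
                   test(lambda a: a["x"] in WALK))
    print(prog)  # every input assignment relates to {'x': 'john'}

    # The meaning of (1) as a proposition: the assignments with a successor in R.
    print([b for b in all_assignments() if any(b == b2 for (b2, _) in prog)])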

3. The Logic of Action in Computer Science

The logic of action plays an important role in computer science. This becomes evident once one realizes that computers perform actions in the form of executing program statements written down in some programming language, changing computer internals and, via interfaces to the outside world, also that outside world. As such, a logic of action provides a means to reason about programs, or more precisely, the execution of programs and their effects. This enables one to prove the correctness of programs. In principle, this is something very desirable: if we could prove all our software correct, we would know that it would function exactly the way we designed it. This was already realized by pioneers of computer programming such as Turing [Turing 1949] and Von Neumann [Goldstein and Von Neumann 1963]. Of course, this ideal is too hard to establish in daily practice for all software. Verification is a nontrivial and time-consuming occupation, and there are also theoretical limitations to it. However, as the alternative is “just” massive testing of programs experimentally, with no 100% guarantee of correctness, it has remained an active area of research to this day.

3.1 Reasoning about programs

Program verification has a long history. Since the inception of the computer and its programming, researchers have thought of ways of analyzing programs to be sure they do what they are supposed to do. In the 60s the development of a true mathematical theory of program correctness began to take serious shape (cf. [de Bakker 1980], p. 466). Remarkably, the work of John McCarthy, whom we will encounter again when we turn to the field of artificial intelligence, played an important role here, distinguishing and studying fundamental notions such as ‘state’ [McCarthy 1963a]. This led on the one hand to the field of semantics of programming languages, and on the other to major advances in program correctness by Floyd [Floyd 1967], Naur [Naur 1966], Hoare [Hoare 1969] and Dijkstra [Dijkstra 1976] (cf. [de Bakker 1980]). Floyd and Naur used an elementary stepwise induction principle and predicates attached to program points to express invariant properties of imperative-style programs (cf. [Cousot 1990], p. 859), programs that are built up from basic assignment statements (of arithmetical expressions to program variables) and may be composed by sequencing, conditionals and repetitions. While the Floyd-Naur approach (called the inductive assertion method), which gives rise to a systematic construction of verification conditions, was a method to prove the correctness of programs by means of logic, it was not itself a logic in the strict sense of the word. The way to a proper logic of programs was paved by Hoare, whose compositional proof method led to what is now known as Hoare logic. By exploiting the syntactic structure of (imperative-style) programs, Hoare was able to turn the Floyd-Naur method into a true logic whose assertions are so-called Hoare triples of the form {P}S{Q}, where P and Q are first-order formulas and S is a program statement in an imperative-style programming language as mentioned above. The intended reading is: if P holds before execution of the statement S, then Q holds upon termination of (execution of) S. (The issue of whether the execution of S terminates can be put into the reading of this Hoare triple either conditionally (partial correctness) or unconditionally (total correctness), giving rise to different logics, see [Harel et al. 2000].) To give an impression of Hoare-style logics, we give here some rules for a simple programming language consisting of variable assignments of arithmetic expressions, and containing sequential (;), conditional (IF) and repetitive (WHILE) composition.

{P}S1{Q} , {Q}S2{R}
{P}S1 ; S2{R}

{P ∧ B}S1{Q} , {P ∧ ¬B}S2{Q}
{P} IF B THEN S1 ELSE S2 {Q}

{P ∧ B}S{P}
{P} WHILE B DO S {P ∧ ¬B}
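The rules can be illustrated by brute force on a finite state space: the following Python sketch checks a Hoare triple (in the partial correctness reading) for the WHILE rule's invariant pattern by simply running the program on every state satisfying the precondition. It is a toy illustration, not a verification tool; all names are our own assumptions.

    from typing import Callable, Dict, List

    State = Dict[str, int]

    def holds_triple(pre: Callable[[State], bool],
                     stmt: Callable[[State], State],
                     post: Callable[[State], bool],
                     states: List[State]) -> bool:
        # Partial correctness: whenever pre holds and stmt terminates,
        # post holds in the resulting state.
        return all(post(stmt(dict(s))) for s in states if pre(s))

    def program(s: State) -> State:
        # WHILE x > 0 DO (y := y + 1; x := x - 1)
        while s["x"] > 0:
            s["y"] += 1
            s["x"] -= 1
        return s

    states = [{"x": x, "y": y} for x in range(6) for y in range(6)]

    # {x + y = n} WHILE ... {y = n AND x = 0}: here x + y = n is the loop
    # invariant P, and the WHILE rule yields P AND not(x > 0) on exit.
    n = 5
    print(holds_triple(lambda s: s["x"] + s["y"] == n,
                       program,
                       lambda s: s["y"] == n and s["x"] == 0,
                       states))  # True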

Later, Pratt and Harel generalized Hoare logic to dynamic logic [Pratt 1976, Pratt 1979a, Harel 1979, Harel 1984, Kozen and Tiuryn 1990, Harel et al. 2000], of which it was realized that it is in fact a form of modal logic, by viewing the input-output relation of a program S as an accessibility relation in the sense of Kripke-style semantics. A Hoare triple {P}S{Q} becomes in dynamic logic the formula P → [S]Q, where [S] is the modal box operator associated with the input-output relation of program S. The propositional version of Dynamic Logic, PDL, was introduced by Fischer & Ladner [Fischer and Ladner 1977], and became an important topic of research in itself. The key axiom of PDL is the induction axiom

[S*] (P → [S] P) → (P → [S*] P)

where * stands for the iteration operator, S* denoting an arbitrary (finite) number of iterations of program S. The axiom expresses that if after any number of iterations of S the truth of P is preserved by the execution of S, then, if P is true at the current state, it will also be true after any number of iterations of S. A weaker form of PDL, called HML, with only atomic action box and diamond and propositional connectives, was introduced by Hennessy & Milner to reason about concurrent processes, and in particular analyze process equivalence [Hennessy and Milner 1980].
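The relational semantics of PDL over a finite Kripke structure can be implemented directly. The following Python sketch computes the input-output relation of a regular program and the truth set of a formula; the tuple encoding of programs and formulas is an illustrative assumption of ours.

    Worlds = {1, 2, 3}
    R = {"a": {(1, 2), (2, 3), (3, 3)}}   # accessibility of atomic program a
    VAL = {"p": {2, 3}}                    # valuation of atomic propositions

    def steps(prog):
        # The input-output relation of a regular program.
        kind = prog[0]
        if kind == "atom":
            return set(R[prog[1]])
        if kind == "seq":
            r, s = steps(prog[1]), steps(prog[2])
            return {(u, w) for (u, v) in r for (v2, w) in s if v == v2}
        if kind == "union":
            return steps(prog[1]) | steps(prog[2])
        if kind == "star":
            # Reflexive-transitive closure by iteration to a fixed point.
            rel = {(w, w) for w in Worlds}
            base = steps(prog[1])
            while True:
                new = rel | {(u, w) for (u, v) in rel for (v2, w) in base if v == v2}
                if new == rel:
                    return rel
                rel = new
        if kind == "test":
            return {(w, w) for w in truth(prog[1])}
        raise ValueError(kind)

    def truth(phi):
        # The set of worlds where a formula is true.
        kind = phi[0]
        if kind == "atom":
            return set(VAL[phi[1]])
        if kind == "not":
            return Worlds - truth(phi[1])
        if kind == "and":
            return truth(phi[1]) & truth(phi[2])
        if kind == "box":
            rel, t = steps(phi[1]), truth(phi[2])
            return {w for w in Worlds if all(v in t for (u, v) in rel if u == w)}
        raise ValueError(kind)

    # [a*]p fails at world 1 (1 reaches itself, where p is false) but holds at 2, 3.
    print(truth(("box", ("star", ("atom", "a")), ("atom", "p"))))  # {2, 3}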

It is also worth mentioning here that the work of Dijkstra [Dijkstra 1976] on weakest precondition calculus is very much related to dynamic logic (and Hoare's logic). In fact, what Dijkstra calls the weakest liberal precondition, denoted wlp(S,Q), is the same as the box operator in dynamic logic: wlp(S,Q) = [S]Q, while his weakest precondition, denoted wp(S,Q), is the total correctness variant of this, meaning that this expression also entails the termination of statement S (cf. [Cousot 1990]).

It was later realized that the application of dynamic logic goes beyond program verification or reasoning about programs. In fact, it constitutes a logic of general action. In [Meyer 2000] a number of other applications of dynamic logic are given, including deontic logic (also [Meyer 1988]), reasoning about database updates, and the semantics of reasoning systems such as reflective architectures. As an aside we note here that the use of dynamic logic for deontic logic as proposed in [Meyer 1988] needed an extension of the action language, in particular the addition of the ‘action negation’ operator. The rather controversial nature of this operator triggered work on action negation in itself (see e.g., [Broersen 2004]). Below we will also encounter the use of dynamic logic in artificial intelligence when specifying intelligent agents.

The logics thus far are adequate for reasoning about programs that are supposed to terminate and display a certain input/output behavior. However, in the late seventies one came to realize that there are also programs that are not of this kind. Reactive programs are designed to react to input streams that in theory may be infinite, and thus ideally show nonterminating behavior. What is relevant here is not so much input-output behavior as the behavior of programs over time. Therefore Pnueli [Pnueli 1977] proposed a different way of reasoning about programs of this style, based on the idea of a logic of time, viz. (linear-time) temporal logic. (Since reactivity often involves concurrent or parallel programming, temporal logic is often associated with this style of programming. However, it should be noted that a line of research continued to extend the use of Hoare logic to concurrent programs [Lamport 1977, Cousot 1990, de Roever et al. 2001].) Linear-time temporal logic typically has temporal operators such as next-time, always (in the future), sometime (in the future), until and since.
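The following Python sketch evaluates the temporal operators just listed at a position of a finite trace. This involves a simplifying assumption (linear-time temporal logic is standardly interpreted over infinite sequences); it is meant only to illustrate the operators' meanings, and all names are ours.

    # A trace is a list of sets of atoms; a formula is a function phi(trace, j).

    def always(trace, i, phi):
        return all(phi(trace, j) for j in range(i, len(trace)))

    def sometime(trace, i, phi):
        return any(phi(trace, j) for j in range(i, len(trace)))

    def next_time(trace, i, phi):
        return i + 1 < len(trace) and phi(trace, i + 1)

    def until(trace, i, phi, psi):
        # phi holds until psi holds (and psi does hold at some point).
        return any(psi(trace, k) and all(phi(trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))

    atom = lambda p: (lambda trace, j: p in trace[j])

    trace = [{"req"}, {"req"}, {"grant"}, set()]
    print(until(trace, 0, atom("req"), atom("grant")))   # True
    print(always(trace, 0, atom("req")))                 # False
    print(sometime(trace, 0, atom("grant")))             # True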

An interesting difference between temporal logic on the one hand, and dynamic logic and Hoare logic on the other, is that the former is what in the literature is called an endogenous logic, while the latter are so-called exogenous logics. A logic is exogenous if programs are explicit in the logical language, while for endogenous logics this is not the case. In an endogenous logic such as temporal logic the program is assumed to be fixed, and is considered part of the structure over which the logic is interpreted ([Harel et al. 2000], p. 157). Exogenous logics are compositional and have the advantage of allowing analysis by structural induction. Later Pratt [Pratt 1979b] tried to blend temporal and dynamic logic into what he called process logic, which is an exogenous logic for reasoning about temporal behavior.

At the moment the field of temporal logic as applied in computer science has developed into a complete subfield of its own, including techniques and tools for (semi-)automatic reasoning and model-checking (cf. [Emerson 1990]). Also, variants of the basic linear-time models have been proposed for verification, such as branching-time temporal logic, in particular the logics CTL (computation tree logic) and its extension CTL* [Emerson 1990], in which one can reason explicitly about (quantification over) alternative paths in nondeterministic computations. More recently there is also an extension of CTL, called alternating-time temporal logic (ATL), with a modality expressing that a group of agents has a joint strategy to ensure its argument, used to reason about so-called open systems: systems whose behavior depends also on the behavior of their environments, see [Alur et al. 1998].

Finally we mention still other logics for reasoning about programs, viz. fixpoint logics, with as typical example the so-called μ-calculus, dating back to Scott & de Bakker [Scott and de Bakker 1969], and further developed in [Hitchcock and Park 1972, Park 1976, de Bakker 1980, Meyer 1985]. The basic operator is the least fixed point operator μ, capturing iteration and recursion: if φ(X) is a logical expression with a free relation variable X, then the expression μXφ(X) represents the least X such that φ(X) = X, if such an X exists. A propositional version of the μ-calculus, called the propositional or modal μ-calculus, comprising the propositional constructs → and false together with the atomic (action) modality [a] and the μ operator, is completely axiomatized by propositional modal logic plus the axiom φ[X/μXφ] → μXφ, where φ[X/ψ] stands for the expression φ in which X is substituted by ψ, and the rule

φ[X/ψ] → ψ
μXφ → ψ

[Kozen 1983, Bradfield and Stirling 2007]. This logic is known to subsume PDL (cf. [Harel et al. 2000]).
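Over a finite model the least fixed point can be computed by straightforward iteration from the empty set (the Knaster-Tarski construction). A minimal Python sketch, with illustrative names of our own, computing μX.(goal ∨ ⟨a⟩X), i.e., the states from which goal is reachable:

    STATES = {0, 1, 2, 3}
    R = {(0, 1), (1, 2), (3, 3)}   # transitions of the atomic action a
    GOAL = {2}

    def diamond(X):
        # <a>X : the states with some a-successor in X.
        return {s for (s, t) in R if t in X}

    def lfp(phi):
        # Iterate phi from the empty set until a fixed point is reached;
        # for monotone phi on a finite lattice this yields the least one.
        X = set()
        while True:
            new = phi(X)
            if new == X:
                return X
            X = new

    print(lfp(lambda X: GOAL | diamond(X)))  # {0, 1, 2}: state 3 only loops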

4. The Logic of Action in Artificial Intelligence

In the field of artificial intelligence (AI) the aim is to devise intelligently behaving computer-based artifacts (with the purpose of understanding human intelligence or just making intelligent computer systems and programs). In order to achieve this, there is a tradition within AI of trying to construct these systems based on symbolic representations of all relevant factors involved. This tradition is called symbolic AI or ‘good old-fashioned’ AI (GOFAI). In this tradition the sub-area of knowledge representation (KR) obviously is of major importance: it has played an important role since the inception of AI, and has developed into a substantial field of its own. One of the prominent areas in KR concerns the representation of actions, performed by either the system to be devised itself or the actors in its environment. Of course, besides their pure representation, reasoning about actions is also important, since representation and reasoning with these representations are deemed to be closely connected within KR (which is sometimes also called KR&R, knowledge representation & reasoning). A related, more recent development within AI is that of basing the construction of intelligent systems on the concept of an (intelligent) agent, an autonomously acting entity, for which, by its very nature, logics of action play a crucial role in obtaining a logical description and specification.

4.1 Representing and reasoning about actions

As said above, the representation of actions and formalisms/logics to reason with them are very central to AI and particularly the field of KR. One of the main problems that one encounters in the literature on reasoning about actions in AI, and much more so than in mainstream computer science, is the so-called frame problem [McCarthy and Hayes 1969]. Although this problem has been generalized by philosophers such as Dennett [Dennett 1984] to a general problem of relevance and salience of properties pertaining to action, the heart of the problem is that in a ‘common-sense’ setting such as one encounters in AI, it is virtually impossible to specify all the effects of the actions of concern, as well as, notably, all non-effects. For instance, given an action, think about what changes if the action is performed and what does not; generally the latter is much more difficult to produce than the former, leading to large, complex attempts to specify the non-effects. But there is of course also the problem of relevance: what aspects are relevant for the problem at hand; which properties do we need to take into consideration? In particular, this also pertains to the preconditions of an action that would guarantee its successful performance/execution. Again, in a common-sense environment, these are formidable, and one can always think of another (pre)condition that should be incorporated. For instance, for successfully starting the motor of a car, there should be a charged battery, sufficient fuel, …, but also not too cold weather, or even sufficient power in your fingers to be able to turn the starting key, the presence of a motor in the car, … etc. etc. In AI one tries to find a solution to the frame problem that involves the smallest possible specification. Although this problem gave rise to so-called defeasible or non-monotonic solutions such as defaults (‘normally a car has a motor’), which in itself gave rise to a whole new realm within AI called nonmonotonic or commonsense reasoning, this is beyond the scope of this entry (we refer the interested reader to the article by Thomason [Thomason 2003] in this encyclopedia). We focus here on a solution that does not appeal to nonmonotonicity (directly).

Reiter [Reiter 2001] has proposed a (partial) solution within a framework called the situation calculus, which has been very popular in KR, especially in North America, since it was proposed by John McCarthy, one of the founding fathers of AI [McCarthy 1963b, McCarthy 1986]. The situation calculus is a dialect of first-order logic with some mild second-order features, especially designed to reason about actions. (One of its distinctive features is the so-called reification of semantic notions such as states or possible worlds (as well as truth predicates) into syntactic entities (‘situations’) in the object language.) For the sake of uniformity in this entry and for reasons of space, we will try to render Reiter's idea within (first-order) dynamic logic, or rather a slight extension of it. (We need action variables to denote action expressions, equalities between action variables and actions (or rather action expressions), as well as (universal) quantification over action variables.)

What is known as Reiter's solution to the frame problem assumes a so-called closed system, that is to say, a system in which all (relevant) actions and changeable properties (in this setting often called ‘fluents’ to emphasize their changeability over time) are known. By this assumption it is possible to express, in a very succinct and elegant manner, both the (non)change resulting from performing actions and the check of the preconditions that ensure successful performance, and to capture this in a so-called successor state axiom of the form

(∀A)[Poss(A) → (([A]f(x)) ↔ (γ_f^+(x, A) ∨ (f(x) ∧ ¬γ_f^-(x, A))))]

where A is an action variable, and γ_f^+(x, A) and γ_f^-(x, A) are ‘simple’ expressions without action modalities, expressing the conditions for f becoming true and false, respectively. So the formula is read informally as: under certain preconditions pertaining to the action A at hand, the fluent (predicate) f becomes true of arguments x if and only if either the condition γ_f^+(x, A) holds, or f(x) holds (before the execution of A) and the condition γ_f^-(x, A) (which would cause it to become false) does not hold. Furthermore, the expression Poss(A) is used schematically in such axioms; the whole action theory should be complemented with so-called precondition axioms of the form φ_A → Poss(A) for concrete expressions φ_A stating the actual preconditions needed for a successful execution of A.

To see how this works out in practice we consider a little example in a domain where we have a vase v which may be broken or not (so we have broken as a fluent), and actions drop and repair. We also assume the (non-changeable) predicates fragile and held-in-hand of an object. Now the successor state axiom becomes

(∀A)[Poss(A) → ([A]broken(v) ↔ ((A = drop(v) ∧ fragile(v)) ∨ (broken(v) ∧ A ≠ repair(v))))]

and as precondition axioms we have held-in-hand(x) → Poss(drop(x)) and broken(x) → Poss(repair(x)). This action theory is very succinct: one needs only one successor state axiom per fluent and one precondition axiom per action.
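The vase theory can be executed directly by progression: a state is the set of atoms that hold, and the successor state axiom is read as a transition function. The following Python sketch makes the simplifying assumptions that only the fluent broken changes and that the other predicates are rigid; the encoding and all names are illustrative, not Reiter's own formulation.

    def poss(action, state):
        act, x = action
        # Precondition axioms: held-in-hand(x) -> Poss(drop(x)),
        #                      broken(x)       -> Poss(repair(x)).
        if act == "drop":
            return ("held-in-hand", x) in state
        if act == "repair":
            return ("broken", x) in state
        return False

    def progress(action, state):
        # Successor state axiom for broken(y):
        # [A]broken(y) iff (A = drop(y) and fragile(y))
        #                  or (broken(y) and A != repair(y))
        assert poss(action, state), "precondition violated"
        act, x = action
        new = {f for f in state if f[0] != "broken"}
        for (_, y) in {f for f in state if f[0] == "broken"} | {("broken", x)}:
            gamma_plus = act == "drop" and x == y and ("fragile", y) in state
            gamma_minus = act == "repair" and x == y
            if gamma_plus or (("broken", y) in state and not gamma_minus):
                new.add(("broken", y))
        return new

    s0 = {("fragile", "v"), ("held-in-hand", "v")}
    s1 = progress(("drop", "v"), s0)
    print(("broken", "v") in s1)    # True: v was fragile and dropped
    s2 = progress(("repair", "v"), s1)
    print(("broken", "v") in s2)    # False: repairing removes brokenness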

Finally in this subsection we must mention some other well-known approaches to reasoning about action and change. The event calculus [Kowalski and Sergot 1986, Shanahan 1990, Shanahan 1995] and fluent calculus [Thielscher 2005] are alternatives to the situation-based representation of actions in the situation calculus. The reader is also referred to [Sandewall and Shoham 1994] for historical and methodological issues as well as the relation with non-monotonic reasoning. These ideas have led to very efficient planning systems (e.g., TALplanner [Kvarnström and Doherty 2000]) and practical ways to program robotic agents (e.g., the GOLOG family [Reiter 2001] of languages based on the situation calculus, and FLUX [Thielscher 2005] based on the fluent calculus).

4.2 Description and specification of intelligent agents

In the last two decades the notion of an intelligent agent has emerged as a unifying concept to discuss the theory and practice of artificial intelligence (cf. [Russell and Norvig 1995, Nilsson 1998]). In short, agents are software entities that display forms of intelligence/rationality and autonomy. They are able to take initiative and make decisions to take action on their own without direct control of a human user. In this subsection we will see how logic (of action) is used to describe / specify the (desired) behavior of agents (cf. [Wooldridge 2002]). First we focus on single agents, after which we turn to settings with multiple agents, called multi-agent systems (MAS) or even agent societies.

4.2.1 Single agent approaches

Interestingly, the origin of the intelligent agent concept lies in philosophy.

First of all there is a direct link with practical reasoning in the classical philosophical tradition going back to Aristotle. Here one is concerned with reasoning about action in a syllogistic manner, such as the following example taken from Audi ([Audi 1999], p. 729):

Would that I exercise.
Jogging is exercise.
Therefore, I shall go jogging.

Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of theoretical reasoning, on closer inspection it appears that this syllogism does not express a purely logical deduction. (The conclusion does not follow logically from the premises.) It rather constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, viz. his/her beliefs (jogging is exercise) and his/her desires or goals (would that I exercise). So, practical reasoning is reasoning directed toward action, the process of figuring out what to do, as Wooldridge [Wooldridge 2000] puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation.

Dennett [Dennett 1971] has put forward the notion of the intentional stance: the strategy of interpreting the behaviour of an entity by treating it as if it were a rational agent that governed its choice of action by a consideration of its beliefs and desires. As such it is an anthropomorphic instance of the so-called design (functionality) stance, as opposed to the physical stance, towards systems. This stance has proved to be extremely influential, not only in cognitive science and biology/ethology (in connection with animal behavior), but also as a starting point for thinking about artificial agents.

Finally, and most importantly, there is the work of the philosopher Michael Bratman [Bratman 1987], which, although aimed in the first instance at human agents, lays the foundation of the BDI approach to artificial agents. In particular, Bratman makes a case for the incorporation of the notion of intention in describing agent behavior. Intentions play the important role of selecting actions that are desired, with a distinct commitment attached to the actions thus selected. Unless there is a rationale for dropping a commitment (such as the belief that an intention has been achieved already or the belief that it is impossible to achieve), the agent should persist, or persevere, in its commitment, stick to it, so to speak, and try to realize it.

After Bratman's philosophy was published, researchers tried to formalize this theory using logical means. We mention here three well-known approaches. Cohen & Levesque [Cohen and Levesque 1991] tried to capture Bratman's theory in a linear-time style temporal logic to which they added primitive operators for belief and goal as well as some operators to cater for actions, such as operators expressing that an action is about to be performed (HAPPENS α), has just been performed (DONE α), and which agent is the actor of a primitive action (ACT i α: agent i is the actor of α). From this basic set-up they build a framework in which ultimately the notion of intention is defined in terms of the other notions. In fact they define two notions: an intention_to_do and an intention_to_be. First they define the notion of an achievement goal (A-Goal): an A-Goal is something that is a goal to hold later, but is believed not to be true now. Then they define a persistent goal (P-Goal): a P-Goal is an A-Goal that is not dropped before it is believed to be achieved or believed to be impossible. The intention to do an action is then defined as the P-Goal of having done the action, in a way such that the agent was aware of it happening. The intention to achieve a state satisfying φ is the P-Goal of having done some action that has φ as a result, where the agent was aware of something happening leading to φ, and such that what actually happened was not something that the agent explicitly had not as a goal.

Next there is Rao & Georgeff's formalization of BDI agents using the branching-time temporal logic CTL [Rao and Georgeff 1991, Rao and Georgeff 1998, Wooldridge 2000]. On top of CTL they introduce modal operators for Belief (BEL), Goal (GOAL) (sometimes replaced by Desire (DES)) and Intention (of the to_be kind, INTEND) as well as operators to talk about the success (succeeded(e)) and failure (failed(e)) of elementary actions e. So they do not try to define intention in terms of other notions, but rather introduce intention as a separate operator, the meaning of which is later constrained by ‘reasonable’ axioms. The formal semantics is based on Kripke models with accessibility relations between worlds for the belief, goal and intention operators. However, possible worlds here are complete time trees (modeling the various behaviors of the agent) on which CTL formulas are interpreted in the usual way. Next they propose a number of postulates/axioms that they regard as reasonable interactions between the operators, and constrain the models of the logic accordingly so that these axioms become validities. For example, they propose the formulas GOAL(α) → BEL(α) and INTEND(α) → GOAL(α), for a certain class of formulas α, of which α = E(ψ) is a typical example. Here E stands for the existential path quantifier in CTL. Rao & Georgeff also show that one can express commitment strategies in their logic. For example, the following expresses a ‘single-minded committed’ agent, which stays committed to its intention until it believes it has achieved it or thinks it is impossible (which is very close to what we saw in the definition of intention in the approach of Cohen & Levesque):

INTEND(A ◇ φ) → (INTEND(A ◇ φ) until (BEL(φ) ∨ ¬BEL(E ◇ φ)))

where A stands for the universal path quantifier in CTL.
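Read operationally, a commitment strategy is a policy in an agent's control loop. The following Python sketch renders single-minded commitment as such a loop; it is an illustrative operational reading of the formula above, not the Rao-Georgeff semantics itself, and all names in it are our own assumptions.

    def single_minded(intention, believes_achieved, believes_impossible,
                      act, max_steps=100):
        # Keep the intention until it is believed achieved or believed
        # impossible; otherwise keep acting on it.
        for _ in range(max_steps):
            if believes_achieved(intention):
                return "achieved"
            if believes_impossible(intention):
                return "dropped"
            act(intention)  # pursue the intention one step
        return "timeout"

    # Toy run: the intention to reach level 3; each act raises the level by one.
    state = {"level": 0}
    result = single_minded(
        intention=3,
        believes_achieved=lambda g: state["level"] >= g,
        believes_impossible=lambda g: False,
        act=lambda g: state.update(level=state["level"] + 1),
    )
    print(result, state)  # achieved {'level': 3}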

Finally there is the KARO approach by Van Linder et al. [Van der Hoek et al. 1998, Meyer et al. 1999], which takes dynamic logic as a basis instead of a temporal logic. First a core is built, consisting of the language of propositional dynamic logic augmented with modal operators for knowledge (K), belief (B) and desire (D) as well as an operator (A) that stands for ability to perform an action. Next the language is extended mostly by abbreviations (definitions in terms of the other operators) to get a fully-fledged BDI-like logic. The most prominent operators are:

The framework furthermore has special actions commit and uncommit to control the agent's commitments. The semantics of these actions is such that the agent obviously can only commit to an action α if there is good reason for it, viz. that there is a possible intention of α with a known goal φ as result. Furthermore, the agent cannot uncommit to a certain action α that is part of the agent's commitments as long as there is a good reason for it to be committed to α, i.e., as long as there is some possible intention in which α is involved. This results in the following validities in KARO: (Here I(α, φ) denotes the possibly-intend operator and Com(α) is an operator that expresses that the agent is committed to the action α, which is similar to Cohen & Levesque's intention-to-do operator INTEND1 in [Cohen and Levesque 1990].)

Informally these axioms say the following: if the agent possibly intends an action for fulfilling a certain goal, then it has the opportunity to commit to this action, after which it is recorded on its agenda; as long as an agent possibly intends an action, it is not able to uncommit to it (this reflects a form of persistence of commitments: as long as there is a good reason for a plan on the agenda, it will have to stay on!); if the agent is committed to an action, it has the opportunity to uncommit to it (but it may lack the ability to do this, cf. the previous axiom); and if an agent is committed to a sequence of two actions, then it knows that it is committed to the first, and it also knows that after performing the first action it will be committed to the second.

Besides this focus on motivational attitudes in the tradition of agent logics in BDI style, the KARO framework also provides an extensive account of epistemic and doxastic attitudes. This is worked out most completely in [Van Linder et al. 1995]. This work hooks into a different strand of research between artificial intelligence and philosophy, viz. Dynamic Epistemic Logic, the roots of which lie in philosophy, linguistics, computer science and artificial intelligence! Dynamic Epistemic Logic (DEL) is the logic of knowledge change; it is not about one particular logical system, but about a whole family of logics that allow us to specify static and dynamic aspects of knowledge and beliefs of agents (cf. [Van Ditmarsch et al. 2007]). The field combines insights from philosophy (about belief revision, AGM-style [AGM 1985], as we have seen in Section 1), dynamic semantics in linguistics and the philosophy of language (as we have seen in Section 2), and reasoning about programs by using dynamic logic (as we have seen in Section 3) with ideas in artificial intelligence about how knowledge and actions influence each other [Moore 1977]. More generally we can see the influence of the logical analysis of information change as advocated by Van Benthem and colleagues [Van Benthem 1989, Van Benthem 1994, Faller et al. 2000]. Also Veltman's update semantics of default reasoning [Veltman 1996], an important reasoning method in artificial intelligence [Reiter 1980, Russell and Norvig 1995], can be viewed as being part of this paradigm.

For the purpose of this entry, it is interesting to note that the general approach taken is to apply a logic of action, viz. dynamic logic, to model information change. This amounts to an approach in which the epistemic (or doxastic) updates are reified in the logic as actions that change the epistemic/doxastic state of the agent. So, for example, in [Van Linder et al. 1995] we encounter actions such as expand(φ), contract(φ) and revise(φ), referring to expanding, contracting and revising one's beliefs with the formula φ, respectively. These can be reasoned about by putting them in dynamic logic boxes and diamonds, so that basically extensions of dynamic logic are employed for reasoning about these updates. It is further shown that these actions satisfy the AGM postulates, so that this approach can be viewed as a modal counterpart of the AGM framework. Very similar in spirit is the work of Segerberg [Segerberg 1995] on Dynamic Doxastic Logic (DDL), the modal logic of belief change. In DDL modal operators of the form [+φ], [*φ] and [−φ] are introduced with the informal meanings “after the agent has expanded/revised/contracted his beliefs by φ”, respectively. Combined with the ‘standard’ doxastic operator B, where Bφ is interpreted as “φ is in the agent's belief set”, one can now express properties like [+φ]Bψ, expressing that after having expanded its beliefs by φ the agent believes ψ (also cf. [Hendricks and Symons 2006]).
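A minimal sketch of these update actions, in Python, over a toy belief state of literals: revision follows the Levi identity (contract by the negation, then expand). Real AGM operates on deductively closed sets; this toy encoding and its names are illustrative assumptions of ours.

    def neg(lit: str) -> str:
        # Negate a literal: p <-> ~p.
        return lit[1:] if lit.startswith("~") else "~" + lit

    def expand(beliefs: frozenset, phi: str) -> frozenset:
        return beliefs | {phi}

    def contract(beliefs: frozenset, phi: str) -> frozenset:
        return beliefs - {phi}

    def revise(beliefs: frozenset, phi: str) -> frozenset:
        # Levi identity: revise(B, phi) = expand(contract(B, ~phi), phi).
        return expand(contract(beliefs, neg(phi)), phi)

    B = frozenset({"p", "~q"})
    print(sorted(revise(B, "q")))   # ['p', 'q']: ~q is given up, q added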

Finally in this subsection we mention recent work where the KARO formalism is used as a basis for describing also other aspects of cognitive behavior of agents, going ‘beyond BDI’, viz. attitudes regarding emotions [Meyer 2006, Steunebrink et al. 2007]. The upshot of this approach is that an expressive logic of action such as KARO can be fruitfully employed to describe how emotions such as joy, gratification, anger, and remorse, are triggered by certain informational and motivational attitudes such as certain beliefs and goals (‘emotion elicitation’) and how, once elicited, the emotional state of an agent may influence its behavior, and in particular its decisions about the next action to take.

4.2.2 Multi-agent approaches

Apart from logics to specify attitudes of single agents, work has also been done to describe the attitudes of multi-agent systems as wholes. First we mention the work by Cohen & Levesque in this direction [Levesque et al. 1990, Cohen and Levesque 1991]. This work was a major influence on a multi-agent version of KARO [Aldewereld et al. 2004]. An important complication in a notion of joint goal involves the persistence of the goal: where in the single-agent case the agent pursues its goal until it believes it has achieved it or believes it can never be achieved, in the context of multiple agents the agent that realizes this has to inform the others in the team, so that the group/team as a whole will come to believe that this is the case and may drop the goal. This is captured in the approaches mentioned above. Related work, but not a logic of action in the strict sense, concerns the logical treatment of collective intentions [Keplicz and Verbrugge 2002].

It must also be mentioned here that, inspired by several sources, among which the work on knowledge and belief updates for individual agents as described by DEL and DDL, combined with work on knowledge in groups of agents such as common knowledge (see e.g., [Meyer and Van der Hoek 1995]), a whole new subfield has arisen, which can be seen as the multi-agent (counter)part of Dynamic Epistemic Logic. This deals with matters such as the logic of public announcement, and more generally actions that have an effect on the knowledge of groups of agents. This has generated quite some work by different authors such as Plaza [Plaza 1989], Baltag [Baltag 1999], Gerbrandy [Gerbrandy 1998], Van Ditmarsch [Van Ditmarsch 2000] and Kooi [Kooi 2003]. For example, public announcement logic [Plaza 1989] contains an operator of the form [φ]ψ, where both φ and ψ are formulas of the logic, expressing “after announcement of φ, it holds that ψ”. This logic can be seen as a form of dynamic logic again, where the semantic clause for [φ]ψ reads (in informal terms): [φ]ψ is true in a model-state pair iff the truth of φ in that model-state pair implies the truth of ψ in a model-state pair where the state is the same, but the model is transformed to capture the information contained in φ. Also in the other approaches the transformation of models induced by communicated information plays an important role, notably in the approach by Baltag et al. on action models [Baltag 1999, Baltag and Moss 2004]. A typical element in this approach is that in action model logic one has both epistemic and action models, and that the update of an epistemic model by an epistemic action (an action that affects the epistemic state of a group of agents) is represented by a (restricted) modal product of that epistemic model and an action model associated with that action. (See [Van Ditmarsch et al. 2007], p. 151; this book is a recent comprehensive reference to the field.)
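The semantic clause for public announcement is easy to implement: announcing φ restricts the model to the worlds where φ holds, after which knowledge is re-evaluated in the restricted model. A minimal Python sketch, with illustrative names of our own:

    def knows(agent, phi, world, worlds, indist):
        # K_agent phi at world: phi holds in all worlds the agent cannot
        # distinguish from the actual one (within the current model).
        return all(phi(v) for v in worlds if (world, v) in indist[agent])

    def announce(phi, worlds):
        # [phi]: eliminate the worlds where phi fails.
        return {w for w in worlds if phi(w)}

    # Two worlds: in w1 it rains, in w2 it does not; agent b cannot tell.
    worlds = {"w1", "w2"}
    indist = {"b": {(u, v) for u in worlds for v in worlds}}
    rain = lambda w: w == "w1"

    print(knows("b", rain, "w1", worlds, indist))      # False
    worlds2 = announce(rain, worlds)                   # publicly announce "it rains"
    print(knows("b", rain, "w1", worlds2, indist))     # True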

Finally we mention logics that incorporate notions from game theory to reason about multi-agent systems, such as game logic, coalition logic and alternating-time temporal logic (ATL, which we also encountered at the end of the section on mainstream computer science!), and its epistemic variant ATEL [Van der Hoek and Wooldridge 2003, Van der Hoek et al. 2007]. For instance, game logic is an extension of PDL to reason about so-called determined 2-player games. This area is currently growing fast, aimed also at the application of verifying multi-agent systems (cf. [Van der Hoek et al. 2007]). The latter still constitutes somewhat of a holy grail in agent technology. On the one hand there are many logics to reason about both single and multiple agents, while on the other hand multi-agent systems are being built that need to be verified. To this day there is still a gap between theory and practice. Much work is being done to devise logical means combining the agent logics discussed with the logical techniques from mainstream computer science for the verification of distributed systems (from Section 3), but we are not there yet…!

Conclusion

In this entry we have briefly reviewed the history of the logic of action, in philosophy, in linguistics, in computer science and in artificial intelligence. Although the ideas and techniques we have considered were developed in these separate communities in a quite independent way, we feel that they are nevertheless very much related, and by putting them together in this entry we hope we have contributed in a modest way to some cross-fertilization between these communities regarding this interesting and important subject.

Bibliography

Other Internet Resources

[Please contact the author with suggestions.]

Related Entries

events | frame problem | logic: dynamic epistemic | logic: non-monotonic | logic: propositional dynamic | logic: temporal | semantics: dynamic | situations: in natural language semantics | speech acts