B11. CAUSALITY

1. Causal Complexes and the Predicate "cause"

The account of causality we use here is that of Hobbs (2005). This distinguishes between the monotonic, precise notion of "causal complex" and the nonmonotonic, defeasible notion of "cause". The former gives us mathematical rigor; the latter is more useful for everyday reasoning and can be characterized in terms of the former. We begin with an abbreviated account of these concepts.

When we flip a switch to turn on a light, we say that flipping the switch caused the light to turn on. But for this to happen, many other factors had to be in place. The bulb had to be intact, the switch had to be connected to the bulb, the power had to be on in the city, and so on. We will use the predicate "cause" for flipping the switch, and introduce the predicate "causalComplex" to refer to the set of all the states and events that have to hold or happen for the effect to happen. Thus, the states of the bulb, the wiring, and the power supply would all be in the causal complex.

The predicate "causalComplex" has two arguments, a set of eventualities (the cause and the various preconditions) and an eventuality (the effect). We will also allow the first argument to be a single eventuality.

(forall (s e)                                                    (1)
   (if (causalComplex s e)
       (and (eventuality e)
            (or (eventualities s)(eventuality s)))))

That is, if s is a causal complex for effect e, then e is an eventuality, and s is either an eventuality or a set of eventualities.

Because we can view a single eventuality as a singleton set of eventualities, the expressions "(causalComplex e1 e)" and "(and (causalComplex s e)(singleton s e1))" are equivalent. Because a set of eventualities really exists exactly when the conjunction of its members really exists, the expressions "(and (causalComplex s e)(doubleton s e1 e2))" and "(and (causalComplex e0 e)(and' e0 e1 e2))" are likewise equivalent.
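The set-based definition can be illustrated with a small executable sketch. This is our own illustration, not part of the formal theory: eventualities are modeled as invented string names, really-existing is modeled simply as membership in a `holds` set, and the effect obtains when the whole complex holds.

```python
# A causal complex modeled as the set of all eventualities that must hold
# or happen for the effect to happen. All names are invented for the
# light-switch example; none come from the formal theory itself.
LIGHT_ON_COMPLEX = frozenset({
    "flip_switch",           # the eventuality we would single out as the "cause"
    "bulb_intact",           # preconditions that are normally presumed to hold
    "switch_wired_to_bulb",
    "power_on_in_city",
})

def effect_obtains(causal_complex, holds):
    """The effect obtains exactly when every member of the complex holds."""
    return causal_complex <= holds

# All members really exist: the light turns on.
print(effect_obtains(LIGHT_ON_COMPLEX, LIGHT_ON_COMPLEX | {"dog_barking"}))  # True
# The bulb is burnt out, so the complex is incomplete: no effect.
print(effect_obtains(LIGHT_ON_COMPLEX, LIGHT_ON_COMPLEX - {"bulb_intact"}))  # False
```

Irrelevant eventualities (the barking dog) can hold without disturbing anything; what matters is only that every member of the complex does.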
These equivalences will allow us to be sloppy in referring to causal complexes as eventualities, sets of eventualities, or conjunctions of eventualities.

Causal complexes have two primary features. The first is that if all of the eventualities in the causal complex obtain or occur, then so does the effect. This property interacts with time, so we will state the axiom in Chapter B12. The second property is that each of the members of the causal complex is relevant, in the sense that if it is removed from the set, the remainder is not a causal complex for the effect.

(forall (s s1 e1 e)                                              (2)
   (if (and (causalComplex s e)(member e1 s)(deleteElt s1 s e1))
       (not (causalComplex s1 e))))

It may be that we can still achieve e in another manner, but that would involve adding other eventualities to the causal complex; we would no longer have s1. For example, flipping the switch e1 may be part of a causal complex s for turning a light off e. If I remove this from the set of eventualities that really exist, the remaining set s1 is not a causal complex for turning the light off. I could add another eventuality, say, unscrewing the light bulb. But that would be a different causal complex, neither s nor s1.

A common approach to causality is to treat it in terms of counterfactuals (e.g., Lewis, 1973). The sentence "Flipping a switch causes the light to go on" is true, so goes the first attempt at this reduction, exactly when the counterfactual sentence "If the switch hadn't been flipped, the light would not have gone on" is true. The standard counterexample to this is illustrated in the movie "Gosford Park". A woman knows her son is going to kill his father, so she gets there first and poisons him. When the son approaches, the father is slumped over dead. The son thinks he is sleeping, and stabs him. Did the woman's poisoning the father cause him to die? In this case, the counterfactual is not true.
If the woman had not poisoned the father, he would have died anyway, at the hand of the son. This sort of example forces one to tinker with the account of causality in terms of counterfactuals. But in terms of causal complexes, there is nothing paradoxical about this example, and there is no violation of Axiom (2). The son's stabbing of his father was not in the causal complex for his father's dying. All he did was stab his father's corpse. Without that stabbing, the death would still have occurred, so Axiom (2) tells us that the stabbing was not part of the operative causal complex. If we remove the poisoning from the causal complex, the remainder of the causal complex would not have brought about the death. If we add the stabbing to that remainder, the death would have occurred, but that is a different causal complex.

In practice, we can never specify all the eventualities in a causal complex for an event. So while the notion gives us a precise way of thinking about causality, it is not adequate for the kind of practical reasoning we do in planning, explaining, and predicting. For this, we need the defeasible notion of "cause".

In a causal complex, for most events we can bring about, the majority of the eventualities are normally true. In the light bulb case, it is normally true that the bulb is not burnt out, that the wiring is intact, that the power is on in the city, and so on. What is not normally true is that someone is flipping the light switch. Those eventualities that are not normally true are identified as causes (cf. Kayser and Nouioua, 20??). They are useful in planning, because they are often the actions that the planner or some other agent must perform. They are useful in explanation and prediction because they frequently constitute the new information.
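The identification of causes as the abnormal members of a causal complex can be sketched in a few lines (again an illustration of ours, not the book's formalism; the notion "presumable" is modeled crudely as a plain set of eventuality names).

```python
# Sketch: within a causal complex, the members that are normally (presumably)
# true are background conditions; the remaining members are identified as
# causes. All names are invented for the light-switch example.
def causes_in(causal_complex, presumable):
    """Return the members of the complex that are not normally presumed to hold."""
    return causal_complex - presumable

complex_ = frozenset({"flip_switch", "bulb_intact", "wiring_intact", "power_on"})
presumable = frozenset({"bulb_intact", "wiring_intact", "power_on"})

print(causes_in(complex_, presumable))   # only the switch-flipping remains
```

This makes vivid why causes are what planning and explanation latch onto: they are exactly the members that cannot be taken for granted.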
In Hobbs (2005) the interpretation of the predicate "cause" is constrained by axioms involving the largely unexplicated notion of "presumable", among others; most elements of a causal complex can be presumed to hold, and the others are identified as causes. We won't repeat that development here, but we will place some looser constraints on causes. We will use the predicate "cause0" here, because in the next section we will introduce a "cause" predicate that allows agents as well as eventualities to be causes.

First, a cause is an eventuality in a causal complex.

(forall (e1 e2)                                                  (3)
   (if (cause0 e1 e2)
       (exists (s)
          (and (causalComplex s e2)(member e1 s)))))

This allows only single eventualities to be causes, and of course many events have multiple causes. But this is not a limitation, because we can always bundle the multiple causes into a single conjunction of causes. So if e1 is pouring starter fluid onto a pile of firewood and e2 is lighting a match, then the cause of the fire starting is e3, where "(and' e3 e1 e2)" holds.

This notion of "cause0" as the conjunction of the nonpresumable eventualities in a causal complex fails to cover the case of "Gravity is the cause of my desk staying on the floor." Perhaps the predicate "presumable" should be replaced by "nonproblematic"; in making the statement about gravity, we are "problematizing" the effect of gravity. In any case, the explication of the predicates "presumable" and "problematic" would require the kind of theories of human cognition and action that we present in Part C of this book.

The principal useful property of "cause" is a kind of causal modus ponens: when the cause happens or holds, then, defeasibly, so does the effect. This interacts with time, so we defer stating the axiom until Chapter B12.

Causality is not, strictly speaking, transitive.
Shoham (1990) gives as an example that making a car lighter causes it to go faster, and taking the engine out causes the car to be lighter, but taking the engine out does not cause the car to go faster. In the second action, we have undone one of the presumable conditions in the causal complex for the first action. The two causal complexes are inconsistent. However, when they are consistent, "cause" is transitive, so we can say that "cause" is defeasibly transitive.

(forall (e1 e2 e3)                                               (4)
   (if (and (cause0 e1 e2)(cause0 e2 e3)(etc))
       (cause0 e1 e3)))

2. Agents and Agenthood

There are some entities in the world that are viewed as being capable of initiating a causal chain. This is a scientifically inaccurate view; when a dog stands up and walks across a room, there are events in its brain that caused it to do so. But the idea pervades commonsense reasoning. We say that the dog's walking was caused by the dog, and don't necessarily expect to find anterior causes. We will call such entities agents. People are the prime examples of agents, but the class also includes robots and other intelligent software, higher animals, organizations, and a variety of fictional entities like gods, ghosts, and goblins. Frequently, when we use a person metaphor for some other type of entity, agenthood is what motivates us.

In this book we will use the predicate "agent" to describe this broader class of entities. Much in the cognitive theories of Part C describes abstract properties that would be true of anything capable of what might be called cognition, and those elements of the theories will be attributed to agents in general. Properties that are specific to persons, such as emotions and human perceptual organs, will be attributed only to persons.

We defined "cause0" as applying only to eventualities. We can now define "cause" as a predicate like "cause0" except that it also allows agents as its first argument. In Chapter C?? we will talk about the process by which plans become actions.
This can be subsumed under the predicate "will". The expression "(will a e)" means that agent a wills to carry out eventuality e, and then e happens with no intermediate causes. Then for an agent a to "cause" an event e is for a's willing e to function as a "cause0". The following axiom defines "cause" to be "cause0" when the first argument is an eventuality. When the first argument is an agent, that agent's willing is the "cause0" of the second argument.

(forall (a e2)                                                   (5)
   (and (if (eventuality a)
            (iff (cause a e2)(cause0 a e2)))
        (if (agent a)
            (iff (cause a e2)
                 (exists (e1)
                    (and (will' e1 a e2)(cause0 e1 e2)))))
        (if (not (or (eventuality a)(agent a)))
            (not (cause a e2)))))

The chief property of agents is that they are, defeasibly, capable of causing some events.

(forall (a)                                                      (6)
   (if (and (agent a)(etc))
       (exists (e) (cause a e))))

Case roles common in linguistics can be defined in terms of core theories. In particular, the agent of an event is an agent that causes it.

(forall (a e)                                                    (7)
   (iff (agentOf a e)
        (and (agent a)(cause a e))))

In this formulation, we are silent about whether there are prior causes for the willing of the event. There may or may not be a prior cause.

The word "action" will not be a technical term in this theory, but for convenience we will use it informally to refer to events that have an agent. Doing an action can be defined as follows:

(forall (a e)                                                    (8)
   (iff (do a e)
        (and (agentOf a e)(Rexist e))))

Agent a has done an action e if and only if a is the agent of the action and the action really takes place.

Although we will not deal with other "case labels" in depth, it would perhaps be of interest to define some of the others in this framework, specifically, "instrumentOf", "objectOf", "sourceOf" and "goalOf". What we are calling the "object" of an action has also been called the "patient", and also, more bizarrely, the "theme". It is the entity that goes through a change in the final stage of the causal chain.
(forall (x e)                                                    (9)
   (iff (objectOf x e)
        (or (changeIn' e x)
            (exists (e1 e2)
               (and (and' e e1 e2)(cause e1 e2)(objectOf x e2))))))

That is, x is the object of an event if the event is a change in x, or, recursively, if the event is a causal chain of subevents, the final event of which has x as its object.

An instrument is an entity that the agent causes to go through a change of state, where this change plays an intermediate role in the causal chain.

(forall (y e)                                                    (10)
   (iff (instrumentOf y e)
        (exists (a e1)
           (and (agentOf a e1)(changeIn' e1 y)
                (or (cause e1 e)
                    (exists (e2)
                       (and (cause e1 e2)(and' e e1 e2))))))))

That is, y is an instrument of an event e if the agent causes a change in y, and that causes e or the end state in e.

When the property that changes in the object is a real or metaphorical "at" relation, say, from "(at x z)" to "(at x w)", then we can call z the "source" and w the "goal". However, since the predicate "goal" with a different meaning plays such a huge role in this book, we will call this case label the "terminusOf" the action or event.

(forall (z e)                                                    (11)
   (iff (sourceOf z e)
        (exists (x w e1 e2 s)
           (and (at' e1 x z s)(at' e2 x w s)(change' e e1 e2)))))

(forall (w e)                                                    (12)
   (iff (terminusOf w e)
        (exists (x z e1 e2 s)
           (and (at' e1 x z s)(at' e2 x w s)(change' e e1 e2)))))

Agents frequently work together. When they do, we will call the set of agents a "collective". So far, we can state that collectives are sets of agents.

(forall (s)
   (if (collective s)
       (forall (a)
          (if (member a s)(agent a)))))

We will introduce further properties of collectives in Part C, namely, that they have mutual beliefs and common goals.

3. Other Causal Predicates

The predicate "cause" is important in planning, explanation, and prediction, but for many cognitive acts, we will rely on a much looser notion -- that of being simply causally involved. An eventuality e1 is causally involved in bringing about some effect e if it is in some causal complex for e.
(forall (e1 e2)                                                  (13)
   (iff (causallyInvolved e1 e2)
        (exists (s)
           (and (causalComplex s e2)(member e1 s)))))

A causal complex consists of causes and other, presumable or nonproblematic, eventualities. The latter are frequently referred to as enabling conditions or preconditions. For these, we will introduce the predicate "enable". One eventuality e1 enables another e2 if it is a non-cause part of a causal complex s for e2. In the preliminary predicate "enable0" we include the causal complex s as one of the arguments, because there may be many ways to achieve the effect, only some of which require e1 to hold.

(forall (e1 e2 s)                                                (14)
   (iff (enable0 e1 e2 s)
        (and (causalComplex s e2)(member e1 s)
             (not (cause e1 e2)))))

The expression "(enable0 e1 e2 s)" says that e1 is an enabling condition for e2 provided it is the causal complex s that will be used to bring about e2. If an eventuality e1 is required for any way of bringing about e2, then we can use the two-argument predicate "enable" -- "(enable e1 e2)".

(forall (e1 e2)                                                  (15)
   (iff (enable e1 e2)
        (forall (s)
           (if (causalComplex s e2)(enable0 e1 e2 s)))))

If an enabling condition does not hold, then the effect will not occur. Since the enabling condition is presumable, its negation is not presumable. We thus have a causal complex for the negation of the effect in which the negation of the enabling condition is not presumable, and hence a cause. More succinctly, if e1 enables e2, then not-e1 causes not-e2.

(forall (e1 e2)                                                  (16)
   (iff (enable e1 e2)
        (forall (e3)
           (if (not' e3 e1)
               (exists (e4)
                  (and (not' e4 e2)(cause e3 e4)))))))

That is, e1 enables e2 just in case any negation of e1 causes some negation of e2.

In the STRIPS model of Fikes and Nilsson (1971), the enabling conditions correspond to the preconditions and the body corresponds to the cause. The added and deleted states correspond to the effect.

Two related notions are "allow" and "prevent".
An eventuality e1 allows an eventuality e2 if e1 does not cause not-e2.

(forall (e1 e2)                                                  (17)
   (iff (allow e1 e2)
        (forall (e4)
           (if (not' e4 e2)(not (cause e1 e4))))))

An eventuality e1 prevents e2 if e1 causes not-e2.

(forall (e1 e2)                                                  (18)
   (iff (prevent e1 e2)
        (exists (e4)
           (and (not' e4 e2)(cause e1 e4)))))

There are two weaker varieties of the predicate "cause" that are occasionally useful. The first is "partiallyCause". An eventuality e1 partially causes another eventuality e2 if e1's conjunction with another eventuality e3 causes e2, while neither e1 nor e3 alone causes e2.

(forall (e1 e2)                                                  (19)
   (iff (partiallyCause e1 e2)
        (exists (e3 e4)
           (and (not (cause e1 e2))(not (cause e3 e2))
                (and' e4 e1 e3)(cause e4 e2)))))

The second predicate is "tcause". The expression "(tcause e1 e2)" means that e1 tends to cause e2. Very often in planning we can't be sure our actions will actually cause the desired outcome. We only know that they will increase its probability, and we proceed on this basis.

In Hobbs (2005) the following account of probabilistic causality is given. Suppose s is a causal complex for an effect e, and suppose we are certain that a subset s1 of s actually holds. Then when we say that s1 will bring about e with probability p, we are simply saying that p is the joint probability of the eventualities in s-s1. If the probabilities are high enough to be called "likely", that is, if they are, distributionally and/or functionally, in the high region of the "likelihood" scale, then we can use the predicate "tcause".

If "(cause e1 e2)" holds, then the other eventualities in the causal complex for e2 can be presumed to hold. If "(tcause e1 e2)" holds, then the other eventualities in the causal complex for e2 are merely likely. Both "cause" and "tcause" are only defeasible, but "tcause" is more defeasible; it is more likely to be defeated.
(forall (e1 e2 c)                                                (20)
   (iff (tcause e1 e2 c)
        (exists (s s1)
           (and (causalComplex s e2)(member e1 s)
                (deleteElt s1 s e1)(likely s1 c)
                (if (Rexist s)(cause e1 e2))))))

That is, e1 tends to cause e2 with respect to a set of constraints c if e1 is in a causal complex for e2, if the rest of s is likely given constraints c, and if e1 would be singled out as a cause of e2 provided s actually obtains.

It will also be useful to have a predicate that makes the value of the likelihood explicit. Its definition is very similar to that of "tcause".

(forall (e1 e2 q)                                                (21)
   (iff (tcauseq e1 e2 q)
        (exists (s s1)
           (and (causalComplex s e2)(member e1 s)
                (deleteElt s1 s e1)(likelihood q s1)
                (if (Rexist s1)(cause e1 e2))))))

That is, e1 tends to cause e2 with likelihood q if e1 is in a causal complex for e2, if the likelihood of the rest of s is q, and if e1 would be singled out as a cause of e2 provided the rest of s actually obtains. The predicates "likelihood" and "likely" are explicated in Chapter B16.

The notion "cause" is stronger than the notion "tcause", in the sense that if e1 causes e2 then it tends to cause e2.

(forall (e1 e2)                                                  (22)
   (if (cause e1 e2)(tcause e1 e2)))

4. Ability

The concept of "ability" is difficult to characterize. It is closely related to possibility. We will see in Chapter B16 that possibility is characterized with respect to a set of constraints. Something is possible if those constraints do not rule it out. Ability is likewise relative to an implicit set of constraints. Suppose Joan is sleeping when we ask, "Is Joan able to play tennis?" The answer is clearly no if we include her sleeping as one of the constraints. If we don't, it may well be yes. But when we speak of an agent's ability to do something, we generally remove from consideration eventualities that are beyond the agent's control. For example, is Joan able to play tennis if all the tennis courts within reach are already occupied?
In that case, it would not be _possible_ for her to play tennis, but she is still _able_ to play tennis. A person's ability to perform an action is normally viewed as the action being possible provided all the eventualities not under his or her control go the right way.

First we define the eventualities beyond an agent a's control as the subset s1 of eventualities in a set s that a cannot bring about by a's efforts alone. That is, a is not the agent of actions in s1 nor the agent of an action that causes events in s1.

(forall (s1 s a)                                                 (23)
   (iff (evsBeyondControl s1 s a)
        (and (eventualities s)(subset s1 s)(agent a)
             (forall (e)
                (iff (member e s1)
                     (and (member e s)
                          (not (agentOf a e))
                          (not (exists (e1)
                                  (and (agentOf a e1)
                                       (cause e1 e)))))))))))

Now we can say that an agent a is able to do e, given a set of constraints c, if the agent's causing e is possible with respect to c whenever the set s2 of all the events in a causal complex s1 for e that are beyond a's control really exists independently.

(forall (a e c)                                                  (24)
   (iff (able a e c)
        (exists (s1 s2 e1)
           (and (causalComplex s1 e)(cause' e1 a e)(member e1 s1)
                (evsBeyondControl s2 s1 a)
                (if (Rexist s2)(possible e1 c))))))

That is, a is able to do e with respect to constraints c whenever there is a causal complex s1 for effecting e, a's causing e is a member of that causal complex, s2 is the subset of s1 beyond a's control, and if s2 really exists, then a's causing e is possible with respect to constraints c.

If the set of constraints c does not include Joan's sleeping, then it is possible for Joan to play tennis, so she is able to play tennis. If Joan herself has all the requisite skills for playing tennis, that is, if the constraints c do not rule out events under Joan's control in the causal complex for playing tennis, she is able to play tennis with respect to constraints c.

Ability is the state of being able.
(forall (e1 a e c)                                               (25)
   (iff (ability e1 a e c)(able' e1 a e c)))

That is, e1 is a's ability to do e given constraints c if e1 is the state of a's being able to do e given constraints c.

5. Executability

In computational settings, it is relatively easy to say what "executable" means. An action is executable if it is directly implemented in the underlying hardware and its preconditions are satisfied. But more generally, the notion of "executable" is a matter of perspective. From one perspective, one can view driving home from work as an executable action. From a finer-grained perspective, one has to decompose this into actions such as driving a block and turning right. At an even finer granularity, we can take as the executable actions such things as maintaining a certain pressure on the accelerator and turning the steering wheel by a certain amount. There is no limit in principle to how fine-grained the decomposition can be, although when talking about human plans, we are generally satisfied calling something executable if it is an automatic action that does not require conscious thought.

We can posit a notion of "directly causes" ("dcause") as a relation between an agent or event and an event that is true when there are no intermediate, mediating events.

(forall (e1 e2)                                                  (26)
   (iff (dcause e1 e2)
        (and (cause e1 e2)
             (not (exists (e3)
                     (and (cause e1 e3)(cause e3 e2)))))))

Whether or not this is possible in reality is irrelevant; we certainly have the idea in our commonsense thinking. If a vase breaks on being dropped, we think of its hitting the ground as the direct cause of the breaking, without imagining internal stresses in the ceramic material between the hitting and the breaking.

Many bodily and mental actions are seen as directly caused by the person. So when a man moves his arm, he directly causes the arm to move. If an agent can will an event to happen, then this willing is a direct cause of the event.
(forall (e1 a e)                                                 (27)
   (if (and (will' e1 a e)(Rexist e1))
       (dcause e1 e)))

Here, e1 is the action of a's willing e to happen. If e1 really happens, then it is a direct cause of e.

An agent directly causes an event if and only if the agent's willing it to happen directly causes it.

(forall (a e)                                                    (28)
   (if (agent a)
       (iff (dcause a e)
            (exists (e1)
               (and (will' e1 a e)(dcause e1 e))))))

This axiom provides the coercion from events to agents as direct causes. In Chapter B15 on Persons and in Part C on cognition, we will see several examples of events that persons are able to cause directly by willing.

Next we need the concept of an eventuality being "enabled" at a particular time. In Section 3 we defined "enable0", enablement with respect to a particular causal complex, as being a non-cause element of the causal complex. We will say that a causal complex for an eventuality is enabled at time t if all its preconditions hold at time t.

(forall (e0 s e)                                                 (29)
   (iff (enabled' e0 s e)
        (iff (Rexist e0)
             (forall (e1)
                (if (enable0 e1 e s)(Rexist e1))))))

If an agent can directly cause an action that is enabled, then it is executable. Moreover, if the action can be caused by another executable action, then it is executable. Thus, Mary's driving a nail into a board is executable if the enabling conditions hold -- she has the hammer and the board -- because she can directly cause her hand to grasp the hammer and her arm to swing, this will cause the hammer to hit the nail, and that will cause the nail to go into the board.

(forall (e a c)                                                  (30)
   (iff (executable e a c)
        (exists (s)
           (and (enabled s e)
                (or (exists (e1)
                       (and (dcause' e1 a e)(possible e1 c)))
                    (exists (e2)
                       (and (cause e2 e)(executable e2 a c))))))))

The expression "(executable e a c)" says that action e is executable by agent a given constraints c. For this to hold, there must be a causal complex s that brings e about and s must be enabled.
Moreover, either the agent can directly cause e, or, recursively, something that causes e must be executable.

In Chapter C7 on execution envisionment, we will deepen the analysis of executability in the context of planning.

6. Difficulty

If an action is difficult for an agent, it is because there are states and events, i.e., eventualities, that tend to cause the action not to happen. We will attempt to capture this intuition in a predicate called "difficult" that takes an action and its agent as arguments.

(forall (a e)                                                    (31)
   (if (difficult e a)(and (agent a)(agentOf a e))))

The expression "(difficult e a)" says action e is difficult for agent a.

First, we define the set of difficulties associated with an action as the set of eventualities that tend to cause the action not to happen.

(forall (s e)                                                    (32)
   (iff (difficultiesWith s e)
        (forall (e1)
           (iff (member e1 s)
                (exists (e2)
                   (and (not' e2 e)(tcause e1 e2)))))))

The predicate "difficultiesWith" is a relation between a set of eventualities and an eventuality. Thus, it may be used with the predicate "subsetConsistent" from Chapter B8 to constrain the ordering on a scale of difficulty. In a move that will be common in Part C of this book, we will not define the scale of difficulty; we will only constrain it by subset consistency. If the set of difficulties associated with achieving e1 contains the set of difficulties associated with achieving e2, then e1 is more difficult than e2. This fact does not specify the complete structure of the scale, but it does provide a minimal condition on the ordering.

(forall (s s1 e e1)                                              (33)
   (if (and (difficultyScale s e)(difficultiesWith' e1 s1 e))
       (subsetConsistent s e1)))

What we are beginning to capture here is the observation that the more obstructions there are, the more difficult something is. It is often hard to judge whether one action is more or less difficult than another, but when we analyze the issue, we often consider what obstructions there are to each action's achievement.
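The subset-consistency constraint can be made concrete with a small sketch (ours, with invented obstruction names): comparing two actions' sets of obstructions yields a partial ordering of difficulty, not a total one.

```python
# Sketch of subset consistency on a difficulty scale: if e1's obstructions
# contain e2's obstructions, then e1 is at least as difficult as e2.
# Incomparable obstruction sets yield no verdict either way.
# All obstruction names are invented for illustration.
def at_least_as_difficult(obstructions1, obstructions2):
    """True when every obstruction to the second action also obstructs the first."""
    return obstructions2 <= obstructions1

hike_hill = frozenset({"steep_grade"})
hike_mountain = frozenset({"steep_grade", "thin_air", "long_distance"})
swim_lake = frozenset({"cold_water"})

print(at_least_as_difficult(hike_mountain, hike_hill))   # True: superset of obstructions
print(at_least_as_difficult(hike_hill, hike_mountain))   # False
# hike_hill and swim_lake are incomparable: neither set contains the other,
# so subset consistency alone ranks neither above the other.
print(at_least_as_difficult(hike_hill, swim_lake))       # False
print(at_least_as_difficult(swim_lake, hike_hill))       # False
```

The incomparable case reflects the point made above: subset consistency is only a minimal condition on the ordering, not its complete structure.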
Finally, we can say that something is difficult if it is in the Hi region of a difficulty scale; in other words, the difficulty scale is the "scaleFor" the "difficult" property.

(forall (e1 e a)                                                 (34)
   (iff (difficult' e1 e a)
        (and (agent a)(agentOf a e)
             (exists (s)
                (and (difficultyScale s e)(scaleFor s e1))))))

This completes our analysis of "difficult", such as it is. An action is difficult if it is in the functionally or distributionally high region of a scale whose ordering is consistent with the subset ordering on sets of obstructions involved in performing the action.

Predicates Introduced in this Chapter

(causalComplex c e): The collection c of eventualities is a causal complex for effect e.
(cause0 e1 e2): Eventuality e1 causes eventuality e2.
(cause e1 e2): Eventuality or agent e1 causes eventuality e2.
(agent a): a is an agent.
(agentOf a e): Agent a is the agent or cause of eventuality e.
(do a e): Agent a does action e.
(objectOf x e): x is the entity undergoing change in e.
(instrumentOf x e): x is an instrument used in e.
(sourceOf x e): e is a change in an "at" relation and x is the location of the initial "at" relation.
(terminusOf x e): e is a change in an "at" relation and x is the location of the final "at" relation.
(collective s): s is a collective of agents.
(causallyInvolved e1 e2): e1 is in some causal complex for e2.
(enable0 e1 e2 s): Eventuality e1 is a member of a causal complex s for eventuality e2 but not the cause of e2.
(enable e1 e2): Eventuality e1 is a member of every causal complex for eventuality e2 but not the cause of e2.
(allow e1 e2): e1 doesn't cause not-e2.
(prevent e1 e2): e1 causes not-e2.
(partiallyCause e1 e2): e1 together with something else causes e2.
(tcause e1 e2): e1 tends to cause e2.
(tcauseq e1 e2 q): e1 tends to cause e2 with likelihood q.
(evsBeyondControl s1 s a): s1 is the subset of eventualities in s that are not under agent a's control.
(able a e c): Agent a is able to do e under constraints c.
(ability e1 a e c): e1 is agent a's ability to do e under constraints c.
(dcause e1 e2): Eventuality or agent e1 directly causes eventuality e2, without any intermediate causes.
(enabled s e): All the enabling conditions for causal complex s resulting in eventuality e hold.
(executable e a c): Action e is executable by agent a under constraints c.
(difficultiesWith s e): s is the set of obstructions tending to prevent action e from being performed.
(difficultyScale s e): s is a scale for measuring the difficulty of actions of type e.
(difficult e a): Action e is difficult for agent a.

In addition, we used the following four predicates that have not yet been defined, the first three from Chapter B16 and the fourth from Chapters B15 and C??.

(possible e c): e is possible with respect to constraints c.
(likelihood q e): q is the likelihood of e.
(likely e): e is likely.
(will a e): Agent a does e by an act of will.

References

Fikes, Richard, and Nils J. Nilsson, 1971. "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving", {\it Artificial Intelligence}, Vol. 2, pp. 189-208.

Hobbs, Jerry R., 2005. "Toward a Useful Notion of Causality for Lexical Semantics", {\it Journal of Semantics}, Vol. 22, pp. 181-209.

Kayser, Daniel, and Farid Nouioua, 20??. "From the Description of an Accident to its Causes", to appear in {\it Artificial Intelligence}??

Lewis, David K., 1973. {\it Counterfactuals}, Harvard University Press, Cambridge, Massachusetts.

Shoham, Yoav, 1990. "Nonmonotonic Reasoning and Causation", {\it Cognitive Science}, Vol. 14, pp. 213-252.