SORITES, ISSN 1135-1349

Issue #14 -- October 2002. Pp. 21-35

Copyright © by SORITES and Ronald A. Cordero

Robots and if ... then

Ronald A. Cordero

<cordero@uwosh.edu>

0. Prefatory Note

For the sake of simplicity in this paper I am going to use expressions of cognitive process and propositional attitude, such as «reason,» «know,» «suspect,» «remember,» and «think,» without elaboration in referring to robots. In doing so, I do not mean to be taking a position on the question of whether or not robots can actually reason, think, remember, suspect, or know in the same way humans can. I simply wish to avoid the complexity of saying, for example, that a robot «believes_R,» where «believes_R» is defined as «is in the robotic state which corresponds to the state of mind of a human who believes.» I do think the question of whether or not entities like robots or non-human animals can have propositional attitudes and engage in cognitive processes is an interesting one, but I do not think that it has to be answered before issues regarding the use of «if...then» by robots can be resolved, and I do not propose to try to answer it here.

1. Introduction

We are going to have robots that reason. The scientific, commercial, and military motives for building them and letting them work with some degree of autonomy are going to prove irresistible. But this means that if we want to avoid awkward or even disastrous consequences, we must program these robots to make only valid, humanly intelligible inferences. And this may not be a simple thing to do where «if...then» is concerned. Conditional («if...then») statements are extremely important in human communication, but their analysis has long been a source of disagreement among logicians. In the present paper, I argue that unless we wish to court disaster, we cannot let our robots use the most widely accepted rules of inference concerning conditional statements, and I propose alternative rules that I believe will stave off disaster.

There is at present a long list of commonly accepted «rules of inference» for what is commonly called «statement logic.» These «rules» are in essence analytically true statements of entailment relations between statements that contain expressions such as «not,» «or,» «and,» «if...then,» and «if and only if.» Because these analytically true statements are statements of fundamental truths of logic, I shall refer to them as axioms. Some statement-logic axioms express one-way entailment relations, saying that statements of one sort entail statements of another sort. Others express two-way entailment relations, saying that statements of a first sort entail and are entailed by -- and thus are equivalent to -- statements of a second sort. If, as is usually done, pairs of similar statement-logic axioms are counted as a single axiom, the total number of commonly accepted axioms is eighteen. If each axiom were to be counted separately, the total number would be twenty-four. Fewer than half of these axioms involve conditional statements. Using a single-headed double-barred arrow (⇒) to indicate one-way entailment and a double-headed double-barred arrow (⇔) to indicate two-way entailment, those which do can be represented as follows:

Modus Ponens ((p ⊃ q) ∧ p) ⇒ q

Modus Tollens ((p ⊃ q) ∧ ~q) ⇒ ~p

Hypothetical Syllogism ((p ⊃ q) ∧ (q ⊃ r)) ⇒ (p ⊃ r)

Constructive Dilemma (((p ⊃ q) ∧ (r ⊃ s)) ∧ (p ∨ r)) ⇒ (q ∨ s)

Transposition (p ⊃ q) ⇔ (~q ⊃ ~p)

Material Implication (p ⊃ q) ⇔ (~p ∨ q)

Material Equivalence (p ≡ q) ⇔ ((p ⊃ q) ∧ (q ⊃ p))

(p ≡ q) ⇔ ((p ∧ q) ∨ (~p ∧ ~q)) [Footnote 3_1]

Exportation ((p ∧ q) ⊃ r) ⇔ (p ⊃ (q ⊃ r))
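
Each of these entailment claims can be checked mechanically by truth tables. Here is a minimal sketch in Python -- an illustrative check of my own, assuming the standard truth-functional reading of the horseshoe (made explicit in the next section) -- which confirms, for example, that Modus Ponens holds in one direction only, while Material Implication holds in both:

    from itertools import product

    def horseshoe(p, q):
        # Material conditional p ⊃ q, read truth-functionally as ~p ∨ q.
        return (not p) or q

    def entails(f, g):
        # f entails g iff no assignment of truth values to (p, q)
        # makes f true while g comes out false.
        return all(g(p, q) for p, q in product([True, False], repeat=2) if f(p, q))

    # Modus Ponens: ((p ⊃ q) ∧ p) ⇒ q holds, but the converse fails.
    print(entails(lambda p, q: horseshoe(p, q) and p, lambda p, q: q))  # True
    print(entails(lambda p, q: q, lambda p, q: horseshoe(p, q) and p))  # False

    # Material Implication: (p ⊃ q) ⇔ (~p ∨ q) holds in both directions.
    print(entails(lambda p, q: horseshoe(p, q), lambda p, q: (not p) or q))  # True
    print(entails(lambda p, q: (not p) or q, lambda p, q: horseshoe(p, q)))  # True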

If our robots are not to make serious mistakes in reasoning with conditional statements, some of these axioms will have to be given to them in altered forms. In the following sections I shall illustrate the sorts of problems that will arise if robots are allowed to use all these axioms without alteration -- and shall suggest alternative axioms that can preclude the problems. In a sense, this will involve putting certain axioms «off limits» to robots, establishing what might be called a list of «forbidden inferences.»

2. Material Implication

Among the classic statement-logic axioms, one real potential troublemaker is the one commonly known as «Material Implication.» In its traditional form it encapsulates an analysis of conditional statements that has been widely used by logicians since early in the twentieth century -- and which in fact goes back to Philo of Megara in the fourth century BC.[Footnote 3_2] According to this analysis, when we say «If a, then b,» we are basically saying that either a is false or b is true. On this analysis, that is, a conditional statement is equivalent to the disjunction of the consequent with the negation of the antecedent. This core meaning has been held to be common to the different types of «if...then» statements,[Footnote 3_3] and it is this meaning that the horseshoe (⊃) has been used to symbolize. In rendering «If a, then b» as «a ⊃ b,» we are taking the statement to mean «~a ∨ b.»
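
The truth-functional reading is easily made concrete. The following minimal sketch in Python (assuming nothing beyond the equivalence just stated) shows that «If a, then b» comes out false in exactly one case: when a is true and b is false:

    def if_then(a, b):
        # Philonian, truth-functional reading: «if a then b» means ~a ∨ b.
        return (not a) or b

    for a in (True, False):
        for b in (True, False):
            print(a, b, if_then(a, b))
    # a=True,  b=True  -> True
    # a=True,  b=False -> False   (the only falsifying row)
    # a=False, b=True  -> True
    # a=False, b=False -> True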

Is there any reason why robots should not be allowed to use the Material Implication axiom in its traditional form? The answer is definitely «Yes»: if they are allowed to do so, their reasoning simply cannot be trusted. And there is not just one way in which the axiom can cause problems: it can generate unacceptable inferences in a variety of ways.

For one thing, application of Material Implication in conjunction with the axiom called «Addition» (a => (a V b)) can generate the infamous «paradoxes of material implication» -- in which one finds that any true statement is materially implied by absolutely any other statement, and that any false statement materially implies any other statement whatsoever! Suppose, for example, that a robot, R, knows that Ms. Gonzales is in Paris. (The statement P is true.) With Material Implication and Addition, R could reason as follows ...
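
In outline -- a minimal sketch of the standard derivation, with Q standing for absolutely any statement, say «The moon is made of green cheese»:

1. P (known: Ms. Gonzales is in Paris)
2. P ∨ ~Q (1, Addition)
3. ~Q ∨ P (2, Commutation)
4. Q ⊃ P (3, Material Implication)

R would thus be entitled to conclude «If the moon is made of green cheese, then Ms. Gonzales is in Paris» -- and by the same route could derive «If Q, then P» for absolutely any statement Q.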




...



Using these axioms, I submit, reasoning robots will be able to «get it right» when making inferences involving «if...then,» and we will consequently be able to trust their reasoning when conditional statements are involved.


Ronald A. Cordero
Department of Philosophy. The University of Wisconsin at Oshkosh
Oshkosh, Wisconsin, USA 54901
<cordero@uwosh.edu>