Mental models represent entities and persons, events and processes, and the operations of complex systems. However, what is a mental model? The current theory is based on principles that distinguish models from linguistic structures, semantic networks, and other proposed mental representations (Johnson-Laird & Byrne, 1991). The first principle is

The principle of iconicity: A mental model has a structure that corresponds to the known structure of what it represents.

Visual images are iconic, but mental models underlie images. Even the rotation of mental images implies that individuals rotate three-dimensional models (Metzler & Shepard, 1982), and irrelevant images impair reasoning (Knauff, Fangmeier, Ruff, & Johnson-Laird, 2003; Knauff & Johnson-Laird, 2002). Moreover, many components of models cannot be visualized.

One advantage of iconicity, as Peirce noted, is that models built from premises can yield new relations. For example, Schaeken, Johnson-Laird, and d'Ydewalle (1996) investigated problems of temporal reasoning concerning such premises as

John eats his breakfast before he listens to the radio.

Given a problem based on several premises with the form:

A before B.

B before C.

D while A.

E while C.

reasoners can build a mental model with the structure:

A    B    C
D         E

where the left-to-right axis is time, and the vertical axis allows different events to be contemporaneous. Granted that each event takes roughly the same amount of time, reasoners can infer a new relation:

D before E.

Formal logic less readily yields the conclusion. One difficulty is that an infinite number of conclusions follow validly from any set of premises, and logic does not tell you which conclusions are useful. From the previous premises, for instance, this otiose conclusion follows:

A before B, and B before C.
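The iconic use of a spatial axis for time can be sketched computationally. The following Python fragment is a hypothetical illustration (not the authors' program): it assigns each event a position on a left-to-right axis and reads the new relation directly off the model's structure.

```python
# A minimal sketch: an iconic temporal model as a mapping from events
# to positions on a left-to-right time axis.

def build_model(premises):
    """Build a timeline from ('before', X, Y) and ('while', X, Y) premises."""
    pos = {}
    for rel, x, y in premises:
        if rel == "before":
            if x not in pos and y not in pos:
                pos[x], pos[y] = 0, 1
            elif x in pos and y not in pos:
                pos[y] = pos[x] + 1
            elif y in pos and x not in pos:
                pos[x] = pos[y] - 1
        elif rel == "while":
            # contemporaneous events share a position on the axis
            if y in pos:
                pos[x] = pos[y]
            elif x in pos:
                pos[y] = pos[x]
    return pos

premises = [("before", "A", "B"),
            ("before", "B", "C"),
            ("while",  "D", "A"),
            ("while",  "E", "C")]
model = build_model(premises)
# The new relation is read directly off the structure of the model:
print("D before E" if model["D"] < model["E"] else "no relation")
```

Note that no rule of inference for "before" and "while" is applied; the conclusion emerges from the positions in the model itself, which is the point of iconicity.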

Possibilities are crucial, and the second principle of the theory assigns them a central role:

The principle of possibilities: Each mental model represents a possibility.

Table 9.1. Truth Table for an Exclusive Disjunction

A | B | A or else B, but not both
True | True | False
True | False | True
False | True | True
False | False | False

This principle is illustrated in sentential reasoning, which hinges on negation and such sentential connectives as "if" and "or." In logic, these connectives have idealized meanings: They are truth-functional in that the truth-values of sentences formed with them depend solely on the truth-values of the clauses that they connect. For example, a disjunction of the form A or else B but not both is true if A is true and B is false, and if A is false and B is true, but false in any other case. Logicians capture these conditions in a truth table, as shown in Table 9.1. Each row in the table represents a different possibility; for example, the first row represents the possibility in which both A and B are true, and so here the disjunction is false.
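Truth-functional meanings are easy to state exactly in code. As a minimal sketch, the following Python lines enumerate every assignment of truth values to A and B and evaluate the exclusive disjunction, reproducing the rows of the truth table:

```python
# A minimal sketch: the truth table for "A or else B, but not both",
# computed by evaluating the connective over every assignment.
from itertools import product

def xor(a, b):
    """True when exactly one of A and B is true."""
    return (a or b) and not (a and b)

for a, b in product([True, False], repeat=2):
    print(a, b, xor(a, b))
```

Because the connective is truth-functional, nothing beyond the truth values of the two clauses is consulted, which is precisely the idealization the text describes.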

Naive reasoners do not use truth tables (Osherson, 1974-1976). Fully explicit models of possibilities, however, are a step toward psychological plausibility. The fully explicit models of the exclusive disjunction, A or else B but not both, are shown here on separate lines:

 A  -B
-A   B

where "-" denotes negation. Table 9.2 presents the fully explicit models for the main sentential connectives. Fully explicit models correspond exactly to the true rows in the truth table for each connective. As the table shows, the conditional If A then B is treated in logic as though it can be paraphrased as If A then B, and if not-A then B or not-B. The paraphrase does not do justice to the varied meanings of everyday conditionals (Johnson-Laird & Byrne, 2002). In fact, no connectives in natural language are truth-

Table 9.2. Fully Explicit Models and Mental Models of Possibilities Compatible with Sentences Containing the Principal Sentential Connectives

Sentences | Fully Explicit Models | Mental Models
A and B: | A  B | A  B
Neither A nor B: | -A  -B | -A  -B
A or else B but not both: | A  -B | A
 | -A  B | B
A or B or both: | A  -B | A
 | -A  B | B
 | A  B | A  B
If A then B: | A  B | A  B
 | -A  B | . . .
 | -A  -B |
If, and only if A, then B: | A  B | A  B
 | -A  -B | . . .

functional (see the section on implicit induction and the modulation of models).

Fully explicit models yield a more efficient reasoning procedure than truth tables. Each premise has a set of fully explicit models. For example, the premises:

A or else B but not both.
Not-A.

have the models:

 A  -B
-A   B

and:

-A

Their conjunction depends on combining each model in one set with each model in the other set according to two main rules:

• A contradiction between a pair of models yields the null model (akin to the empty set).

• Any other conjunction yields a model of each proposition in the two models.

The result is:

null model
-A   B

Because an inference is valid if its conclusion holds in all the models of the premises, it follows that: B. The same rules are used recursively to construct the models of compound premises containing multiple connectives.
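The two combination rules can be sketched directly. In the fragment below, a model is represented as a set of signed atoms such as ("A", True); this representation is an illustrative choice, not the published program's data structure.

```python
# A sketch: conjoining sets of fully explicit models. A model is a
# frozenset of (atom, truth-value) pairs.

def conjoin(model1, model2):
    """Return the conjunction of two models, or None (the null model)."""
    for atom, value in model1:
        if (atom, not value) in model2:   # contradiction between the models
            return None
    return model1 | model2                # otherwise, merge the propositions

def combine(set1, set2):
    """Combine each model in one set with each model in the other set."""
    result = []
    for m1 in set1:
        for m2 in set2:
            merged = conjoin(m1, m2)
            if merged is not None:        # drop null models
                result.append(merged)
    return result

# Premises: "A or else B but not both" and "Not-A".
disjunction = [frozenset({("A", True), ("B", False)}),
               frozenset({("A", False), ("B", True)})]
negation = [frozenset({("A", False)})]

models = combine(disjunction, negation)
print(models)   # only the model in which A is false and B is true survives
```

Because B holds in every surviving model of the premises, the valid conclusion B can be read off the output, and the same routine applies recursively to compound premises.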

Because infinitely many conclusions follow from any premises, computer programs for proving validity generally evaluate conclusions given to them by the user. Human reasoners, however, can draw conclusions for themselves. They normally abide by two constraints (Johnson-Laird & Byrne, 1991). First, they do not throw semantic information away by adding disjunctive alternatives. For instance, given a single premise, A, they never spontaneously conclude, A or B or both. Second, they draw novel conclusions that are parsimonious. For instance, they never draw a conclusion that merely conjoins the premises, even though such a deduction is valid. Of course, human performance rapidly degrades with complex problems, but the goal of parsimony suggests that intelligent programs should draw conclusions that succinctly express all the information in the premises. The model theory yields an algorithm that draws such conclusions (Johnson-Laird & Byrne, 1991, Chap. 9).

Fully explicit models are simpler than truth tables but place a heavy load on working memory. Mental models are still simpler because they are limited by the third principle of the theory:

The principle of truth: A mental model represents a true possibility, and it represents a clause in the premises only when the clause is true in the possibility.

The simplest illustration of the principle is to ask naive individuals to list what is possible for a variety of assertions (Barrouillet & Lecas, 1999; Johnson-Laird & Savary, 1996). Given an exclusive disjunction, not-A or else B, they list two possibilities corresponding to the mental models:

-A
 B

The first mental model does not represent B, which is false in this possibility; and the second mental model does not represent not-A, which is false in this possibility; in other words, A is true. Hence, people tend to neglect these cases. Readers might assume that the principle of truth is equivalent to the representation of the propositions mentioned in the premises. However, this assumption yields the same models of A and B regardless of the connective relating them. The right way to conceive the principle is that it yields pared-down versions of fully explicit models, which in turn map into truth tables. As we will see, the principle of truth predicts a striking effect on reasoning.

Individuals can make a mental footnote about what is false in a possibility, and these footnotes can be used to flesh out mental models into fully explicit models. However, footnotes tend to be ephemeral. The most recent computer program implementing the model theory operates at two levels of expertise. At its lowest level, it makes no use of footnotes. Its representation of the main sentential connectives is summarized in Table 9.2. The mental models of a conditional, if A then B, are:

A   B
. . .

The ellipsis denotes an implicit model of the possibilities in which the antecedent of the conditional is false. In other words, there are alternatives to the possibility in which A and B are true, but individuals tend not to think explicitly about what holds in these possibilities. If they retain the footnote about what is false, then they can flesh out these mental models into fully explicit models. The mental models of the biconditional, If, and only if, A then B, as Table 9.2 shows, are identical to those for the conditional. What differs is that the footnote now conveys that both A and B are false in the implicit model. The program at its higher level uses fully explicit models and so makes no errors in reasoning.
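How a footnote licenses fleshing out can be sketched as follows; the dictionary representation and the function name are hypothetical conveniences for illustration, not the program's actual data structures.

```python
# A sketch: mental models of "if A then B" as one explicit model plus an
# implicit model (the ellipsis) whose footnote records that A is false
# in the possibilities the implicit model stands in for.

conditional = {
    "explicit": [{"A": True, "B": True}],
    "footnote": "A is false in the implicit possibilities",
}

def flesh_out(models):
    """Use the footnote to expand the implicit model into explicit ones."""
    result = list(models["explicit"])
    # With a false antecedent, a conditional allows B to be true or false.
    result.append({"A": False, "B": True})
    result.append({"A": False, "B": False})
    return result

print(flesh_out(conditional))
```

If the footnote is lost, as it tends to be, only the explicit model remains available, which is why reasoning at the lower level of expertise goes astray.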

Inferences can be made with mental models using a procedure that builds a set of models for a premise and then updates them according to the other premises. From the premises:

A or else B but not both.
Not-A.

the disjunction yields the mental models:

A
B

The categorical premise eliminates the first model, but it is compatible with the second model, yielding the valid conclusion, B. The rules for updating mental models are summarized in Table 9.3.

The model theory of deduction began with an account of reasoning with quantifiers as in syllogisms such as:

Some actuaries are businessmen.

All businessmen are conformists.

Therefore, some actuaries are conformists.

A plausible hypothesis is that people construct models of the possibilities compatible with the premises and draw whatever conclusion, if any, holds in all of them. Johnson-Laird (1975) illustrated such an account with Euler circles. A premise of the form, Some A are B, however, is compatible with four distinct possibilities, and the previous premises are compatible with 16 distinct possibilities. Because the inference is easy, reasoners may fail to consider

Table 9.3. The procedures for forming a conjunction of a pair of models. Each procedure is presented with an accompanying example. Only mental models may be implicit and therefore call for the first two procedures

1: The conjunction of a pair of implicit models yields the implicit model, for example, . . . and . . . yield . . .

2: The conjunction of an implicit model with a model representing propositions yields the null model (akin to the empty set) by default, for example, . . . and B C yield nil. But, if none of the atomic propositions (B, C) is represented in the set of models containing the implicit model, then the conjunction yields the model of the propositions, for example, . . . and B C yield B C.

3: The conjunction of a pair of models representing respectively a proposition and its negation yields the null model, for example, A B and -A B yield nil.

4: The conjunction of a pair of models in which a proposition, B, in one model is not represented in the other model depends on the set of models of which this other model is a member. If B occurs in at least one of these models, then its absence in the current model is treated as its negation, for example, A B and A yield nil. However, if B does not occur in one of these models (e.g., only its negation occurs in them), then its absence is treated as equivalent to its affirmation, and the conjunction (following the next procedure) is A B and A yield A B.

5: The conjunction of a pair of fully explicit models free from contradiction updates the second model with all the new propositions from the first model, for example, -A B and -A C yield -A B C.

all the possibilities (Erickson, 1974), or they may construct models that capture more than one possibility (Johnson-Laird & Bara, 1984). The program implementing the model theory accordingly constructs just one model for the previous premises:

actuary   [businessman]   conformist
actuary   [businessman]   conformist
. . .

where each row represents a different sort of individual, the ellipsis represents the possibility of other sorts of individual, and the square brackets represent that the set of businessmen has been represented exhaustively - in other words, no more tokens representing businessmen can be added to the model. This model yields the conclusion that Some actuaries are conformists. There are many ways in which reasoners might use such models, and Johnson-Laird and Bara

(1984) described two alternative strategies. Years of tinkering with the models for syllogisms suggest that reasoning does not rely on a single deterministic procedure. The following principle applies to thinking in general but can be illustrated for reasoning:

The principle of strategic variation: Given a class of problems, reasoners develop a variety of strategies from exploring manipulations of models (Bucciarelli & Johnson-Laird, 1999).

Stenning and his colleagues anticipated this principle in an alternative theory of syllogistic reasoning (e.g., Stenning & Yule, 1997). They proposed that reasoners focus on individuals who necessarily exist given the premises (e.g., given the premise Some A are B, there must be an A who is B). They implemented this idea in three different algorithms that all yield the same inferences. One algorithm is based on Euler circles supplemented with a notation for necessary individuals, one is based on tokens of individuals in line with the model theory, and one is based on verbal rules, such as

If there are two existential premises, that is, that contain "some", then respond that there is no valid conclusion.

Stenning and Yule concluded from the equivalence of the outputs from these algorithms that a need exists for data beyond merely the conclusions that reasoners draw, and they suggested that reasoners may develop different representational systems, depending on the task. Indeed, from Störring (1908) to Stenning (2002), psychologists have argued that some reasoners may use Euler circles and others may use verbal procedures.

The external models that reasoners constructed with cut-out shapes corroborated the principle of strategic variation: Individuals develop various strategies (Bucciarelli & Johnson-Laird, 1999). They also overlook possible models of premises. Their search may be organized toward finding necessary individuals, as Stenning and Yule showed, but the typical representations of premises included individuals who were not necessary; for example, the typical representation of Some A are B was:

A   B
A

which includes an individual (an A that is not a B) who need not exist given the premise.

A focus on necessary individuals is a particular strategy. Other strategies may call for the representation of other sorts of individuals, especially if the task changes - a view consistent with Stenning and Yule's theory. For example, individuals readily make the following sort of inference (Evans, Handley, Harper, & Johnson-Laird, 1999):

Therefore, it is possible that Some A are C.

Such inferences depend on the representation of possible individuals.

The model theory has been extended to some sorts of inference based on premises containing more than one quantifier (Johnson-Laird, Byrne, & Tabossi, 1989). Many such inferences are beyond the scope of Euler circles, although the general principles of the model theory still apply to them. Consider, for example, the inference (Cherubini & Johnson-Laird, 2004):

There are four persons: Ann, Bill, Cath, and Dave.

Everybody loves anyone who loves someone. Ann loves Bill. What follows?

Most people can envisage this model, in which arrows denote the relation of loving:

Ann → Bill
Bill → Ann
Cath → Ann
Dave → Ann

Hence, they infer that everyone loves Ann. However, if you ask them whether it follows that Cath loves Dave, they tend to respond "no." They are mistaken, but the inference calls for using the quantified premise again: because Bill, Cath, and Dave now each love someone (Ann), everybody loves them too. The result is this model (strictly speaking, all four persons love themselves, too):

Ann → Bill   Ann → Cath   Ann → Dave
Bill → Ann   Bill → Cath   Bill → Dave
Cath → Ann   Cath → Bill   Cath → Dave
Dave → Ann   Dave → Bill   Dave → Cath

It follows that Cath loves Dave, and people grasp its validity if it is demonstrated with diagrams. No complete model theory exists for inferences based on quantifiers and connectives (cf. Bara, Bucciarelli, & Lombardo, 2001). However, the main principles of the theory should apply: iconicity, possibilities, truth, and strategic variation.
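The repeated use of the quantified premise amounts to computing a closure over the loving relation. The following sketch (an illustrative rendering, not a psychological model) applies the premise until nothing new follows:

```python
# A sketch: "Everybody loves anyone who loves someone" applied as a
# closure rule over a set of (lover, loved) pairs until a fixed point.

people = {"Ann", "Bill", "Cath", "Dave"}
loves = {("Ann", "Bill")}                 # the categorical premise

changed = True
while changed:
    changed = False
    lovers = {x for (x, _) in loves}      # anyone who loves someone
    for target in lovers:
        for person in people:
            if (person, target) not in loves:
                loves.add((person, target))
                changed = True

print(("Cath", "Dave") in loves)          # the inference people tend to miss
```

One pass of the rule yields the first model (everyone loves Ann); the second pass, which reasoners tend to omit, makes everyone love everyone, including Cath loving Dave.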
