Although Bower's theory met with much success, it became apparent that there were a number of empirical and theoretical problems. The theory makes the broad prediction that each mood state should be associated with a range of perceptual, attentional, and mnemonic biases. However, it became clear that different types of biases tend to be associated with different mood states; as Williams et al. (1997) and others (e.g., Harvey, Watkins, Mansell, & Shafran, 2004; Mathews & MacLeod, 2005) have summarised, the evidence to some degree suggests that anxiety is primarily associated with attention-related biases (see Chapter 6), whereas depression may be more associated with memory-related biases (see Chapter 7). Furthermore, even within the comparison of dysphoric versus happy mood states, the results were not as symmetrical as the theory would have predicted. That is, a sad or dysphoric mood tends to decrease the accessibility of positive memories more than it increases the accessibility of negative ones, an asymmetry that is in fact illustrated in the data in Figure 3.8 (see Blaney, 1986, for a summary). Bower (1987) himself subsequently reported failures to find effects of mood on perception and even failures to replicate some of the mood-state-dependent retrieval effects originally reported in his 1981 paper: "The effect seems a will-o'-the-wisp that appears or not in different experiments in capricious ways that I do not understand" (Bower, 1987, p. 451).
[Figure 3.8 The effect of mood induction on the retrieval of pleasant and unpleasant childhood memories (based on Bower, 1981); the figure plots number of memories retrieved by type of incident.]
In an attempt to patch up this aspect of the theory, Bower (1987) suggested a "causal belonging" hypothesis: in order for the state-dependent effects to occur, participants might need to perceive that the material to be learned has some meaningful relation to the mood state that they are experiencing. However, at the time Bower made no suggestions about how the network theory could be adapted to accommodate this idea, although others such as Eich (1995) have provided clearer boundaries and rules for when and where effects such as mood-dependent memory can be obtained. More recently, Bower and Forgas (2000; Forgas, 1999) have presented an Affect Infusion Model, the central feature of which is a set of multiple processing strategies that may lead to more or less affect priming depending on which strategy is used, and which attempts to deal with some of the empirical problems for the affect-priming network model. It will be interesting to see whether this model offers anything more than a post hoc adjustment to deal with the problematic data.
It may seem unfair to kick Bower's affect-priming theory when it's already down, but there are also a number of potential theoretical problems with Bower's network approach, which it may prove advantageous to outline here in order to help our later discussion of other cognitive theories. The first theoretical problem is that although network theories provide strong accounts of the intensional relations between words or concepts (i.e., the sense of words), they provide very poor accounts of extensional relations between words and those things in the world that words refer to (i.e., the referents of words). The basic assumption within the network approach is that intensional relations can be analysed separately from extensional ones. However, Johnson-Laird et al. (1984) present a number of examples including the effects of ambiguity and instantiation in which access to referents is essential. For example, in a study by Anderson et al. (1976) the participants were presented with sentences such as "The fish attacked the swimmer". In a test of recall, cues such as "shark" were more effective than the original term "fish", a finding that would be difficult to account for in a network approach. Indeed, the fact that the word "fish" tended to be instantiated as a particular Jaws-like variety points to another shortcoming of networks, which we will deal with next; namely that even with sentences presented in a psychology laboratory participants do not simply store propositional representations in the way that networks suggest, but rather there is a higher level of organisation imposed on material whenever possible. Before we deal with this point, however, we might note that Johnson-Laird and his colleagues liken the network account of semantics to an alien that learned the Earth's languages solely from radio transmissions without access to the referents; any such alien might get the wrong opinion of Earthlings if phrases such as "Bottoms-up!" and "Down the hatch!" 
reached the airwaves.
The second theoretical problem, therefore, with simple networks is that because they were originally designed to represent the relations between individual words, they are inappropriate for representing the structure of other domains such as events, actions, and situations for which more molar forms of representation are useful. The fact that the participants in the Anderson et al. experiment did not remember the propositional form of "The fish attacked the swimmer", but constructed a more specific form of the sentence, requires a higher-level form of representation such as a schema or a mental model in addition to the initial propositional level of representation. Such a need is even more evident in studies of inference and recall. For example, when presented with sequences such as "The man dropped the bottle. He went to the kitchen to fetch a brush", participants typically mistakenly recall the bottle breaking, even though there was no such statement in the original sequence (e.g., Clark & Clark, 1977). We will argue later (see Chapter 5) that at least two levels of representation, those of propositions and of schematic models, are necessary (Power & Champion, 1986; Teasdale & Barnard, 1993).
The third theoretical problem we will mention is that, as Woods (1975) originally noted, the links between nodes in such networks are treated in an ad hoc manner. Thus, links between nodes can be unidirectional or bidirectional, they can be excitatory or inhibitory, they have a range of labels (e.g., "has a", "is not a", "name", etc.), and they represent extremely different types of concepts (e.g., "animal", "four legs", "disgust"). Although we acknowledge the usefulness of different types of nodes or links up to a point, a theory that gives emotion the same status as individual words or concepts is theoretically confused (Power & Champion, 1986).
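The heterogeneity that Woods criticised is easy to make concrete. The sketch below (an illustration of the criticism only; the particular node and link labels are our own, not drawn from any published model) shows how a typical semantic network mixes taxonomic links, property links, and even an emotion node treated as just another concept:

```python
# A toy semantic network illustrating Woods' (1975) criticism: the links
# carry ad hoc labels, and they connect nodes of quite different kinds
# (concepts, properties, and even an emotion node such as "disgust").
network = [
    ("dog", "is a", "animal"),        # taxonomic link
    ("dog", "has a", "tail"),         # property link
    ("animal", "has", "four legs"),   # property link at a different level
    ("snake", "evokes", "disgust"),   # an emotion given the same status as a word
]

def neighbours(node):
    """Return every (link-label, target) pair reachable from a node."""
    return [(label, target) for source, label, target in network if source == node]

print(neighbours("dog"))
```

Nothing in the formalism itself constrains what a label such as "evokes" means or why "disgust" should behave like "tail"; that is precisely the theoretical confusion that Power and Champion (1986) point to.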
A fourth theoretical problem focuses on the proposal that there is a literal spread of energy along the links between nodes, which Bower has variously likened to the flow of water or electricity. As illustrated in Figure 3.9, there are alternatives to this proposed literal transfer of energy, for example in the contrast between analogue and digital systems. Figure 3.9(b) illustrates one such system, in which a metaphorical transfer of energy is achieved through a signal being sent from one node to another about its current register value, which could vary from zero upwards. Note that the signal that passes from one node to another is a signal in the information-processing sense rather than the energy sense; that is, the signal to increment a register to a value of "4" need not contain any more energy than a signal to increment it to a value of "1". In fact, a signal could be transmitted by a decrease in energy level, for example through the interruption of the flow of energy through a link, just as it might be transmitted through a temporary increase in energy level. To go one step further, Figure 3.9(c) illustrates a so-called Production System equivalent of the two network models. Production Systems were developed by Newell and Simon (1972) in their classic book Human Problem Solving and have had considerable influence in cognitive science. They consist of condition-action pairs of the "If . . ., then . . ." form shown in Figure 3.9(c); that is, if the conditions for a particular production rule are met, then a particular action follows. This example further
[Figure 3.9 panels: (a) and (b) show linked nodes with register values of 4, 0, and 2; (c) shows the production rule "If (Dog = 4), then (Cat = 2) (Bone = 1) (Else = 0)".]
Figure 3.9 Three ways in which "energy" can be transferred within an associative network: (a) the literal transfer of energy; (b) the re-setting of register values that indicate a metaphorical arousal level; (c) a notational equivalent of (b) in the form of a Production System.
illustrates the point that networks are not theories in themselves but are one of a number of notational formats in which theories can be expressed.
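The notational equivalence between Figures 3.9(b) and 3.9(c) can be made concrete in a few lines of code. The sketch below is a minimal illustration only, using the node names and register values from the figure; it implements the same state change first as register-value signalling between nodes and then as a single production rule, and confirms that the two notations arrive at the same result:

```python
# (b) Metaphorical "energy" transfer: a node sends an information-processing
# signal telling a neighbouring node what value to set its register to.
registers = {"Dog": 0, "Cat": 0, "Bone": 0}

def signal(node, value):
    """Reset a node's register; no literal energy flows, only a message."""
    registers[node] = value

def activate_dog_by_signalling():
    """Activating "Dog" to 4 sends signals that reset the linked registers."""
    signal("Dog", 4)
    signal("Cat", 2)
    signal("Bone", 1)

# (c) The Production System equivalent: a condition-action pair that fires
# when its "If" part is satisfied, as in Figure 3.9(c).
def dog_rule(state):
    if state["Dog"] == 4:                    # condition
        state.update({"Cat": 2, "Bone": 1})  # action
    return state

activate_dog_by_signalling()
production_state = dog_rule({"Dog": 4, "Cat": 0, "Bone": 0})
assert registers == production_state  # the two notations yield the same state
```

The point is not that either fragment is a plausible cognitive model, but that the same theoretical claim can be written in either notation, which is why the network format itself carries no explanatory weight.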
One final area of complexity in which Bower's network model is arguably overly simplistic relates to the process of emotion generation that we summarised in Chapter 2. If one accepts something resembling the sequence of event, interpretation/appraisal, conscious awareness, physiological response, and propensity for action that we suggested can be derived from cognitive philosophical models of emotion, then it is difficult to see how this view of emotion generation has a place within Bower's model.