Induction as Scientific Methodology

Induction is of course not merely the province of individuals trying to accomplish everyday goals, but also one of the main activities of science. According to one common view of science (Carnap, 1966; Hempel, 1965; Nagel, 1961; for opposing views, see Hacking, 1983; Popper, 1963), scientists spend much of their time trying to induce general laws about categories from particular examples. It is natural, therefore, to look to the principles that govern induction in science to see how well they describe individual behavior (for a discussion of scientific reasoning, see Dunbar & Fugelsang, Chap. 29). Psychologists have approached induction as a scientific enterprise in three different ways.

The Rules of Induction

First, some have examined the extent to which people abide by the normative rules of inductive inference that are generally accepted in the scientific community. One such rule is that properties that do not vary much across category instances are more projectible across the whole category than properties that vary more. Nisbett et al. (1983) showed that people are sensitive to this rule:

Variability/Centrality: People are more willing to project predicates that tend to be invariant across category instances than variable predicates. For example, people who are told that one Pacific island native is overweight tend to think it is unlikely that all natives of the island are overweight because weight tends to vary across people. In contrast, if told the native has dark skin, they are more likely to generalize to all natives because skin color tends to be more uniform within a race.

However, sensitivity to variability does not imply that people consider the variability of predicates in the same deliberative manner that a scientist should. This phenomenon could be explained by a sensitivity to centrality (Sloman, Love, & Ahn, 1998). Given two properties A and B, such that B depends on A but A does not depend on B, people are more willing to project property A than property B because A is more causally central than B, even if A and B are equated for variability (Hadjichristidis, Sloman, Stevenson, & Over, 2004). More central properties tend to be less variable. Having a heart is more central and less variable among animals than having hair. Centrality and variability are almost two sides of the same coin (the inside and outside views, respectively). In Nisbett et al.'s case, having dark skin may be seen as less variable than obesity by virtue of being more central and having more apparent causal links to other features of people.

The diversity principle is sometimes identified as a principle of good scientific practice (e.g., Heit & Hahn, 2001; Hempel, 1965; Lopez, 1995). Yet, Lo et al. (2002) argued against the normative status of diversity. They considered the following argument:

House cats often carry the parasite Floxum.

Field mice often carry the parasite Floxum.

____________________________________

All mammals often carry the parasite Floxum.

which they compare to

House cats often carry the parasite Floxum.

Tigers often carry the parasite Floxum.

____________________________________

All mammals often carry the parasite Floxum.

Even though the premise categories of the first argument are more diverse (house cats are less similar to field mice than to tigers), the second argument might seem stronger because house cats could conceivably become infected with the parasite Floxum while hunting field mice. Even if you do not find the second argument stronger, merely accepting the relevance of this infection scenario undermines the diversity principle, which prescribes that the similarity principle should be determinative for all pairs of arguments. At minimum, it shows that the diversity principle does not dominate all other principles of sound inference.

Lo et al. (2002) proved that a different and simple principle of argument strength does follow from the Bayesian philosophy of science. Consider two arguments with the same conclusion in which the conclusion implies the premises. For example, the conclusion "every single mammal carries the parasite Floxum" implies that "every single tiger carries the parasite Floxum" (on the assumption that "mammal" and "tiger" refer to natural, warm-blooded animals). In such a case, the argument with the less likely premises should be stronger. Lo et al. referred to this as the premise probability principle. In a series of experiments, they show that young children in both the United States and Taiwan make judgments that conform to this principle.
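The premise probability principle falls directly out of Bayes' rule. When the conclusion C implies the premises P, P(P | C) = 1, so the posterior reduces to P(C | P) = P(C) / P(P): the less probable the premises, the stronger the argument. A minimal numeric sketch (the prior values are illustrative, not taken from Lo et al.):

```python
def posterior_given_premises(prior_conclusion, prior_premises):
    """P(C | P) = P(P | C) * P(C) / P(P) = P(C) / P(P), since C implies P."""
    assert prior_conclusion <= prior_premises  # C implies P, so P(C) <= P(P)
    return prior_conclusion / prior_premises

# Same conclusion ("all mammals carry Floxum") evaluated against two premise sets.
prior_c = 0.01  # illustrative prior probability of the conclusion

likely_premises = posterior_given_premises(prior_c, 0.50)    # a priori likely premises
unlikely_premises = posterior_given_premises(prior_c, 0.10)  # a priori unlikely premises

print(round(likely_premises, 4))    # 0.02
print(round(unlikely_premises, 4))  # 0.1 -- less likely premises, stronger argument
```

The comparison makes the principle concrete: holding the conclusion fixed, dividing by a smaller premise probability yields a larger posterior.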

Induction as Naive Scientific Theorizing

A second approach to induction as a scientific methodology examines the contents of beliefs, what knowledge adults and children make use of when making inductive inferences. Because knowledge is structured in a way that has more or less correspondence to the structure of modern scientific theories, sometimes to the structure of old or discredited scientific theories, such knowledge is often referred to as a "naive theory" (Carey, 1985; Gopnik & Meltzoff, 1997; Keil, 1989; Murphy & Medin, 1985). One strong, contentful position (Carey, 1985) is that people are born with a small number of naive theories that correspond to a small number of domains such as physics, biology, psychology, and so on, and that all other knowledge is constructed using these original theories as a scaffolding. Perhaps, for example, other knowledge is a metaphorical extension of these original naive theories (cf. Lakoff & Johnson, 1980).

One phenomenon studied by Carey (1985) to support this position is

Human bias

Small children prefer to project a property from people rather than from other animals. Four-year-olds are more likely to agree that a bug has a spleen if told that a person does than if told that a bee does. Ten-year-olds and adults do not show this asymmetry and project as readily from nonhuman animals as from humans.

Carey argued that this transition is due to a major reorganization of the child's knowledge about animals. Knowledge is constituted by a mutually constraining set of concepts that make a coherent whole in analogy to the holistic coherence of scientific theories. As a result, concepts do not change in isolation, but instead as whole networks of belief are reorganized (Kuhn, 1962). On this view, the human bias occurs because a 4-year-old's understanding of biological functions is framed in terms of human behavior, whereas older children and adults possess an autonomous domain of biological knowledge.

A different enterprise is more descriptive; it simply shows the analogies between knowledge structures and scientific theories. For example, Gopnik and Meltzoff (1997) claimed that, just like scientists, both children and laypeople construct and revise abstract lawlike theories about the world. In particular, they maintain that the general mechanisms that underlie conceptual change in cognitive development mirror those responsible for theory change in mature science. More specifically, even very young children project properties among natural kinds on the basis of latent, underlying commonalities between categories rather than superficial similarities (e.g., Gelman & Coley, 1990). So children behave like "little scientists" in the sense that their inductive inferences are more sensitive to the causal principles that govern objects' composition and behavior than to objects' mere appearance, even though appearance is, by definition, more directly observable.

Of course, analogies between everyday induction and scientific induction have to exist. As long as both children and scientists have beliefs that have positive inductive potential, those beliefs are likely to have some correspondence to the world, and the knowledge of children and scientists will therefore have to show some convergence. If children did operate merely on the basis of superficial similarities, such things as photographs and toy cars would forever stump them. Children have no choice but to be "little scientists," merely to walk around the world without bumping into things. Because of the inevitability of such correspondences, and because scientific theories take a multitude of different forms, it is not obvious that this approach, in the absence of a more fully specified model, has much to offer theories of cognition. Furthermore, proponents of this approach typically present a rather impoverished view of scientific activity, one that neglects the role of social and cultural norms and practices (see Faucher et al., 2002). Efforts to give the approach a more principled grounding have begun (e.g., Gopnik et al., 2004; Rehder & Hastie, 2001; Sloman, Love, & Ahn, 1998).

Lo et al. (2002) rejected the approach outright. They argue that it just does not matter whether people have representational structures that in one way or another are similar to scientific theories. The question that they believe has both prescriptive value for improving human induction and descriptive value for developing psychological theory is whether whatever method people use to update their beliefs conforms to principles of good scientific practice.

Computational Models of Induction

The third approach to induction as a scientific methodology is concerned with the representation of inductive structure without concern for the process by which people make inductive inferences. The approach takes its lead from Marr's (1982) analysis of the different levels of psychological analysis. Models at the highest level, those that concern themselves with a description of the goals of a cognitive system without direct description of the manner in which the mind tries to attain those goals or how the system is implemented in the brain, are computational models. Three kinds of computational models of inductive inference have been suggested, all of which find their motivation in principles of good scientific methodology.

Induction as Hypothesis Evaluation McDonald, Samuels, and Rispoli (1996) proposed an account of inductive inference that appeals to several principles of hypothesis evaluation. They argued that when judging the strength of an inductive argument, people actively construct and assess hypotheses in light of the evidence provided by the premises. They advanced three determinants of hypothesis plausibility: the scope of the conclusion, the number of premises that instantiate it, and the number of alternatives to it suggested by the premises. In their experiments, all three factors were good predictors of judged argument strength, although certain pragmatic considerations, and a fourth factor - "acceptability of the conclusion" - were also invoked to fully cover the results.

Despite the model's success in explaining some judgments, others, such as nonmonotonicity, are only dealt with by appeal to pragmatic postulates that are not defended in any detail. Moreover, the model is restricted to arguments with general conclusions. Because the model is at a computational level of description, it does not make claims about the cognitive processes involved in induction. As we see next, other computational models do offer something in place of a process model that McDonald et al.'s (1996) framework does not: a rigorous normative analysis of an inductive task.

Bayesian models of inductive inference Heit (1998) proposed that Bayes' rule provides a representation for how people determine the probability of the conclusion of a categorical inductive argument given that the premises are true. The idea is that people combine degrees of prior belief with the data given in the premises to determine a posterior degree of belief in the conclusion. Prior beliefs concern relative likelihoods that each combination of categories in the argument would all have the relevant property. For example, for the argument

Cows can get disease X.

Sheep can get disease X.

Heit assumes people can generate beliefs about the relative prior probability that both cows and sheep have the disease, that cows do but sheep do not, and so on. These beliefs are generated heuristically; people are assumed to bring to mind properties shared by cows and by sheep, properties that cows have but sheep do not, and so on. The prior probabilities reflect the ease of bringing each type of property to mind. Premises contribute other information as well - in this case, that only states in which cows indeed have the disease are possible. This can be used to update priors to determine a posterior degree of belief that the conclusion is true.
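The updating step can be made concrete. The four joint states (both cows and sheep have the disease, only cows, only sheep, neither) receive heuristically generated priors; the premise rules out the states in which cows lack the disease, and renormalizing what remains gives the posterior belief in the conclusion. A sketch with made-up prior values (the numbers are illustrative, not Heit's):

```python
# Joint states for (cows have disease X, sheep have disease X).
# Priors are illustrative stand-ins for the heuristically generated
# values Heit's model assumes, not estimates from the paper.
priors = {
    (True, True): 0.35,    # shared properties come to mind easily
    (True, False): 0.15,
    (False, True): 0.15,
    (False, False): 0.35,
}

# Premise: "Cows can get disease X" -- only states where cows have it survive.
consistent = {state: p for state, p in priors.items() if state[0]}
total = sum(consistent.values())
posterior = {state: p / total for state, p in consistent.items()}

# Posterior belief in the conclusion "Sheep can get disease X".
p_conclusion = posterior[(True, True)]
print(round(p_conclusion, 2))  # 0.7
```

Conditioning raises the conclusion's probability from 0.35 + 0.15 = 0.5 (the prior that sheep have the disease) to 0.7, because the premise eliminated half of the probability mass inconsistent with it.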

On the basis of assumptions about what people's priors are, Heit (1998) described a number of the phenomena of categorical induction: similarity, typicality, diversity, and homogeneity. However, the model is inconsistent with nonmonotonicity effects. Furthermore, because it relies on an extensional updating rule, Bayes' rule, the model cannot explain phenomena that are nonextensional such as the inclusion fallacy or the inclusion-similarity phenomenon.

Sanjana and Tenenbaum (2003) offered a Bayesian model of categorical inference with a more principled foundation. The model is applied only to the animal domain. They derive all their probabilities from a hypothesis space that consists of clusters of categories. The model's prediction for each argument derives from the probability that the conclusion category has the property. This reflects the probability that the conclusion category is an element of likely hypotheses - namely, that the conclusion category is in the same cluster as the examples shown (i.e., as the premise categories) and that those hypothesized clusters have high probability. The probability of each hypothesis is assumed to be inversely related to the size of the hypothesis (the number of animal types it includes) and to its complexity, the number of disjoint clusters that it includes. This model performed well in quantitative comparisons against the similarity-coverage model and the feature-based model, although its consistency with the various phenomena of induction has not been reported and is rather opaque.
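The logic of the model can be sketched in a toy domain. The clusters and prior below are hypothetical stand-ins (the actual model derives its hypothesis space from a pairwise cluster hierarchy over animal categories): hypotheses are unions of base clusters, a hypothesis's prior shrinks with its size and with the number of clusters it unions, and argument strength is the posterior mass of premise-consistent hypotheses that also contain the conclusion category.

```python
from itertools import combinations

# Hypothetical base clusters standing in for Sanjana & Tenenbaum's
# animal-domain cluster hierarchy.
clusters = [frozenset({"house cat", "tiger"}),
            frozenset({"field mouse", "squirrel"}),
            frozenset({"cow", "sheep"})]

# Hypothesis space: unions of one or two disjoint base clusters.
hypotheses = [frozenset().union(*combo)
              for r in (1, 2)
              for combo in combinations(clusters, r)]

def prior(h, n_clusters):
    # Illustrative prior: inversely related to hypothesis size and to
    # its complexity (the number of clusters it unions).
    return 1.0 / (len(h) * n_clusters)

def strength(premises, conclusion):
    """P(conclusion category has the property | premise categories do)."""
    weights = {}
    for h in hypotheses:
        if all(p in h for p in premises):  # h is consistent with the premises
            n = sum(1 for c in clusters if c <= h)
            weights[h] = prior(h, n)
    z = sum(weights.values())
    return sum(w for h, w in weights.items() if conclusion in h) / z

# Within-cluster projection is strong; cross-cluster projection is weak.
print(round(strength({"house cat"}, "tiger"), 3))
print(round(strength({"house cat", "tiger"}, "cow"), 3))
```

In this toy version, projecting from house cats to tigers is maximally strong because every premise-consistent hypothesis contains both, whereas projecting to cows is weak because only a large, complex hypothesis covers both premise and conclusion categories; this is how the size and complexity penalties reproduce similarity-like behavior.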

The principled probabilistic foundation of this model and its good fit to data so far yield promise that the model could serve as a formal representation of categorical induction. The model would show even more promise and power to generalize, however, if its predictions had been derived using more reasonable assumptions about the structure of categorical knowledge. The pairwise cluster hierarchy Sanjana and Tenenbaum use to represent knowledge of animals is poorly motivated (although see Kemp & Tenenbaum, 2003, for an improvement), and there would be even less motivation in other domains (cf. Sloman, 1998). Moreover, if and how the model could explain fallacious reasoning is not clear.

Summary of Induction as Scientific Methodology

Inductive inference can be fallacious, as demonstrated by the inclusion fallacy described previously. Nevertheless, much of the evidence that has been covered in this section suggests that people in the psychologist's laboratory are sensitive to some of the same concerns as scientists when they make inductive inferences. People are more likely to project nonvariable over variable predicates, they change their beliefs more when premises are a priori less likely, and their behavior can be modeled by probabilistic models constructed from rational principles.

Other work reviewed shows that people, like scientists, use explanations to mediate their inference. They try to understand why a category should exhibit a predicate based on nonobservable properties. These are valuable observations to allow psychologists to begin the process of building a descriptive theory of inductive inference.

Unfortunately, current ideas and data place too few constraints on the cognitive processes and procedures that people actually use.
