The Local Clinical Scientist (LCS) model initially was presented (Stricker & Trierweiler, 1995; Trierweiler & Stricker, 1998) as a bridge between science and practice. As such, it was conceptualized as an instantiation of the scientist-practitioner model (Raimy, 1950), the most influential and least applied (Stricker, 2000) approach to training in clinical psychology. Ironically, the LCS model has been widely adopted by professional schools of psychology (R. L. Peterson, Peterson, Abrams, & Stricker, 1997) that emphasize training for practice, but has not been accepted by schools more focused on the training of scientists. The much-lauded intention to train psychologists in both practice and science, in a way that takes both missions seriously, is realized far less often, and that dual commitment is the vision of the LCS approach. This chapter will describe the LCS model, look at some contributions to the application of science from social psychology, recognize some drawbacks to the implementation of the model, and describe a successful approach to such implementation.
the local clinical scientist model
Each of the three words in the term "Local Clinical Scientist" is crucial to an understanding of the overall meaning. The central word, "Clinical," describes the context in which LCSs operate. They are clinicians and they are functioning in the role of a clinician trying to be helpful to patients. The patients they are helping are immediately there before them, not abstract conglomerates of diagnostic terms. The immediacy of this contact represents the "Local" aspect of the construct. Finally, and perhaps most novel, even though they are operating clinically in a local context, they nonetheless are functioning as scientists, treating each patient contact as an experiment to which much previous knowledge is brought and from which much is to be learned. This model initially was applied to psychotherapy (Stricker & Trierweiler, 1995; Trierweiler & Stricker, 1998), but has been expanded to apply to personality assessment (Stricker, 2006) as well.
The LCS model begins with the assumption that science is defined by attitudes, not by activities or generalizations. The variability of activities and generalizations, accompanied by the stability of attitudes, is crucial to the functioning of the scientist, and it also is crucial to the functioning of the clinician. The activities of the scientist vary depending upon the area of study under investigation. The generalizations that are drawn may differ in validity as new knowledge replaces old, only to be replaced itself at a future time. Thus, practice based solely on scientific conclusions today can be hopelessly dated by tomorrow. Attitudes, however, are similar for the scientist in every area of inquiry. All scientists, regardless of specific discipline, should be keen observers who are characterized by disciplined inquiry, critical thinking, imagination, rigor, skepticism, and openness to change in the face of evidence. The LCS carries these attitudes into the practice setting, raising hypotheses in the consulting room and seeking confirmatory or disconfirmatory evidence in the immediate response of the patient.
Formal data may be collected, but it often is not. Nevertheless, LCSs approach the patient using the appropriate literature from both psychotherapy and general psychology insofar as it is applicable, applying it whenever it seems potentially helpful to do so, but also supplementing it with an intuitive grasp of the situation, based on prior experience with similar situations. However, as no two patients are exactly alike, the similarity may not point to an effective intervention. Therefore, it is crucial for LCSs to observe the effects of the intervention, and to add those observations to their personal data bank so that what they learn can be applied, tentatively, to the next patient/research project for whom it may appear to be applicable. The only way experience can accumulate in a meaningful way is if this process is systematic, and LCSs strive to improve their functioning on the basis of past experience. Because of the problems inherent in memory, this process of systematic learning is facilitated if careful records are kept. Thus, as a clinician, the LCS always is learning and applying the product of that learning to new situations. So it is with the scientist as well, as no single research project is definitive, and science proceeds with the accretion of knowledge.
To summarize this portrait to this point, LCSs may be expected to engage in the following activities:
1. the display of a questioning attitude and search for confirmatory or disconfirmatory evidence;
2. the application of relevant research findings to the clinical case immediately at hand;
3. the documentation of each individual clinical contact; and
4. the production of research, either collaboratively or more traditionally.
Not every activity is performed in every instance by the LCS. Rather, these activities have been presented in descending order of frequency, with the questioning attitude the most pervasive and critical feature of the LCS, and the conduct of formal research activity the least likely to occur, although it is highly desirable, and has happened on many occasions. The LCS can be described as
A person who, on the basis of systematic knowledge about persons obtained primarily in real-life situations, has integrated this knowledge with psychological theory, and has then consistently regarded it with the questioning attitude of the scientist. In this image, clinical psychologists see themselves combining the idiographic and nomothetic approaches, both of which appear to them significant. (Shakow, 1976, p. 554)
In this statement, Shakow was not describing a LCS, a term that had not yet been coined, but a scientist-practitioner. The similarity between the two descriptions (a LCS and a scientist-practitioner) is striking, and raises the question as to the relationship between the two concepts. My guess is that Shakow would have been very comfortable with the notion of a LCS, and might have seen it, as it was intended, as an instantiation of his influential recommendation. Unfortunately, the more common implementation of the scientist-practitioner recommendation is heavily tilted toward science as it is practiced in the academy, which is the locus of clinical training programs. It is not as much concerned with science as it may be practiced in the community, which is the site of employment of most graduates of clinical training programs. In many cases, the scientist-practitioner model is implemented in a sequential fashion, with science being taught in graduate school and practice occurring during the career of the psychologist (Kanfer, 1990).
There is an important distinction between idiographic and nomothetic, terms used quite appropriately by Shakow. The scientist, working in a laboratory, seeks nomothetic data, which characterize large groups; the clinician, working with one individual at a time, generates idiographic data. A major problem confronting all clinicians is how to apply nomothetic conclusions to local, idiographic presentations, or, more simply, how to apply group findings to individuals. Shakow, of course, did not force a choice between the two, but instead understood that both are legitimate, and he challenged the scientist-practitioner to combine the two. The clinician, faced with the need to respond to a single individual, should apply whatever nomothetic generalizations are relevant, but also must recognize that there will be gaps in knowledge and the nomothetic conclusions cannot be applied blindly. Patients expect clinicians to integrate their professional experience with the nomothetic data to reach an informed and local idiographic intervention.
It should be clear at this point that the LCS model is most focused on the implementation of a scientific attitude and should not be taken as an alternative to scientific activity. LCS activity may not lead to a firmly established set of conclusions and generalizations, but it does seek to develop a loosely determined set of hypotheses. The LCS differs from the ordinary clinician by engaging in the systematic study of the clinical work, a process that will reduce the extent to which the LCS's observations are subject to the distortions of the cognitive heuristics (Tversky & Kahneman, 1974) that are common to all thinking. It is foolish to ignore the results of research, but it also is foolish to apply those results without consideration of the local circumstances that generated the findings, and the inevitable difficulties with generalization. Instead, LCSs will consider whether and how these research findings can be incorporated in their local activity. It must be reiterated that the LCS model does not provide the clinician with an excuse to ignore research. Rather, the model requires LCSs to attend to research done by others, to apply it where applicable, but also to systematically study prior clinical activities so that local hypotheses can be raised a step closer to generalizations. This leads to a continuing process rather than a firm product, and the result should be a closer and closer approximation to choosing appropriate and helpful local interventions.
It would be foolish to think that the LCS model (or any other psychological construct, regardless of the imaginativeness of the nomenclature) arose sui generis. I already noted that the scientist-practitioner model bears considerable resemblance to the LCS model, although many proponents of the former might disavow connection with the latter. A much closer and more readily acknowledged resemblance is with the concept of disciplined inquiry (D. R. Peterson, 1991).
Disciplined inquiry is an approach that flows from a consideration of the relationship between research and practice, carefully avows that professional education never should suggest the rejection of research, but states that the training and goals of researchers and practitioners are quite different. Peterson applauded the scientist-practitioner model but criticized the way that model had been implemented, leading him to suggest a new approach to training practitioners. I agree with his criticism of the implementation of the scientist-practitioner model (Stricker, 2000), and agree that a different approach from the one that has been taken is required for adequate training of practitioners, but feel that it can be done within a true scientist-practitioner framework, and the necessary approach is the LCS model. In fact, in light of the recent concerns he has expressed about professional schools (D. R. Peterson, 2003), I wonder if Peterson might be more inclined to agree with the wisdom of adhering more closely to a scientist-practitioner model, assuming it was implemented as originally proposed and not as more typically distorted. Nonetheless, Peterson's suggested approach (disciplined inquiry) has a good deal to recommend it.
In analyzing differences between science and practice, Peterson stated:
The simplifications and controls that are essential to science cannot be imposed in practice. Each problem must be addressed as it occurs in nature, as an open, living process in all its complexity, often in a political context that requires certain forms of action and prohibits others. All functionally important influences on the process under study must be considered. At its best, practice runs ahead of research. Each case is unique. The pattern of conditions the client presents has never occurred in exactly this form before, and the most beneficial pattern of professional action cannot rest only on scientifically established procedures, although any contingencies established in prior research must not be ignored. The measure of effect goes beyond statistical significance to functional importance. It is not enough to determine whether a difference is random or replicable. The difference has to matter to the client. (D. R. Peterson, 1991, p. 426)
Peterson's emphasis on local conditions, along with the recognition of some of the limitations of research precedents without discarding the contributory value of those findings, is consistent with the LCS approach.
Peterson then went on to describe his approach to disciplined inquiry. He advocated that we "start with the client and apply all the useful knowledge we can find" (D. R. Peterson, 1991, p. 426), a procedure quite different from starting with science. In doing so, careful assessment leads to a formulation that results in an action, the effects of which must be evaluated carefully. Depending on the evaluation, either the action can be continued or a reformulation and alternative action is required, with additional analyses and possible reformulation following until the problem is resolved. Finally, it is noted that the results of this sequence become part of the knowledge base of the practitioner, leading to more adequate formulations in similar future cases.
The resemblance of this approach to the LCS model is striking. In both, science is drawn upon when relevant, local conditions are taken into account, and the prior experience of the practitioner is a source of hypotheses that feed into the clinical formulation. There is a need to systematically record the experience so as to benefit future interventions and not start over at the beginning each time. Perhaps the major differences (and these are comparatively minor) are the emphasis on the scientific thought process in the LCS model and the conviction in that model that a scientist-practitioner framework need not be discarded.
It also should be noted that Kanfer (1990) has a slightly different approach, although one that also is quite consistent with the meaningful relationship between science and practice. For Kanfer, as with Peterson, the practitioner is best advised to begin with the patient, not the science, and then seek information in the scientific corpus that will assist with the theory-driven hypotheses that have been developed. Here, too, the science is applied where relevant, but the entry into science begins with a formulation about the patient.
the presidential task force
When Ronald Levant assumed the presidency of the American Psychological Association, he constituted a task force that was charged with studying evidence-based practice. The final report of that group (American Psychological Association, 2005) is particularly instructive. Evidence-based practice, if it is interpreted literally, in a manner that presents a straitjacket rather than a set of permissive guidelines (Chambless & Ollendick, 2001), would hamper the activities recommended for the LCS.
The Task Force was composed of a heterogeneous group of psychologists, some of whom had clear allegiances to the strict and sole application of evidence-based practices, others of whom were more aligned with practice as usual, and many of whom fell in between these alternatives. The conclusions they drew were consistent with the LCS model, and mentioned that model favorably.
The Task Force began by adopting the definition of evidence-based practice from the important report of the Institute of Medicine: "Evidence-based practice is the integration of best research evidence with clinical expertise and patient values" (Institute of Medicine, 2001, p. 147). This definition presages their conclusions and is consistent with an LCS approach. That is, the report attends to research evidence but also incorporates clinical expertise (patient values can be considered part of the local component of the LCS). This led to their definition of evidence-based practice in psychology as "the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences" (American Psychological Association, 2005, p. 5).
The Task Force went on to endorse the consideration of multiple sources of evidence, recognizing the need to appraise the strengths and weaknesses of each source of information. Its approach to evidence follows from that of the Template for Developing Guidelines (American Psychological Association Task Force on Psychological Intervention Guidelines, 1995; Stricker et al., 1999). This was a document that proposed a dual axis approach to evidence, thereby giving credibility to both efficacy and effectiveness approaches to psychotherapy research, and recognizing the inherent tradeoff between internal and external validity.
In the consideration given to clinical expertise, an important aspect of the LCS model (as is acknowledged in the report), there is explicit recognition that the clinician is responsible for integrating the best of research data and clinical data, keeping in mind the patient's characteristics and the goals of treatment. Thus, recognition is given to the local aspects of the clinical situation and to potential problems with the generalizability of some efficacy data based on controlled trials. It is this aspect of the report, the recognition that different patients require different treatment in different circumstances, that is characterized as attention to patient preferences, and it represents an emphasis on local conditions.
The report concludes with a summary statement that clearly supports the crux of the LCS model: "What this document reflects, however, is a reassertion of what psychologists have known for a century: that the scientific method is a way of thinking and observing systematically and is the best tool we have for learning about what works for whom" (American Psychological Association, 2005, p. 18).
In an interesting complement to the report, and an additional endorsement of multiple sources of evidence, the philosopher Cranor (2005), writing within a legal context, summarized his article by stating:
If the Daubert trilogy of decisions tried to ensure that the law better comports with the relevant science, this will happen only if courts recognize the complexity of scientific evidence, how scientists draw inferences from such evidence and the fact of reasonable disagreements between respectable scientific experts. With the help of the public health community, judges might rectify overreactions to the initial Daubert teaching and help ensure that the courts use in the courtroom the kinds of inferences that public health scientists use in their research. (Cranor, 2005, p. S127)
If we read this as referring to the need to ensure that practice comports with the relevant science, and that the psychological community, particularly its practitioners, use the same inferences from research that scientists use in their research, we have a sound statement about the problems with, and solutions to, current statements about research. The LCS certainly should recognize all of the value that research has to contribute to clinical practice, but also should have a keen awareness of the limitations of that research.
some lessons from social psychology
About 40 years ago I had a conversation with Paul Rozin, an old friend from college days. At the time, I was engaged in research in some clinical areas and Rozin was identified as a physiological psychologist with a particular interest in taste. He was astonished by my concern with significance levels, preferring to subject findings to an interocular test (did the results hit you between the eyes?). He felt that science, in general, proceeded by the presence of striking findings rather than simply probabilistic ones. In fact, Rozin's formulation anticipated Peterson's (1991) concern about placing statistical significance above functional importance. I did not agree with him then about the lack of value of significance testing, and I do not thoroughly agree with him now, but it is an issue worth considering.
It was in that context that I saw an article addressing this familiar topic in a recent social psychology journal (Rozin, 2001), social psychology being Rozin's newest interest. He took his cue from Solomon Asch, an old colleague of his and certainly a respected social psychologist. Rozin introduced the article with a quotation from Asch's work:
In their anxiety to be scientific, students of psychology have often imitated the latest forms of sciences with a long history, while ignoring the steps these sciences took when they were young. They have, for example, striven to emulate the quantitative exactness of natural sciences without asking whether their own subject matter is always ripe for such treatment, failing to realize that one does not advance time by moving the hands of the clock. Because physicists cannot speak with stars or electric currents, psychologists have often been hesitant to speak to their human participants. (Asch 1952/1987, pp. xiv-xv; from Rozin, 2001, p. 2)
The LCS model is concerned with the practice of clinical psychology rather than research in social psychology, but the ideas Rozin expressed are relevant. In an attempt to be scientific, clinicians often can become overly scientistic and trade narratives for statistics in an attempt to understand and treat their patients. We probably are at a point of development where description remains a valuable activity and definitive conclusions are beyond our grasp.
Rozin indicated that "There are many possible methods, including examination of historical materials or literature, observation, participant observation, laboratory experiment, natural experiment, questionnaire/survey, and interview" (Rozin, 2001, p. 4). He went on to note that "It is as if the experiments in question transcend time, location, culture, race, religion, and social class" (Rozin, 2001, p. 4). He was discussing social psychology, but the same observation pertains to clinical psychology and describes the approach and methods of the LCS. Notice that traditional research, whether in a laboratory or in the field, was not denigrated, but the available and appropriate methodology was extended to include theoretical and observational work. The local aspects of the research were noted, and the problems with generalizability were emphasized.
Although Rozin understood the power of a well-controlled experiment, he also recognized its limitations. In the course of instituting appropriate controls in an experiment, there are two risks that we must attend to: "(a) they allow for the possibility that the results will not bear on real social situations and (b) they may generalize to only a very narrow range of apparently similar experimental situations" (Rozin, 2001, p. 9). Again, the potential failure of generalizability forces us to turn to the local aspects of the clinical situation, and all clinical work, like all politics, is local.
The lesson from social psychology is that we should recognize the stage of scientific endeavor we have reached and not discard methodology appropriate to that stage by engaging in physics envy (physics, being more advanced, has progressed beyond simple and exclusive reliance on experimental methods). The LCS has learned this lesson and attends to research data as appropriate, while remaining cognizant of limitations of generalizability and local considerations that impact the interpretation of those findings.
a word of caution
The LCS model places great reliance on the skill of the individual clinician, acting with the curiosity and skepticism of a scientist. However, as the presidential task force report notes:
experts are not infallible. All humans are prone to errors and biases. Some of these stem from cognitive strategies and heuristics that are generally adaptive and efficient. Others stem from emotional reactions, which generally guide adaptive behavior as well but can also lead to biased or motivated reasoning (e.g., Ditto & Lopez, 1992; Ditto et al., 2003; Kunda, 1990). Whenever psychologists involved in research or practice move from observations to inferences and generalizations, there is inherent risk for idiosyncratic interpretations, overgeneralizations, confirmatory biases, and similar errors in judgment (Dawes, Faust, & Meehl, 2002; Grove et al., 2000; Meehl, 1954; Westen & Weinberger, 2004). Integral to clinical expertise is an awareness of the limits of one's knowledge and skills and attention to the heuristics and biases, both cognitive and affective, that can affect clinical judgment. Mechanisms such as consultation and systematic feedback from the patient can mitigate some of these biases. (American Psychological Association, 2005, p. 10)
This introduces a class of potential errors that generally can be considered as cognitive heuristics (Tversky & Kahneman, 1974). Cognitive heuristics are mental shortcuts that allow us to make judgments quickly but sometimes erroneously, because the shortcut disregards some of the available information. These heuristics characterize a good deal of cognitive activity, often have much functional value, but occasionally can lead to significant error. Among the heuristics clinicians use most frequently are the availability heuristic, the representativeness heuristic, and the anchoring heuristic.
The availability heuristic refers to the tendency to reach solutions that come to mind most easily, so that a dramatic instance of a previous case will be remembered more readily than frequent but unremarkable exceptions. A clinician can draw upon past experiences, use the availability heuristic, and be certain that a particular intervention is warranted because it "always" has worked in the past. Of course it has not, and the only relevant counter to this heuristic is careful record keeping, so that a probabilistic rather than absolute conclusion can be reached, and the probability can be determined by actual occurrence rather than faulty memory processes. It should be noted that the availability heuristic probably underlies the phenomenon of the illusory correlation (Chapman, 1967), another frequent error that a clinician may demonstrate. An illusory correlation consists of the impression that two variables are correlated when, in fact, they are not. This can result from faulty recollection of the co-occurrence of the two variables, a recollection based on the availability heuristic. A LCS will be skeptical about presumed probability statements unless there are supportive data available.
The representativeness heuristic links judgments to signs that are representative of the group in general, so that it is assumed, once it is determined that a patient is a member of a particular diagnostic group (or has a particular set of psychodynamics), that the patient has all of the characteristics of that group. This is the object of the diagnostic procedure, but it is not always an accurate basis for individual judgment, as not all patients with borderline personality disorder, for example, have all of the characteristics of borderline personality disorder (e.g., not all cut themselves). It is also important to recognize that the representativeness heuristic is the source of much stereotyping and subsequent bias. The assumption, for example, that all members of a particular racial group, gender, or religious persuasion are characterized by specific features is an example of the representativeness heuristic at work, and must be guarded against.
In fact, the representativeness heuristic can be seen as a failure in generalizability, and is a source of potential difficulty for the universal adoption of interventions that are empirically supported. This strange connection between the scientific inclinations of proponents of empirically supported techniques and the pitfalls of the representativeness heuristic rarely is noted, and represents a danger to the "scientific" practitioner as well as to the clinician in the field. However, the LCS, maintaining scientific objectivity and skepticism, should be better prepared than the ordinary clinician to deal with the problem (although no one is immune).
The anchoring heuristic is a process by which the clinician (indeed, all human beings) draws conclusions early in the therapeutic process and then is more likely to respond to confirmatory evidence afterward, unwittingly failing to be as responsive to evidence contrary to the early conclusion. This also can be seen as a confirmation bias.
The approach of forming hypotheses and then seeking confirmatory or disconfirmatory evidence is an important aspect of clinical functioning, but it is prone to the danger of sorting evidence so that only the confirmatory is considered seriously. Awareness of this tendency is one way of guarding against it, but as with all heuristics, this is easier said than done. We should note that it is not only "consultation and systematic feedback from the patient [that] can mitigate some of these biases" (American Psychological Association, 2005, p. 10), but also careful attention to their existence and the problems they create, along with systematic record keeping, that can mitigate the natural errors created by cognitive heuristics and other biases. The LCS, being aware of the tendency of the heuristics to mislead, and having systematic data available, can reduce the likelihood of error, but no human being can avoid it entirely, regardless of the approach taken to clinical practice.
In addition to cognitive heuristics, errors also can be made simply because of faulty memory and the limitations created by theoretical and social expectations. There is a need to keep these possible problems in mind when considering the use of clinical judgment and expertise.
the local clinical scientist in action
The LCS model cannot be implemented by means of a manual. It is more a model of a mindset and process than of a series of carefully crafted interventions. Nonetheless, there are some good examples, in both the clinical and the research literature, of approaches that embody this model.
Before looking at specific examples, it must be noted that a general approach to patients that cuts across theoretical lines is entirely consistent with the LCS model. The model consists of an informed sequence of hypothesis formation, testing, and revision on the part of the therapist. This process has been presented most clearly, for psychodynamic therapists, by Sullivan (1954). He explicitly stated that "the interviewer obtains impressions which on scrutiny may or may not be justifiable. More or less specific testing operations should be applied to those impressions with the idea of getting them more nearly correct" (Sullivan, 1954, p. 122). In order to get them more nearly correct, "the testing of hypotheses cannot safely be left wholly to relatively unformulated referential operations. Instead it is well for the interviewer now and then to think about the impressions that he has obtained. The very act of beginning to formulate them throws them into two rough groups: those about which one has no reasonable doubt and those which, when noted, are open to question. The latter, of course, need further testing" (Sullivan, 1954, p. 122, italics in original). Finally, the "way of testing hypotheses is by clearly purposed exploratory activity of some kind. The interviewer asks critical questions—that is, questions so designed that the response will indicate whether the hypothesis is reasonably correct or quite definitely not adequate" (Sullivan, 1954, p. 122). Of course, to guard against the anchoring heuristic, the interviewer must be open to the evidence provided by the carefully crafted exploratory questions. This explicit description of the process of hypothesis testing is precisely what is required of the LCS, and the Sullivanian statement is echoed, in a more classical vein, by Greenson (1967), who refers to interpretations as alternative hypotheses. Greenson also wrote about the specific link between empathy and theory, so that the impressions that lead to hypotheses are based on a solid grounding in theoretical knowledge.
The method described above is not restricted to psychodynamic approaches. Beck's (Beck, Rush, Shaw, & Emery, 1979) cognitive behavioral approach relies on a method he calls collaborative empiricism. In this approach, the thoughts, attitudes, beliefs, and behaviors of the patient are tested through methods such as Socratic Questioning in order to determine their validity and usefulness. If these thoughts, attitudes, beliefs, and behaviors are not found to be valid or productive, a search for more functional substitutes is undertaken. Here, too, the process of explicit testing is undertaken, with an eye toward confirming or disconfirming the validity of the therapist's formulations and the patient's approach. Again, the expectations are informed by theoretical and research grounding, so that the approach to the patient is not based solely on empathic connection, and again, the anchoring heuristic remains a danger.
Finally, in describing the characteristics of a culturally competent psychotherapist, Sue (1998) lists scientific mindedness as the first of three cross-cultural skills. By scientific mindedness, he is thinking of "therapists who form hypotheses rather than make premature conclusions about the status of culturally different clients, who develop creative ways to test hypotheses, and who act on the basis of acquired data" (Sue, 1998, p. 445). He notes that "culturally competent therapists will try to devise means of testing hypotheses about their clients. This scientific mindedness may also help to free therapists from ethnocentric biases or theories" (Sue, 1998, p. 446). This skepticism guards against inappropriate generalizations from one culture to another. This scientific approach also will guard against the inappropriate stereotyping of a member of a different cultural group, a second characteristic of the culturally competent therapist, and one Sue refers to as dynamic sizing. Finally, the third characteristic is culture-specific expertise. The culturally competent therapist must have knowledge about the culture of the patient and the knowledge and skill to translate this expertise into culturally effective means of intervention. Each of these characteristics, a skeptical and scientific attitude, the ability to deal effectively with cognitive heuristics (the primary danger here is the representativeness heuristic), and knowledge of the culture of the patient, is part of the expected functioning of the LCS, who combines knowledge with experience to function in an effective manner.
The research community also has had much to contribute to the functioning of the LCS. Although research has not produced a manual that can be implemented without thought or regard to local conditions, it has contributed to a general fund of generalizable knowledge upon which the LCS can draw. The LCS, drawing upon experience to supplement extant data, relies on memory to recall like instances and to implement previously successful interventions. One difficulty with this is that memory is subject to distortions, predictably in the form of the cognitive heuristics, particularly the availability heuristic, that have been described. This makes it necessary to document findings rather than rely solely on memory. However, clinicians still are limited to their own experience, and this often is insufficient for new cases. By aggregating data across clinicians with similar local experiences, it is possible to construct a database that is systematically derived and can be applied more readily to individual cases.
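The value of aggregation can be illustrated with a minimal sketch. The record fields, clinician labels, and outcomes below are entirely hypothetical; the point is only that a pooled database rests on more cases than any single clinician's documented experience:

```python
# Hypothetical sketch: pooling documented case records across clinicians.
# All names, fields, and outcomes here are illustrative, not real data.

records = [
    {"clinician": "A", "presenting_problem": "panic", "outcome": "improved"},
    {"clinician": "A", "presenting_problem": "panic", "outcome": "unchanged"},
    {"clinician": "B", "presenting_problem": "panic", "outcome": "improved"},
    {"clinician": "C", "presenting_problem": "panic", "outcome": "improved"},
]

def improvement_rate(rows):
    """Proportion of cases in a set of records rated 'improved'."""
    return sum(r["outcome"] == "improved" for r in rows) / len(rows)

# One clinician's own experience base versus the pooled database:
own = [r for r in records if r["clinician"] == "A"]
pooled = records

print(improvement_rate(own))     # estimate from only 2 cases
print(improvement_rate(pooled))  # estimate from all 4 pooled cases
```

The single clinician's estimate rests on two cases; the pooled estimate, on four. In practice the gain is far larger, which is what makes the systematically derived database more trustworthy than individual recollection.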
There are two specific examples of this type of research contribution that I would like to cite. The first of these is the practice research network (PRN; Wolf, 2005). A PRN consists of a group of clinicians in the community who collaborate on data collection for the purpose of research. This shifts the laboratory into the community and, just as the LCS views each patient as an experiment and the consulting room as a laboratory, the PRN aggregates data across these individual laboratories and creates a larger and more veridical database than either the individual clinician or the individual scientist can build alone. PRNs have been constructed by national professional organizations (e.g., American Psychological Association, American Psychiatric Association), local professional organizations (e.g., Pennsylvania Psychological Association), and local clinical centers (e.g., Anna Freud Center). Perhaps the most developed of these PRNs is the one sponsored by the Pennsylvania Psychological Association. The work of that group has been described (Borkovec, Echemendia, Ragusea, & Ruiz, 2001) and several findings from the project have also been presented in the literature (e.g., Ruiz et al., 2004).
The second is an elaborate project known as patient-focused research (Lambert, Hansen, & Finch, 2001). Recent reports from this exciting project (Harmon, Lambert, Slade, Hawkins, & Whipple, 2005; Lambert, Harmon, Slade, Whipple, & Hawkins, 2005) show the value of aggregating data across many clinicians and then providing feedback to individual clinicians concerning the progress being made by their patients. In this project, patients regularly provide ratings of their progress in therapy; these are compared to normative ratings, and the therapist is advised if the ratings fall below a prescribed cutoff score. In some variations, the therapist also can be given some advice about possible alterations in treatment that might be helpful. The difference in outcome for patients whose therapists were provided with feedback and those who were not was striking, and clearly supported the value of the feedback. Actual therapist ratings, in contrast to the normative database, were so overly optimistic that the value of pure clinical judgment must be questioned, and the need for systematic data collection as a source of information to supplement the clinician is underlined. It is also possible, within this design, to provide feedback in comparison to the therapist's own patients, or to patients from a particular diagnostic category. The general thrust of patient-focused research is to show the potential of actually using group findings as a guide for individual actions, which is the crux of the problem of the applicability of research findings.
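The feedback logic just described, in which a patient's session-by-session ratings are compared to a normative trajectory and the therapist is alerted when progress falls below a cutoff, can be sketched schematically. The normative trajectory, recovery rate, and cutoff margin below are illustrative assumptions, not the actual algorithm or norms from Lambert's project:

```python
# Hypothetical sketch of patient-focused feedback. The trajectory,
# recovery rate, and cutoff margin are invented for illustration and
# do not reproduce the actual measures used by Lambert and colleagues.

def expected_score(session: int, intake_score: float) -> float:
    """Illustrative normative trajectory: distress is assumed to
    decline by a fixed fraction of its current level each session."""
    recovery_rate = 0.05  # assumed 5% improvement per session
    return intake_score * (1 - recovery_rate) ** session

def feedback(session: int, intake_score: float, current_score: float,
             cutoff_margin: float = 10.0) -> str:
    """Flag the case when the patient's current distress rating exceeds
    the normative expectation by more than the cutoff margin
    (higher scores indicate greater distress)."""
    expected = expected_score(session, intake_score)
    if current_score > expected + cutoff_margin:
        return "alert: not on track; consider altering treatment"
    return "on track"
```

For example, under these assumed norms a patient who entered treatment with a distress score of 80 would be expected to score about 62 by session 5; a current rating of 75 would exceed the cutoff and trigger an alert to the therapist.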
Every clinician engages in evidence-based practice. Indeed, it would be both foolish and professionally irresponsible to knowingly ignore any available evidence. The key lies both in what evidence is available to each clinician and in how that evidence is weighed. In weighing evidence, it is critical to consider both internal and external validity. To speak in the vernacular, clinicians who rely exclusively on internal validity know more and more about less and less. Clinicians who rely exclusively on external validity know less and less about more and more. Clinicians who rely exclusively on internal validity are absolutely certain of something that may not apply to the patient in front of them. Clinicians who rely exclusively on external validity are absolutely certain about something that probably does apply to the patient, but it may not be true. Of course, these are caricatures, and there is much room between absolute reliance on one type of validity or the other. The LCS occupies this ground, seeks out relevant evidence, weighs it in a balanced, critical, and skeptical manner, and applies it as well as it can be applied. The LCS then systematically records this new experience so that it can be consulted the next time it may become relevant, not as a guiding principle but as one more piece of relevant evidence. By doing this, the LCS is functioning as a scientist-practitioner.