Do you agree with Matt Normand’s assumption that as behavior analysts, we are first scientists? Why or why not? What are the implications of acting as a scientist and how can you ensure that you will practice along these guidelines?
APA format
Check for plagiarism and AI
Attached is the discussion rubric and reading
Must cite one outside source
Discussion Post Rubric (20 Possible Points)

Length of Post – Enough content to convey a scholarly message
4 points: The author’s post consisted of 150–200 words (not counting reference citations).
2 points: The author’s post consisted of 100–149 words (not counting reference citations).
0 points: The author’s post consisted of 100 words or fewer (not counting reference citations).

Grammar, Usage, Spelling – The author proofread using software for obvious errors in grammar, usage, and spelling
4 points: The author’s post contained fewer than 2 grammar, usage, or spelling errors.
2 points: The author’s post contained 3–4 grammar, usage, or spelling errors.
0 points: The author’s post contained more than 5 grammar, usage, or spelling errors, and proofreading was not apparent.

Referencing and Utilizing Outside Sources – The author referenced all assigned readings and one (1) unique reference
4 points: The author posted a unique reference from a peer-reviewed document AND all the assigned readings.
2 points: The author was missing a unique reference from a peer-reviewed document or did not cite all the assigned readings.
0 points: The author neither used a unique reference from a peer-reviewed document nor cited all the assigned readings.

Promotes Discussion – The author produces content beyond a summary and applies it to a logical argument
4 points: The author’s post clearly responds to the assignment prompt, develops ideas cogently, organizes them logically, and supports them through empirical writing. The post also raises questions or stimulates discussion.
2 points: The author’s post responds to the assignment prompt but relies heavily on definitional explanations and does not create, develop, and logically support original ideas. The post may stimulate some discussion.
0 points: The author’s post does not correspond with the assignment prompt; it mainly discusses personal opinions or irrelevant information, or presents information with limited logic and a lack of development and organization of ideas, and does not support any claims made.

Demonstrates Application – The author is able to apply content to an example or real-world application
4 points: The author’s post clearly demonstrates application and relationship to the week’s assigned reading/topic.
2 points: The author’s post refers to the assigned topic/reading tangentially but does not demonstrate application.
0 points: The author’s post does not demonstrate application of the week’s assigned topic/reading.
Be advised, there are also response costs associated with specific behaviors:
● A response cost of 3 points will be administered for not responding to a peer’s post
● A response cost of 3 points will be administered for late submissions (up to 2 days)
● Discussion posts that are more than two days late will not be accepted unless excused by the instructor
Science, Skepticism, and Applied Behavior Analysis
Matthew P. Normand, Ph.D., BCBA, University of the Pacific
ABSTRACT
Pseudoscientific claims concerning medical and psychological treatments of all varieties are commonplace. As behavior analysts, a sound skeptical approach to our science and practice is essential. The present paper offers an overview of science and skepticism and discusses the relationship of skepticism to behavior analysis, with an emphasis on the types of issues concerning behavior analysts in practice.
Descriptors: Behavior analysis, pseudoscience, science, skepticism
“In science, keeping an open mind is a virtue—just not so open that your brains fall out.”
– James Oberg1
In science, being skeptical does not mean doubting the validity of everything, nor does it mean being cynical. Rather, to be skeptical is to judge the validity of a claim based on objective empirical evidence. David Hume, the 18th century philosopher, asserted that we should accept nothing as true unless the evidence available makes the non-existence of the thing more miraculous than its existence. Even extraordinary claims can be true, but the more extraordinary the claim, the more extraordinary the evidence required. Not too long ago, the notion of human flight seemed like pure fancy. Today, scores of people take to the sky almost as routinely as they take to the highway. To be skeptical does not mean dismissing claims—even extraordinary claims—out of hand. It means examining the available evidence before reaching a decision or withholding judgment until sufficient evidence is had. One should not start with the assumption that a claim cannot be true any more than one should start with the assumption that a claim must be true. All reasonable evidence on both sides should be considered.
Skepticism is a critical feature of a scientific repertoire. Indeed, many of the most prominent skeptics are and
1 Quote attributed to James Oberg by Sagan (1996).
have been some of the world’s most prominent scientists, including Richard Dawkins, Stephen Jay Gould, and Carl Sagan. Even B. F. Skinner was among the signers of the 1976 letter announcing the formation of the Committee for the Scientific Investigation of the Paranormal, an organization dedicated to the promotion of scientific skepticism and publication of the Skeptical Inquirer (Kurtz, 1996).2 The relationship of skepticism to behavior analysis is the relationship between skepticism and science in general. The experimental analysis of behavior is a natural science, and this natural science is the foundation of all behavior analytic research and practice. Moreover, the practical importance of a skeptical repertoire for those engaged in behavior analytic practice cannot be overstated. Pseudoscience abounds in so many of the areas behavior analysts tread, including developmental disabilities, education, and psychotherapy. According to physicist Robert Park, pseudoscience is characterized by claims purportedly supported by well-established scientific evidence when, in truth, such evidence is misinterpreted, misunderstood, or wholly lacking (Park, 2000).
This paper is aimed primarily at behavior analysts in practice who are likely to encounter various pseudoscientific claims in the course of their work and who might not immediately identify themselves as
2 Now known as the Committee for Scientific Investigation (CSI).
scientists, although, it will be argued, they should. Pseudosciences know no professional boundaries and thrive in many areas of research and practice. Claims regarding the effectiveness of sensory integration therapy, facilitated communication, and inclusion qualify as pseudoscience. All are offered as legitimate therapies or useful practices when, in fact, the evidence available fails to support them (Jacobson, Foxx, & Mulick, 2005). Today, one would be hard-pressed to find an area more widely affected by rampant pseudoscience than that of autism treatment, which also happens to be one of the largest single areas of application for behavior analysts (Shook, Johnston, & Mellichamp, 2004). In the sections that follow, I discuss scientific standards of evidence as they relate to the practice of behavior analysis, describe some of the common characteristics of pseudoscientific claims, and offer suggestions to promote skepticism in applied behavior analysis.
Standards of Evidence
Interobserver Agreement
When gathering and evaluating relevant evidence, scientists take careful steps to minimize bias in observation. What scientists say should be controlled primarily by what is seen, rather than what one hopes to see. Bias in observation cannot be entirely eliminated, but it can be controlled. The ideal case might be one in which some automated recording system can be utilized, as often is the case in basic research. Though not without precedent, such automated recording is fairly uncommon in applied research and practice. Where human observers collect behavioral data, steps must be taken to ensure that changes in behavior over time are actually changes in the behavior of interest, and not the behavior of the observer (Baer, Wolf, & Risley, 1968). That is, the veracity of the data is assessed through some form of interobserver agreement measure or through the use of double-blind control procedures. In this way, the primary source of control over the verbal behavior of the observer is more likely to be the events that actually transpired.
[Behavior Analysis in Practice, 1(2), 42–49]
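The paper leaves the agreement computation abstract. As a minimal sketch, the common interval-by-interval (point-by-point) agreement statistic can be written as follows; the observer records here are hypothetical stand-ins for real session data:

```python
def interval_ioa(observer_a, observer_b):
    """Interval-by-interval interobserver agreement (IOA).

    Each argument is an equal-length list of interval records,
    True where that observer scored the behavior as occurring.
    Returns percent agreement: agreements / total intervals * 100.
    """
    if len(observer_a) != len(observer_b):
        raise ValueError("observers must score the same intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# Hypothetical records: two observers score on-task behavior
# across ten consecutive 10-s intervals.
primary   = [True, True, False, True, False, True, True, False, True, True]
secondary = [True, True, False, True, True,  True, True, False, True, True]
print(interval_ioa(primary, secondary))  # 90.0
```

Low agreement flags the possibility that the data reflect the behavior of the observer rather than the behavior of interest; other IOA variants (e.g., occurrence-only agreement) weight the intervals differently.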
Experimental Design
Even the most careful observations are not sufficient to inform about, say, the effects of a given cold remedy. Many of us do not visit the doctor when experiencing mild symptoms characteristic of the common cold. Instead, we visit the doctor only when we’ve been struggling with the symptoms for some prolonged period of time or when the symptoms become so severe that we have a difficult time coping. Typically, we receive a brief exam, are prescribed some medication, and go on our way. Within a few days we are feeling better and able to resume our normal activities. The wonders of modern medicine? Not necessarily. We might well have gotten better in about the same amount of time had we never visited the doctor. We visited the doctor only after some extended period of time suffering with symptoms or after we noticed the symptoms become severe. Either circumstance might suggest that we were nearing the end of our illness. The medication might have dampened our symptoms, but our recovery might not have been hastened. No matter how carefully we observed what happened, we would be unable to draw any firm conclusions about cause and effect.
Now consider an analogous case concerning a behavior analytic intervention. A young child is referred by his classroom teacher for behavior analytic services because he rarely works
on assigned tasks during the class time allotted. The behavior analyst sets about taking careful records of the time the child is engaged in assigned class work for a period of one week, with observations distributed across times of day and academic domains. Once these data are analyzed, and it is determined that the child is engaged in assigned academic work about 30% of the time he should be so engaged, a token reinforcement system is implemented with points awarded each time he is engaged continuously with his work for 60 s. The points are, of course, later exchanged for back-up reinforcers such as preferred activities or items. The behavioral observation system is continued and, after a few weeks of intervention, the child is now observed to be on-task approximately 80% of the time and the teacher reports that his assignment completion is greatly improved, even better than some of his peers.
The wonders of modern behavioral science? Not necessarily. The intervention could have produced the changes observed, but so could have any number of other uncontrolled variables. Perhaps the type of work assignments changed during the same period of time, resulting in easier or more interesting assignments. Or the referral might have increased the overall amount of attention provided to the student by the teacher and other school personnel, thereby improving performance due to changed motivating conditions or more effective academic instruction or behavior management. It is impossible to know why the student’s performance improved based on the types of observations made. But, you say, we can be more certain of our success because what we did was based on solid behavioral principles and, moreover, we are successful again and again with different children. Perhaps, but it could very well be that our token economy intervention regularly recruits one or more of the extraneous variables mentioned (e.g., increased attention by school personnel), which is the actual agent of change. Then again, maybe not. The point is that we cannot know
from the information obtained. Experimental evaluation is critical
for all sciences and is the mechanism that ultimately provides us the ability to predict and control our subject matter. In most behavior analytic experimental designs, prediction is made possible through repeated measures of behavior during a baseline condition before any experimental or clinical manipulation is made. Such measures then provide a basis against which to compare behavioral observations made under the changed conditions. We use the baseline measures to predict what we would see if our manipulation did not affect the behavior. If the observed behavior under our changed conditions (e.g., during intervention) deviates from our prediction, an experimental or clinical effect is suggested. The extent to which we are able to replicate this effect through experimental manipulations such as reversals to baseline or multiple-baseline arrangements determines the strength of the conclusions that can be drawn. When we can predict the likelihood of behavior occurring or not occurring under certain conditions, and when we can alter such likelihoods through our manipulations, we have demonstrated a cause-effect relationship.
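The baseline-as-prediction logic described above can be illustrated with a toy comparison. The numbers and the simple range-based decision rule below are illustrative assumptions only, not a substitute for visual analysis or for replication through reversal or multiple-baseline manipulations:

```python
# Hypothetical percent-on-task measures from repeated sessions.
baseline     = [28, 33, 30, 27, 32]   # pre-intervention observations
intervention = [55, 68, 74, 80, 78]   # observations under changed conditions

# The baseline predicts what we would see if the manipulation
# had no effect: values within the range already observed.
lo, hi = min(baseline), max(baseline)

# Intervention sessions deviating from that prediction suggest
# (but do not by themselves demonstrate) an effect.
deviating = [x for x in intervention if not lo <= x <= hi]
print(f"baseline predicts {lo}-{hi}% on-task")
print(f"{len(deviating)} of {len(intervention)} intervention sessions deviate")
```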
Of course, a well-developed science of behavior should presumably offer well-established technologies for the practitioner, technologies that do not require continued experimental evaluation. In medicine, for example, the diagnosis of a bacterial infection can readily lead to a prescription of antibiotics. The effectiveness of the antibiotic prescription is, however, heavily predicated on an accurate diagnosis. In behavior analytic practice, the prescription of intervention strategies also is heavily predicated on accurate diagnosis or, in behavioral terms, a functional behavior assessment. At present, the varying rigor with which functional assessments are conducted across practitioners and settings suggests that the easy prescription of well-established behavioral technologies is not practically at hand, with some exceptions.
A powerful reinforcement-based intervention such as a token economy, superimposed on existing but unknown contingencies, is likely to be beneficial even without a rigorous functional assessment. Ideally, as behavioral science matures, we will have evidence-based procedures of a fairly standardized sort that have been demonstrated to work for a large majority of people with whom they are used. When non-responders are identified, more careful functional assessments can be conducted on an individual basis and individualized interventions prescribed as necessary, much the way a physician might alter the prescription of antibiotics if your health is not improved in the expected period of time.
Replication and Self-Correction
Methods applied in any specific case are not failsafe. Fortunately, the majesty of science is that although it is fallible, it also is self-correcting. Careful technological description of procedures allows others to replicate the same procedures at different times, in different places, and with different participants (Baer et al., 1968). In the best cases, the peer-review process of publication in scientific journals identifies flawed studies or erroneous conclusions drawn from otherwise solid studies before they are widely disseminated. Once findings are disseminated, failures to replicate the reported findings or the discovery of new findings that refute or attenuate some earlier findings lead to revisions of scientific language and, ultimately, to a greater ability to describe, predict, and control our world. Sadly, many non-scientists view this as insufferable flip-flopping. The politician who alters an opinion or policy is thereafter chastised for being indecisive or insincere. The government agency that revises the guidelines for a healthy diet is mistrusted. In the public arena, it often is better to be true to some core conviction than responsive to a changing world. Science embraces “flip-flopping” so long as it is due to changes in evidence rather than extraneous sources of control.
The scientific community arranges explicit and powerful contingencies of reinforcement for such behavior, and the scientist who treads lightly as preliminary data are gathered is in a much better position to alter his or her stance as emerging evidence dictates. A hallmark of the pseudoscientist is the propensity to make bold statements and draw firm conclusions in the absence of sufficient evidence. Once so committed, the aversive consequences for changing course can trump those arranged by the scientific community.
Perhaps it is not so difficult to see how one can succeed in making claims absent any supporting evidence, but how does someone succeed in promoting a claim in the face of existing evidence to the contrary? In psychology and the related social sciences, part of the answer is that markedly lesser standards of evidence are accepted than in the so-called hard sciences (e.g., physics, chemistry, and biology), and society seems to follow suit. It is not entirely clear why this is so. To be sure, a physicist need not labor to convince an engineer of the importance of basic physical laws. If the engineer does not abide by the laws of physics, the building falls down. This outcome is obvious and the cause is not attributed to some unknowable random process beyond the control of the engineer. It is attributed to some flaw in design or construction. Even the layperson doesn’t assume that buildings sometimes fall down spontaneously because we can’t hope to control nature well enough to ensure otherwise. As a result, the engineer or builder is blamed and the failed methods revised or discarded. However, when a psychological therapy fails to demonstrably change behavior, the blame is not necessarily laid upon the therapist or the therapy, though the consequences of the failure can be as great or greater than the collapsed building. Instead, many laypeople and scientists alike assume it impossible to reliably influence human behavior, because human behavior is complex and not entirely lawful. Therefore, to demonstrate that one therapy does
not succeed as reliably as another is not necessarily a fatal blow for the less successful therapy. This is an unfortunate state of affairs.
So what is to be made of the proposition that some things cannot be known with certainty, human behavior or otherwise? Nothing is known for certain, but much is known for which the likelihood of alternative explanations is so small as to be unworthy of consideration. When discussing what we know, we are really describing the strength of a prediction we can make. If we state that the sun will rise in the east tomorrow, we state this because it has never been observed to do otherwise.3 Based on historical observations of both the daily rising of the sun and, more importantly, scores of physical regularities observed by scientists at multiple levels of analysis, we can state the probability of the sun rising in the east as being so high as to be practically certain. Is it possible that the sun will rise in the west? Yes, but to say something is possible is not to say much at all. Science deals with probability, not possibility.
But perhaps the foregoing description of the general philosophy of science is just one of many equally valid philosophies about the world and our knowledge of it. Rubbish. The superiority of science is quite well-established, as science is the only “philosophy” that regularly provides the ability to predict and control that which it purports to explain. One might argue that prediction and control are not the ultimate demonstrations of truth, but such arguments seem to hold better in conversation than in practice. As the biologist Richard Dawkins eloquently put it, “Show me a cultural relativist at 30,000 feet and I’ll show you a hypocrite” (Dawkins, 1995, p. 31). When it really matters, we rely on science; we fly in the plane designed in accordance with the laws of physics.
3 This is, of course, a geocentric description of the behavior of the earth and sun. Although wanting in scientific precision, it should serve the present purpose better than appeals to the regularity of the earth’s rotation as it revolves around the sun.
The Practical Limits of Scientific Rationalism
Ideally, we would behave as scientific rationalists in matters as diverse as nutrition, economics, and global warming. That is, we would be able to respond to direct empirical evidence as we confront important matters affecting our lives. But what if experimentation or the analysis of existing experimental data is beyond reach? Most of us are extremely limited in our ability to distinguish between fact and fiction in unfamiliar areas such as quantum mechanics or even automobile repair. What we “know” comes from our contact with others who describe the evidence for us rather than from our evaluation of the relevant research. As a result, we are almost unavoidably dogmatic in practice, insofar as a great deal of what we do is influenced by statements of truth professed by people of authority (or notoriety) rather than our own examination of the evidence. That this is so should be no great surprise. We haven’t the skills necessary to personally investigate all the phenomena that impact us in our day-to-day affairs.
So what is to be made of those areas that are beyond the scope of our direct study but do have an impact on our lives, both personal and professional? For example, how is a behavior analyst to deal effectively with the many claims made about the genetic underpinnings of a variety of conditions, including obesity, a learning disability, or autism? Ultimately, many of us will have to be dogmatic in approach, but we should be carefully dogmatic. At best, we are likely to consult reviews of the research literature in lieu of the literature itself. But in so doing, we are subject to the biases of interpretation in the writing of the reviewer. At worst, we learn of some new fad diet or therapy from someone already convinced of its effectiveness and thereby vested in convincing us of its effectiveness by providing only evidence seeming to support the claim. There is no easy way for the non-specialist to identify pseudoscience in unfamiliar
disciplines. However, as discussed in the next section, one or more red flags typically accompany pseudoscientific claims.
Characteristics of Pseudoscience
He Said, She Said
Pseudoscientific claims often eschew objective experimental evidence in favor of anecdotes or testimonials. The current autism-vaccine controversy is a case in point. A large, vocal contingent of parents and professionals contend that the Measles-Mumps-Rubella (MMR) vaccine or other vaccines that contained a mercury-based preservative called thimerosal are the cause behind the recent autism “explosion.” A commonly cited piece of evidence for the alleged link between certain vaccines and autism is that parents of children with autism report that their child only began to show signs of autism after receiving a vaccination. These parent reports have become even more important in the face of mounting empirical evidence failing to show even a correlation between vaccine administration and autism diagnosis (Normand & Dallery, 2007). When the available scientific evidence is examined, parent testimonies are essentially the only “evidence” that supports a link at all. Despite their best intentions, parent reports are poor sources of evidence, as parents rarely have extensive training in behavioral observation, their observations are not independently corroborated to ensure accuracy, and, being the parents of the children observed, they are far from objective.
Other times, the anecdotal nature of the evidence for a claim is dressed up in scientific garb, as is the case with claims that mega-vitamin regimens produce marked improvements in young children with autism (e.g., Barthelemy et al., 1981; Rimland, Callaway, & Dreyfus, 1978). The arguments for such treatments are replete with examples of children who reportedly improved after they began a mega-vitamin regimen. A critical problem with such evidence is that the published studies rely almost exclusively
on parent reports of changes in child behavior. Rather than being presented as anecdotes, the reports are dressed up as scientific data (usually quantified in some way and analyzed statistically), giving the impression of something more substantial (e.g., Barthelemy et al., 1978). Additionally, steps must be taken to isolate the effects of the vitamins from any other intervention. If the vitamins are only one part of a larger collection of intervention strategies, including intensive behavior analytic intervention, it would be inappropriate to attribute the observed improvement in the child’s behavior to vitamins rather than to any of the other strategies or combinations thereof.
The Unfalsifiable Claim
Scientific studies refuting pseudoscientific claims often are criticized and dismissed on grounds of poor methodological rigor or problematic design. Such is the case with facilitated communication (FC) with persons diagnosed with autism. FC proponents claim that it enables these individuals to communicate through the aid of a “facilitator” who physically guides their hand over a keyboard so that they can type messages. A number of well-controlled experiments have demonstrated that it is the facilitators doing the communicating (Jacobson, Mulick, & Schwartz, 1995). Simply put, if the facilitator does not have access to the question posed, a correct answer is not given. Douglas Biklen, one of the main proponents of FC, frequently dismisses this sizeable body of experimental research on the grounds that the studies are poorly designed and conducted, though no acceptable scientific rationale for this claim is offered (Biklen, 1993). These studies all meet the well-established standards of experimental design and appear in reputable peer-reviewed scientific journals. As a defense, Biklen has suggested that the methods employed in the contradictory studies are predicated on the assumption that human behavior can be understood from a natural science perspective, and
that traditional scientific standards of evidence are merely a social construction (Jacobson et al., 1995). In whose plane would you rather fly?
It also is common for proponents of a pseudoscientific claim to criticize individual studies or pieces of evidence in minute detail, while the confluence of multiple sources of evidence refuting the claim is ignored. In the area of autism, many opponents of behavior analytic interventions focus on the methodological limitations of Lovaas’ (1987) widely cited clinical outcomes study. They point to the lack of random selection and, especially, the lack of random assignment. What they ignore are the other outcome studies supporting the positive results reported by Lovaas (e.g., Howard, Sparkman, Cohen, Green, & Stanislaw, 2005; Sallows & Graupner, 2005). More importantly, they ignore the decades of sound experimental research employing single-case research designs demonstrating the effectiveness of interventions based on behavior analytic principles, targeting a variety of problems across a variety of populations, including young children with autism.
When evidence obtained by independent investigators using a variety of sound experimental methods points to a common conclusion, the picture is clear. Any single study will have limitations. This is why replication plays such an important role in science. As the body of research in any given area of inquiry grows, it becomes populated by numerous studies, all having different sets of strengths and limitations. As the evidence in one study is verified by other studies, the probability of explanations other than those suggested by the data shrinks.
The Dull Edge of Science
It is sometimes claimed that the very fact that mainstream science rejects a claim offers support of its veracity. The mainstream scientists are characterized as closed-minded and the pseudoscientists as cutting edge. Such characterizations find their way into all manner of pseudoscientific spin-doctoring, from those recommending special diets for children with autism to the aforementioned claims of a vaccine-autism link. Despite the absolute rejection by the medical and related scientific establishments, diet and vaccine proponents claim that their information is at the forefront of modern medicine. The establishment, they claim, simply lags behind. There is no shortage of case studies in the history of science that they can dredge up to support their position as noble mavericks. After all, at one point in time the heliocentric view of the universe was widely dismissed and Copernicus, as its chief proponent, suffered great abuse. Even the Wright brothers were initially viewed as curiosities for their conviction that human flight was within reach. True, but as Michael Shermer, founder and director of the Skeptics Society and publisher of Skeptic magazine, eloquently stated, “They laughed at Copernicus. They laughed at the Wright brothers. Yes, well, they laughed at the Marx brothers” (1997, p. 50). The reality is that far more people have proved deserving of criticism for their outlandish claims than have been vindicated. Heresy alone does not constitute reasonable evidence.
Implications for Behavior Analysts in Practice
What specific action might a practicing behavior analyst take in light of the preceding discussion? That is difficult to say. In preparing this paper, I found very little in the way of concrete recommendations for skeptical practice in the literature. Most treatises on skepticism emphasize “critical thinking” and highlight pseudoscience warning signs with illustrative examples, much as has been done in this paper. After some consideration, I have compiled a list, far from exhaustive, of possible actions that seem to me feasible and likely to be of benefit, though whether they will be of benefit is most certainly an empirical question.
Read and Read Widely
A sure way to spot pseudoscience is to know the real science. Maintaining contact with the peer-reviewed scientific literature is the primary way of keeping abreast of scientific developments and controversies. One also should read widely. That is, you should read more than just the mainstream behavior analytic journals. It is not reasonable to assume behavior analysts will be intimately familiar with all the sciences or even all of the behavioral sciences, but reading widely within one’s specialty (e.g., education, developmental disabilities, health psychology) is important. When you contact a new claim, even one that is from the behavior analytic community, become practiced at searching the scientific literature for evidence and information before rushing to judgment. In addition, it might not hurt to read or subscribe to a publication such as The Skeptical Inquirer or Skeptic. Doing so will put you in contact with critical analyses of a wide variety of controversial claims, including some directly relevant to your practice.
Be a Scientist-Practitioner
First and foremost, be a proponent of evidence-based practice and good science, not just those things formally identified as “behavior analysis.” Toward this end, incorporate rigorous evaluative systems into your clinical practice, including experimental manipulations whenever possible. For example, if a family is considering placing their child on a special diet as a means of treating “autistic symptoms,” it might be possible to persuade them to evaluate the effects of the diet in a systematic way. A list of clear, operationally defined behavioral objectives could be agreed upon in advance, an adequate baseline established, and then the diet introduced and removed systematically over the period of several weeks or months. It might even be possible to arrange for the parents to systematically provide and withhold the diet according to pre-specified guidelines but while keeping the behavior analyst(s) blind to the manipulations until the evaluation is complete.
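As a sketch of how such an evaluation might be summarized, the reversal (ABAB) arrangement described above could be tabulated as follows; the phase labels, session counts, and behavior counts are all hypothetical illustrations, not data from any actual evaluation:

```python
# Hypothetical ABAB evaluation of a special diet: an agreed-upon
# target behavior is counted daily while the diet is systematically
# withheld (A) and provided (B).
phases = [
    ("A: no diet", [12, 14, 13, 15]),
    ("B: diet",    [13, 12, 14, 13]),
    ("A: no diet", [14, 13, 12, 14]),
    ("B: diet",    [13, 14, 13, 12]),
]

for label, daily_counts in phases:
    mean = sum(daily_counts) / len(daily_counts)
    print(f"{label:11s} mean daily count = {mean:.2f}")

# If the means do not shift systematically as the diet is introduced
# and removed, the evaluation fails to support a diet effect.
```

In these made-up numbers the phase means barely move, which is exactly the kind of outcome that would let the family and the behavior analyst settle the question empirically rather than by argument.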
46 SKEPTICISM AND APPLIED BEHAVIOR ANALYSIS
This approach would both emphasize the role of careful evaluation of treatment and eliminate the need for potentially heated discussion or argument about the merits of the dietary intervention. It might even provide a nice bit of empirical evidence that could be shared through conference presentations or scientific publication. But also be critical of your own practice, and be wary of situations such as that described earlier with the example of the on-task student and the “wonders of modern behavioral science.” In short, be a model of skeptical behavior generally. Do you really know that the improvements you see are attributable to your efforts? Even if they are, which of your efforts are most critical? Might any be omitted, thereby making treatment easier or more efficient? These are questions that can and should be answered, not just by researchers, but by those engaged in the practice in question. Be a true scientist-practitioner. This aspect of applied behavior analysis has long been championed as one of its defining features (e.g., Baer et al., 1968); it is so described in virtually every textbook and so taught in virtually every training program. As a profession, are we living up to these ideals? I, for one, am skeptical in the most literal sense.
Implications for Applied Behavior Analysis
As a field, behavior analysis would be well served to develop strategies to influence the behavior of its constituents with respect to the issues discussed in this paper. Any reasonable approach to such influence will undoubtedly be multifaceted, and the actions suggested below constitute only some of the many possible strategies.
Promote Skeptical Research and Scholarship
As mentioned in the previous section, specific recommendations about how to behave skeptically are lacking in the published literature. As I assembled the suggestions for this paper, I found myself wanting a more comprehensive functional analysis of skeptical behavior.
That is, under what conditions are we likely to say someone is skeptical or that they are behaving skeptically? Conceivably, if some such conditions are identified, then steps can be taken to evaluate ways to teach a skeptical repertoire to students, professionals and paraprofessionals, families, and behavior analysts alike. Some research does exist in this vein, though conducted for different purposes. For example, a recent article by McKenzie, Wixted, and Noelle (2004) describes a method to evaluate the skepticism of experimental subjects about the possible answers provided in a forced-choice task. Though in the context of this particular study skepticism was an undesirable characteristic, presumably such work also could be used to identify conditions that might be altered to enhance skepticism.
As a way to foster skeptical research and analysis by behavior analysts, explicit solicitation of such papers for behavior analytic journals is an obvious move. The very journal you are reading seems a particularly appropriate vehicle for this work, but such articles would also be at home in other outlets. One might publish a review of existing studies evaluating a controversial treatment or summarizing the evidence-based consensus for an effective intervention in Behavior Analysis in Practice (BAP), an experimental evaluation of a fad therapy in the Journal of Applied Behavior Analysis (JABA), or an analysis of the potential controlling variables for skeptical or credulous verbal behavior in The Analysis of Verbal Behavior or The Behavior Analyst. Such work is not without precedent in behavior analytic journals (e.g., the excellent experimental evaluation of facilitated communication by Montee, Miltenberger, & Wittrock, 1995), but it is not common.
Highlight Non-Behavior Analytic Work with Implications for Behavior Analysts
In a manner similar to JABA’s effort some years ago to highlight basic behavioral research of potential interest to applied behavior analysts, journals such as JABA or BAP could devote a
section to reporting on work outside of mainstream behavior analytic circles that nonetheless bears on behavioral research or practice. For example, recent articles have appeared in Current Directions in Psychological Science questioning the validity of claims that there is an autism epidemic (Gernsbacher, Dawson, & Goldsmith, 2005), and in the Proceedings of the National Academy of Sciences demonstrating that contingent, but not noncontingent, maternal attention shapes infant speech (Goldstein, King, & West, 2003). Behavior analysts might not review the contents of such journals on a regular basis or at all and, consequently, are likely to overlook research quite relevant to their interests.
Additionally, workshops and symposia focusing on controversial therapies could be featured events for continuing education at regional and national conferences. These workshops or symposia might not focus specifically on behavior analytic techniques or theory, but could involve careful scientific analysis of research and practice relevant to behavior analysis. It seems only reasonable that steps be taken to ensure that professional behavior analysts keep abreast of developments in behavioral science and practice, and not just attend programs that rehash the same old material originating from the same group of researchers and practitioners.
Organizational Position Statements
Many major scientific and professional organizations release official position statements when some manner of ridiculousness relevant to their purposes comes to light. The American Academy of Pediatrics (AAP) issued a statement denying any demonstrated link between vaccines and autism, and the American Psychological Association (APA) issued a resolution describing facilitated communication as unproven and unsupported by scientific evidence, to cite just two examples. Our regional and national behavior analysis organizations have been conspicuously quiet on such matters, though those matters no less affect the research and practice
of their constituents than they do the constituents of the AAP or APA (indeed, there is at least some overlap among the membership of all three organizations). Clear position statements, with at least a summary analysis of why the position is as it is, could prove a useful guide to parents and professionals alike. An improved interface with the media to promote such endeavors could enhance the effect. This might be accomplished through the establishment of media sections on organizational websites to post current research summaries, organized responses to pseudoscientific claims, and the like, as well as through the solicitation of media coverage of national and regional conferences, perhaps with organized panels of experts to serve as spokespeople. Progress is being made in this direction, with the Association for Behavior Analysis International and the Florida Association for Behavior Analysis now consulting with public relations professionals and taking these very steps. Hopefully, this is a sign of good things to come.
Do No Harm
Do no harm. It is the credo of the helping professions. It is therefore a credo for behavior analysts in practice. Detection of, and protection from, pseudoscientific practices is an important service for those in need who have limited abilities to detect such foolery themselves. Such need can arise when an unproven therapy is used as an adjunct to a proven therapy and, as a result, the proven therapy is compromised in some way. For example, suppose that a couple has been convinced that sessions in a hyperbaric oxygen chamber will be of great benefit to their young son, recently diagnosed with autism. Although the parents have enrolled their son in an intensive behavioral intervention program in which he is making good progress, the hyperbaric oxygen therapy requires them to travel out of state once a month for several days at a time. During these travels, their son does not receive any intensive behavioral intervention. What harm might result from such a
diminished intensity of intervention? We cannot know for certain, but we have reason to be concerned. At the very least, we know that considerable benefit can result from early and intensive behavioral intervention, and we have no evidence that any benefit will result from the time and money spent on the hyperbaric oxygen therapy. If the use of such an unproven treatment with dubious potential for efficacy hinders more proven treatments, it would be unwise to pursue it in non-research settings.
Not only can precious time and money be diverted away from useful and proven practices, but grave physical harm also can result. Consider the case of the 5-year-old Pennsylvania boy who, in 2005, reportedly died following complications from chelation therapy, a procedure intended to rid the blood of heavy metals erroneously assumed by some to cause the symptoms of autism. Or the 2000 case in which a young girl in Colorado died from suffocation during “rebirthing,” a form of attachment therapy that involves wrapping the patient in a sheet and requiring that they force their way free, in an attempt to mimic childbirth so that the patient is “reborn” (for a horrifying account of this incident, see Mercer, Sarner, & Rosa, 2003).
Pseudoscience can and has produced harm. Behavior analysts should do more than avoid or ignore what they consider to be non-behavior-analytic practices. They should take it upon themselves to consider the scientific and pseudoscientific claims being made in their area of practice, become familiar with the evidence for and against these claims, and consider carefully any potentially harmful implications of the claims should they be adopted as practice. When possible, they should take active roles in the careful experimental evaluation of their own practices, of emerging behavior analytic practices, and of pseudoscientific claims. That is, they should be scientific skeptics and informed behavior analytic practitioners.
References
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
Barthelemy, C., Garreau, B., Leddet, I., Ernouf, D., Muh, J. P., & LeLord, G. (1981). Behavioral and biological effects of oral magnesium, vitamin B6, and combined magnesium-B6 administration in autistic children. Magnesium Bulletin, 3, 150-153.
Biklen, D. (1993). Notes on validation studies of facilitated communication. Facilitated Communication Digest, 1, 4-6.
Dawkins, R. (1995). River out of Eden: A Darwinian view of life. New York: Basic Books.
Gernsbacher, M. A., Dawson, M., & Goldsmith, H. H. (2005). Three reasons not to believe in an autism epidemic. Current Directions in Psychological Science, 14, 55-58.
Goldstein, M. H., King, A. P., & West, M. J. (2003). Social interaction shapes babbling: Testing parallels between birdsong and speech. Proceedings of the National Academy of Sciences, 100, 8030-8035.
Howard, J. S., Sparkman, C. R., Cohen, H. G., Green, G., & Stanislaw, H. A. (2005). Comparison of intensive behavior analytic and eclectic treatments for young children with autism. Research in Developmental Disabilities, 26, 359-383.
Jacobson, J. W., Foxx, R. M., & Mulick, J. A. (Eds.). (2005). Controversial therapies for developmental disabilities: Fad, fashion, and science in professional practice. Hillsdale, NJ: Lawrence Erlbaum Associates.
Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication. American Psychologist, 50, 750-765.
Kurtz, P. (1996). CSICOP at twenty. Skeptical Inquirer, 20, 5-8.
Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3-9.
McKenzie, C. R. M., Wixted, J. T., & Noelle, D. C. (2004). Explaining purportedly irrational behavior by modeling skepticism in task parameters: An example examining confidence in forced-choice tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 947-959.
Mercer, J., Sarner, L., & Rosa, L. (2003). Attachment therapy on trial: The torture and death of Candace Newmaker. Westport, CT: Praeger.
Montee, B. B., Miltenberger, R. G., & Wittrock, D. (1995). An experimental analysis of facilitated communication. Journal of Applied Behavior Analysis, 28, 189-200.
Normand, M., & Dallery, J. (2007). Mercury rising: Exposing the vaccine-autism myth. Skeptic, 13, 32-36.
Park, R. (2000). Voodoo science: The road from foolishness to fraud. Oxford, England: Oxford University Press.
Rimland, B., Callaway, E., & Dreyfus, P. (1978). The effects of high doses of vitamin B6 on autistic children: A double-blind crossover study. American Journal of Psychiatry, 135, 472-475.
Sagan, C. (1996). The demon-haunted world: Science as a candle in the dark. New York: Random House.
Sallows, G. O., & Graupner, T. D. (2005). Intensive behavioral treatment for children with autism: Four year outcome predictors. American Journal on Mental Retardation, 110, 417-438.
Shermer, M. (1997). Why people believe weird things. New York: MJF Books.
Shook, G. L., Johnston, J. M., & Mellichamp, F. H. (2004). Determining essential content for applied behavior analyst practitioners. The Behavior Analyst, 27, 67-94.
Author Note
I would like to acknowledge the writings of Richard Dawkins, Carl Sagan, and Michael Shermer as primary influences on the present paper. Wherever possible, I cite directly the sources from which specific material is drawn. However, the overall content of the paper cannot be meaningfully disentangled from my extensive history of reading the work of these three authors. I also would like to acknowledge the insightful comments of the reviewers of this manuscript. Their comments were especially useful and contributed to a greatly improved paper.
Address correspondence to Matthew Normand, University of the Pacific, Department of Psychology, 3601 Pacific Ave., Stockton, CA 95211. (E-mail: [email protected].)