Research on Destructive Cults
This is a pre-galley draft of a paper delivered to the International Congress
on Totalitarian Groups and Sects in Barcelona, Spain, 1993. The paper was
translated into Spanish and published in a book of proceedings.
What is research and why do we bother to conduct it? According to Webster's
Ninth New Collegiate Dictionary (1983), research is "the collecting of
information about a particular subject," "studious inquiry or examination;
especially investigation or experimentation aimed at the discovery and
interpretation of facts, revision of accepted theories or laws in the light of
new facts, or practical application of such new or revised theories or laws"
(p. 1002). Although experimentalists prefer to emphasize the latter definition of
research, the multidisciplinary nature and recent development of cultic studies
suggest that one ought not lose sight of the broader definition. Thus, religious
scholars studying the texts of various cultic groups, clinicians, sociologists,
or anthropologists carefully recording their observations of cultists and their
families, and researchers using psychological tests and statistics can all
contribute to the advancement of understanding in this area.
These professionals conduct research -- and are called upon by others to
conduct research -- because their systematic and disciplined methods provide more
credible answers to important questions than uninformed opining offers.
Nevertheless, the subtlety and complexity of researchers' various methodologies
make it very difficult to conduct truly "definitive" research. Consequently, the
key questions regarding a particular topic may often remain incompletely
answered, even after years of research. In large part, this is the case with
regard to the study of cults. We have learned a lot but there is still much that
we do not sufficiently understand.
In another paper presented to this conference, Margaret Singer and I discuss
definitional issues concerning cults. Here, I will simply say that I distinguish
between cults (or what Europeans typically call "sects") and "new" movements,
whether new religious movements, innovative psychotherapies, or new political
movements. Cults are characterized by the systematic induction or exacerbation
of psychological dependency within a context of exploitative manipulation.
Noncultic movements are relatively nonmanipulative, nonexploitative, and
respectful of individual autonomy.
Because a complete, multidisciplinary review of this field is not possible
within the space limitations of this paper, I will focus my review on the
psychological study of two areas of special concern to those working with the
victims of cultic groups: prevalence and harm. First, however, I wish to discuss
some of the methodological issues that should be considered in evaluating
published reports in this area.
Given the dynamic relationship between cultic groups and society,
at any given time a collection of cultic groups identified for research
purposes will inevitably exhibit varying levels and types of destructiveness.
Moreover, researchers using different definitions of "cult" may tend to
focus on different types of groups. Therefore, comparing research studies on
cultic groups, including the studies examined in this paper, is hazardous. The
situation is somewhat analogous to that of psychopathology research before the
advent of today’s more precise and operational (though still far from
definitive) diagnostic classifications. Although several proposals for
operationalizing the concept "cult" have been advanced (Andersen, 1985; Ash,
1984; Langone, 1989), none has been implemented, and a great deal of
ambiguity still characterizes the term. Nevertheless, if we do not make the best
of what we have, we relinquish the opportunity to make things better.
The fuzziness of the concept "cult" demands that we pay special attention to
the generalizability of research studies. A study involving subjects from many
different groups, for example, may include subjects from groups that are not
truly cults or may have a preponderance of subjects from more destructive and
controversial groups, or the opposite. In any case, the application of the
findings of any particular study to the broad population of cult members will be
suggestive at best.
Even if one limits generalizability (e.g., by applying a study's findings to
only one group), sampling problems can arise. Rarely can researchers obtain
random samples from a group. Groups that tend to have many geographical
locations (e.g., Hare Krishna temples) may differ significantly from location to
location. Samples derived from clinical research will tend to include a
disproportionate number of distressed members or ex-members. Samples that
require the cooperation of a group's leadership run the risk of being "selected"
to make the group look good. Samples derived from the "snowball" technique
(where subjects are asked to identify other subjects) or from "network samples"
(e.g., members of cult awareness organizations) may be skewed because people
tend to associate with others like themselves.
Thus, we find that samples derived from networks associated with
organizations critical of cults tend to have a higher percentage of deprogrammed
or exit counseled subjects. Another limitation with ex-member samples, even when
not associated with "anti-cult" networks, is the difficulties researchers
encounter in attempting to find subjects. Knight (1986), for example, was able
to locate only 20 of 60 former members of the Center for Feeling Therapy. Given
the tendency for the seriously disturbed to experience "downward drift," it is
quite possible that the most distressed ex-members may be the least likely to
come to researchers' attention.
Studies that require the cooperation of cultic groups sometimes reveal, even
in published accounts, possible biases within their samples. In most of
Galanter’s studies of the Unification Church, for example, virtually full
cooperation was obtained, whereas Gaines, Wilson, Redican and Baffi (1984)
received not one reply to 100 questionnaires mailed to current members of cultic
groups. Such a disparity raises questions about the motivations, and by
extension the reliability, of officially sanctioned groups of subjects. Such
doubts are magnified when one considers that subject compliance rates can vary
considerably. Galanter’s study of engaged Moonies (Galanter, 1983), for example,
had a 100% rate of cooperation at a meeting organized by the Unification Church.
In his follow-up study of married Moonies (Galanter, 1986), on the other hand,
only 66% completed his research questionnaire. Although this was a mailed
questionnaire (so a lower cooperation rate is to be expected), it is possible
that a disproportionate number of those who did not complete the questionnaire
might have had negative experiences that they, being committed members of the
Unification Church, would be reluctant to acknowledge, even perhaps to
themselves. Hence, they simply did not complete the questionnaire. Such
subtleties of methodology can easily be overlooked by students of the field.
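A back-of-the-envelope calculation shows how consequential such nonresponse can be. The sketch below (Python) takes only the 66% completion rate from Galanter's (1986) follow-up; the two negative-experience rates are purely hypothetical figures chosen for illustration, not findings from any study:

```python
# Nonresponse bias illustration. Only the 66% response rate comes from
# the text (Galanter, 1986); the experience rates below are hypothetical.
response_rate = 0.66        # proportion who returned the questionnaire
neg_responders = 0.10       # hypothetical rate of negative experiences among responders
neg_nonresponders = 0.40    # hypothetical rate among nonresponders

# The survey sees only responders, so it would report 10%.
observed = neg_responders

# The true population rate is the response-weighted average of both groups.
true_rate = (response_rate * neg_responders
             + (1 - response_rate) * neg_nonresponders)

print(f"observed: {observed:.1%}  true: {true_rate:.1%}")
```

Under these assumptions the survey would report 10% when the true rate is about 20%; the bias grows with both the nonresponse fraction and the difference between the two groups.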
Ungerleider and Wellisch (1979) openly acknowledge the question of subject
motivations, although they do not attribute as much importance to it as others
do:
We did indicate, however, that we would be willing, if asked, to give
our findings in a court proceeding. However, this was never required.
This became the motivation of many cult members to cooperate with us.
Those who were no longer cult members cooperated mainly out of a desire
to further knowledge in this area. It is important to note that we did
not promise cult members that our findings would be positive or helpful
to them. (p. 279)
Many of the subjects of these researchers obviously wanted to appear "normal"
(which was the researchers’ finding) in order to help their groups in court
proceedings. Elevated Lie scales in the studies that used the MMPI cast further
doubt on the credibility of the findings of these studies. Moreover, Ash (1984)
notes that people with dissociative disorders often show "normality" on
objective tests, but show psychopathology on projectives, as is the case with
the only studies of cultic groups to use projectives (Deutsch & Miller, 1983;
Spero, 1982). Levine and Salter (1976) did not even bother to administer tests:
No formal tests were administered to members, although it was
originally intended to do so – this plan was dropped because the members
were extremely suspicious of tests, testers, indeed, society in general
in regard to our attitudes toward them (they feared an exposé). (p. 412)
The credibility of such a subject population is necessarily diminished by
such strong fears about participating in scientific research.
Questionnaires and Psychological Tests
When used to study cultists, these methods have the following advantages: 1)
subjects are all exposed to the same "stimulus"; 2) the measures are easy and
relatively economical to administer; 3) they permit the collection of
quantifiable data; 4) some psychological tests have been thoroughly researched,
and many provide standardized norms against which subjects can be compared.
Questionnaires and psychological tests have the following disadvantages: 1)
many are retrospective, and therefore their responses may reflect faulty
memories; 2) they are self-report measures, and therefore their responses may
reflect psychological variables that incline subjects to answer inaccurately; 3)
they are often not able to detect subtle variables, such as ambivalent
motivations; 4) they may not truly measure that which they purport to measure
(especially if they have not been subjected to rigorous psychometric testing).
Interviews
Interviews may be structured or unstructured. The former can have all the
advantages of questionnaires and psychological tests (standardized interview
instruments exist, e.g., the Hopkins Symptom Checklist) but also have greater
flexibility and provide nonverbal information gleaned by interviewers who can
vary their protocols somewhat to suit the circumstances.
Unstructured or semi-structured interviews, although not as easily quantified
as structured interviews, offer the advantage of greater flexibility, but at the
cost of less precision and control and greater interviewer-generated
distortions. Unstructured interviews are often most appropriate for exploratory
research.
When interviews involve retrospective accounts, the probability of
distortions obviously increases. But a skilled interviewer can diminish the
impact of this factor and, indeed, can draw out information not accessible to
"paper and pencil" measures.
Clinical Case Study
The clinical case study is, in a sense, a form of interview with certain
distinguishing features. Its main advantage over other types of interviews is
the deeper and broader psychological understanding of the client/subject that
results from the duration of psychotherapy and the depth of trust between
therapist and client/subject. Sometimes this method is the most effective for
obtaining useful information, because, for example, so little is known about a
topic that it is impossible to develop truly effective structured interviews or
questionnaires or tests. Indeed, this may be the case with respect to the cult
phenomenon. If allegations of deception in cults are valid, interviewers or
researchers using pencil and paper measures can be easily misled. Clinicians,
especially when they work with a variety of cultists who do not know each other,
may be more effective at penetrating a group "persona" that members tend to
adopt. Although their work may not significantly illuminate questions of
prevalence (because their samples are necessarily skewed toward those needing
help), it does shed light on the processes that harm people in cultic groups.
Clinical methods are also the most appropriate for forensic work involving
damage claims. These situations demand an expert opinion on how specific
processes in a specific group affected a particular individual. Other methods of
study may be helpful in trying to arrive at generalized conclusions (e.g., the
prevalence of harm among the members of a particular cult), but they cannot
contribute significantly to answering the question of whether a particular cult
environment harmed a particular person. Indeed, it seems unlikely that
experimental investigations of extreme influence processes will ever shed much
light on the phenomenon of induced conversion because ethical restraints
prohibit the conducting of such research. Many of the pioneering social
influence experiments (e.g. Milgram, 1974) would not be possible in today’s more
restrictive ethical climate with regard to research with humans.
Naturalistic Observation
Naturalistic observation of a cultic group may be brief or extended,
unstructured or structured. Extended, unstructured observation (e.g.,
participant observation) immerses researchers in the day-to-day activities of a
group. This method, therefore, ought to enable researchers "to penetrate the
fronts that members erect to guard family secrets" (Balch, 1985, p. 32).
However, observers of a group, though they may be better positioned than
psychotherapists for understanding group processes, may not be as well
positioned for understanding the psychological processes affecting individuals.
Moreover, "researcher’s conceptualizing system(s) may significantly affect
his/her perception, description, and interpretation of the phenomena under
study" (Langone & Clark, 1985, p. 96), much as countertransference can affect a
clinician’s analysis of a psychotherapy case. Balch (1985) describes this
process in his own research:
When I first returned from the UFO cult I gave several talks about
the group where I tried to dispel certain misconceptions fostered by the
media, especially those alleging mind control. My descriptions focused
on the voluntary aspects of membership and almost completely ignored the
ways that Bo and Peep used group dynamics to promote conformity. It was
not until later, after interviewing defectors and reflecting on the
patterns recorded in my field notes, that I began to appreciate the
subtleties of social pressure in the group. With greater detachment I
realized that my efforts to defend the cult against unfounded charges
had led me to bias my descriptions by selective reporting. (p. 33)
More structured observational procedures, such as those employed by
researchers in behavior therapy, would help diminish distortions resulting from
the observer’s interpretive framework. Although one proposal for utilizing such
methods has been advanced (Langone, 1989), to date no studies have been
conducted using these methods. Clearly, we need observational protocols that are
sensitive to psychological subtleties and capable of penetrating the group
"persona."
Statistical Analysis
Statistical methods in behavioral and sociological research can vary from the
simple and straightforward to the arcane. Sometimes an excellent study requires
simple methods (e.g., a t-test of means). Sometimes a poorly conceived study can
obscure its deficiencies by confusing the reader with complex statistical
methods. Determining which methods are appropriate for which studies is often a
daunting task demanding attention to subtle details of methodology. Gonzales
(1986) provides an example in a critique of one of Galanter’s studies:
Galanter’s major finding is that members "actually do experience
amelioration in psychological well-being long after joining" (p. 1579).
He bases this, however, on the long-standing Unification Church members
(N=237) from one study (Galanter et al., 1979) compared with guests who
joined the Unification Church after a 21-day workshop (N=9) in another
study (Galanter, M., 1980). Galanter is thus comparing means from an N
ratio of 1:25. With such a dramatic difference in N, an F-test should
have been performed to assess whether a t-test was still valid, but no
such test was made. It is further interesting to note the profound
difference in variance between the two compared groups: for the larger
group (N=237), a variance of 289 was calculated, while for the smaller
group (N=9), a variance of 400 was calculated. When the larger sample
has a smaller variance, the probability of finding the statistic
significant goes up considerably – perhaps even at the one-tailed level.
The t-value would have perhaps not been as significant if there had not
been such a gross difference between the sample sizes and their variances.
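Gonzales's point can be checked directly from the two reported variances and sample sizes. The sketch below (Python, standard library only) does this; because the excerpt does not report the group means, it compares standard errors rather than full t-values. The pooled, equal-variance formula lets the large, low-variance group dominate, shrinking the standard error, while the standard Welch-Satterthwaite approximation both enlarges the standard error and collapses the degrees of freedom from 244 to about 8:

```python
import math

# Summary statistics reported in Gonzales's (1986) critique.
n1, var1 = 237, 289.0   # long-standing members
n2, var2 = 9, 400.0     # recent joiners

# Pooled (equal-variance) standard error, as in a classical t-test.
pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
se_pooled = math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# Welch's standard error and Welch-Satterthwaite degrees of freedom,
# which do not assume equal variances.
a, b = var1 / n1, var2 / n2
se_welch = math.sqrt(a + b)
df_welch = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

print(f"pooled SE = {se_pooled:.2f} (df = {n1 + n2 - 2})")
print(f"Welch  SE = {se_welch:.2f} (df = {df_welch:.1f})")
```

With the same difference in means, the larger Welch standard error and the far smaller degrees of freedom both work against significance, which is the substance of Gonzales's objection.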
The controversy surrounding the cult phenomenon arises, in large part,
because the issues of concern impinge directly on three subjects about which
human beings, including trained scientists, can easily become emotional:
religion, politics (in its broadest sense), and psychological autonomy. The
criticisms directed at cults imply that a) the human mind is more easily
influenced than people care to admit (psychological autonomy), b) some religious
(and psychotherapeutic and political) groups can be corrupt and destructive
(religion), and c) the status quo, whatever its defects, ought to be
defended against the predations of cults (politics). Emotions aroused by these
issues can affect scholarly efforts in many ways.
The subtleties of bias
A somewhat humorous anecdote illustrates one such way. When Dr. John Clark
and I were revising a paper presented at one of the few conferences that brought
together "pro-cult" and "anti-cult" researchers (Langone & Clark, 1985), we
received written feedback from the conference organizer, who was editing the
proceedings. In an attempt to put forth some of the methodological points
described above, we wrote:
While such emotional reactions are understandable, professionals should try
to rise above emotion (although this is certainly easier said than done) and, at
minimum, truly listen to those with whom they disagree.
The editor markedly changed the meaning of this sentence by inserting "mental
health" in front of "professional," thereby implying that only benighted mental
health professionals succumb to emotional reactions and prejudice! Needless to
say, we protested vehemently and the editor’s "editorializing" was eliminated.
Nevertheless, this type of "sniping" still characterizes much scholarly work
in this field.
Not listening to the opposition. Such intrusions of bias into the scholarly
process make it difficult, as Dr. Clark and I noted, for scholars to "truly
listen to those with whom they disagree." Consider, for example, the tracts
that imply that all cult critics, no matter what their academic affiliations,
subscribe to a caricature of "brainwashing," in which
physical brutality is used to turn victims into automata. Schuler (1983) has
sharply criticized the "pro-cultists" who subscribe to this view of
"brainwashing":
Bromley and Shupe’s notion of coercion doesn’t go much further than
the use of torture and threats of violence, so it is rare that anyone
ever is guilty of unjustified manipulation of human behavior. They
construct a straw man argument which they attribute to the critics of
the cults that is easily refuted. For unwarranted coercion to exist, one
would seem to need to develop a metallic sheen, walk with a gimp, smile
on cue; and not exhibit fear of death. Under their subtle touch,
brainwashing appears literally as a washed-out cranium with wind
whistling through the brain cavity. Short of physical violence, they
presume that "free will" is operating intact. Working with such
absolutist notions leads them to ignore obvious distinctions (e.g., when
a Moonie recruiter or a used car salesman has introduced guilt, deceit,
or forced dilemmas into their sales pitches) and to construct highly
exotic puzzles. For example, Bromley and Shupe speculate about a
revolutionary massacre at Jonestown where Jones persuades his adult
followers to swallow cyanide without the use of guns. Presumably, they
would then be acting freely. Get rid of guns, and you’re left with free
will! (Schuler, 1983, pp. 9-10)
Certain "pro-cultists" have apparently had great fun smashing this straw man
over and over again. But the positions my colleagues and I have advanced over
the years are, I dare say, rather more nuanced (see Singer, Temerlin & Langone,
1990, for a recent formulation of the cultic processes frequently labeled
"brainwashing").
The repeated demolition of this straw man view of "brainwashing" undermines
proper clinical treatment of ex-cultists and their families because clinicians
and laypersons exposed only to this viewpoint are likely to fall into a
counterproductive, victim-blaming posture. This is not to say that cultists
don’t play a role in their conversions. An early formulation of the position
articulated by my colleagues and myself (Clark, Langone, Schecter, & Daly, 1981)
emphasized a person-situation perspective toward cult conversions. And Margaret
Singer in her frequently cited Psychology Today article (Singer, 1979)
stated, "many participants joined these religious cults during periods of
depression and confusion" (p. 72). Nevertheless, the capacity of cult
environments to persuade and control recruits and members should not be
underestimated. As Singer (1987) notes, persuasion can work through reason,
coercion, or subterfuge. The potency of cultic environments comes not from the
crude physical coercion of the "brainwashing" caricature, nor even the much more
sophisticated processes of POW (prisoner-of-war) thought reform sometimes
referred to as the "DDD syndrome" – debility, dependency, and dread (Farber,
Harlow & West, 1956). Their power rests on subterfuge that induces and maintains
dependency, a revised "DDD syndrome" – deception, dependency and dread.
Nowhere is the problem of misrepresenting or not understanding one’s
opponents more conspicuous than in the forensic arena. During the past fifteen
years, professionals have played key roles as expert witnesses in legal cases in
which ex-cultists sue their groups for psychological damages. Many of these
cases depend upon testimony concerning coercive persuasion, or thought reform.
Many who oppose these expert witnesses apparently fear that legal successes in
this area threaten religious freedom. Cult critics, on the other hand, believe
that such successes will limit psychological abuse perpetrated by groups that
are, and will continue to be, remarkably free.
Although this question involves judgment calls about which rational people
may disagree, the adversarial nature of the legal system seems, especially in
the "pro-cult" camp, to have spilled over into the research arena, where it
influences researchers’ methods and conclusions. The experience of Dr. Margaret
Singer, the preeminent expert witness in cases involving psychological damage,
is illustrative. Dr. Singer has been subjected to what, in my opinion, could be
interpreted as a campaign of character assassination. She was unjustly accused
of ethics violations regarding her forensic testimony; the American
Psychological Association dismissed the charges. Then, a series of amicus
briefs, which appear to have been instigated by cult apologists, incorrectly
accused her of being a scientific renegade and using concepts rejected by "the
scientific community." In short, she was falsely accused of supporting the
"brainwashing" caricature described earlier. However, when many respected
psychologists and psychiatrists came to her defense and when it was pointed out
that her work has appeared in such bastions of medical orthodoxy as the Merck
Manual of Diagnosis and Therapy (Singer, 1987) and the Comprehensive
Textbook of Psychiatry (West & Singer, 1980), the attacks shifted focus.
Most recently, perhaps because of the credibility of her publications, she is
falsely accused of
saying things in her testimony that are inconsistent with her publications. A
memorandum entitled, "To social scientists concerned about forensic and related
issues dealing with new religious movements," states:
Singer’s position is typically couched in the notion that
brainwashing is "irresistible, irreversible, and that it takes place
subtly without the "victim" really being aware of what is happening." It
seems to us fairly clear that this does not happen. BUT, Singer’s
testimony weaves back and forth between this proposition and "normal"
social influence theory.
If she and/or others, were to back away from the "irresistible,
irreversible and subtle" definition, how does this change the
battleground? Would our task be easier or more difficult? (p. 3)
The "task" referred to is presumably the protection of "new religious
movements." Unfortunately, the single-mindedness with which some approach this
task introduces significant distortions into their work. An example occurs in
the quote above. "’Irresistible, irreversible, and subtle’ definition" refers to
the "brainwashing" caricature, from which the writer apparently fears Dr. Singer
is retreating. The fact is, Singer never advocated this nondiscriminating view,
although it could be argued that in isolated cases the process is
tantamount to being irresistible, irreversible and subtly applied for a
particular individual. Perhaps it is merely convenient for Dr. Singer’s
detractors to attribute the "brainwashing" caricature to her. Nowhere to my
knowledge do they actually quote her saying the kinds of things they claim she
affirms. It appears that the issue for many of these people is not the
correctness of Dr. Singer’s views, but the implications of her testimony, that
is, that certain new age groups and cults do indeed deceive, manipulate, and
harm followers. This form of adversarial advocacy corrupts discourse and makes
genuine dialogue very difficult.
The distortions described above magnify the common tendency to
overgeneralize. Sometimes, as noted earlier, this may be related to
unrepresentative samples. But at other times it may be related to selective
reporting. Some "pro-cult" researchers, for example, seem to discount all harm
associated with cultic groups by labeling ex-members' reports "atrocity tales"
(Bromley & Shupe, 1981). Yet they seem to accept current cult member reports
uncritically and conclude that cults are on the whole beneficial, similar in
function even to psychotherapy (Kilbourne & Richardson, 1984). Balch (1985),
though unsympathetic to the "anti-cult" position (but Balch too makes the error
of equating that position with the "brainwashing" caricature), takes Bromley and
Shupe to task for glossing over the seamier side of cults.
While I appreciate their effort to counter the impression that cults are
somehow uniquely different and dangerous, I wonder if Woodward and Bernstein ever
would have broken the Watergate case if they took the same approach to
government that Bromley and Shupe use with cults. (p. 26)
As "pro-cultists" can deny harm, "anti-cultists" can deny benefit, or at
least the absence of harm. Although some have reasoned cogently on theoretical
grounds that all members of bona fide cults are adversely affected to some
degree (Ash, 1984), the variety of cults, the variety of individual reactions to
cults, clinical experience, and certain research studies (e.g., Galanter, 1989)
incline me to contend that psychological harm is not universal in cults, though
it may be quite common or even normative. Unfortunately, some cult critics do
not explicitly acknowledge this; they overgeneralize from their own work in
which harm is common.
A tiresome "anti-cult" overgeneralization is what I call the "Moonification"
of the cult phenomenon. I agree with Balch (1985):
I have six articles in front of me as I write, each by a well-known
expert in the field of new religions, which purport to explain how cults
recruit members. All are based on the "Moonie model" of recruitment and
they each use the Moonie terms "love-bombing" and "heavenly deception."
The usual scenario involves a carefully orchestrated sequence of steps
where unsuspecting recruits are love-bombed into submission with hugs,
flattery, deception, and an exhausting round of group activities that
lasts from sun-up until late at night. (p. 28)
Although the literature describing "non-Moonie" forms of recruitment and
indoctrination is growing (e.g., Hochman, 1984; MacDonald, 1987/88; Singer,
Temerlin & Langone, 1990), there is still insufficient appreciation of the
variety of contexts in which cultic processes of persuasion and control can
occur.
Unwarranted causal inference
Balch (1985) notes:
In order to make a causal inference, three conditions must be met.
First, there has to be a relationship between two variables. Second, the
alleged cause must precede the observed effect. And third, the
relationship has to persist when possible contaminating variables are
held constant… While causal statements abound in the literature, these
conditions are virtually never satisfied. (p. 29)
A conspicuous example of unwarranted causal inference is the tendency to
conclude, even if only implicitly and after formal disclaimers, that a positive
correlation (which is surprisingly low in the studies that have been done)
between negative attitudes toward cults and contact with the "anti-cult
movement" implies that negative attitudes toward cults are caused by association
with the anti-cult movement (Lewis, 1986). This is unwarranted causal inference
for several reasons. First of all, exposure to cult critics may help ex-cult
members better understand that they were really exploited, which would
understandably cause them to hold more negative attitudes toward their cults
than they would if they did not have this understanding. Cult sympathizers tend
to assume the "innocence" of cults and assume that critical reports are mere
"atrocity tales" (Bromley & Shupe, 1981). But abundant evidence has falsified
this assumption. Secondly, a self-selection process may incline those most
harmed by cults to seek help from "anti-cultists." Thirdly, confusion and
ambivalence among ex-members who do not receive counseling may make them less
able to identify negative aspects of their experience. Wright’s (1983) quotation
of Beckford (1978, p. 111) is pertinent to this point: "The result is that most
informants show considerable confusion about the overall meaning to them of the
events making up their withdrawal from the Unification Church and remain
strongly ambivalent" (p. 107). And lastly, those who have not had the support of
"anti-cult" helping sources may have more difficulty acknowledging to themselves
that they were deceived. Consequently, emphasizing the positive in their
evaluations of their cult experiences can be face-saving.
Prevalence
In 1984 the Cult Awareness Network (CAN) compiled a list of more than 2,000
groups about which they had received inquiries (Hulet, 1984). The frequency with
which CAN and the American Family Foundation have encountered previously
unheard-of groups – at least 6-12 a week – suggests that 2,000 is a low estimate
for the number of cultic groups in the U.S. today, even given the fact that many
about which inquiries are made are probably not cults.
Most cults appear to be small, having no more than a few hundred members.
Some, however, have tens of thousands of members and formidable financial power.
Zimbardo and Hartley (1985), who surveyed a random sample of 1,000 San
Francisco Bay area high school students, found that 3% reported being members of
cultic groups and 54% had had at least one contact with a cult recruiter.
Bloomgarden and Langone (1984) reported that 3% and 1.5% of high school students
in two suburbs of Boston said they were cult members. Bird and Reimer (1982), in
surveys of the adult populations of San Francisco and Montreal, found that
approximately 20% of adults had participated in new religious or para-religious
movements (including groups such as Kung Fu), although more than 70% of the
involvements were transient. Other data in this study, and Lottick (1993),
suggest that approximately two percent of the population have participated in
groups that are frequently thought to be cultic. It seems reasonable, therefore,
to estimate that at least four million Americans have been involved with cultic
groups.
However, as West (1990, p. 137) says, "cults are able to operate successfully
because at any given time most of their members are either not yet aware that
they are being exploited, or cannot express such an awareness because of
uncertainty, shame, or fear." Therefore, in any survey, however random, the
actual number of cultists is likely to be much greater than the number of
persons who identify themselves as members of cultic groups or even of groups
that other people might deem cultic. Because the victims do not identify
themselves as such, they are not likely to be identified as cult-affected by
psychotherapists or other helpers unless the helpers inquire into the
possibility that there might be a cult involvement.
Changing Cult Population
A much larger number of walk-aways (i.e., people who have left cults on their
own) and cast-aways (people who have been ejected by cults) have approached
helping organizations in recent years. Nearly 70% of the subjects in one study
(Langone et al.) were walk-aways or cast-aways, a reversal of earlier studies
in which only 27% of subjects fell into these two categories (Conway et al.,
1986). Former members appear to come from a wider variety of groups, with fewer
coming from eastern groups than in the 1970s and more coming from fringe
Christian or new age groups. Whereas the overwhelming majority (76%) of Conway
et al.’s (1986) 426 subjects came from only five (the Unification Church,
Scientology, The Way, Divine Light Mission, and Hare Krishna) of 48 groups, the
308 subjects in one study (Langone et al.), who came from 101 groups and were
selected in much the same manner as Conway and Siegelman's, were much more
dispersed, with the largest five groups accounting for only 33% of the total
subject population.
Former Scientologists comprised Langone et al.’s largest group – 16%, compared
to 11% for Conway and Siegelman. The Way, Hare Krishna, and the Divine Light
Mission were barely represented in Langone et al., comprising 2%, 2%, and 1%
respectively, compared to 6%, 5% and 11% for Conway and Siegelman. Former
Unification Church members accounted for 44% of Conway and Siegelman’s subjects,
but only 5% of Langone et al.'s.
Given the methodological limitations discussed in an earlier section, what
does the literature tell us?
Some research studies suggest that the level of harm associated with
religious cults may be less than clinical reports indicate, at least for some
groups. Levine and Salter (1976) and Levine (1984) found little evidence of
impairment in structured interviews of over 100 cultists, although Levine and
Salter did note some reservation about "the suddenness and sharpness of the
change" (p. 415) that was reported to them. Ross (1983), who gave a battery of
tests, including the MMPI, to 42 Hare Krishna members in Melbourne, Australia,
reported that all "scores and findings were within the normal range, although
members showed a slight decline in mental health (as measured on the MMPI) after
1.5 years in the movement and a slight increase in mental health after 3 years
in the movement" (p.416). Ungerleider and Wellisch (1979), who interviewed and
tested 50 members or former members of cults, found "no evidence of insanity or
mental illness in the legal sense" (p. 279), although, as noted earlier, members
showed elevated Lie Scales on the MMPI. In studies of the Unification Church
(Galanter, Rabkin, Rabkin, & Deutsch, 1979; Galanter, 1983), the investigators
found improvement in well being as reported by members, approximately one-third
of whom had received mental health treatment before joining the group.
Otis (1985) examined data from a 1971 survey of 2,000 members of Transcendental
Meditation. Dropouts reported significantly fewer adverse effects than
experienced meditators, and "the number and severity of complaints were
positively related to duration of meditation" (p. 41). There was a consistent
pattern of adverse effects, including anxiety, confusion, frustration, and
depression. The "data raise serious doubts about the innocuous nature of TM" (p.
The Institute for Youth and Society in Bensheim, Germany (1980), reported that
TM members tended to be withdrawn from their families (57% of subjects),
isolated in social relations (51%), anxious (52%), depressed (45%), tired
(63%), and exhibited a variety of physical problems, such as headaches and
menstrual disorders.
Former members of a psychotherapy cult (Knight, 1986) reported that they had
had sex with a therapist (25% of subjects), had been assigned love mates (32%),
had fewer than 6 hours of sleep a night (59%), and in therapy sessions were shoved
at least occasionally (82%), were hit at least occasionally (78%), and were
verbally abused (97%). These subjects, 86% of whom felt harmed by the
experience, also reported depression (50%) and menses cessation (32%).
In Conway et al. (1986) ex-members reported the following experiences during
their time in the cult: sex with leaders (5%; 60% in the Children of God),
menstrual dysfunction (22%) and physical punishment (20%). Conway and Siegelman
(1982) reported that ex-members experienced floating (52% of subjects),
nightmares (40%), amnesia (21%), hallucinations and delusions (14%), inability
to break mental rhythms of chanting (35%), violent outbursts (14%), and suicidal
or self-destructive tendencies (21%).
Galanter (1983) studied sixty-six former Moonies, who, according to Barker's
(1983) statistics, should represent about half of those who joined. Galanter
reports that "the large majority (89%) felt that they ‘got some positive things’
out of membership, although somewhat fewer (61%) did feel that ‘Reverend Moon
had a negative impact on members,’ and only a bare majority (53%) felt that
‘current members should leave the Unification Church’" (p. 985). These findings
were consistent with clinical reports during the 1970s and early 1980s. It is
interesting, however, that Galanter was sometimes inclined to put a positive
"spin" on the findings, e.g., his choosing to write that "only (emphasis added)
a bare majority (53%) felt that ‘current members should leave the Unification
Church.’" This is quite a large percentage given that, according to clinical
investigations and countless ex-member reports, Unification Church members are
indoctrinated to assume that the Church is always right and they, when
dissenting, are always wrong. Indeed, Langone (unpublished manuscript) found
that the suppression of dissent was one of the five most highly rated cult
characteristics in a subject pool of 308 former cultists from 101 different
groups. Thus, Galanter's indices of harm, though indirect, are not low.
The study mentioned above (Langone, unpublished) paints an even more negative
picture of the cult experience. Eighty-eight percent of the subjects saw their
groups as harmful (37%) or very harmful (51%). During an average time of
membership of 6.7 years, 11% of the subjects reported being sexually abused.
Sixty-eight percent of the subjects each knew an average of 28 former members
who had not contacted helping resources. Thus, approximately 5,500 persons known
to these subjects had not sought help. Yet subjects estimated that "all or
nearly all" of their friends and acquaintances had difficulty adjusting to
post-group life, 21% felt that "most" had difficulty, 4% "about half," 13%
"some," 6% "hardly any," and 25% were unsure.
Martin, Langone, Dole & Wiltrout (1992) used a variety of instruments,
including the Millon Clinical Multiaxial Inventory (MCMI), to assess the
psychological status of 111 former cultists. These researchers state:
This sample of ex-cultists can be characterized as having abnormal
levels of distress in several of the personality and clinical symptom
scales. Of those subjects completing the MCMI-I, 89% had BR’s ["Base
Rates" – indicates presence of a disorder] of 75 or better on at least
one of the first eight scales. Furthermore, 106 out of the 111 subjects
(95%) who completed the MCMI at Time I had at least one BR score on one
of the MCMI scales. The contention that this population of former
cultists is indeed distressed is further buttressed by their mean score
of 102 on the HSCL (Hopkins Symptom Check List), for which scores of 100
are considered to be indicative of the need for psychiatric care.
Moreover, these ex-cultists had a mean of 72 on the SBS-HP [Staff
Burnout Scale], which is suggestive of burnout and more than one
standard deviation above the mean from Martin’s (1983) sample of
parachurch workers. (pp. 231-234)
Yeakley (1988) gave 835 members of the Boston Church of Christ (BCC) the
Myers-Briggs Type Indicator (MBTI), a psychological instrument that classifies
people according to Carl Jung’s type system. Individuals may differ in the way
in which they tend to perceive (some being more sense oriented, others more
intuition oriented), the way they judge (thinking oriented versus feeling
oriented), and their basic attitudes (extraversion versus introversion). Isabel
Myers and Katharine Briggs, the developers of the MBTI, added a dimension to
Jung’s typology: the person’s preferred way of orienting himself to the outside
world. This orientation may be judging or perceiving. The MBTI thus produces 16
personality types based on the permutations of these variables. Yeakley asked
subjects to answer the questions in the MBTI as they think they would have
answered before their conversion, as they felt at the time of testing, and as
they think they will answer after five more years of discipling in the BCC. He
found that "a great majority of the members of the Boston Church of Christ
changed psychological type scores in the past, present, and future versions of
the MBTI" (p. 34) and that "the observed changes in psychological type scores
were not random since there was a clear convergence in a single type" (p. 35).
The type toward which members converged was that of the group’s leader.
Comparisons with members of mainstream denominations showed no convergence, but
members of other cultic groups did show convergence, although toward different
types than that on which the BCC members converged. Yeakley concludes that
"there is a group dynamic operating in that congregation that influences members
to change their personalities to conform to the group norm" (p. 37). Although
this study did not directly examine harm, it does indirectly support clinical
observations, which contend that the personalities of cult members are bent, so
to speak, to fit the group.
Clinical observations (Ash, 1985; Clark, 1979; Langone, 1991) and research
studies (Galanter, 1989; Langone et al., in preparation) suggest that people
join cults during periods of stress or transition, when they are most open to
what the group has to say. Approximately one-third appears to have been
psychologically disturbed before joining, as evidenced by having participated in
pre-cult psychotherapy or counseling (with figures varying from 7% to 62% of
subjects among eight studies – Barker, 1984; Galanter et al., 1979; Galanter &
Buckley, 1978; Knight, 1986; Spero, 1982; Schwartz, 1986; Sirkin & Grellong,
1988). The majority, however, appear to have been relatively normal individuals
before joining a cult.
Certain studies cited earlier (Levine, 1984; Ross, 1983; Ungerleider &
Wellisch, 1979) found cultists to score within the normal range on psychological
tests or psychiatric interviews. Galanter (1983) found some improvement in the
general well being of cult joiners, which he attributed to a psychobiologically
grounded "relief effect" of charismatic groups.
Wright (1987) and Skonovd (1983) found that leaving cultic groups was very
difficult because of the psychological pressure, a finding consistent with
clinical observations. There is much evidence, reviewed earlier, of
psychological distress when people leave cultic groups.
And yet, the majority eventually leaves. Why? If they were unhappy before
they joined, became happier after they joined, were pressured to remain, left
anyway, and were more distressed than ever after leaving, what could have
impelled them to leave and to remain apart from the group?
The inescapable conclusion seems to be that the cult experience is not what
it appears to be (at least for those groups that deem it important to put on a
"happy face"), either to undiscerning observers or to members under the
psychological influence of the group. Clinical observers, beginning with Clark
(1979) and Singer (1978), appear to be correct in their contention that
dissociative defenses help cultists adapt to the contradictory and intense
demands of the cult environment. So long as members are not rebelling against
the group’s psychological controls, they can appear to be "normal," much as a
person with multiple personality disorder can sometimes appear to be "normal."
However, this normal-appearing personality, as West (1992) maintains, is a
pseudoidentity. When cultists leave their groups, the floodgates open and
they suffer. But they generally do not return, because the suffering they
experience after leaving the cult is more genuine than the "happiness" they
experienced while in it. A painful truth is better than a pleasant lie.
References
Andersen, S. (1985). Identifying coercion and deception in social systems.
In B. K. Kilbourne & J. T. Richardson (Eds.), Scientific research of new
religions: Divergent perspectives. Proceedings of the annual meeting of
the Pacific Division of the American Association for the Advancement of
Science. San Francisco: AAAS.
Ash, S. (1984). Avoiding the extremes in defining the extremist cult. Cultic
Studies Journal, 1(1), 37-62.
Ash, S. (1985). Cult-induced psychopathology, part 1: Clinical picture.
Cultic Studies Journal, 2, 31-91.
Balch, R. (1985). What's wrong with the study of new religions and what we
can do about it. In B. Kilbourne (Ed.), Scientific research of new
religions: Divergent perspectives. Proceedings of the annual meeting of
the Pacific Division of the American Association for the Advancement of
Science. San Francisco: AAAS.
Barker, E. (1983). The ones who got away: People who attend Unification
Church workshops and do not become Moonies. In E. Barker (Ed.), Of gods and
men: New religious movements in the West. Macon, GA: Mercer University
Press, pp. 309-336.
Barker, E. (1984). The making of a Moonie -- Choice or brainwashing? Oxford:
Blackwell.
Beckford, J. A. (1978). Accounting for conversion. British Journal of
Sociology, 29, 249-262.
Bromley, D. G., & Shupe, A. D. (1981). Strange gods: The great American cult
scare. Boston: Beacon.
Bromley, D. G., Shupe, A. D., & Ventimiglia, J. C. (1979). Atrocity tales,
the Unification Church and the social construction of evil. Journal of
Clark, J. G., Langone, M. D., Schecter, R. E., & Daly, R. C. B. (1981).
Destructive cult conversion: Theory, research, and treatment. Weston (MA):
American Family Foundation.
Clark, J. G. (1979). Cults. Journal of the American Medical Association, 242,
Conway, F., Siegelman, J. H., Carmichael, C. W., & Coggins, J. (1986).
Information disease: Effects of covert induction and deprogramming. Update:
A Journal of New Religious Movements, 10, 45-57.
Deutsch, A., & Miller, M. J. (1983). A clinical study of four Unification
Church members. American Journal of Psychiatry, 146(6), 767-770.
Farber, I. E., Harlow, H. F., & West, L. J. (1956). Brainwashing,
conditioning, and DDD (debility, dependency, and dread). Sociometry, 20,
Gaines, M. J., Wilson, M.A., Redican, K. J., & Baffi, C.R. (1984). The
effects of cult membership on the health status of adults and children.
Health Values: Achieving High Level Wellness, 8(2), 13-17.
Galanter, M. (1983). Unification Church ("Moonie") dropouts: Psychological
readjustment after leaving a charismatic religious group. American Journal
of Psychiatry, 140, 984-989.
Galanter, M. (1989). Cults, faith healing and coercion. New York: Oxford
University Press.
Galanter, M., & Buckley, P. (1978). Evangelical religion and meditation:
Psychological effects. Journal of Nervous and Mental Disease, 166, 685-691.
Galanter, M., Rabkin, R., Rabkin, J., & Deutsch, A. (1979). The "Moonies": A
psychological study of conversion and membership in a contemporary religious
sect. American Journal of Psychiatry, 136, 165-170.
Gonzales, L. (1986). Inquiry into the mental health and physical health
effects of the new religions: Implications of present empirical research.
Honors B.A. Thesis. Department of Psychology and Social Relations, Harvard
University.
Hochman, J. (1984). Iatrogenic symptoms associated with a therapy cult:
Examination of an extinct "new psychotherapy" with respect to psychiatric
deterioration and "brainwashing." Psychiatry, 47, 366-377.
Hulet, V. (1984). Organizations in our society. Hutchinson, KS: Virginia
Institute for Youth and Society. (1980). The various implications arising
from the practice of Transcendental Meditation. Bensheim, Germany.
Kilbourne, B. K. (Ed). (1985). Scientific research of new religions:
Divergent perspectives. Proceedings of the annual meeting of the Pacific
Division of the American Association for the Advancement of Science. San
Francisco: AAAS.
Kilbourne, B. K., & Richardson, J. T. (1984). Psychotherapy and new religions
in a pluralistic society. American Psychologist, 39, 237-251.
Knight, K. (1986). Long-term effects of participation in a psychological
"cult" utilizing directive therapy techniques. Master’s Thesis, UCLA.
Langone, M. D. (1989). Social influence: Ethical considerations. Cultic
Studies Journal, 6(1), 16-24.
Langone, M.D. (1991). Assessment and treatment of cult victims and their
families. In P. A. Keller & S. R. Heyman (Eds.), Innovations in clinical
practice: A source book, Volume 10. Sarasota, FL: Professional Resource
Langone, M. D. Report on a survey of 308 former cult members. Unpublished
manuscript.
Langone, M. D., & Clark, J. G. (1985). New religions and public policy:
Research implications for social and behavioral scientists. In B. K.
Kilbourne & J. T. Richardson (Eds.), Scientific research of new religions:
Divergent perspectives. Proceedings of the annual meeting of the Pacific
Division of the American Association for the Advancement of Science. San
Francisco: AAAS.
Levine, S. T. (1984, August). Radical departures. Psychology Today, 18,
Levine, S. F., & Salter, N. E. (1976). Youth and contemporary religious
movements: Psychosocial findings. Canadian Psychiatric Association Journal,
Lewis, J. (1989). Apostates and the legitimation of repression: Some
historical and empirical perspectives on the cult controversy. Sociological
Analysis: A Journal in the Sociology of Religion, 49, 386-397.
MacDonald, J. P. (1987/88). "Reject the wicked man" – coercive persuasion
and deviance production: A study of conflict management. Cultic Studies
Journal. 4(2)/5(1), 59-91.
Martin, P., Langone, M., Dole, A., & Wiltrout, J. (1992). Post-cult symptoms
as measured by the MCMI before and after residential treatment. Cultic
Studies Journal, 9(2), 219-250.
Milgram, S. (1974). Obedience to authority: An experimental view. New York:
Harper & Row.
Ofshe, R. (1989). Coerced confessions: The logic of seemingly irrational
action. Cultic Studies Journal, 6(1), 1-15.
Otis, L. (1985). Adverse effects of Transcendental meditation. Update: A
Quarterly Journal of New Religious Movements, 9, 37-50.
Reimers, A. J. (1986). Charismatic Covenant Community: A failed promise.
Cultic Studies Journal, 3(1), 36-56.
Ross, M. (1983). Clinical profiles of Hare Krishna devotees. American
Journal of Psychiatry, 140, 416-420.
Schuller, J. (1983, March). Review of The Great American Cult Scare, by D.
G. Bromley & A. D. Shupe. Cultic Studies Newsletter, 8-11.
Schwartz, L. L. (1986). Parental responses to their children’s cult
membership. Cultic Studies Journal, 3, 190-204.
Singer, M. T. (1979, January). Coming out of the cults. Psychology Today,
Singer, M. T. (1987). Group psychodynamics. In R. Berkow (Ed.), The Merck
manual of diagnosis and therapy (15th ed.) (pp. 1467-1471). Rahway, NJ:
Singer, M. T., Temerlin, M., & Langone, M. D. (1990). Psychotherapy cults.
Cultic Studies Journal, 7(2), 101-125.
Sirkin, M., & Grellong, B. A. (1988). Cult vs. non-cult Jewish families:
Factors influencing conversion. Cultic Studies Journal, 5, 2-23.
Skonovd, B. (1983). Leaving the cultic religious milieu. In D. G. Bromley &
J. T. Richardson (Eds.), The brainwashing/deprogramming controversy:
Sociological, psychological, legal and historical perspectives (pp.
106-121). Lewiston, NY: The Edwin Mellen Press.
Spero, M. (1982). Psychotherapeutic procedure with religious cult devotees.
The Journal of Nervous and Mental Disease, 170, 332-344.
Ungerleider, T. J., & Wellisch, D. K. (1979). Coercive persuasion
(brainwashing), religious cults and deprogramming. American Journal of
Psychiatry, 136, 279-282.
Webster’s Ninth New Collegiate Dictionary. (1983). Springfield, MA:
Merriam-Webster.
West, L. J. (1990). Persuasive techniques in contemporary cults: A public
health approach. Cultic Studies Journal, 7, 126-149. (Reprinted from
Galanter, M., Ed., Cults and new religious movements. Washington, DC:
American Psychiatric Association, pp. 165-192.)
West, L. J. (1992). Presentation to the American Family Foundation Annual
meeting, Arlington, VA.
West, L. J., & Singer, M. T. (1980). Cults, quacks, and nonprofessional
psychotherapies. In H. Kaplan, A. Freedman, & B. Sadock (Eds.),
Comprehensive textbook of psychiatry (Vol. III, 3rd ed.) (pp.
3245-3257). Baltimore, MD: Williams and Wilkins.
Wright, S. A. (1983). Defection from new religious movements: A test of some
theoretical positions. In D. G. Bromley & J. T. Richardson (Eds.), The
brainwashing/deprogramming controversy: Sociological, psychological, legal
and historical perspectives. New York: Edwin Mellen.
Yeakley, F. (Ed.). (1988). The discipling dilemma. Nashville: Gospel
Advocate Company.