If This Is So Good, Why Isn't Everyone Using It?
A Few Words About Science


by Ray B. Smith, Ph.D., M.P.A.

If people read only the good science in the CES literature, the whole world would be beating a path to our door. Unfortunately, as one of my statistics professors once said, "Strange things march under the banner of science." It is those "strange things" in the published literature that have obscured the validity of CES among the decision makers in the Federal Government and in the medical field.

Good science begins with good study designs. In modern America, we have developed the habit of piling large numbers of subjects into studies in lieu of good scientific protocols. For example, back in the late 1970s an administrator in a Federal regulatory agency said to me, "You have good data from open clinical trials and a good double blind study. One more double blind study and we will give you clear sailing." These were studies of 40 to 100 patients each. The next time we spoke, administrators in that same agency were demanding nationwide studies involving a minimum of 2,000 patients. That is because they are unwittingly marching to the beat of researchers in pharmaceutical companies who are adding "strange things" under the banner of science. Let me show you what I mean by this. Then you will have no trouble understanding the real value of CES and microcurrent therapy when you read our literature.

Listed below are several basic types of scientific study designs, given in order of increasing scientific control.

The Open Clinical Trial: In this type of study one gives a new type of treatment to a group of patients, all of whom have the disorder the researcher is interested in treating. For example, one gives a new herbal preparation to a group of patients, all of whom have colon cancer, or all of whom have AIDS. If the cancer goes away, or the AIDS disappears, then one can assume that the new herbal preparation works, and the researcher publishes the study results. This type of study has no scientific controls built in to assure that it was the herb and not something else that cured the patients.

A typical problem that can confound the open clinical trial is that patients with colon cancer or AIDS are invariably trying everything else they can think of, everything their friends recommend, or every suggestion that comes in over their email, to cure their condition. Some will be drinking a gallon of carrot juice a day; some will have a medical practitioner take samples of their blood plasma and then inoculate them with it. Others will be traveling across the border into Mexico three times a week to breathe specially prepared vapors deeply through a tube, and so on.

If you had colon cancer or AIDS, the open clinical trial results would be OK with you. "Just give me the herb," you would say. "I'll take my chances." That is because cancer and AIDS are serious, lethal conditions. If the herbal preparation were intended to treat something less lethal, such as asthma or depression, you would be less prone to jump on the bandwagon when the new results came in, since the herbal preparation would cost you money, perhaps for weeks, months, or years, and you would want to be more sure it was actually the herbal preparation that cured the condition. So you wait for controlled studies to be done.

The Single Blind Study: In this scientific protocol, the patient doesn't know whether or not he is getting the actual treatment, but the therapist applying the treatment does. In this type of study, some patients are given the real herbal preparation, and some are given a syrup that has the same green color and slimy texture but none of the herb. Since the patients do not know whether they are getting the real or the sham (fake) treatment, one would suppose that any positive response the patients have is really due to the treatment and not to a placebo effect from drinking the slimy, green syrup.

The placebo effect is a big problem in medical research. A patient's unconscious plays the single most important part in the brain's immune system responses, and it is easily fooled by fake treatments, especially when these are given by persons in a medical clinic wearing white coats and stethoscopes around their necks. Fortunately, it can also react very positively to just having attractive, intelligent people give it lots of concentrated, focused attention during such studies. It can prod the immune system to make the patient significantly better, or even totally well, just to please the pleasant staff member who handed over the little paper cup full of slimy green syrup three times a week.

That is the major problem with single blind studies. If the therapist knows which patients are getting the real medication and which are getting the sham, his or her subconscious may send some slight behavioral signal that is picked up by the patient's subconscious during the experiment, with neither of them, nor the senior researcher looking on, being consciously aware of it. The patient gets better, even though he didn't get any of the herbal preparation. To avoid that possibility, one does double blind research.

The Double Blind Study: In double blind research, neither the patient nor the therapist knows who is getting the real treatment and who is getting the sham treatment. Now neither the patient's subconscious nor the therapist's can wreck the study. To make the study even more carefully controlled, you keep the person who measures the patients, or evaluates the study outcome, from knowing who received the real treatment and who received the sham treatment. Finally, the person who does the statistical analysis of the data following the study can also be kept blind to the treatment conditions.
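To make those layers of blinding concrete, here is a minimal sketch of how the assignments might be handled. The group sizes, the invented patient identifiers, and the idea of a coordinator holding a sealed key are illustrative assumptions, not details from any particular study.

```python
import random

def blinded_allocation(patient_ids, seed=2024):
    """Randomly assign each patient to real or sham treatment.

    Returns the coded patient list (all that therapists, evaluators,
    and the statistician ever see) and the assignment key, which stays
    sealed with a study coordinator until the analysis is finished.
    """
    rng = random.Random(seed)
    # Assumes an even number of patients, split half and half.
    arms = ["real", "sham"] * (len(patient_ids) // 2)
    rng.shuffle(arms)
    key = dict(zip(patient_ids, arms))  # sealed: coordinator's eyes only
    return list(patient_ids), key

# Hypothetical 40-patient study with invented identifiers.
coded_ids, sealed_key = blinded_allocation(
    [f"patient_{i:03d}" for i in range(40)]
)
```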

In the last two cases mentioned above, you don't normally refer to the study as being triple blind or quadruple blind. "Double blind study" is the usual cut-off term describing well controlled science. But why should one blind the treatment evaluator and the statistician? As luck would have it, it is not uncommon for researchers, including those who will evaluate the study results or do the statistical analysis, to have an interest one way or another in how a study turns out. They can do things to the study without realizing they are doing it, just as when you drive your car to the corner stop sign, look both ways, then drive right out into oncoming traffic. "But Officer, I swear I didn't see him," you plead as you wait for the tow truck. Our subconscious can make us totally blind to things that it doesn't want to see, or, having seen them, make our conscious mind do strange things with the information.

A specific example of this is the numerous studies in which someone rates the improvement of patients on a 10 point or 100 point scale. That, by itself, is not science. It can be turned into science if there is more than one rater and their inter-rater reliability is confirmed by statistical means, as in the sketch below. Another way is to check the ratings against a second, external measure of the same or a highly similar thing that is also used in the study. Finally, if the two groups being rated turn out to be statistically significantly different on the ratings, it can be inferred that the rater was rating in a reliable, nonrandom manner. That assumption cannot be made if the two rated groups do not separate out statistically.
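By way of illustration, here is a minimal sketch of that first check, with invented ratings for ten patients. A simple Pearson correlation stands in for whichever inter-rater statistic a given study would actually report.

```python
# Hypothetical ratings of the same ten patients by two independent raters,
# each on a 10 point improvement scale. High agreement suggests the raters
# are measuring something real rather than rating at random.
rater_a = [7, 5, 8, 3, 6, 9, 4, 7, 5, 8]
rater_b = [6, 5, 9, 2, 6, 8, 4, 6, 5, 7]

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"inter-rater correlation: {pearson_r(rater_a, rater_b):.2f}")
```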

A very good example of this kind of science is a negative CES study in which the authors state, "The rater, a board certified psychiatrist…" and went on to publish that psychiatrist's rating results in a scientific journal, with no check at all on the validity of his ratings, even though he had rated the two study groups equal in outcome. That is not science. To indicate the problems this kind of non-science is heir to, the study was of the effectiveness of CES in alcoholic patients. As a matter of record, Alcoholics Anonymous (AA) programs are prevalent in such treatment centers, and as a group AA members tend to dislike the idea of putting "machines" on patients… a decidedly "unspiritual" approach. Now if that board certified psychiatrist was also an AA member, or even a heavy AA supporter, his ratings had every likelihood of being suspect, at least on the subconscious level. But they were never checked, and that study still appears as an important study in every CES science review that is written, and even appeared in the meta-analysis done at Harvard.

The Placebo Control Group: If you want to know whether there was a placebo effect in a study, you must add a third group of subjects who were eligible for the study in every way, and who were randomized into the third group at the same time patients from the larger pool were randomized into the treatment and sham treatment groups. (The reason we randomize, putting the same number of patients into each group, is so that by chance alone we will have, say, three, five, and two Harvard graduates in the groups respectively, fourteen, eleven, and ten patients with hemorrhoids, three, six, and one lower class WASPs, two, five, and three cephalic idiots, and so on. You don't know what, if anything, any of the above has to do with colon cancer or AIDS, but you can't take a chance, so you distribute all possible things that could contribute "error scores" to your study randomly, ensuring the probability that they will be more or less evenly distributed and not all pile up in group one or group three, for example.)
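Here is a minimal sketch of that kind of randomization, with invented patient identifiers: shuffle the eligible pool once, then deal it out evenly, so that Harvard degrees, hemorrhoids, and every trait nobody thought to measure have the same chance of landing in each group.

```python
import random

def randomize_into_groups(patients, n_groups=3, seed=42):
    """Shuffle the eligible pool, then deal it into equal-sized groups,
    spreading unmeasured "error score" traits across groups by chance."""
    pool = list(patients)
    random.Random(seed).shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# Hypothetical 30-patient pool randomized into treatment, sham treatment,
# and placebo control groups of ten patients each.
treatment, sham, placebo_control = randomize_into_groups(
    [f"patient_{i:02d}" for i in range(30)]
)
```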

A really bad habit in science: As you watch your television, you will see pharmaceutical companies marketing prescription drugs by making sales pitches directly to the consumer. Federal law requires them to tell you the downside of the medication as well as the good side. So you will hear things like, "Among the subjects in the study, three percent got cluster headaches which lasted a week, two percent had spontaneous abortions, five percent had grand mal seizures, and four percent had heart attacks. These were approximately the same percentages found among patients taking sugar pills." They are referring to the sham treated group, who were given fake medication during the study. These sham capsules, like the real ones containing the medication, might have been two-toned, puce and beige, with chartreuse diagonal stripes. Well, if I were putting something like that in my mouth three times a day, I might well have a placebo induced heart attack. But were these people having a placebo reaction? You can never know, unless the study had the third, placebo control group. If it did have the placebo control group, and none of those patients had the above symptoms, then you could assume there was a placebo effect from the capsules themselves, and you would have to decide whether you still wanted to beg your physician to prescribe them for you.

The bad habit in science: In double blind studies, many scientists all over America erroneously refer to the group getting the sham treatment as the "placebo" group. That is not science. There is a "sham treated" group and, in good science, a "placebo control" group, but there is never a "placebo group." That habit crept into our thinking when the fake pill was unfortunately called "a placebo" pill. In truth, if there were a placebo effect later found in the study, it would be among those taking "the placebo pill," but only as contrasted with placebo control subjects who did not receive any pill.

In studies in which the sham treated group is incorrectly called the "placebo group," the investigator unwittingly assumes that any improvement in the incorrectly named group was a placebo effect, and writes that up in his journal article (after it has been reviewed by three peers who also didn't know any better). To talk about a placebo effect in a study you have to subtract the improvement shown by the third group, the placebo control group (who were drinking three gallons of carrot juice and four quarts of cranberry juice every day during the study), from the gains made by the sham treated group (who were probably on the same juice regimen, statistically speaking). Any significant improvement experienced by the sham treated group over and above that of the placebo control group can then properly be called a placebo effect. Without that third, placebo control group, you cannot discuss a placebo effect in your scientific publication, but it is done all the time. A toy calculation below puts numbers on that subtraction.
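Here is that calculation as a minimal sketch; the mean improvement scores are invented for illustration only.

```python
# Invented mean improvement scores (e.g., points of symptom reduction)
# for the three groups in a hypothetical study.
mean_improvement = {
    "real_treatment":  12.4,
    "sham_treatment":   6.1,
    "placebo_control":  2.3,  # no pill, no device: juice regimen only
}

# Placebo effect: what the sham group gained beyond the untreated controls.
placebo_effect = (mean_improvement["sham_treatment"]
                  - mean_improvement["placebo_control"])

# Treatment effect: what the real treatment gained beyond the sham.
treatment_effect = (mean_improvement["real_treatment"]
                    - mean_improvement["sham_treatment"])

print(f"placebo effect:   {placebo_effect:.1f}")
print(f"treatment effect: {treatment_effect:.1f}")
```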

This habit crept into science when pharmaceutical companies began the sloppy practice of referring to the group who were given the placebo pill as the "placebo group." They were not the "placebo group," since, as already noted, no such entity can exist in science. They were the sham treated group, who got the look-alike, or placebo, pill, tablet, or capsule. To restate: if, in the process, they had a placebo reaction to the pill, tablet, or capsule, that can only be known if and when the study also had a "placebo control group." As an example of the unfortunate things that happen when people don't understand that simple principle, a negative study in the CES literature was done in a psychiatric hospital in which two groups were studied: a CES group and a "placebo" (sham CES) group. During the several weeks of the study, all patients were medicated with highly potent antianxiety and antidepressant medications. At the end of the study, both the "placebo" group and the CES treated group were found to have significantly less anxiety and depression than they had going into the study. The author concluded, therefore, that the improvement in his "placebo" group was placebo effect, and in his discussion he concluded that CES is an ineffective treatment, being all placebo effect. The study passed through three scientific peer reviewers and was published in a scientific journal. It still remains a millstone around our neck, because physicians take that study at face value, not knowing any better. Now you do.
