Review of Progress in Cold Fusion

Subpage of Morrison

Review of Progress in Cold Fusion was published in the Proceedings of ICCF-4. There is a copy on New Energy Times with no indication of provenance. However, it is explained on Krivit’s copy of Cold Fusion Update #9, issued for December 1993-January 1994, that the paper was included with Update #9 and was delivered at ICCF-4. In fact, the paper refers to ICCF-4 events, so the paper must have at least been modified later. As I write this, I have not verified the identity of this paper with the Proceedings copy. Krivit’s copy, being clean text, will be used here and variations, if any, noted, except for minor corrections of errors and lacunae in Krivit’s copy.

My comments will be in indented italics with a larger font to distinguish them. Links have been added to the Morrison text to facilitate research, including links to anchors in the text in the “Subjects.”

Douglas R.O. Morrison
CERN, Geneva 23, Switzerland.


Experimental papers published over a 12 month period are summarized and the theoretical papers are abstracted. What one would have expected to see is listed and compared with what was published. The conditions for good experiments are listed, in particular “try to prove yourself wrong”. The list of four miracles required for Cold Fusion to be fusion is explained. The contradiction is noted between experiments which observe Cold Fusion effects with deuterium and as a control find no such effects with hydrogen, and those experiments which find Cold Fusion with hydrogen. Information is requested on the boundary layer between the inside of the lattice where Cold Fusion is claimed to occur, and the rest of the Universe where the normal laws of Science apply and Cold Fusion is not claimed. Since a claim in 1989 that a working Cold Fusion device existed, the time delay to such a practical device has steadily increased.

1. Introduction
2. Data Base for Review
3. Classification of Published Papers
4. What do we Expect to See?
5. Do Good Experiments
. . .5.1. Do Not use Poor Detectors
. . .5.2. Do Use Detectors that Discriminate
. . .5.3. Do Look for Correlations
. . .5.4. Do Use Adequate Data Recording Instrumentation
. . .5.5. Design Experiments to Avoid Problems
. . .5.6. Try to Prove Yourself Wrong
. . .5.7. Do Experiments to Test Theories
. . .5.8. Do Reproducible Experiments
6. List of Miracles if Cold Fusion is Fusion
. . .6.1. D-D Separation
. . .6.2. Excess Heat with Hydrogen
. . .6.3. Lack of Nuclear Ash with respect to Excess Heat
. . .6.4. Ratios of Nuclear Ash Components
7. Theory – General. Boundary Layer between Cold Fusion and Rest of the Universe
8. List of Theories published in last 12 Months
9. When a Cold Fusion Working Device?
10. Conclusions.

Presented at the Fourth International Conference on Cold Fusion,
6th to 9th December 1993, Maui, Hawaii.


At the Third Cold Fusion Conference in Nagoya, October 1992, gave a Review of [Proc: on] Cold Fusion[1].

Morrison drops the subject of sentences, when it’s him. It’s ironic, because this is the error of pseudoskeptics: they forget about themselves and their own attachments and beliefs and the assumptions that guide their logic. Who gave a review? Obviously, he did. If he wanted the passive, he’d write “a review was given,” or if he wanted the neutral voice, it would be “Morrison,” but why not just “I”? The lacuna is common in this paper. It’s like a collection of notes.

Everyone makes assumptions; the perceptive state them explicitly, especially when their assumptions seem to conflict with those of others. Unstated assumptions are behind much conflict.

For this Fourth conference, the papers published in the following 12 months are reviewed and a comparison is made between what we expected to see performed in these 12 months and what actually happened. With 5 groups reporting at Nagoya that they had observed excess heat and other effects using not deuterium but normal light hydrogen, new results were expected. The question of when a working device giving useful power would be produced (or may already have been made) is discussed.

There is a vague lost performative in “we expected.” Who expected? By this time, cold fusion research was heavily discouraged, by many. Was Morrison encouraging better research, i.e., funding for such, or was he just blaming cold fusion researchers for not doing what he thought they should do?

I will look at Morrison’s overall behavior, as it had historical effects, elsewhere. This review is of this paper. No working device was ever confirmed as existing, giving useful power. There were some claims, some reasonably known to be in error or even possibly fraud. There were overconfident predictions aplenty. That’s a characteristic of the history.

Morrison claims in Cold Fusion Update #9, linked above, that Ikegami, who hosted ICCF-3, was “impolite,” and how remarkable that was for a Japanese scientist. It would be remarkable, I’m sure; that culture is heavily focused on protocol and courtesy. I suspect Ikegami may have been faced with a severe trial, in the presence of Morrison.


A review should look at ALL the data, both positive and null experiments. ICCF meetings are unsuitable as very few of the experiments which find no effect (null experiments) are presented even though, as shown in ref. 1, most published experimental results find nothing. Hence have taken all the published papers which are said to have been refereed, from the bibliography of Dieter Britz covering his period, October 1992 to September 1993. This is a continuation of the compilation presented at Nagoya [1] which covered his period of April 1989 to September 1992.

It is agreed by all that statistics of published papers alone are not decisive in any controversy and hence NO CONCLUSIONS WILL BE DRAWN FROM STATISTICS ALONE. However it is interesting to see trends.


During Dieter Britz’s period, Oct. 1992 to Sept. 1993, 76 papers concerning Cold Fusion were published in refereed journals. 27 were experimental, 26 theory, and 22 were “Others”. Of the experimental papers, 13 were null (i.e. no effect found), 10 positive (some effect found) and 4 with no decision. The theory papers had 3 predicting no effect, 20 predicting an effect, and 3 discussing a possible explanation of claimed effects.

We have a recent version of the Britz bibliography and database, which we have taken responsibility for continuing. We have the document collection as well. Morrison is here referring to the Britz classifications, as fields in the BibTeX entries, res0, res+ and res-, which are quite rough; in hindsight, all experimental reports are of interest and conspire to create a fuller understanding of the family of effects very loosely, and somewhat misleadingly, called “cold fusion.” We can see in much of Morrison’s writing how deep the confusions engendered by the name and premature conclusions were, and continued to be, and still are, in some circles. Maybe most!

The prediction of “no effect” would require a study of a proposed mechanism, but what was claimed was an unknown nuclear reaction. How can one predict the effects of an unknown reaction? This is obviously insane. What was happening then, and is still happening, is that many took on the challenge of “how could nuclear reactions happen in such an environment?” and so people, who can be endlessly ingenious, came up with what they considered possibilities. So they create an idea and apply it to predict “an effect.” And the theory was designed to predict that. It’s called “ad hoc theorization.” If X and Y and Z, then an effect! In ad hoc theorization, XYZ sometimes gets quite complicated; a great example is Widom-Larsen theory. There are a series of possibilities, in the view of the theorists, that are asserted as explaining LENR. Many of them are quite unlikely, but, after all, any unknown nuclear reaction, by this time, would be considered unlikely. There are few theories that proceed from basic principles and actually predict, using standard physics, LENR. There are some exceptions that come close, but so far, nothing has been proposed that I consider conclusive. Takahashi is onto some leads, so is Hagelstein. Widom-Larsen theory is popular in some circles, because it is “not fusion,” even though it is simply pushing words around to claim that. It can sound plausible if one does not look too closely at the details. It falls apart when examined closely; there are many reviews on this.

Mostly, theory formation at this time (and certainly back then) remains premature. We need more data.

Comparing with previous years, the rate of publishing is slightly higher than the first 9 months of 1992 (43 papers) but appreciably lower than 1989 (237 papers in 9 months), 1990 (305 papers) and 1991 (154 papers). Thus the rate is now about 6 papers per month (of which 2 are experimental) compared with a 1990 peak of 25 papers per month (of which 11 were experimental).

Publication rates trended in two directions: more “positive” publications accrued, eventually surpassing “negative,” and numbers declined, to a nadir in roughly 2004, after which publication rates increased substantially. But then the field developed and began supporting its own journal, which sucked contributions away from mainstream journals. What actually would matter, overall, would be peer-reviewed reviews of the field or aspects of it. Those trended almost entirely positive. Skeptics stopped writing reviews, and then provide the excuse that nobody is interested. But positive reviews have been appearing; the excuse gets old, itself. There was an information cascade (the social-science term for a consensus that appears without actual closure through clear and carefully examined evidence); such closure never happened with this field.

The experimental papers were then classified according to the claim (neutrons, tritium etc.) so that a paper could give several entries. It was found that new classifications were needed. After each class two numbers are given; the first is the number of null experimental results and the second is the number of positives. They are:

3.1. New Classifications
Fracto-fusion 2 null ; 2 positive.
Laser-induced 1 ; 0
Transmutation 0 ; 1
Light Hydrogen (i.e. not deuterium) ? ; 1
Mossbauer 1 ; 0
Black Holes ? ; 1
Gammas 2 ; 0
Excess heat but no input (“Life after Death”) 0 ; 2

Fracto-fusion would not be cold fusion. It is intrinsically hot fusion, simply in a place where one doesn’t expect “hot,” in fracturing crystals through a piezoelectric effect that can apparently create, at very low levels, high particle energies. With high particle energies, one can get some level of normal hot fusion. So there are reports of a few neutrons. There is now an available hand-held neutron generator that works with a piezoelectric effect to accelerate deuterons to cause fusion. It’s similar. This isn’t cold fusion.

Excess heat without input is a relatively ordinary finding. Normally, though, the reaction rate increases with temperature and loading, and it normally takes energy to maintain those. When power input to an electrolytic cell is turned off, some cells show persistent heat. That was called “Heat After Death” (Fleischmann), “Heat after Life” (McKubre), and apparently, “Life after Death” was Morrison. I never encountered the phrase anywhere else. HAD is the most common acronym used. With gas-loading, temperature is not, strictly speaking, an energy input, but can look like one (i.e., if temperature is being maintained with electrical input, that looks like energy being supplied). That would only be true if the heat is being used to do work, rather than to maintain an environmental temperature. One can keep something hot with insulation. If there is enough excess heat to compensate for heat leakage, that would be HAD. There are some experiments that, on their face, clearly show that. Skeptics then come up with “explanations,” and Morrison is one who has done that. They feel no necessity at all to confirm the explanations; it is completely enough if an explanation seems plausible to them, even if experts say no, that doesn’t happen.

But there is also a lot of garbage research out there. 

3.2. Previous Classifications
3He 3 ; 1
4He 1 ; 3
X-Rays 0 ; 0
Protons 1 ; 0
Tritium 6 ; 2
Neutrons 12 ; 9
Excess Heat 3 ; 10

It may be recalled that for 1989 to Sept. 1992, all the classes gave more null experiments than positive ones. Adding the new statistics does not change this – for all major classifications there are more null results, than positive ones, e.g. for Excess Heat the totals are 50 null results and 37 positive claims.

Negative results for neutrons confirm what is now expected from the FP Heat Effect: few to no neutrons, with neutron levels varying greatly with conditions that seem to be unrelated to the heat effect. Right now, if a report claims neutrons or intense radiation, I suspect this is artifact. There is an exception, the SPAWAR neutron work, which shows good evidence for very low levels of neutrons, maybe ten times background, but clearly related to the cathode, i.e., spatially correlated. In some experiments, neutron levels may be higher. But very, very low compared to the number of possible reactions happening. The basic effect does not produce neutrons, nor does it produce high energy gammas, nor hot charged particles above the Hagelstein limit, about 10 – 20 keV.

This doesn’t look like fusion. However, it’s producing helium, which, if that analysis is correct, requires a nuclear reaction. What reaction? That’s a trillion-dollar question, perhaps. We might actually have commercial applications before we really know what is going on, though some believe we need theory first. (And I do not consider practical applications close … but there are rumors …. )

It is now a known characteristic of the FPHE that results are quite variable. This is quite well identified with very difficult-to-control variations in the material, so much work is now focused on attempting to generate materials that will produce reliable results. At ICCF-21, the Japanese were reporting some substantial progress. Some of this work might be approaching “lab rat” status, a reliable protocol for study across multiple research groups. Again, experimental results showing no heat are simply evidence of what we already know. It’s not terribly useful to focus too much on repetition of old negative results, so most work that involves replication is of “positive results.” So any review looking at numbers of papers can be misleading. Morrison knows that, but also knows that he’s creating impressions, while denying that “any conclusions” can be drawn. I’d call that being an asshole. But this is 2018 and I have the benefit of massive hindsight. Still, I find Morrison relatively transparent.


Previous meetings, held in 1989 to 1992, gave guides for future experiments – these recommendations are hard to find as the meetings tended not to have summary speakers and the concluding Round Table discussions were not written up; so had to rely on notes taken. The advice for future work was:
4.1. Do Good Experiments
4.2. Make Experimental Results Reproducible
4.3. Theory that Fits All Data
4.4. Make a Working Model
We will now consider how far these requirements have been met.

One will find some good experiments, and many lousy ones, with little review, and with results and confused interpretations all mixed up.

The field became rather allergic to criticism, reluctant to criticize the work of others in the field, since the entire field was under continuous attack. Simon has documented this pathology in Undead Science, a quite good book. The phenomena occur with what becomes fringe science, and they occur in spades if the dismissal as fringe is premature, not based on clear experimental evidence about results, but more on impossibility arguments. (It’s impossible to prove that something cannot exist, especially something unknown.) It was easy to show that d-d fusion through crashing the Coulomb barrier was very unlikely, but even that’s a bit iffy, because assumptions are made about collision probabilities that depend on plasma assumptions. We can see this in many discussions. For example, a claim is made of very high deuterium density in palladium deuteride. As to a stable state, the density is lower in PdD than it is in liquid deuterium. However, the deuterium moves with relative freedom through the lattice, and it moves in confined channels. As well, there may be special configurations that catalyze many-body effects. I learned this from Feynman; it’s one of the few things that I remember him specifically saying: that we don’t have the math to predict the solid state. It’s too complicated!

How do you “make” experimental results reproducible? What would that mean? Morrison would doubtless be thinking that you create the device, and it predictably makes heat or other effects, every time. If someone had figured out how to do that back then, the world would be a very different place. There are people who would raise and spend a billion dollars to do that, if they knew how to do it. If anyone expected that to happen in the year before ICCF-4, they were delusional, wishful thinkers. Yes, hope springs eternal. But this is a very, very difficult field, and Morrison et al converted that to Wrong. After all, if there was an effect, surely by now it would be behind devices for sale in Home Depot!

Where does that argument come from? It is not logical; there is no logical necessity for effects to exist for our convenience. Fleischmann estimated that a Manhattan-scale project would be necessary to commercialize cold fusion. Right now, the results are not strong enough to justify that, in my opinion, and this is almost thirty years later. There is enough evidence to support careful basic science in the field. I wrote in my 2015 Current Science review that if any funding agency was in doubt about the reality of the effect, they should support measurement of the heat/helium ratio in the FP Heat Effect with increased precision. When I wrote that, I did not know that, a few months before, that work had been very adequately funded, and that it was under way. We are waiting for results.

That is, by the way, how to “create reliable results.” Change what you are measuring! What the reproducible experiments do is to create heat and helium results with a large number of samples, where the heat in a period is measured together with helium collected in the period. Do they correlate? And if so, what is the ratio? Is it consistent within experimental error? This does not require reliable heat; in fact, it uses the variability in heat as self-control. This work should have been known to Morrison; it was announced by Huizenga in his book about this time, and it had first been announced in 1991, at ICCF-2. I have no copy of that paper. I just found a copy of the Conference Proceedings at the New York Public Library, so I might be able to obtain copies. I could find no copy of the book for sale at any price.

Miles did present at ICCF-3, and Morrison was there.


The field might be expected to be mature now since it is more than four and a half years since the Fleischmann and Pons Press Conference of 23 March 1989, and over 10 years since Fleischmann and Pons started working hard experimentally on Cold Fusion (they claim that they began to work intensively five and a half years before their Press Conference). Also many groups have been well-funded for some period.

“Might be expected.” Again, by whom? Some groups were “well-funded,” until one realizes that the materials problem (with some other issues) is far more complex than the ordinary, and no group, to date, has been adequately funded to truly solve that problem. The extreme skeptical positions, and Morrison certainly worked to spread those ideas, made it very difficult to obtain research support, and the confusion caused by polarization of opinion created an environment where the truly basic research that might have been afforded did not much happen. There were exceptions, and there is a field today, my opinion, because of them.

The main points are:

5.1. Do not use Poor Detectors

This is actually nonsense. Science can use data from any source; however, it’s essential to know the issues with detection methods and to factor for them. It’s completely obvious that one should use the best detectors available, according to the experimental purpose, and within budget and other practical limits. Correlation studies with appropriate controls and sound statistical analysis can cut through the noise of poor detector precision and other possible artifacts.

Certain detectors are notorious for giving artifacts, e.g. BF3 counters which easily give false signals due to vibration, humidity etc., or X-ray plates which can be stained by many effects.

Again, this boils down to knowing what one is doing. X-ray plates can provide very good evidence, cheaply, properly used and with controls. Problems mostly occur when there is only a single measure, say an anecdotal X-ray that is not necessarily a clear indication, with an overheated interpretation. Better advice might be, for researchers who are not expert with the detection method being used, to consult with experts and not rush into announcements that have not been well-reviewed. Much of Morrison’s advice boils down to this:

Don’t make any mistakes, you dolts!

5.2. Do Use Detectors that Discriminate

Instead of using an X-ray plate that records a vague darkening, it is much better to use an X-ray detector which can measure the energy of the individual X-rays – for example the observation of the 21 keV line from Palladium would be an important result.

With X-rays, a “vague darkening” is not a terribly useful finding, isn’t that obvious? However, how about a radioautograph that clearly shows “darkening” spatially correlated with what is, from that, obviously the source, and with no plausible alternate explanation than the source emitting X-rays, of a certain minimum energy? (And with controlled shielding, that energy can be better estimated.) Obviously, being able to measure photon energies is superior, under two conditions: first, that one can afford the necessary tools and they are available; and second, that the X-radiation is at an intensity that allows electronic detection. One of the advantages of X-ray film is that it is an integrating detector, so it can accumulate information over a long period of time; thus it is able to detect radiation at levels where it is lost in the noise with electronic detectors.
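The integrating-detector advantage mentioned here can be quantified: accumulated signal grows linearly with exposure time while Poisson background fluctuations grow only as the square root of time, so signal-to-noise improves as the square root of exposure. A minimal sketch with invented count rates:

```python
# Why an integrating detector wins at low flux: signal accumulates linearly
# with exposure time while Poisson noise grows as sqrt(time), so the
# signal-to-noise ratio improves as sqrt(t). Rates are illustrative only.
import math

signal_rate = 0.5       # counts/hour from a weak source (invented)
background_rate = 50.0  # counts/hour of background (invented)

def snr(hours):
    """SNR of the excess over background after integrating for `hours`."""
    signal = signal_rate * hours
    noise = math.sqrt(background_rate * hours)  # Poisson fluctuation
    return signal / noise

print(f"SNR after 1 h:   {snr(1):.3f}")
print(f"SNR after 100 h: {snr(100):.3f}")  # 10x better, not 100x
```

A source invisible in a one-hour electronic count can thus become clearly detectable on film exposed for weeks, which is the point made about radioautographs above.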

Steve Jones has made such a detector which is so small that it can easily be inserted into anyone’s experiment.

No citation. I’ll keep my eye out for it.

He himself has not observed the 21 keV line in his experiments. For over a year he has offered his detector free to anyone who seriously wants to measure X-rays, but no one took up his offer, though now at Maui, Prof. Oriani has accepted one. Similarly instead of simply counting neutrons, it would be more convincing to measure the energy spectrum and see if there is a peak at 2.45 MeV as expected – the original 1989 Nature paper of Steve Jones et al. [2] reported such a peak and even though its statistical significance was rather small, the fact that it was at 2.45 MeV was impressive; similarly the Turin group has recently reported [3] a peak at the desired value of 2.5 MeV.

The entire issue of “expected” was a rotting red herring. There were erroneous reports, based on artifact. High energy radiation is either totally absent from cold fusion experiments, or is very rare. Jones was pursuing a very low-level effect, and low-level effects, it is possible, might be seen in many environments if we look with very sensitive detection methods. This is Morrison pursuing a ghost.

To underscore this, the mechanism of “cold fusion” is unknown, so “expected” can be very deceptive. Expected if what?

5.3 Do Look for Correlations

It is unsatisfactory to measure only one effect, e.g. only neutrons, when many effects are predicted to occur simultaneously, e.g. excess heat is expected to occur with 4He on some theories and with 3He on others and with neutrons, tritons, protons, gammas, X-rays, 14 MeV neutrons on conventional models. By making measurements of these effects simultaneously, the value of the results is greatly increased. It may be commented that up to now when the better experiments have looked for several effects simultaneously, they have observed no effects at all [1]. In this connection it is interesting to recall the statement by Dr. Fleischmann “1992 must be the year of mass spectroscopy” so it is clear that he is in favour of seeking correlations but it is surprising that we are still waiting for such results.

In this case, Morrison was belaboring what should have been obvious. There were correlation results by this time, and Morrison was present for at least some of those reports, Miles et al on the heat/helium correlation and ratio. Was he paying any attention, or was he only listening for what stupid mistakes he could point out? There may be people in the community who spoke with him about correlations. I would be greatly interested in the reports of any witnesses.

The SRI work showed correlations of excess heat with loading ratio, current density, and material batch. Because current density could be associated with some consistent artifact (and loading ratio the same), this is less convincing than the heat/helium correlation. There are also correlations with tritium and experimental conditions that, in my view, are awaiting better review. Many leads developed in early work have never been followed up.

5.4 Do Use Adequate Data Recording Instrumentation

Using a single thermistor to record temperature changes in a fast changing environment as in the latest Fleischmann and Pons paper [4] is not satisfactory or convincing – it does not allow an adequate check on the detailed heat flow calculations such as the assumption that the heat loss is 100% by radiation whereas the original Fleischmann and Pons paper [5] emphasized that Newton’s Law of Cooling was used and the heat flow was 100% conduction – it is the difference between assuming that the heat loss was proportional to the difference in the temperature to the fourth power or to the first power – vastly different assumptions.

There was an extensive debate in J. Electroanal. Chem over that paper. There is the beginning of a review of that here.

Morrison appears to completely miss the simplicity of the boil-off experiments. They don’t involve “detailed heat flow calculations.” The temperature is actually controlled by the boiling point of the liquid, and so temperature is not a critical variable. The method was very different from the original work. Besides, they could have made him some tea. (To do it with the heat they were showing, they would need to heavily insulate the cells, and they might even need to pressurize them. Nevertheless, conceptually, if the heat they were showing was real, and it appears to be, they could have made tea. But who would want to spend thousands of dollars on a design to give the fellow a cup of tea that could be made on the stove for almost nothing?)
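For reference, the two heat-loss assumptions Morrison contrasts differ as follows, where $k_c$ and $k_r$ are calibration constants:

```latex
% Newton's law of cooling (conduction/convection): linear in the temperature difference
P_{\mathrm{cond}} = k_c \left( T_{\mathrm{cell}} - T_{\mathrm{bath}} \right)

% Stefan-Boltzmann radiation: difference of fourth powers of absolute temperature
P_{\mathrm{rad}} = k_r \left( T_{\mathrm{cell}}^{4} - T_{\mathrm{bath}}^{4} \right)
```

For small temperature differences the radiative law linearizes to roughly $4 k_r T_{\mathrm{bath}}^{3}\,(T_{\mathrm{cell}} - T_{\mathrm{bath}})$, so after calibration the two models nearly coincide; they diverge most at the large temperature excursions of the boil-off runs, which is where the dispute about the assumed law matters.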

Similarly in ref. 4, Fleischmann and Pons describe how the cell boils vigorously and about half-empties in 600 seconds – but this work only has a single temperature recording device and the current and voltage data are only recorded every 300 seconds which means that only about two (or three) readings were recorded during the crucial 600 seconds.

I don’t like the paucity of data either. However, the voltage is only going to swing wildly in the last minute or so, when the current is controlled, as it was. In those experiments, what was measured was time to boil-off, from set initial conditions, and this was a comparative measure. Morrison is not following his own advice here, to actively try to prove himself wrong; rather, he’s advising others. Give the man his cup of tea, he needs it. From the experimental evidence, the electrode supports melted; the temperature continued to increase after the cell boiled dry and the current was cut off. Morrison thinks that this is the “cigarette lighter effect,” but where does the oxygen come from at that point? For some time, no more deuterium is being stored in the cell, and the cathode, when the current is cut off, will be deloading, so any oxygen in the cell will be driven out.

Fleischmann’s explanations are difficult to follow, and Morrison takes full advantage of that. That’s why I set up that study of the debate. Until I did that, I really had not understood what had been done, and the boiling cells seemed terribly complicated. It appears that they are simple, in fact.

This is highly inadequate and casts serious doubts on the claims of huge excess heat. Similarly with the claim that the cell stayed hot for some three hours after the electrolyte had boiled off so that it was believed that there was no input – now called “Life after Death” – it is hard to believe when there is only one local isolated measuring device. It is to be hoped that these experiments will be repeated with adequate convincing measuring instruments.

Morrison is ignoring the obvious in order to focus on what wasn’t done. Notice what would be obvious to them: that “the cell stayed hot.” If there was no XP, there would be two sources of heat: Joule heating from electrolysis power, which turns off when the cell boils dry, and recombination, which requires oxygen; yet after the power is turned off, there will only be gas flow out of the cell, from the very substantial evolution of deuterium, and any oxygen in the cell would have been quickly recombined. The evolution of deuterium, absent recombination, is endothermic at normally low loading. (It would be mildly exothermic at high loading, not enough to explain the temperature rise after the cell boils dry, my sense.) Three hours would take a lot of heat, and extra temperature sensors would make little difference if they have seen this phenomenon repeatedly (i.e., sensors can fail, but generally not repeatably). For the purpose there, the instrumentation was adequate. I’d have been far happier about that work if they had measured and reported helium in the outgas. By that time, they knew it was possible.

It is normal experimental practice to make redundant measurements and to have more than the strict minimum number of detectors – this allows checks of assumptions. Unfortunately many Believers in Cold Fusion follow Fleischmann and Pons in under-equipping their experiments – more detailed comments on their recent Physics Letters A paper are given in ref. 6. The conclusion is: make redundant measurements to check and to avoid theoretical assumptions such as whether the heat loss is 100% conduction or 100% radiation.

He was such an asshole. “Believers” is a blatant insult when applied to scientific researchers. Ref. 6 was Wilson (1992). That is not a critique of the Physics Letters A paper, which was published in 1993, so the above is an obvious error. Good calorimetry is calibrated, not dependent on assumptions. I know that Fleischmann responded to the Wilson critique (which appears, on the face, far more cogent than Morrison’s), and I hope to review that, but I don’t know if this has ever been independently reviewed. Otherwise, much of this is “he said, she said,” with people lining up on which “side” to support. Bad idea, if the goal is science.

5.5 Design Experiments to Avoid Problems

Duh! Gee, without Morrison, how could we ever think of that?

The design of some experiments is such that a large number of assumptions are needed to analyze the data and many calibrations are required – an example is the open cell calorimetry with no measurements of the out-going gases, of Drs. Fleischmann and Pons where the[y] make assumptions such as (1) that there is no recombination of the deuterium and the oxygen although the anode and cathode are very close, (2) that the heat outflow is 100% radiative or alternatively is 100% conductive, (3) that no lithium is carried out of the cell, (4) that the gas escaping does not carry any liquid with it or is blown out near boiling temperature, etc. Many of these doubtful assumptions (and which are doubted [7]) are treated by calculations. However it would be better if they could be largely avoided by using standard electrochemical technique e.g. employing a closed cell with a catalyser inside and the anode and cathode well-separated. Best technique is to use a null measurement method as in the Wheatstone bridge. Here one could use three baths at temperatures kept by heaters at temperatures of say 30, 40 and 45 degrees C. If there is excess heat produced by a cell in the inner bath, then its heater is turned down and this excess heat measured – this system is easy to calibrate. All is with no change in the temperatures of the three baths so that complicated calculations and doubtful assumptions are not needed.
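The three-bath null-measurement scheme Morrison proposes reduces, in principle, to a simple power balance: hold the inner bath at constant temperature with a heater, and read any excess power from the cell as the drop in heater power. A minimal sketch of that bookkeeping, with invented numbers:

```python
# Sketch of the null-measurement (compensation) calorimetry Morrison
# describes: the bath temperature is held constant by a heater, so any
# excess power from the cell appears as a reduction in heater power.
# No heat-flow modelling is needed; the heater is its own calibration.
# All numbers are invented for illustration.

baseline_heater_w = 5.00  # heater power holding the bath steady, cell inert
running_heater_w = 4.35   # heater power during the run, same bath temperature

excess_power_w = baseline_heater_w - running_heater_w
print(f"measured excess power: {excess_power_w:.2f} W")
```

The appeal of the design is that the temperatures never change, so the “complicated calculations and doubtful assumptions” Morrison objects to drop out of the measurement entirely.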

Since Fleischmann and Pons often use small specks of palladium (e.g. 0.04 cm3 of Pd in ref. 4), they then only observe a small effect at lower temperatures and have to multiply by a large factor – it would be better technique to use a larger piece of palladium, so that it could be seen whether the effect is larger than the background and the errors of the assumptions. Note that in Polywater, all the experiments produced very small quantities of the controversial water, less than one cc, and the authors did not try to use large samples; thus they did not try to prove themselves wrong.

5.6. Try to Prove Yourself Wrong

“The easiest person in the World to deceive is yourself” is a well-known warning in Science, and one is taught by good professors, such as Philip I. Dee in my case, to go out and actively try to find ways to prove yourself wrong.

If one wishes to assume there is no recombination of the hydrogen and oxygen, then one should not do it by calculation, but do clear, active experiments to try to prove yourself wrong, e.g. by varying the distance between the anode and cathode. This has in fact been done by Prof. Lee Hansen at BYU [8], who varied the separation of the anode and cathode. He found that, assuming no recombination, there was a calculated excess heat, but this disappeared when the electrodes were separated, suggesting that the origin of the calculated excess heat was recombination. To check this further, he blew in nitrogen gas from the bottom when the electrodes were close together, and again the calculated excess heat vanished. It is surprising that after ten years of intensive work, Fleischmann and Pons have never published any such experiments to test their assumption that there is no recombination.

The actual excess heats claimed by Fleischmann and Pons in their 1989 and 1990 papers [5, 9] are small, but they are then multiplied up by dubious assumptions, e.g. normally one uses the well-known fact that the power used to separate the deuterium and oxygen is (1.54 Volts times the current), but in their 1989 paper Fleischmann and Pons use (0.5 Volts times the current). It is this and other assumptions that allow Fleischmann and Pons to make the oft-repeated claim of “one watt in and four watts out”. This number of 0.5 Volts seems to be unknown apart from this paper, and it is surprising that experiments to justify such a crucial number have not been done. This story of Fleischmann and Pons’s unusual excess heat calculations is clearly explained on pages 351 to 353 of Frank Close’s book “Too Hot to Handle” [10].

The contradiction between Fleischmann and Pons’ 1989 and 1990 papers as to whether the heat loss is 100% by conduction [5] or 100% by radiation [9] could be resolved by experiment, but this seems not to have been done (they silvered the top part of the cell later, but as it was claimed this changed the heat loss from 100% radiative to 100% radiative (i.e. no change!), this can hardly be considered a decisive experiment). The estimate of the heat losses is critical to calculations of the excess heat.

The message is, do more experiments, vary parameters and seriously try to prove yourself wrong.

5.7. Do Experiments to Test Theory

There are many theories and it is surprising that people do not seriously design experiments to make critical tests of the theories. For example, the crucial point about Nobel Prize winner Julian Schwinger’s theory [11] is that pd fusion is much more likely than dd fusion. And pd fusion would give 3He rather than 4He in the electroweak mode. Hence one would have expected that someone would have varied the hydrogen to deuterium content and looked for the excess heat and for 3He and 4He as a function of the H to D ratio. For example, one could try the following mixtures in the electrolyte:

H2O:  1%   25%   50%   75%   99%
D2O: 99%   75%   50%   25%    1%

5.8 Do Reproducible Experiments

So far the only reproducible experiments that have been achieved are by those who find no Cold Fusion effect. Those who find positive Cold Fusion effects do not claim 100% reproducibility.


6.1. D-D Separation

The great problem of D-D fusion is the difficulty of overcoming the Coulomb potential barrier. This can be overcome by using fast deuterium nuclei as in the Sun (keV energies), or in tokamaks, ion implantation, energetic arc or glow discharges, etc., but these are called Hot Fusion and are well appreciated. For Cold Fusion the thermal energies are too small and the probability is very, very small, e.g. Koonin and Nauenberg [12] have calculated that for a separation of 0.74 Angstroms, it is only 10 E-64 fusions per dd pair per second. That is negligible, as can be seen from the fact that if the mass of deuterium were as large as the mass of the solar system, there would be only one fusion per second, which would give a power of a million millionth of one watt.
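The final power figure can be checked with one line of arithmetic: one d-d fusion per second, at roughly 4 MeV released per fusion, is of order 10 E-12 watts, i.e. Morrison's "million millionth of one watt". A quick sketch (the 4.03 MeV is the standard Q-value of the d + d → t + p branch):

```python
# One d-d fusion per second, converted to watts.
MEV_TO_JOULE = 1.602e-13   # joules per MeV
Q_DD = 4.03                # MeV released in d + d -> t + p

power_watts = 1 * Q_DD * MEV_TO_JOULE   # one fusion per second
print(f"{power_watts:.1e} W")           # ~6.5e-13 W, of order 1e-12 W
```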

Under normal conditions, in D2 gas or deuterium liquid, the separation of the deuterium nuclei is 0.74 A. As explained, the probability is negligible, except when a thermal muon (effectively almost zero velocity), with a mass some 200 times greater than the electron mass, approaches the dd pair and displaces one of the bound electrons; this pulls the dd pair closer together, giving a separation of about 0.035 A, at which fusion can occur – this is called muon-catalyzed fusion. However, with the very short lifetime of the muon, it can easily be shown that this is not an economic process, but it does indicate how steeply the fusion probability varies with the d-d separation.

There is an enormous literature on hydrogen and deuterium in palladium and other metals – see for example, Fukai at the Third Cold Fusion Conference [13]. The basic fact is that palladium is normally a face-centred cubic crystal with a side of about 3.9 A – if hydrogen is forced into it, the crystal expands slightly, e.g. to 4.03 A for a D to Pd ratio of 0.8. The normal separation of d-d particles is 2.85 A – this is when they are in the octahedral sites. When the deuterons are forced into the palladium, e.g. by ion implantation, then tetrahedral sites can be occupied and the separation is reduced to 1.74 A, but this value is still much greater than the normal 0.74 A.

Thus the deuterium nuclei are further apart in the Pd lattice than normal – putting deuterium into metal lattices goes the wrong way for Cold Fusion.

There are thousands of experiments, papers and many books on hydrogen and deuterium in metals and there is a unifying theory which fits the data – except Cold Fusion data. To claim excess heat from Cold Fusion is Miracle Number 1.

6.2 Excess heat with Hydrogen

If one observes fusion with D-D then one does not expect to observe it with H-H, as the rate is many orders of magnitude lower. Thus one would observe it with D2O but not with H2O. In the period April 1989 to 1991, Fleischmann and Pons and others claimed to have observed D-D fusion but not H-H fusion, so they used H2O as a control and stated that the excess heat claimed was from D-D fusion and was a nuclear process. However, at the Third Cold Fusion conference in October 1992, five groups claimed to have obtained excess heat using hydrogen. Further, some produced theories stating that the excess heat was not from a nuclear reaction, e.g. Vigier [14] who said it was quantum chemistry. At this Fourth conference, seven groups have reported experimental data supporting the claims of excess heat with hydrogen – (and still living under the banner of Cold Fusion).

There is an enormous contradiction here – most of the Cold Fusion community claim that Cold Fusion is deuterium fusion and the excess heat has a nuclear origin, and that this is confirmed because it is NOT observed with light hydrogen, but there is a strong minority which claims excess heat with light hydrogen and sometimes says it is not nuclear. Surprisingly, this contradiction was not discussed at the Third meeting and seems to be ignored here at the Fourth.

This is the second miracle.

6.3 Lack of Nuclear Ash with respect to Excess heat

If Cold Fusion has its origin in nuclear reactions as Fleischmann and Pons and others have claimed, then there must be some nuclear particles produced – called the Nuclear Ash by Frank Close [10].

Thousands of experiments have established what this nuclear ash is, both at high energies (hot fusion) and at thermal energies (cold fusion – in muon-catalyzed fusion). The conclusion is that for one watt of power, the products are:

10 E12 particles per second of tritons, neutrons, protons, and 3He

10 E7 particles per second of 4He and gammas of 24 MeV.
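The first of these rates follows from simple arithmetic: one watt divided by the energy released per d-d fusion (roughly 3.65 MeV averaged over the two main branches) gives of order 10 E12 fusions, and hence particles, per second. A quick check:

```python
# Where the "10 E12 particles per second per watt" figure comes from:
# one watt divided by the energy released per d-d fusion.
MEV_TO_JOULE = 1.602e-13   # joules per MeV
Q_DD_AVG = 3.65            # MeV, rough average of the t+p and n+3He branches

fusions_per_second = 1.0 / (Q_DD_AVG * MEV_TO_JOULE)   # for 1 W of fusion power
print(f"{fusions_per_second:.1e}")                     # ~1.7e12 per second
```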

Such numbers of particles are not observed. For watts of power, the above numbers would give fatal doses of radiation, but no such casualties have been reported, and it appears that most scientists, laboratory assistants and cleaners take no radiation precautions and do no radiation monitoring by wearing film badges.

This is miracle number three.

6.4. Ratios of Nuclear Ash Components

The ratios of the nuclear products given in 6.3 above are very well established, e.g. see Cecil et al. [15] in the proceedings of the Second Cold Fusion conference, where he shows that the (neutron plus 3He) channel is equal to the (tritium plus proton) channel, as would be expected from charge symmetry, while the (4He plus gamma) channel is indeed a factor of ten million lower. This is not observed in experiments making Cold Fusion claims; indeed the tritium to neutron ratio is said to be between 10 E4 and 10 E9.

This is Cold Fusion miracle number four.

For many, four miracles is a bit too much, especially before breakfast, as Alice would say.


There are many experiments on fusion and there is a well-established theory [13] which fits fusion data and many other aspects of Science. However it is remarkable (a miracle) that this theory is claimed not to apply to Cold Fusion experiments when performed by Fleischmann, Pons and some others but does apply to the larger number of experimenters on Cold Fusion who find no effects. There are a large number of theories which have been proposed to account for the Cold Fusion claims. They concern the behaviour of deuterium (and sometimes hydrogen) in a lattice.

In such theories it is remarkable that this lattice, in which Cold Fusion is claimed to occur, is not defined. Questions such as whether Cold Fusion can occur in ice are not given clear answers. Some, such as Dr. Preparata, use their theory that justifies Cold Fusion to support the claims of Dr. Benveniste et al. [16] that water has a memory and that after diluting it many times (up to 10 E120), this memory is retained. One would then expect their theory of Cold Fusion to apply also to water – do they then predict that Cold Fusion should occur in water? It may be noted that Hirst et al. [17] tried to reproduce the findings of Benveniste et al. using dilutions from 10 E2 to 10 E60, but could not reproduce the results of Benveniste et al.

In general it is good that theorists try to prove themselves wrong by applying their theories to other applications as Preparata has done for the memory of water, and this scientific approach is to be encouraged.

A further important question that does not seem to have been considered, is what happens at the boundary of the lattice? Outside the lattice the normal laws of Science seem to apply but on entering the lattice, the four miracles listed in section 6, come into operation. It would be important to study and understand this transition layer – it may be a way to distinguish between the very different theories of Cold Fusion.


In the 12-month period of Dieter Britz’s compilation, given in section 2, a variety of theories that might explain Cold Fusion have been published in refereed journals (references and author lists are in Dr. Britz’s bibliography). These are summarized:

a). Gerlovin; New unified field theory. The Earth’s movements with respect to the vacuum of space are important and best results should be obtained at 10.00 hours, 11.00 hours, and noon.

b). Hagelstein; coherent and semi-coherent neutron transfer with increased phonon coupling. Under some conditions, gammas should be observed.

c). Matsumoto; new elementary particle, the Iton which gives di-neutrons and higher neutron assemblies. The theory explains the gravity decays and transmutations observed.

d). Mendes; ergodic motion. Three-body collisions dominate, especially dde.

e). Bockris; high fugacity (as Fleischmann and Pons [5]) giving 10 E26 atmospheres. Electron capture by deuterium.

f). Matsumoto; Nattoh theory. Collapse of neutron clusters giving Black Holes.

g). Swartz; Quasi-one-dimensional model of loading. Crystal structures are important (defects, dislocations, shape, small surface features – spikes).

h). Yasui; Fracto-fusion, cracks.

i). Fisher; polyneutrons.

j). Yang; D captures electron giving D plus a di-neutron. Theory explains neutron bursts (this claim now withdrawn by Steve Jones).

k). Cerofolini; Binuclear atoms (dd)ee, capture thermal neutrons giving D, T, 4He, tritium enrichment, neutron bursts.

l). Matsumoto; double iton explains warming for three hours afterwards. Could this be the theoretical explanation of the “Life after Death” claimed by Fleischmann and Pons who about a year later also said they had observed a similar three-hour effect?

m). Takahashi; high loadings give 3-body and 4-body fusions.

n). Bracci; Collective effects ruled out (contrary to Hagelstein and to Preparata, Bressani and Del Giudice). Explains by high effective electron masses, 5 to 10 times greater.

o). Lo; Densely coupled plasmas.

p). Stoppini; Superconductivity, < 11 K.

q). Hora; Dense plasma. Transmutation by neutron swapping, e.g. Pd + D —> Rh + 4He.

r). Filimonov; Deuteron soliton coherent with palladium anti-soliton – should coat electrode with palladium black.

s). Lipson; Super-condensates – fracto-fusion mechanism is improbable.

t). Chatterjee; stochastic electron accumulation.

u). Gammon; Negative Joule-Thompson effect.

v). Graneau; Ampere force.

w). Hagelstein; n-transfer, 3-phonon.

x). Ichimaru; coherent plasmas. One to two fusions per year per cm3.

Note – in reply to a question as to whether Cold Fusion could be observed in water, Dr. Preparata declared that he had never written a paper applying his theory of Cold Fusion to Benveniste’s work. Dieter Britz has written: “We have an article from an Italian magazine, where Preparata and Del Giudice describe their theory of long-range effects in water, and relate this to both cold fusion and homeopathy (i.e. Benveniste claims)”.

It is interesting that there is a rather wide spread of journals – not just Fusion Technology (10 times here) and Physics Letters A where J.-P. Vigier is an Editor (quoted twice here).


8 December 1993; the previous speaker, Dr. H. Fox, giving, he said, a businessman’s point of view, declared he expected a working Cold Fusion device in TWENTY YEARS.

November 1993. Dr. S. Pons said that by the year 2000 there should be a household power plant – SIX YEARS.

1992. Dr. M. Fleischmann said a 10 to 20 Kilowatt power plant should be operational in ONE YEAR.

July 1989. The Deseret News published an article by Jo-Ann Jacobsen-Wells, who interviewed Dr. S. Pons. There is a colour photograph of Dr. Pons beside a simple apparatus with two tubes, one for cold water in and one for hot water out. This working unit based on Cold Fusion was described as: “‘It couldn’t take care of the family’s electrical needs, but it certainly could provide them with hot water year-round,’ said Pons”.

Later in the article it was written “Simply put, in its current state, it could provide boiling water for a cup of tea”. Time delay to this working model – ZERO YEARS.

Thus it appears that as time passes, the delay to realisation of a working model increases.


No conclusions are presented – everyone can judge for themselves. However, some questions can be asked:

Are Cold Fusion results consistent, when some groups claim Cold Fusion effects in Deuterium but not in normal Hydrogen, while other groups claim Cold Fusion effects with hydrogen?

Is the ratio of tritium to neutron production about unity, as Fleischmann and Pons originally claimed [5], or is the ratio in the wide range 10 E4 to 10 E9, as most other workers claim?

Are transmutations, Black Holes, Biology [18] part of the normal world of Cold Fusion?

To explain the null experiments there is one theory – the conventional theory of Quantum Mechanics – but there are a wide variety of theories to explain positive Cold Fusion results. Can they all be valid simultaneously? If not, which should be rejected?

When can we have a cup of tea?

Acknowledgements

It is a pleasure to thank Dieter Britz for the use of his Bibliographic compilation.

1. D.R.O. Morrison, Cold Fusion Update No. 7, Email. [copy on NET]
2. S.E. Jones et al., Nature 338(1989)737. [Britz Jone1989]
3. T. Bressani et al. 3rd Intl. Conf. on Cold Fusion, “Frontiers of Cold Fusion”, Ed. H. Ikegami, Univ. Acad. Press, Tokyo, (1993), p 433. 
4. M. Fleischmann and S. Pons, Phys. Lett. A 176(1993)1. [Britz Flei1993].
5. M. Fleischmann and S. Pons, J. Electroanal. Chem. 261(1989)301.
6. J. Wilson et al., J. Electroanal. Chem. 332(1992)1. [Britz Wils1992]
7. D.R.O. Morrison, CERN preprint CERN-PPE/93-96 and to be published in Phys. Lett. A.

94-1051 Morrison, D R O
Comments on claims of excess enthalpy by Fleischmann and Pons using simple cells made to boil [CERN PPE 93-96] Phys. Lett., A 185 (1994) 498-502 [Britz Morr1994]

Fleischmann and Pons have claimed to have performed a “simple” experiment and to have observed excess enthalpies larger than 1 kW/cm3 of palladium. It is shown that in fact the system they use is exceedingly complicated, is under-instrumented and that they have ignored several important factors so that it is unclear whether or not they have observed any excess heat.

8. L. Hansen, priv. comm.
9. M. Fleischmann et al., J. Electroanal. Chem. 287(1990)293. [Britz Flei1990] (copy on web)
10. F. Close, “Too Hot To Handle”, W.H Allen Publ., London, (1990).
11. J. Schwinger, 1st Annual Conf. on Cold Fusion, National Cold Fusion Institute, Salt Lake City, (1989), p 130.
12. S.E. Koonin and M. Nauenberg, Nature 339(1989)690. [Britz Koon1989]
13. Y. Fukai, 3rd Intl. Conf. on Cold Fusion, “Frontiers of Cold Fusion”, Ed. H. Ikegami, Univ. Acad. Press, Tokyo, (1993), p 265.
14. J.-P. Vigier, 3rd Intl. Conf. on Cold Fusion, “Frontiers of Cold Fusion”, Ed. H. Ikegami, Univ. Acad. Press, Tokyo, (1993), p 325.
15. F.E. Cecil and G.M. Hale, 2nd Annual Conf. on Cold Fusion, “The Science of Cold Fusion”, Ed. T. Bressani, E. Del Giudice, and G. Preparata, Soc. It. di Fisica, Bologna, (1991), p. 271.
16. E. Davenas et al., Nature 333(1988)816-818. [abstract]
17. S.J. Hirst et al., Nature 366(1993)525-527.
18. The IgNobel Prize for Physics was awarded to L. Kervran for his book “Biological Transmutations” in which he argues that a cold fusion process produces the calcium in eggshells – Science, 262(1993)509.
