The Production Of Helium In Cold Fusion Experiments

DRAFT of a book chapter posted for review on ResearchGate; this may differ substantially from the final version.

Melvin H. Miles
College of Science and Technology
Dixie State University, St. George, Utah 84770, U.S.A.


It is now known that cold fusion effects are produced only by certain palladium materials made under special conditions. Most palladium materials will never produce any excess heat, and no helium production will be observed. The palladium used in our first six months of cold fusion experiments in 1989 at the China Lake Navy laboratory never produced any measurable cold fusion effects. Therefore, our first China Lake results were listed with CalTech, MIT, Harwell and other groups reporting no excess heat effects in the DOE-ERAB report issued in November 1989. However, later research using special palladium made by Johnson-Matthey produced excess heat in every China Lake D2O-LiOD electrolysis experiment. Further experiments showed a correlation of the excess heat with helium-4 production. Two additional sets of experiments over several years at China Lake verified these measurements. This correlation of excess heat and helium-4 production has now been verified by cold fusion studies at several other laboratories. Theoretical calculations show that the amounts of helium-4 appearing in the electrolysis gas stream are in the parts-per-billion (ppb) range. The experimental amounts of helium-4 in our experiments agree with the theoretical amounts. The helium-4 detection limit of 1 ppm (1000 ppb) reported by CalTech and MIT was far too insensitive for such measurements. Very large excess powers leading to the boiling of the electrolyte would be required in electrochemical cold fusion experiments to even reach the CalTech or MIT helium-4 detection limit of 1000 ppb helium-4 in the electrolysis gas stream.

My research on cold fusion at the China Lake Navy laboratory (Naval Air Warfare Center Weapons Division, NAWCWD) began on the first weekend following the announcement on March 23, 1989 by Martin Fleischmann and Stanley Pons. It was six months later (September 1989) before our group detected any sign of excess heat production. By then, research reports from CalTech, MIT, and Harwell had given cold fusion a triple whammy of rejection. Scientists often resorted to ridicule to discredit cold fusion, and some were even saying that Fleischmann and Pons had committed scientific fraud.

Most palladium sources do not produce any cold fusion effects [1]. The palladium made by Johnson-Matthey (J-M) under special conditions specified by Fleischmann was not made available until later in 1989. I was likely one of the first recipients of this special palladium material when I received my order from Johnson-Matthey of a 6 mm diameter palladium rod in September of 1989. Our first reports of excess heat came from repeated use of the same two sections of this J-M palladium rod [1-3]. However, our final verification of these excess heat results came late in 1989, thus China Lake was listed with CalTech, MIT, Harwell and other groups reporting no excess heat effects in the November 1989 DOE-ERAB report [4].

These same two J-M Pd rods were later used in our first set of experiments (1990) showing helium-4 production correlated with our excess heat (enthalpy) results [5-7]. Two later sets of experiments at China Lake using more accurate helium measurements, including the use of metal flasks for gas samples, confirmed our first set of measurements [8].

Following our initial research in 1990-1991 on correlated heat and helium-4 production, other cold fusion research groups reported evidence for helium-4 production [9]. This report, however, will focus mainly on the research of the author at NAWCWD in China Lake, California during the years 1990 to 1995 [1,8].

1. First Set of Heat and Helium Measurements (1990)

The proponents of cold fusion were being largely drowned out by cold fusion critics by 1990. In fact, the first International Cold Fusion Conference (ICCF-1) was held March 28-31, 1990 in Salt Lake City, Utah. I found this to be a very unusual scientific conference with a mix of cold fusion proponents, many critics, and the press. Most presentations were followed by unusual ridicule by critics in the question period with comments such as “All this sounds like something from Alice in Wonderland”. Two valid questions by critics, however, were: “Where are the Neutrons?” and “Where is the Ash?”. If the cold fusion reactions were the same as hot fusion reactions, as most critics erroneously thought, then the amounts of excess power being reported (0.1 to 5 W) would have produced a deadly number of neutrons (more than 10^10 neutrons per second). Also, if there were a fusion reaction in the palladium-deuterium (Pd-D) system, then there should appear a fusion product – sometimes incorrectly referred to as ash. Some researchers, such as Bockris and Storms, were reporting tritium as a product, but the amounts were far too small to explain the excess enthalpy. The reported production of neutrons in cold fusion experiments was even smaller (about 10^-7 of the tritium).

Julian Schwinger, a Nobel laureate, suggested at ICCF-1 the possibility of a D+H fusion reaction that produces only helium-3 as a product and no neutrons [10]. Because of this, I considered measurements for helium-3 in my next experiments, but the mass spectrometer at China Lake was designed for only larger molecules made by organic chemists.

However, later in 1990, Ben Bush called to discuss both a possible temporary position at China Lake and my cold fusion results. He held a temporary position at the University of Texas at Austin, and the instrument there could measure helium-3 in small quantities. We worked out details in subsequent telephone conversations about how to collect gas samples and ship them to Texas for both helium-3 and helium-4 measurements by their mass spectrometry expert. My next two experiments, fortunately, produced unusually large excess power effects for our first set of correlated heat and helium measurements [5-7].

These helium results were first published as a preliminary note [5], then in the ICCF-2
Proceedings [6], and eventually as a detailed publication [7]. There was no detectable
helium-3, but there was evidence for helium-4 correlated with the excess enthalpy. I had
never met Ben Bush and decided to code the gas samples with the birthdays of my family
members. My own measurements of excess power were recorded in permanent laboratory
notebooks before the samples were sent to Texas for analysis. These were single-blind tests because Dr. Bush did not know how much, if any, excess power was being produced when a gas sample was collected. I am glad, in retrospect, that this was done because I later learned that Dr. Bush was gung-ho on proving cold fusion was correct. Scientists must always leave it completely up to experimental results to answer important scientific questions. It seems to me, on the other hand, that scientists at MIT and CalTech in 1989 were focused only on proving that cold fusion was wrong. There was a “Wake for Cold Fusion” held at MIT at 4 p.m. on June 16, 1989¹ even before their cold fusion experiments were completed [11].
When all results for this study were in (early 1991), I thought about how this research could be published quickly as a preliminary note. All research, except for the helium measurements, was done at China Lake. However, critics of cold fusion were prominent in 1991, and any publication from China Lake had to be first cleared by several management levels. This publication could be held up or even rejected for publication by Navy personnel at China Lake. As a solution, I had this manuscript submitted by Bush and
Lagowski at the University of Texas where they were listed as the first authors. A few months later, Dr. Ronald L. Derr, Head of the Research Department at China Lake, admonished me for the publication of this work from China Lake in this manner. However, Dr. Derr, along with my Branch Head, Dr. Richard A. Hollins, were among the few supporters of my cold fusion research at NAWCWD in 1991. Many others thought that such work damaged the reputation of this Navy laboratory.

¹The flyer for this “Wake” at MIT ridiculed cold fusion with statements like “Black Armbands Optional” and “Sponsored by the Center for Contrived Fantasies”.

2. Analysis of the First Set of Helium Measurements

Neither Ben Bush nor I really knew how much helium should be produced in my experiments by a fusion reaction, but my quick calculations showed that it might be quite small because of its dilution by the electrolysis gases. Recently, I have found an easier and accurate method to calculate the amount of helium-4 theoretically expected from the experimental measurements of excess power. It is known that D+D fusion to form helium-4 produces 2.6173712 x 10^11 helium-4 atoms per second per watt of excess power. This is based on the fact that each D+D fusion event produces 23.846478 MeV of energy per helium atom from Einstein’s E = Δmc^2 equation. Multiplying the number of atoms per second per watt by the experimental excess power in watts gives the rate of helium-4 production in atoms per second. The rate of electrolysis gases (D2+O2) produced per second is given by

Molecules/s = (0.75 I/F)N_A     (1)

where I is the cell current in amps, F is the Faraday constant, and N_A is Avogadro’s number. Note that the electrolysis reaction for one Faraday, written as 0.5 D2O → 0.5 D2 + 0.25 O2, produces 0.75 moles of D2+O2 gases. The largest excess power in the first set of helium-4 measurements was 0.52 W at a cell current of 0.660 A. Therefore, the theoretical rate of helium-4 production divided by the rate of D2+O2 molecules produced by the electrolysis gives the ratio (R) of helium-4 atoms to D2+O2 molecules shown in Equation 2.

R = (2.617 x 10^11 He-4 atoms/s·W)(0.52 W) / {[(0.75)(0.660 A)/(96,485 A·s/mol)](6.022 x 10^23 D2+O2 molecules/mol)}     (2)

This calculation yields R = 44.0 x 10^-9, or 44.0 parts per billion (ppb) of helium-4 atoms. This is the theoretical concentration of helium-4 present in the electrolysis gases for this experiment if no helium-4 remains trapped in the palladium. Normally, about half of this theoretical amount of helium-4 is experimentally measured in the electrolysis gas.
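As a cross-check, the arithmetic of Equations 1 and 2 can be scripted. This is my own illustrative sketch (not part of the original analysis), using only the constants quoted in the text:

```python
# Sketch of Equations 1 and 2: theoretical He-4 concentration (ppb) in the
# D2+O2 electrolysis gas stream for a given excess power and cell current.
F = 96485.0            # Faraday constant (A*s/mol)
N_A = 6.022e23         # Avogadro's number (1/mol)
HE4_PER_WS = 2.617e11  # He-4 atoms per second per watt for D+D -> He-4

def he4_ppb(excess_power_w, current_a):
    """Theoretical ppb of He-4 in the D2+O2 gas stream."""
    he4_rate = HE4_PER_WS * excess_power_w   # He-4 atoms/s
    gas_rate = 0.75 * current_a / F * N_A    # D2+O2 molecules/s (Eq. 1)
    return 1e9 * he4_rate / gas_rate         # ratio R in ppb (Eq. 2)

print(round(he4_ppb(0.52, 0.660), 1))  # -> 44.0, matching Equation 2
print(round(he4_ppb(0.46, 0.528), 1))  # -> 48.7, second entry of Table 1
```

The same function reproduces the other theoretical entries of Table 1 when the appropriate cell current is used.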

The first set (1990) of our China Lake results are shown in Table 1. The theoretical amount of helium-4 expected (ppb) based on the measured excess power and the cell current is also listed. This is compared with the 1990 mass spectrometry results from the University of Texas in terms of large, medium, small or no observed helium-4 peaks. The dates for the gas sample collections are also listed. Two similar calorimeters (A,B) were run simultaneously, in series, in the same water bath controlled to ±0.01ºC [1-3].

Table 1. Results for the 1990 China Lake Experiments.

Sample        Px (W)   Theoretical He-4 (ppb)   Measured He-4c
12/14/90-A    0.52a    44.0                     Large Peak
10/21/90-B    0.46     48.7                     Large Peak
12/17/90-A    0.40     42.4                     Medium Peak
11/25/90-B    0.36     38.1                     Large Peak
11/20/90-A    0.24     25.4                     Medium Peak
11/27/90-A    0.22     23.3                     Large Peak
10/30/90-B    0.17     18.0                     Small Peak
10/30/90-A    0.14     14.8                     Small Peak
10/17/90-A    0.07     7.4                      No Peak
12/17/90-B    0.29b    30.7                     No Peak

aI = 0.660 A. For all others I = 0.528 A.
bCalorimetric error due to low D2O solution level.
cThe University of Texas detection limit was about 5 ppb He-4 based on Table 1.

The theoretical helium-4 amounts generally follow the peak size reported experimentally for helium-4 except for the one sample where there was an apparent calorimetric error. Also, theoretical amounts of helium-4 vary only by a factor of three between the large and small peaks. Previous estimates [6-8] of the number of helium-4 atoms in these flasks were in error because the rate of helium production is directly proportional to the excess power. Finally, the detection limit for helium-4 measured at the University of Texas was about 5 ppb based on Table 1. This is in line with the ±1.1 ppb experimental error reported later by the U.S. Bureau of Mines laboratory in Amarillo, Texas [8]. The rate for atmospheric helium diffusing into these glass flasks was later measured to be 0.18 ppb/day, thus 28 days of flask storage would be needed to reach the 5 ppb detection limit. No correlation was found for the helium-4 amounts and the flask storage times [6,7]. Six control experiments using the same glass flasks and H2O+LiOH electrolysis produced no excess enthalpy at China Lake and no helium-4 was measured at the University of Texas [5-8].

Secondary experiments were also conducted for these heat-producing cells. Dental films within the calorimeter were used to test for any ionizing radiation, and gold and indium foils were used to test for any activation due to neutrons. These dental films were clearly exposed by radiation in both calorimetric cells A and B [6,7]. A nearby Geiger counter also recorded unusually high activity during this time period. No activation of the gold or indium foils was observed, hence the average neutron flux was estimated to be less than 10^5 neutrons per second. Similar dental film studies in the H2O+LiOH controls gave no film exposure and no other indications of radiation [6,7].

3. Experimental Measurement of Helium-4 Diffusion

One of the main questions raised by our first report in 1991 of the correlation between the excess heat and helium-4 production in our experiments [5-7] was the possible diffusion of helium-4 from the atmosphere into our glass collection flasks. This was certainly possible, but would the rate of such diffusion be fast enough to affect our results? I addressed this question in my presentation at ICCF-2 in Como, Italy, where I suggested that, because D2 also diffuses through glass, the much greater outward diffusion of deuterium gas across the flask surface might impede the small inward flow of atmospheric helium-4. Experimental measurements of the rate of helium diffusion into these same glass flasks later answered these important questions. The rate of atmospheric helium-4 flowing into our glass flasks was too slow to have affected our first report on the heat/helium-4 correlations. These experiments also showed that large amounts of hydrogen or deuterium in the flask somewhat slow the rate of helium diffusion into the flask. Theoretical calculations using q = KP/d gave good agreement with the experimental measurements [1,5-7], where q is the permeation rate, K is the permeability for Pyrex glass, P is the partial pressure of atmospheric helium-4, and d is the glass thickness (d = 0.18 cm and A = 314 cm^2 for our typical glass flask).

The results for eight experimental measurements of the helium-4 diffusion rate into the same glass flasks used in our experiments are presented in Table 2.

Table 2. Experimental Measurements of Helium-4 Diffusion into the Glass Flasks Used at China Lake

Conditions           Laboratorya   He-4 Atoms/Day        ppb/Dayb
Theoretical q=KP/d   —             2.6 x 10^12           0.23
N2 Fill              HFO           2.6 x 10^12           0.23
N2 Fill              HFO           3.4 x 10^12           0.30
N2 Fill              RI            3.7 x 10^12           0.32
D2+O2 Fillc          RI            1.82±0.01 x 10^12     0.160
D2+O2 Filld          RI            2.10±0.02 x 10^12     0.184
D2+O2 Fille          RI            2.31±0.01 x 10^12     0.202
H2 Fillf             RI            1.51±0.11 x 10^12     0.132
Vacuumf              RI            2.09±0.04 x 10^12     0.183

aHFO = Helium Field Operations, Amarillo, Texas; RI = Rockwell International, Canoga Park, California.
bBased on 1.141 x 10^22 D2+O2 molecules per flask.
cGlass Flask #5. dGlass Flask #3. eGlass Flask #4. fBoth experiments used Glass Flask #2.
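The conversion behind footnote b of Table 2 is simple enough to sketch in code. This is my own illustration, assuming (as the footnote states) 1.141 x 10^22 gas molecules per 500 mL flask:

```python
# Footnote-b conversion for Table 2: He-4 atoms per day entering a flask,
# expressed as ppb/day of the 1.141e22 D2+O2 molecules the flask holds.
MOLECULES_PER_FLASK = 1.141e22

def ppb_per_day(atoms_per_day):
    return 1e9 * atoms_per_day / MOLECULES_PER_FLASK

print(round(ppb_per_day(2.6e12), 2))   # -> 0.23 (theoretical q = KP/d row)
print(round(ppb_per_day(1.82e12), 3))  # -> 0.16 (first D2+O2-filled flask)
```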

For our experimental condition of flasks filled with D2+O2, the mean helium-4 diffusion rate is 0.182±0.021 ppb/day. Thus, it would take a flask storage time of 28 days just to reach the helium-4 detection limit of about 5 ppb (see Table 1). The theoretical 44.0 ppb in Table 1 would require a flask storage time of 242 days to reach this amount of helium-4. Because of the large excess power measured, the flask storage time was not a factor for the results in Table 1. Also, the flasks filled with N2 had larger experimental rates for helium-4 diffusion than the flasks filled with the D2+O2 electrolysis gases. The various flasks had somewhat different values for helium-4 diffusion because it was unlikely that any two flasks would be exactly the same. Furthermore, filament tape was used on each Pyrex round-bottom flask to help prevent breakage during shipments. However, the measured helium-4 diffusion using the same glass flask in Table 2 for both a H2 fill and a vacuum shows a significantly slower diffusion rate for helium-4 for the flask filled with hydrogen [7]. The outward diffusion of D2 or H2 across the glass surface apparently does slow the inward diffusion of atmospheric helium-4.
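The storage-time argument above can be checked with a few lines of arithmetic. This sketch is mine; it assumes the mean in-diffusion rate of 0.182 ppb/day for the D2+O2-filled flasks from Table 2:

```python
import math

# Flask storage times implied by the measured in-diffusion rate of
# atmospheric He-4 (mean of the D2+O2-filled flasks in Table 2).
RATE_PPB_PER_DAY = 0.182

days_to_detect = 5.0 / RATE_PPB_PER_DAY    # reach the ~5 ppb detection limit
days_to_largest = 44.0 / RATE_PPB_PER_DAY  # reach the largest Table 1 value
print(math.ceil(days_to_detect), math.ceil(days_to_largest))  # -> 28 242
```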

4. Second Set of Helium Measurements (1991-1992)

Unfortunately, our 6 mm diameter palladium rods from Johnson-Matthey were cut up for
helium-4 analysis, and it took nearly a year to find another palladium electrode that
produced excess heat². This was a 1.0 mm diameter J-M wire, and the excess power was
small due to the much smaller palladium volume used (0.020 cm^3 vs. 0.34 cm^3). However,
Rockwell International provided significantly more accurate helium-4 measurement with
a reported error of only ±0.09 ppb [1,8]. Brian Oliver, who performed these studies, was
recognized as a world expert for helium-4 measurements. The helium-4 measurements
were carried out over a period of more than 100 days, thus the helium-4 results could be
accurately extrapolated back to the time of the gas samples collection [8]. This eliminated
any effect due to the diffusion of atmospheric helium-4 into the glass flasks. These were
double-blind experiments because neither Rockwell International nor the China Lake
laboratory knew the results for both the excess power and helium measurements until this
study was completed and all results were reported to a third party.

The experimental and theoretical results of this set of experiments in 1991-1992 are presented in Table 3.
Table 3. Results for the Second Set of Experiments (1991-1992)
Sample        Px (W)    Theoretical He-4 (ppb)   Experimental He-4 (ppb)c
12/30/91-B    0.100a    10.65                    11.74
12/30/91-A    0.050a    5.33                     9.20
01/03/92-B    0.020b    2.24                     8.50

aI = 0.525 A. bI = 0.500 A.
cReported Rockwell error was equivalent to ±0.09 ppb.

There is considerable information contained in this accurate helium-4 analysis by Rockwell International that supports a D+D fusion reaction producing helium-4 and 23.85 MeV of energy per helium-4 atom. First, Rockwell reported their results as the measured number of helium-4 atoms in each of the 500 mL collection flasks at the time of collection. These numbers were 1.34 x 10^14, 1.05 x 10^14, and 0.97 x 10^14 helium atoms per 500 mL [8,12]. The reported error (standard deviation) by Rockwell was only ±0.01 x 10^14 helium-4 atoms per 500 mL. Therefore, there is a 29 σ effect between the two highest numbers and a 37 σ effect between the highest and lowest numbers. Except perhaps for the cold fusion field, any measurements that produce even 5 σ effects are considered to be very significant by the scientific community. Note that the numbers reported by Rockwell are also in the correct order for the excess power measured (Table 3) for this double-blind experiment.
²If one finds palladium electrodes that produce large excess power effects, hang onto them! Also, do not use them for H2O controls.

The number of helium-4 atoms per 500 mL can be converted to ppb, as used in Table 3, by calculating the total number of gas molecules contained in the flask. From the Ideal Gas Equation, this number is (PV/RT)N_A, or 1.141 x 10^22 molecules for our laboratory conditions during the flask collection time (P = 0.92105 atm, V = 0.500 L, and T = 296.15 K). In terms of ppb, the Rockwell reported error of ±0.01 x 10^14 helium-4 atoms per 500 mL becomes about ±0.09 ppb. Later experiments using metal collection flasks established that the background helium-4 in our collection system was 5.1 x 10^13 atoms per 500 mL, or 4.5 ppb [1,8]. Based on theoretical calculations, the diffusion of helium-4 into our collection system was not due to any glass components, but rather due to the use of thick rubber vacuum tubing to make the connections to the collection flask and oil bubbler. We kept our calorimetric system and gas collection system at China Lake exactly the same for several years for the purpose of making comparisons between experiments done at different times. The correction for this background helium-4 actually helped to bring the Rockwell helium-4 measurements closer to the theoretical values based on the D+D fusion reaction to form helium-4. This is shown in Table 4.

Table 4. Results for the Second Set of Experiments with Corrections for the Background Helium-4 (4.5 ppb)

Px (W)    Theoretical He-4 (ppb)   Corrected He-4 (ppb)   He-4/s·Wc      MeV/He-4d
0.100a    10.65                    7.24                   1.8 x 10^11    35
0.050a    5.33                     4.70                   2.3 x 10^11    27
0.020b    2.24                     4.00                   4.7 x 10^11    13

aI = 0.525 A. bI = 0.500 A.
cTheoretical value: 2.617 x 10^11 He-4/s·W.
dTheoretical value: 23.85 MeV/He-4.
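The ideal-gas conversion from atoms per flask to ppb described above can be sketched numerically. This is my own check, using the laboratory conditions quoted in the text:

```python
# Ideal-gas conversion: total gas molecules in a 500 mL flask at laboratory
# conditions, and He-4 atoms-per-flask expressed in ppb of that total.
R_GAS = 0.082057   # gas constant (L*atm/(mol*K))
N_A = 6.022e23     # Avogadro's number (1/mol)

def flask_molecules(p_atm=0.92105, v_l=0.500, t_k=296.15):
    return (p_atm * v_l / (R_GAS * t_k)) * N_A   # (PV/RT)*N_A

def atoms_to_ppb(he4_atoms):
    return 1e9 * he4_atoms / flask_molecules()

print(f"{flask_molecules():.3e}")        # -> 1.141e+22 molecules per flask
print(round(atoms_to_ppb(1.34e14), 2))   # -> 11.74, first entry of Table 3
print(round(atoms_to_ppb(0.01e14), 2))   # -> 0.09, the Rockwell error in ppb
```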
The corrected helium-4 measurements by Rockwell are reasonably close to the expected values based on the D+D fusion reaction to form helium-4 as the main product. Only the result for an excess power of 0.020 W suggests a problem because the corrected experimental value (4.00 ppb He-4) is larger than the theoretical value (2.24 ppb He-4). This is not unexpected because 0.020 W is near the measuring limit for the calorimeter used. The correct experimental excess power may have been closer to 0.040 W³. Also, the rate of work done by the generated electrolysis gases (Pw) was not considered. This alone would add another 0.010 W to give 0.030 W for the excess power. This small Pw term is less important for higher excess power measurements.

³Using 0.040 W gives 2.4 x 10^11 He-4/s·W and 25 MeV/He-4.

An example of the experimental calculation of He-4 atoms per W·s (or J) is presented in Equation 3 for the measured excess power of 0.100 W (I = 0.525 A):

(1.34 x 10^14 - 0.51 x 10^14) He-4 atoms/500 mL / [(4644 s/500 mL)(0.100 W)] = 1.8 x 10^11 He-4/s·W     (3)

where 4644 seconds is the time required to generate 500 mL of D2+O2 electrolysis gases at a cell current of 0.525 A. The value for MeV per helium-4 atom readily follows as shown by Equation 4:

[(1.8 x 10^11 He-4/J)(1.602 x 10^-19 J/eV)]^-1 = 35 MeV/He-4     (4)
A mean value for the three experiments in Table 3 yields 25±11 MeV/He-4. Omitting the smallest excess power measured gives 30.5±5.0 MeV/He-4. The results given in Table 3 are reasonable considering the rather small excess power measured. This was probably due to the small volume of the palladium electrode (0.020 cm3). Typical excess power for the Pd/D system is about 1.0 W/cm3 of palladium for our current densities used [13]. The experimental corrected values for helium-4 compared to the theoretical amounts in Table 3 are 68% and 88% for the two largest values for excess power. There would likely be a smaller percent of helium-4 trapped in the palladium for the two small volume cathodes used.
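Equations 3 and 4 can be verified with a short script. This is my own sketch, using the background of 0.51 x 10^14 He-4 atoms per 500 mL given earlier:

```python
# Equations 3 and 4: background-corrected He-4 production rate per joule
# (watt-second), and the implied energy release per He-4 atom.
BACKGROUND_ATOMS = 0.51e14    # He-4 atoms/500 mL from the collection system
EV_PER_JOULE = 1.0 / 1.602e-19

def he4_per_joule(atoms, seconds_per_500ml, power_w):   # Equation 3
    return (atoms - BACKGROUND_ATOMS) / (seconds_per_500ml * power_w)

rate = he4_per_joule(1.34e14, 4644, 0.100)
mev = (1.0 / rate) * EV_PER_JOULE / 1e6                 # Equation 4
print(f"{rate:.1e}")   # -> 1.8e+11 He-4 per joule
print(round(mev))      # -> 35 MeV per He-4 atom
```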

5. An Analysis of the Third Set of Helium Measurements (1993-1994)

Many cold fusion critics refused to accept the correlation of excess heat and helium-4 production in our experiments because of the diffusion of atmospheric helium into glass containers. Therefore, metal flasks were used in place of glass flasks to collect gas samples from our experiments for helium analysis. The use of these metal flasks prevented the diffusion of atmospheric helium into the flasks after they were sealed. Even the flask valves were modified to provide a metal seal by using a nickel gasket. All other components of the cells, gas lines, and oil bubblers remained the same in order to relate these new measurements to the previous measurements using glass flasks [1]. However, it was difficult to get the large excess power effects observed in our first set of measurements that used the special 6 mm J-M palladium rods. The helium-4 analyses for these experiments using the new metal flasks were performed by the U.S. Bureau of Mines laboratory at Amarillo, Texas. This was another laboratory with special skills in making such measurements. By this time, we were using four similar calorimeters (A,B,C,D) in two different water baths for calorimetric studies.

Table 5 presents helium-4 results for seven experiments that produced small excess power effects. The theoretically calculated amounts expected for helium-4 are also presented.

Measurements in similar experiments where no excess power was measured gave a background level of 4.5±0.5 ppb (5.1 x 10^13 He-4 atoms) for our system [1].

Table 5. Helium-4 Measurements Using Metal Flasks

Px (W)    Theoretical He-4 (ppb)   Experimental He-4 (ppb)
0.120a    13.4                     9.4±1.8
0.070a    7.8                      7.9±1.7
0.060     8.4                      6.7±1.1
0.055     7.7                      9.0±1.1
0.040     5.6                      9.7±1.1
0.040     5.6                      7.4±1.1
0.030a    3.4                      5.4±1.5

aI = 0.500 A. For all others I = 0.400 A.

It should be noted that the largest excess power in Table 5 (0.120 W) was for a palladium-boron rod (0.6 x 2.0 cm) made by Dr. Imam at the Naval Research Laboratory (NRL). We had been testing palladium materials made by NRL for several years, but none had produced a significant excess enthalpy effect. However, seven of eight experiments using Pd-B rods from NRL produced significant excess heat effects before this Navy program on palladium-deuterium systems ended in June of 1995 [1]. Most of the other excess power effects reported in Table 5 were produced by J-M palladium materials. Five experimental values for helium-4 in Table 5 are larger than the theoretical values reported. Assuming that the excess power reported is correct, this is readily explained by the need to subtract the background of 4.5 ppb from each experimental value. These results are shown in Table 6 along with the electrode volume and the experimental rate of helium-4 production per second per watt of excess power.

Table 6. Background Corrections for Helium-4 Measurements Using Metal Flasks

Px (W)    Corrected He-4 (ppb)a   Percent of Theoretical (%)   Electrode Volume (cm^3)   He-4/s·W
0.120     4.9                     37                           0.57                      1.0 x 10^11
0.070     3.4                     43                           0.63                      1.1 x 10^11
0.060     2.2                     26                           0.04                      0.7 x 10^11
0.055     4.5                     59                           0.51                      1.5 x 10^11
0.040     5.2                     93                           0.02                      2.4 x 10^11
0.040     2.9                     52                           0.01                      1.4 x 10^11
0.030     0.9                     27                           0.29                      0.7 x 10^11

a4.5 ppb subtracted from the reported He-4 measurements.
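The background correction behind Table 6 can be reproduced directly from the Table 5 data. This sketch is my own; the corrected values agree with Table 6, and the percentages agree to within one percentage point of rounding:

```python
# Background subtraction for the metal-flask data: subtract the 4.5 ppb
# system background from each measured He-4 value in Table 5.
BACKGROUND_PPB = 4.5

table5 = [   # (excess power W, theoretical He-4 ppb, measured He-4 ppb)
    (0.120, 13.4, 9.4), (0.070, 7.8, 7.9), (0.060, 8.4, 6.7),
    (0.055, 7.7, 9.0), (0.040, 5.6, 9.7), (0.040, 5.6, 7.4), (0.030, 3.4, 5.4),
]
for power, theoretical, measured in table5:
    corrected = measured - BACKGROUND_PPB
    percent = 100.0 * corrected / theoretical
    print(f"{power:.3f} W  {corrected:.1f} ppb  {percent:.0f}%")
```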

Because of the small amounts of excess power reported in Tables 5 and 6, it is difficult to reach any strong conclusions from the use of metal flasks, except that helium-4 production is observed in experiments that produce excess power and no helium-4 production above background is measurable in experiments with no excess power. Furthermore, both the uncorrected and corrected experimental amounts of helium-4 are close to the theoretical amounts expected. Larger excess power, such as in our first set of helium-4 measurements, would be needed before more definite conclusions could be made. Perhaps these results suggest that a larger percentage of helium-4 is released into the gas phase for the palladium cathodes with a smaller volume of material.

6. Discussion of China Lake Heat/Helium-4 Results

Some critics claimed that our results must be wrong because the experimentally measured helium-4 is only in the ppb range. However, this manuscript shows that the theoretical amounts of helium-4 for our experiments should be in this ppb range. Many other critics attribute our heat and helium-4 results to some form of contamination from atmospheric helium-4 normally present in air at 5.22 ppm [12]. Such contamination sources would be random and equally likely to be found in controls or experiments which show no excess enthalpy results. In summary, for all such experiments conducted at NAWCWD (China Lake), 12 out of 12 produced no excess helium-4 when no excess heat was measured and 18 out of 21 experiments gave a correlation between the measurements of excess heat and helium-4. The three failures either had a calorimetric error or involved the use of a different palladium material, i.e. a palladium-cerium alloy that perhaps traps most of the helium-4 produced. An exact statistical treatment that includes all experiments shows that the probability is only one in 750,000 that the China Lake set of heat and helium measurements (33 experiments) could be this well correlated due to random experimental errors [1]. Furthermore, the rate of helium-4 production was always in the appropriate range of 10^10 to 10^12 atoms per second per watt of excess power for D+D fusion or other likely nuclear fusion reactions that produce helium-4 [1,8].
All of our theoretical calculations for helium-4 production have assumed that the main fusion reaction is D + D → He-4 + 23.8 MeV. However, other fusion reactions producing helium-4 could also be considered, such as D + Li-6 → 2 (He-4) + 22.4 MeV or D + B-10 → 3 (He-4) + 17.9 MeV. Neither of these two possible reactions seems to fit well with our experimental measurements. Both reactions lead to large increases in the theoretical amounts of helium-4 for each experimental measurement of excess power. For example, the D + B-10 reaction would increase the theoretical amount of helium-4 by a factor of 3.991. In Table 3, the theoretical amount of helium-4 corresponding to Px = 0.100 W would be 42.50 ppb rather than 10.65 ppb. For likely fusion reactions that produce helium-4, the D + D reaction seems to fit best with our experimental results. Other proposed fusion reactions produce less than 23.8 MeV of energy per helium-4 atom. At about the same time as our first heat and helium measurements in 1990, two different theories were proposed that predicted helium-4 as the main cold fusion product and that this helium-4 would be found mostly outside the metal lattice in the electrolysis gas stream. These two independent theories came from Scott and Talbot Chubb [14] and Giuliano Preparata [15]. Both Scott Chubb and Preparata called me shortly after our first publication on correlated excess heat and helium-4 in 1991, and Preparata soon made a visit to my China Lake laboratory. I first met Scott and his uncle, Talbot Chubb, at ICCF-2 in Como, Italy, and our friendship lasted many years. Some of the most boisterous ICCF moments involved loud debates between Scott Chubb and Preparata over their two theories.
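The factor by which the alternative reactions would inflate the predicted helium-4 can be sketched from the reaction energies alone. This is my own illustration; using the rounded energies quoted above gives a factor close to (but not exactly) the 3.991 cited in the text, which presumably used more precise reaction energies:

```python
# Relative He-4 yield per unit energy for the candidate fusion reactions,
# normalized to D + D -> He-4 (23.846478 MeV per He-4 atom).
DD   = 1 / 23.846478   # He-4 atoms per MeV, D + D -> He-4
DLI6 = 2 / 22.4        # D + Li-6 -> 2 He-4 + 22.4 MeV
DB10 = 3 / 17.9        # D + B-10 -> 3 He-4 + 17.9 MeV

print(round(DLI6 / DD, 2))  # factor increase in theoretical He-4 for D+Li-6
print(round(DB10 / DD, 2))  # ~4, close to the 3.991 factor quoted above
```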

7. Related Research By Other Laboratories

There are presently more than fifteen cold fusion groups that have identified helium-4 production in their experiments. A summary for these groups reporting helium-4 has been given elsewhere by Storms [16]. Publications by Bockris [17], Gozzi [18] and McKubre [19] relate closely to our electrochemical cold fusion studies at China Lake. McKubre and coworkers at SRI reported on several different experiments using three different calorimetric methods that gave a strong time correlation between the rates of heat and helium production [19]. Using sealed cells, the helium-4 concentration exceeded that of the room air. These SRI experiments gave a near-quantitative correlation between heat and helium-4 production consistent with the fusion reaction D + D → He-4 + 24 MeV (lattice). Special methods were used by SRI to remove sequestered helium-4 from the palladium cathode [19].

8. The CalTech and MIT Helium-4 Experiments in 1989

Both CalTech and MIT looked for helium-4 production in the electrolysis gases in their 1989 experiments and reported that there was none [20,21]. However, both institutions also reported that they found no excess enthalpy. We have never observed any helium-4 production in our experiments when there was no measurable excess heat. There were actually some signs of small excess heat in both the CalTech and MIT experiments, but these were zeroed out either by changing the cell constant or by shifting experimental data points [22,23]. Major calorimetric errors were also present in the CalTech and MIT publications [22,23]. Nevertheless, the reported helium-4 detection limit by both CalTech and MIT was one part per million (ppm), or 1000 ppb. Using Equation 1 with R = 1000 ppb (1.0 x 10^-6), the excess power would have to be 8.94 W at a cell current of 0.500 A. From Table 1, 1000 ppb helium-4 would require more than 20 times the highest excess power listed for our experiments, or about 10 W. With such a large excess power, most calorimetric cells would be driven to boiling by the fusion energy alone. Such large amounts of excess enthalpy would be very obvious even without the use of calorimetry, but the amounts of helium-4 produced would barely reach the detection limit reported by these two prestigious universities. Why was such a glaring error in the CalTech and MIT results missed by the reviewers for these publications? It seems like almost anything was accepted by major journals, such as Nature and Science, in 1989 if it helped to establish the desired conclusion that reports of cold fusion were not correct.
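The detection-limit argument above amounts to inverting Equation 2 for the excess power. This is my own sketch; the 0.500 A cell current is an assumption consistent with the currents used in the later China Lake experiments:

```python
# Excess power needed to reach a given He-4 fraction in the electrolysis
# gas stream, obtained by inverting Equation 2.
F, N_A = 96485.0, 6.022e23
HE4_PER_WS = 2.617e11   # He-4 atoms per second per watt, D+D -> He-4

def power_needed(ratio, current_a):
    gas_rate = 0.75 * current_a / F * N_A   # D2+O2 molecules/s (Eq. 1)
    return ratio * gas_rate / HE4_PER_WS    # watts of excess power

print(round(power_needed(1.0e-6, 0.500), 2))   # -> 8.94 W (1 ppm limit)
print(round(power_needed(44.0e-9, 0.660), 2))  # -> 0.52 W (Table 1 maximum)
```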


Long-term support for my cold fusion research has been received from an anonymous fund at the Denver Foundation through the Dixie Foundation at Dixie State University. An adjunct faculty position at the University of La Verne and a visiting professorship at Dixie State University are also acknowledged.

1. M.H. Miles, B.F. Bush and K.B. Johnson, Anomalous Effects in Deuterated Systems, Naval Air Warfare Center Weapons Division Report, NAWCWPNS TP8302, September 1996, 98 pages.
2. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Evidence For Cold Fusion in the Palladium-Deuterium System”, J. Electroanal. Chem., 296, 1990, pp. 241-254.
3. M.H. Miles, K.H. Park and D.E. Stilwell, “Electrochemical Calorimetric Studies of the Cold Fusion Effect” in The First Annual Conference on Cold Fusion: Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 328-334.
4. Cold Fusion Research – A Report of the Energy Research Advisory Board to the United States Department of Energy, John Huizenga and Norman Ramsey, Co-chairmen, November 1989, p. 12.
5. B.F. Bush, J.J. Lagowski, M.H. Miles and G.S. Ostrom, “Helium Production During the Electrolysis of D2O in Cold Fusion Experiments”, J. Electroanal. Chem., 304, 1991, pp. 271-278.
6. M.H. Miles, B.F. Bush, G.S. Ostrom and J.J. Lagowski, “Heat and Helium Production in Cold Fusion Experiments”, in The Science of Cold Fusion: Proceedings of the II Annual Conference on Cold Fusion, T. Bressani, E. Del Giudice and G. Preparata, Editors, Italian Physical Society, Bologna, Italy, 1991, pp. 363-372. ISBN 88-7794-045-X.
7. M.H. Miles, R.A. Hollins, B.F. Bush, J.J. Lagowski and R.E. Miles, “Correlation of Excess Power and Helium Production During D2O and H2O Electrolysis Using Palladium Cathodes”, J. Electroanal. Chem., 346, 1993, pp. 99-117.
8. M.H. Miles, “Correlation of Excess Enthalpy and Helium-4 Production: A Review”, in Condensed Matter Nuclear Science, ICCF-10 Proceedings, 24-29 August 2003, P.L. Hagelstein and S.R. Chubb, Editors, World Scientific, Singapore, 2006, pp. 123-131. ISBN 981-256-564-7.
9. M.H. Miles and M. C. McKubre, “Cold Fusion After a Quarter-Century: The Pd/D System” in Developments in Electrochemistry: Science Inspired by Martin Fleischmann, D. Fletcher, Z-Q Tian, and D.E. Williams, Editors, John Wiley and Sons, U.K., 2014, pp. 245-260. ISBN 9781118694435.
10. J. Schwinger, “Nuclear Energy in an Atomic Lattice” in The First Annual Conference on Cold Fusion: Conference Proceedings, March 28-31, 1990, Salt Lake City, Utah, pp. 130-136.
11. S.B. Krivit and N. Winocur, The Rebirth of Cold Fusion: Real Science, Real Hope, Real Energy, Pacific Oaks Press, Los Angeles, USA, 2004, p. 84. ISBN 0-9760545-8-2.
12. N. Hoffman, A Dialogue On Chemically Induced Nuclear Effects: A Guide for the Perplexed About Cold Fusion, American Nuclear Society, LaGrange Park, Illinois, 1995, pp. 170-180. ISBN 0-89448-558-X.
13. M. Fleischmann, S. Pons, M.W. Anderson, L.J. Li and M. Hawkins, “Calorimetry of the Palladium-Deuterium-Heavy Water System”, J. Electroanal. Chem., 287, 1990, pp. 293-348. (See Fig. 12, p. 319).
14. S.R. Chubb and T.A. Chubb, “Lattice Induced Nuclear Chemistry”, in Anomalous Nuclear Effects in Deuterium/Solid Systems, S.E. Jones, F. Scaramuzzi and D. Worledge, Editors, American Institute of Physics, New York, USA, 1990, pp. 691-710. ISBN 0-88318-833-3.
15. G. Preparata, QED Coherence in Matter, Chapter 8: “Towards a Theory of Cold Fusion Phenomena”, World Scientific, Singapore, 1995, pp. 153-178.
16. E. Storms, The Explanation of Low Energy Nuclear Reaction: An Examination of the Relationship Between Observation and Explanation, Infinite Energy Press, Concord, N.H., USA, 2014, pp. 28-40. ISBN 978-1-892925-10-7.
17. C.-C. Chien, D. Hodko, Z. Minevski and J.O.M. Bockris, “On an Electrode Producing Massive Quantities of Tritium and Helium”, J. Electroanal. Chem., 338, 1992, pp. 189-212.
18. D. Gozzi, R. Caputo, P.L. Cignini, M. Tomellini, G. Gigli, G. Balducci, E. Cisbani, S. Frullani, F. Garibaldi, M. Jodice and G.M. Urciuoli, “Quantitative Measurements of Helium-4 in the Gas Phase of Pd+D2O Electrolysis”, J. Electroanal. Chem., 380, 1995, pp. 109-116.
19. M. McKubre, F. Tanzella, P. Tripodi and P. Hagelstein, “The Emergence of a Coherent Explanation for Anomalies Observed in D/Pd and H/Pd Systems: Evidence for 4He and 3H Production” in Proceedings of the 8th International Conference on Cold Fusion, F. Scaramuzzi, Editor, Italian Physical Society, Bologna, Italy, 2000, pp. 3-10. ISBN 88-7794-256-8.
20. N.S. Lewis, C.A. Barnes, M.J. Heben, A. Kumar, S.R. Lunt, G.E. McManis, G.M. Miskelly, R.M. Penner, M.J. Sailor, P.G. Santangelo, G.A. Shreve, B.J. Tufts, M.G. Youngquist, R.N. Kavanagh, S.E. Kellogg, R.B. Vogelaar, T.R. Wang, R. Kondrat and R. New, “Searches for Low-Temperature Nuclear Fusion of Deuterium in Palladium”, Nature, 340, 1989, pp. 525-530.
21. D. Albagli, R. Ballinger, V. Cammarata, X. Chen, R.M. Crooks, C. Fiore, M.P.S. Gaudreau, I. Hwang, C.K. Li, P. Lindsay, S.C. Luckhardt, R.R. Parker, R.D. Petrasso, M.O. Schloh, K.W. Wenzel and M.S. Wrighton, “Measurements and Analysis of Neutron and Gamma-Ray Emission Rates, Other Fusion Products, and Power In Electrochemical Cells Having Pd Cathodes”, J. Fusion Energy, 9, 1990, pp. 133-148.
22. M.H. Miles, B.F. Bush and D. Stilwell, “Calorimetric Principles and Problems in Measurements of Excess Power During Pd-D2O Electrolysis”, J. Physical Chem., 98, 1994, pp. 1948-1952.
23. M.H. Miles and M. Fleischmann, “Twenty Year Review of Isoperibolic Calorimetric Measurements of the Fleischmann-Pons Effect”, in Proceedings of the 14th International Conference on Cold Fusion (ICCF-14), D.J. Nagel and M.E. Melich, Editors, University of Utah, Salt Lake City, U.S.A., 2008, Volume 1, pp. 6-10.

The moment of truth has already passed

Mats Lewan continues to believe, long after the frauds of Andrea Rossi became crystal clear. From his blog, An Impossible Invention:

The moment of truth is getting close with launch on January 31st

“An Impossible Invention” is the title of Lewan’s book about Rossi and the “E-cat.” The reference is to the alleged impossibility of a device, an “energy catalyzer,” generating heat from nickel and hydrogen. Lewan, originally a science journalist, was right, in my opinion, to treat the “invention” as “possible,” not “impossible.” However, the problem isn’t impossibility; it is that Rossi was shown, by incontrovertible evidence in the trial, Rossi v. Darden, to have lied repeatedly. Case guide.

On January 31, 2019, inventor and entrepreneur Andrea Rossi will hold an online presentation on the commercial launch of his heating device, the E-Cat. Thereby, the moment of truth is approaching for the carbon free, clean, abundant, cheap, and compact energy source that could potentially replace coal, oil, gas, and nuclear, and also solve the global climate crisis.

This is fluff. The moment of truth passed long ago. Rossi claimed to have a 1 MW reactor ready for sale before the end of 2011. That reactor was actually purchased by Industrial Heat, for $1.5 million, and delivered in 2013. With that, and a payment of $10 million, Rossi also agreed to disclose whatever was needed to build the reactors, and to license the technology to Industrial Heat for regions covering half the planet. In addition, subject to a “guaranteed performance test,” IH was to pay Rossi $89 million more. Rossi remained free to market or use the technology independently in the other half of the world.

It appears that Lewan has refused or failed to read the evidence from that trial, consisting of documents, almost entirely unchallenged, plus depositions under oath. We can assume that the unchallenged evidence is authentic; there are detailed responses from both sides in motions to dismiss and the answers to those.

The trial began, the jury was seated, and opening arguments were made. It was obvious to me how this was going to go. Rossi’s claim for $89 million was going to be rejected, for many reasons. IH was not going to be able to recover their investment paid to Rossi (because of estoppel), but IH would be able to claim fraud from the “Doral test” and to collect damages from Rossi and those who assisted him in perpetrating the fraud.

Obviously, Lewan could dispute that, but not reasonably unless he actually looks at the evidence, evidence that I studied and documented intensely, in order to make it available.

Since I started reporting on Andrea Rossi’s E-Cat technology in 2011, he always told me that his main goal, and the only thing that would convince people about the controversial physical phenomenon it was built on, would be to put a working product on the market.

What is truly odd about Lewan is that he says this, but actually ignores it. There was an allegedly “working product” on the market in 2011, with a price of $1.5 million, and it was purchased by an eager customer, IH. The guaranteed performance test did not take place in a timely fashion. Rossi blames IH for that, but the evidence shows otherwise. Rossi then convinced IH to allow the reactor to be installed in Florida for a sale of power to a “customer” he had found, arguing that an independent customer would be more convincing as a demonstration than what IH had proposed, an installation in North Carolina in a related company.

And Rossi clearly represented that the customer was actually Johnson-Matthey; his emails show how he then attempted to create plausible deniability. A jury would have seen right through that. The customer was, in fact, a company set up by Rossi’s attorney, Johnson, who was also the President of Leonardo Technologies, Rossi’s Florida company. There was no “chemical company” other than Rossi’s activity; he controlled it entirely.

But if the reactor worked, so what? At least that is what many on Planet Rossi think. IH claimed that they had been unable to create any success with Rossi reactors, other than what appeared in some tests later considered to be artifacts (such as the Lugano test; IH had made that reactor).

This was the ultimate market test. IH was not about to pay $89 million for a “test” that did not satisfy the terms of the Agreement. But perhaps, the thinking would go, Rossi, known to be paranoid, had not disclosed the “secret” to them. So, having paid Rossi $11.5 million (and more in various ways), they would have wanted to keep the license, just in case the technology turned out to work.

They had four or five lawyers sitting there in the trial in Miami; it was costing them millions of dollars. They might not have been able to recover their legal costs, and there would be other reasons to avoid a trial. They are working to support inventors, and prosecuting a fraud claim against an inventor would not be the kind of publicity they would want.

So when Rossi, having claimed for a year that he was going to wipe the floor with Darden and Industrial Heat, proposed a walk-away, they accepted: no money would change hands, he would give up his $89 million claim, they would return the reactors (there were actually two 1 MW plants plus other prototypes), and the license would be cancelled.

They knew more about the Rossi technology than anyone other than Rossi. They had worked for about three years trying to get it to work. If it worked even modestly well, it would have been worth many billions of dollars, maybe trillions. With that knowledge, instead of spending a few million more, they chose to walk away, and focus on other LENR technology.

To me, this is beyond-a-reasonable-doubt evidence that Rossi technology was worthless. And the kicker: After the case settled, Rossi had people screaming for a plant, and he had two of them. If the technology actually worked, he could have installed it in a real customer’s facility, or could have sold heat to heating co-ops in Sweden. He’d have been making money hand over fist.

Instead, he dismantled the plants, destroying them, and focused on his “improved product,” which is what the upcoming demo is about.

Now, eight years later, after events taking unexpected and amazing turns which I told in my book An Impossible Invention and in this blog, Rossi claims to be ready to do so. His plan is to sell heat from remotely monitored devices at a price per kWh 20 percent below market price, with no carbon emissions from the operation of the devices.

The book did not cover the information revealed about the IH/Rossi affair. He has mentioned it on the blog, with shallow, very incomplete coverage that gives full voice to Rossi’s deceptive descriptions. Lewan has become a Rossi shill.

The Doral installation was a sale of power at $1000 per megawatt-day. So he already had, over eight years ago, a plant that could be installed to do what he now “plans” to do. Unless he was lying then. And if he was lying then, why would we imagine he is not lying now?

(Note: The business model of selling a service rather than a product is a strong megatrend driven by digitalisation and by internet of things, making remote monitoring more effective, and it is already used by e.g. Rolls-Royce and GE, selling flight hours rather than aero engines).

This is basically irrelevant. Software is also licensed, not sold, etc.

While this already implies a substantial cost-saving for the customers, it is most probably only the start of what the E-Cat technology can provide ahead, if it works as claimed.

There is no news here, only a “plan” which is not binding on anyone. On what basis does Lewan claim “probable”? Yes, he hedges it: “if it works as claimed.” Does he attempt to assess the odds of it working? Would past performance be a way of assessing this? For someone who has failed many times to deliver what he promised, how much credence should be placed on new promises, in advance of an independently testable product?

At the online presentation (more info at Rossi plans to show a two-hour video of a device already in operation, reportedly heating an industrial premises of about 250 square meters in the US to 25°C since Nov 19, 2018. At the presentation, he will provide details regarding the commercial launch, but here is what I have been told and what I have concluded so far:

We know that what Rossi says is utterly unreliable. Does Lewan know that? Has he looked at the evidence, or does he just run on his gut?

A demonstration like that described can be faked six ways till Sunday. Rossi claimed that the reactor in Florida actually delivered a megawatt for most of the one-year period, based on measurements that he controlled, completely.

The problem was that a megawatt in that warehouse (is this the same “industrial premises”?), given the lack of a powerful heat exchanger, would have made it uninhabitable, fatal to occupants. That was one of the facts to be brought out at trial.

Rossi, at the last minute, as discovery was closing, and contradicting what he had written on his blog for a year, claimed to have made a heat exchanger. He kept no receipts and took no photographs, and he used the labor of guys who drive around in trucks looking for work. The exchanger would have had to be there for the whole year without anyone who visited noticing it, and it would have been noisy as hell and very visible.

No, he lied again, this time under oath, so that’s why his attorney had little trouble convincing him to settle if he could. He was facing not only losing millions of dollars, but also a possible criminal prosecution for perjury. Rossi was used to lying to the public, which is not necessarily illegal. He was playing a new game in U.S. federal court, where lying is a Very Bad Idea.

Lewan then goes on to give the alleged characteristics of the E-Cat SK. It is all “what he has been told,” and he reports what he was told with no sign of caution or skepticism. Lewan has had enough experience with Rossi to know he can be deceptive. This is my theory: if he were to ask inconvenient questions, he’d lose his access to Rossi. And he’s now made it a business, selling the book, which he is planning to update.

These characteristics are entirely Rossi Says. When we talk about generations of development of devices (Lewan calls the SK the “fourth generation”), it’s assumed that the earlier generations worked and the later generations are improved. If in mercato veritas, what is the truth of the earlier generations?

Bottom line, they were worthless. If they actually worked, they were worth, even as prototypes, at least hundreds of millions of dollars. The market has spoken the truth, but Lewan is ignoring it.

Lately, I have reported little on the E-Cat, simply because there has been essentially no new information that could be confirmed. Also in this case, in theory we will not be able confirm any of the claims presented, specifically since the existing customer will not be disclosed at the presentation on Jan 31, as far as I know.

There was a great deal of information revealed in 2016, in the trial. Lewan ignored it, relying only on what Rossi told him, apparently. Now, we still have no verifiable information. So why would January 31 be the “moment of truth”? Why is Lewan hyping this non-event, where Rossi will just present more smoke and mirrors?

But let’s assume that there’s no working E-Cat device. Then either Rossi is fooling himself, and there’s nothing that makes me believe this now, or it’s a fraud, which hardly makes any sense at this point.

We already know that Rossi lies, and that if the Doral plant worked, it was not working at anything like the level claimed. If it were a weak technology, but working, IH would have held onto it fiercely. They could afford it. (Prepping for the trial, Rossi claimed that IH wasn’t paying because they didn’t have the money to pay. In fact, IH had lined up $200 million ($150 million beyond what was already invested in other technology), plenty to pay Rossi and have money for development, but they were not about to spend that when the frikkin’ reactors didn’t work!)

It wasn’t even a weak technology. Before they made the deal with Rossi, they knew Rossi had a checkered past, but they decided they needed to find out. So they found out. It didn’t work.

It also “hardly made any sense” that a fraud would sue their defrauded customer. But he did. Basically, Lewan appears to have no idea how Rossi might actually think and operate; he has ignored the experience of those who worked closely with him for years.

In the fraud case, the E-Cat SK would be an electric heater consuming as much power as it outputs. But after at least a decade of hard work, without asking money from any third party, having earned USD11.5M from his ex US partner Industrial Heat, why would Rossi get back now and sell heat at a loss? To a customer that would immediately discover the fraud by looking at the electricity consumption of the device?

This is absolutely appalling. Rossi asked for and got funding from Ampenergo, so when IH bought the license from Rossi, Ampenergo was part of the deal, signed on, and IH paid Ampenergo millions in addition to what they paid Rossi. And then Rossi not only asked for and received $11.5 million from IH, he was also demanding $89 million. In Doral, there was no customer, but the fake customer agreed to pay $1000 per day for power, and Rossi approved invoice requests for IH to issue for those amounts. IH wasn’t convinced that there was a real power sale; for whatever reason, they didn’t issue those invoices, but the customer had no income, no business, so who would have paid those invoices?

Obviously, Rossi was willing to pay invoices, and it would then have strengthened his case to collect the $89 million. Spending $360,000 to gain $89 million? Lewan has the brain of a cockroach.

(Sorry, cockroaches, you are smarter than that.)

We don’t know anything about the conditions of a power sale. We don’t know how large the container for the reactor is. It must be large enough to protect the reactor from intrusion, and what kind of power source could be inside? We don’t know. This is all speculation, not news. Bottom line, a sale of power could be a fake demonstration of power generation, and, in addition, what if the “customer” is in collusion with Rossi? What would be the goal? Most likely, to gain investment.

Let’s suppose this is a 40 kW reactor. Say that power costs 10 cents/kWh; that’s $4 per hour, $96 per day if it runs 24/7, or about $35,000 per year, if the input power were free. Rossi could easily afford that for a time, and being able to report a satisfied customer (and he could create more than one), how much more investment could he obtain?
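Checking the arithmetic (a sketch using only the figures assumed above: a 40 kW device, electricity at 10 cents/kWh, and continuous 24/7 operation):

```python
# Back-of-envelope running cost for the hypothetical scenario in the
# text: a 40 kW device, electricity at 10 cents/kWh, running 24/7.
power_kw = 40
price_cents_per_kwh = 10

hourly_cost = power_kw * price_cents_per_kwh / 100   # dollars per hour
daily_cost = hourly_cost * 24                        # dollars per day
yearly_cost = daily_cost * 365                       # dollars per year

print(hourly_cost, daily_cost, yearly_cost)   # 4.0 96.0 35040.0
```

Roughly $35,000 per year: a trivial sum next to the investment a “satisfied customer” story could attract.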

(In this scenario, Rossi could smuggle fuel into the reactor, say propane, which would fuel an ordinary water heater. So he could have apparent input power far below the heat output. He would be able to charge 80% of the going rate for heat, so, yes, he would be losing money, but not nearly as much as it might seem. Ponzi scheme!)

Clearly, only when at least one customer, having used the heat from the E-Cat SK for some time, will speak publicly about the service, the moment of truth will arrive.

No. There was “one customer” in Florida, apparently an independent company, with a lawyer representing it. In fact, it was a blind trust; it was not independent, and it did not, contrary to the installation agreement with IH, measure the heat delivered independently. Lewan doesn’t think of the possible problems because he has paid no attention to what actually happened in Florida.

I looked above, and Lewan did hedge his claim. The moment of truth is not January 31. It is rather “the moment of truth is getting close with launch on January 31.” Except this is not a “launch.” With a product launch, the product becomes available. Is a product becoming available?

Once again, Rossi claimed an available product, a “1 MW reactor” in 2011. So was that “close to launch”? Lewan is more like “out to lunch.”

Meanwhile, everything else that I have observed and witnessed during these eight years, including my own measurements on the previous E-Cat versions, and the one-year test of a one megawatt plant in Doral, FL, during which Rossi started developing the E-Cat QX with its electronic/electromagnetic control system, indicates that the E-Cat is a working device, although many would call it An Impossible Invention.

About that “one year test” in Florida: it didn’t work, it was fraud. “Impossible Invention” is totally irrelevant. All the prior tests had glaring defects. Lewan was present for the Hydro Fusion test, which failed, and at which Rossi argued that they were not measuring input power correctly. Lewan argued with him, apparently thinking that this was just an honest mistake. But if Rossi could make that mistake with the Hydro Fusion test, how about with his own? Again and again, basic problems existed with the tests, never resolved because Rossi kept changing the device operation, so a possible artifact in one test could not be verified (or otherwise) in the next.

This is all obvious to many, many observers, so why not to Lewan?

By the way, I would like to share my impression that the groundbreaking control system of the E-Cat QX and the SK, is the result of a kind of dreamteam consisting of the genius Andrea Rossi, with elusive and creative ideas about physics and about what he thinks could be possible, and of electric engineer and computer scientist Fulvio Fabiani, not only being an expert on electronics but also being capable of interpreting Rossi’s wild and hard-to-grasp ideas, transforming them into real electronic circuits actually performing the job Rossi had in mind.

What a flack! Fabiani played a role in Florida, and I’m not going to go over it, but he was in line to lose substantial sums from his professional incompetence. He destroyed evidence belonging to IH.

I will develop this story further in the updated third edition of my book, which I hope to be able to conclude within a year or so, once the moment of truth has arrived.

And when the moment arrives, the E-Cat technology will most probably start providing clean, cheap, abundant, and sustainable energy to everyone in the world, in combination with solar and wind (which are a long way from replacing fossils on their own, and furthermore also require problematic large scale world-wide chemical battery implementations for energy storage).

Until then, the champagne remains on ice. And when I open it, I will be thinking of Sven Kullander and of late Prof. Sergio Focardi who played a fundamental role, helping Rossi to develop the E-Cat technology.

And Lewan has announced (twice, cancelled twice) a New Energy conference, featuring Rossi technology. He has lost all credibility. Here are his announcements:

UPDATE: The New Energy World Symposium was postponed in March 2017, waiting for an upcoming commercial launch of LENR based power. Read more here.

UPDATE 2: An online presentation regarding commercial launch of LENR based power will be held on January 31, 2019. Please get back to this blog for a report shortly.

I’m happy to announce that registration for the New Energy World Symposium is now open, with an Early Bird discount of EUR195 valid until February 17, 2018.

He knows that January 31 is unlikely to be the “moment of truth.” So why is he plowing ahead? (And this one, scheduled for June 2019, was also postponed indefinitely.)


Andrea Rossi today published, on ResearchGate, a “preprint,” E-Cat SK and long range particle interactions. This is a theoretical paper standing on unverifiable experimental results, but it does disclose some data not seen before.  The paper begins:

The E-Cat technology poses a serious and interesting challenge to the conceptual foundations of modern physics.

There is no challenge until there are confirmed experimental results. Previous reports of SK performance were based entirely on RossiSays, with no verification allowed of necessary measurements. The device demonstrated in Stockholm was periodically stimulated with a high voltage, which would strike a plasma, which would then have low resistance. That strike would be relatively high voltage and would input power into the system. No measurements were allowed of the full input power, or, in fact, even of operating power, i.e., both the voltage and current in steady state operation.

This paper gives this description:

5 Experimental Setup

The plausibility of these hypotheses is supported by a series of experiments made with the E-cat SK. The E-cat SK has been put in a position to allow the eye of a spectrometer view exactly the plasma in a dark room: an ohm-meter has measured the resistance across the circuit that gives energy to the E-Cat; the control panel has been connected with an outlet with 220 V, while from the control panel departed the two cables connected with the plasma electrodes; a frequency meter, a laser and a tesla-meter have been connected with the plasma for auxiliary measurements; a Van der Graaf electron accelerator (200 kV) has been used for the examination of the plasma electric charge. Other instruments used in the experimental setup: a voltage generator/modulator; two oscilloscopes, one for the power source and one for monitoring the energy consumed by the E-Cat; Omega thermocouples to measure the delta T of the cooling air; IR thermometer; a frequency generator.

There are no useful details in this. What was the experimental procedure? In what is a plasma created? How is the plasma created? “Energy consumed” is a standard Rossi trope. Energy is not consumed; only if there were an endothermic reaction could we use that language.

The voltage across the device is given as 0.25 volt and the current 3.2 mA. He claims a resistance of 75 ohms. Previously he claimed that the operating resistance was zero. 3.2 mA might maintain a plasma, but would not strike it. Periodically, in the Stockholm demonstration, there was a zapping sound and a flash of light. He was striking the plasma, which would take a far higher voltage. There is no mention of striking a plasma in the paper.
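As a cross-check on the quoted numbers (a sketch using only the figures cited in this paragraph), Ohm's law gives the implied resistance and steady-state input power:

```python
# Sanity check of the quoted plasma figures via Ohm's law: V = I * R.
# Figures from the paper as quoted above: 0.25 V across the device,
# 3.2 mA of current, with a claimed resistance of 75 ohms.
V = 0.25             # volts
I = 3.2e-3           # amperes
R_implied = V / I    # ohms
P_input = V * I      # watts of steady-state input power
print(R_implied, P_input)   # ~78.1 ohms, ~0.8 mW
```

The implied resistance is about 78 ohms, roughly consistent with the claimed 75 ohms, and the implied steady-state input power is under a milliwatt, far below what striking a plasma would require.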

In any case, no confirmed experimental results, no challenge.



Video link, slide PDF, abstract from pre-conference distribution.

The Fleischmann-Pons heat and ancillary effects. What do we know, and why? How might we proceed?

Michael C.H. McKubre
SRI (Retired), New Zealand

After almost 30 years of studying seemingly anomalous nuclear effects in metal hydride systems, what can we say that we have learned with high-level confidence? For some of us it has been a nearly full-time journey; for a rare few, nearly fully-funded. After this time and effort, what is it that we can assert and defend about our new knowledge of nuclear reactions in condensed matter? These questions are subjective, and I will focus my answers on what I have learned by direct experiment and analysis, and from the experience of a few close colleagues – mostly ENEA (Italy), Energetics (Israel), MIT and various Navy Labs around the US.

One must seek scientific truth via correlation. Isolated “facts” are rightly called “anomalies”. These are useful in alerting the world to the presence of potential novelty but are not particularly useful in themselves, and I have resisted characterizing our field as “anomalous”. Anomalies exist to be explained or rejected – in either case forsaken as anomalies. At one point I recommended not accepting papers for presentation at one of our conferences unless more than one variable was measured and a correlation shown between them. In our work at SRI, initially the correlation sought was between excess heat and loading. Under EPRI sponsorship (and gentle duress) we searched diligently for correlation between excess heat and any plausible nuclear product: neutrons, gammas, X-rays or low energy gammas, betas, photo-radiographic evidence of any photons, tritium (indirectly) and (finally) helium-4 and helium-3.

This exercise of seeking multi-correlation is, however, exquisitely painstaking and therefore expensive, requiring the physical presence of experts covering a wide range of specialized knowledge and specialized hardware. In FPHE experiments more subtle difficulties are added by the challenge (or impossibility) of optimizing experiments to satisfy the constraints of: electrochemistry, without which there is insufficient loading and apparently no effect; calorimetry, without which there is no believable effect to correlate; and whichever of the pantheon of potential nuclear products (not ash) that one strives to correlate. Obviously everyone except the experimenter would prefer the search for products in all plausible output channels, in real time, and in situ. But this is experimentally not possible at our present level of investment – and perhaps not at all unless the effect is made larger and triggerable. Gozzi et al [1] have reported a nuclear multi-correlation (X-ray, heat excess and helium-4 in the D/Pd system) but they were cautious in their interpretation and this work was not

How might we proceed rationally? Nearly 30-year-old anomalies should have grown to adult maturity and self-sustainability, or been buried and forgotten. By various factors we have been heavily constrained from pursuing and accomplishing the one thing that would make anomalies go away: correlation, preferably multi-correlation. Correlation is one thing that can rapidly advance our research and cause. Practical reality is another – a working device, even if only a toy, but with net gain and easily observed utility. The search for correlation would be vastly simplified by an ability to trigger the effect on demand, thus permitting phased analysis. A working device demands even more – the capability to control the effect; the ability to turn it on/off, up/down. What indications do we have to encourage us and guide us to tread either of these paths?

[1] D. Gozzi, P. L. Cignini, M. Tomellini, S. Frullani, F. Garibaldi, F. Ghio, M. Jodice and G. M. Urciuoli, Proc. ICCF2, p. 21 (1991).

Well, good morning! It's exactly 4 o'clock in the morning, my time. I will attempt to hold your attention, and mine. The slides move by asking? Next slide, please.
It looks like that worked.
So, Dave gave a fairly flowery and flattering introduction. I just have a little list of my background here. Most of you guys know me; I've been doing this a long time. I've been doing cold fusion since March 23rd, 1989.
I entered the field mostly because I knew Martin. I knew him to be an extraordinarily inventive and imaginative guy, and I think had it been anybody except Martin I wouldn't have bothered to go into the laboratory. The idea was transparently ridiculous. But he was Martin.
I also had better reasons to reject his claims, having worked in the deuterium-palladium system for already a decade at that point. I'd been exploring palladium in heavy water for ten years for an entirely different purpose, and I thought I understood the system. So in 1989, I understood. Deuterium-palladium. Got it.
Nearly 30 years on, I know a lot less now than I knew then. My windows of ignorance have opened wide.
Until two years ago, as Dave said, my effort was essentially full-time on this topic, essentially fully funded, and at least half of my focus for those thirty years has been on cold fusion and related matters. I was sort of head down, buried in a trench, looking at this world in a particular way, and, in a way, retirement to New Zealand has given me an opportunity for reflection.
So what I'm doing, while I'm sitting on my deck overlooking the ocean, drinking beer, is thinking: what did I do? What do I know? What have I been saying to you over the years which is not actually completely true? What things have I convinced myself of, by repetition, that aren't quite as solid as I maintained and believed? I covered a lot of this yesterday [in the Short Course]: a little bit of retrenchment and retraction. I don't back off any of the big claims, but some of the details need a little tweaking. So this reflection has done me some good.
I’m still learning to drive this tool [the slide control]
There you go. Just a reminder, for those that haven't attended every one of these conferences, of the contributions of my group, which is a broad group. I have a slide at the end which acknowledges my important collaborations; it has fifty-nine people on it. Unfortunately, I think 13 of them are no longer with us. It was a large group, a large effort, with a large focus on the things that we knew very well at the beginning. The loading diagnostic, the resistance ratio, was something that I had been using as a tool since 1978, when I arrived at SRI, so I was well poised to use resistance methods to measure the loading of deuterium into palladium.
I have an electrochemistry background, specifically electrochemical kinetics, so I understood the tools and means and tricks that one might need to employ in order to push deuterium as hard as possible into palladium.
I had no background in calorimetry. I had no intention of having any background in calorimetry, or desire, but if you're looking for heat, you've got to use calorimetry. So we basically taught ourselves calorimetry, and pretty much dragged mass-flow calorimetry kicking and screaming into the 20th century, and I mean the 20th century.
We've done work on the observation of tritium, both directly as tritium and as its decay product, helium-3, and a fair amount of work with helium-4 measurements. In the early days, with EPRI sponsorship, the charge was to examine all possible nuclear products simultaneously. That sounds scientifically reasonable, but it is practically impossible and hideously expensive, and the biggest problem is: to optimize one detection method, you always compromise another. So you have to be quite selective in what it is you set out to measure. But every one of the experiments we did was continuously monitored for neutrons, for radiation health purposes, and for gammas and other things. So we've done a lot. That actually has nothing to do with my talk; I just wanted to, you know, boast a little.
So, all that time, and all that work... I think it's time for condensed matter nuclear science, cold fusion, to be anomalous no more. I want it to be normal, interesting, physical science. I don't want to curb and hold my tongue in meetings of learned physicists. I want them to engage in dialogue, and until such time as we can engage in free and open dialogue with the broader scientific community, I don't think we're going to make very much progress.
But if the name of condensed matter nuclear science is accurate and meaningful, we have the possibility to wield the power of nuclear physics on a tabletop. Heat is a poor product; it's the worst product. What else might one be able to do if one can wield some, or more, of the power of nuclear physics, directly under your control, in a tabletop experiment?
Everyone here wants this to happen; you've self-selected. The reason you're in this room is that you're curious, and you're curious because you would like it to happen.
Okay: in the face of cognitive dissonance (and Tom spoke some about this), I love this quote from Irwin Shapiro. He's a Harvard astrophysicist, and he meant it ironically, of course: "the best explanation for the Moon is observational error; the Moon doesn't exist." It's a much more comfortable situation: rather than explain the anomalies of its situation, spin, geology, density, whatever, we could just say, just pretend, it doesn't exist. Unfortunately, or fortunately, almost every night it proves and reproves itself. So this hypothesis is repeatedly demonstrated to be unsound. But so, equally, is the notion that cold fusion doesn't exist. It repeats itself, almost daily, in one or other laboratories around the world. So we can't just shy away from the things that we've seen.
Quite honestly, it wasn't until I saw the effect with my own eyes, in my own laboratory, under my own control, that I was prepared to accept that this was a real thing. Despite the fact that my teacher and good friend, Martin Fleischmann, one of the brightest people I ever engaged with, had told me it was true, I still needed to prove it for myself. And I think that's fair. That's fair, but what also is fair is that if you're going to reject the notion of cold fusion, you have to dig into the literature. Read it, understand it, and make sure that you go back to primary sources.
So, what about pathways to break through this logjam? My five "tions": verification, correlation, replication, demonstration, utilization. These don't have to be sequential. I submit to you that the first, verification, has already happened. It happened in 1991, 1992. I'll give you some backup for that claim in a following slide.
At ICCF-20, which I didn't attend (very, very sad not to have attended; it was the first one I didn't attend, but Matt's terrific), Matt, in my name, charged the community with a set of what I saw as the problems that we were having.
Self-censorship: we seemed to be a little timid. I wonder if we're not concealing and hiding our research for fear of disturbing angry large people who might cause us harm, financially or otherwise. Are we guarding our secrets for fear that others might take credit? And yes, the answer is yes. Every one of you researchers is holding something back, because you don't want to give it up. You don't want somebody else to take it and run with it faster than you're running with it. But anyone who believes that we're working on a problem that has the potential to relieve mankind's primary problem (our primary problem being where we are going to get our energy from in the future, preferably without further messing up the planet): if you believe that, you must collaborate, cooperate, communicate. Now, I've spent a career in confidential research. I understand the problem, but I'm here now, free to do so, to exhort a much higher level of collaboration and cooperation and communication.
We communicate poorly. Our publication record is not very good, and not very well codified. I want to call attention to two people who I think have done a very large amount to reverse that, and I see them right here in front of me. Jed Rothwell: his website is a marvelous resource. When I left SRI, I threw away six dumpsters' worth of stuff. My library in New Zealand has not even been built, so I have no access to a lot of the material. I can go on Jed's site and pull down my own papers, which is just a wonderful, wonderful tool. And Jean-Paul [Biberian]'s stewardship of the journal, which is apparently getting increasing submissions, and the quality is increasing. Thank you, Jean-Paul. But our communication record is bad.
This failure is both accidental and deliberate. We are concealing; you are concealing; I am concealing, in fact. I am concealing things from you, because I've signed my life away on a piece of paper. Now that I'm not at SRI, I don't have institutional lawyers to back me up, so I have to be even more careful about whose secrets I don't give away.
But there has also been an enormous amount of repetition, and I hit on this yesterday. We've discovered the same thing several times over, sometimes because it's been concealed, but sometimes because it wasn't read. It was stated orally in presentations, in conference proceedings, and in journal publications, and several things which I knew very well, and spoke about in 1990, have been rediscovered two or three times since by research groups, at huge expense of effort and labor and time and skill. The literature is not in the best possible shape, but there is a literature, and there is much good work in it. It really needs to be studied better.
So what Matt said (I'll blame it on Matt) at ICCF-20 was a grand challenge, and what I said, through Matt, was that I would come here and rank the community on its success against these objectives. The numbers on the right in blue are my rankings, out of ten, of how well we've achieved these various things.
Produce fresh experimental results of non-chemical anomalies: I'm embarrassed, frequently, to give presentations and go back to data that I measured in 1990, 1991. There was a huge flux of new information that came out then, but fresh things are not really happening at the rate that I would like to see. I don't want to use me as a source; I want to use you guys, and your research, as the source of moving forwards. New anomalies need to be seen, observed, measured, and replicated in multiple laboratories. These results need to be communicated clearly, in technical articles. JCMNS is fine; if we can do it in other places, that's good too. And I would like to see us, as a group, as a community, identify the three or four best experiments, organize some sort of multi-laboratory effort to replicate them, and write some papers, peer reviewed.
But as you see on my scale here, only communication sort of gets a four; everything else is one, two, or zero. We have not done a good job of any of these things. Now, it's my challenge, so it's not up to you to do it; it's up to me to somehow stimulate this. But we just didn't do a very good job against the challenges that Matt set, or that I set through Matt.
How are we doing on these "tions": verification, correlation, replication, and the rest? Verification, I'd say, is done. In preparing this I went back to a review that Jed Rothwell had written in 1996 about the first EPRI report, which was published in 1994, of work that was all completed by 1992. I'll read Jed's words (these are Jed's words, not mine): "EPRI and SRI followed the rules. They've done what scientists are supposed to do: they published impeccable" (thank you, Jed) "utterly convincing research in top-ranking peer-reviewed journals. Then they stood quietly aside, politely waiting for applause and recognition" (Jed got that wrong; I didn't stand politely aside) "and by now it's obvious that it will never come. McKubre feels that cold fusion will only attract attention when it can be demonstrated as a viable technology." Those words in red came as a shock to me, because I've started saying that again. I didn't realize I was so smart in 1996. Smart and sad, actually: this shouldn't be the way that science progresses. But I think it has to be.
The last half of the talk is about how we might do this. Jed also quotes me, from a radio interview; these are my words: people's attention needs to be grabbed by something that's simple, unarguable, concrete, and rugged, and it has to be simple enough to explain to the average person, or the average politician, who's at a slightly lower level [laughter]. And it really has to be a lot more robust than anything that we've generated so far. So I recognized where we needed to go, and that we weren't there. Now I'm going to try and take you on a pathway that might allow us to get there.
We covered verification. I say it's done; Jed says it's done. Jed says that the SRI-EPRI work provided the verification in 1992. Mel Miles' heat-helium correlation did that job too. That verified the nuclear basis of a heat effect. All done, 1992. Here we are, twenty-something years later, and we're still talking about it.
You can see scientific truth through correlation. Here I'm quoting Abd Lomax, in the issue of Current Science that Srinivasan and Meulenberg very nicely put together. Abd focuses attention on correlation as having the power to overcome potential systematic errors in single measurements. If measurements are correlated, then this possibility of systematic error becomes much, much less.
Quoting Abd now: cold fusion effects have often been called 'unreliable,' even by those convinced of their reality. In this community, as close as it is, and filled with as nice people as it is, we still find charges: "my work isn't unreliable; my work could be heat; your work is unreliable. I don't believe the tritium," or "I don't believe the helium." But that's good; that's a good thing, as long as it has a scientific basis.
Abd again: the chaotic nature of material conditions so far has made ordinary reliability elusive. The Fleischmann-Pons Heat Effect produces more than one effect, two being heat and helium. In 1991 Mel measured both, found they were correlated, and this was replicated well. The statistical chance of Mel's correlation between heat observation and helium observation having arisen stochastically is nearly a million to one. The insanity of the common scientific response to cold fusion is that the evidence of reality was discovered by Miles in 1991, recognized as significant by Huizenga in 1993, and confirmed by multiple research groups. These are Abd's words. As pointed out there, while there is an i to dot and a t to cross, the preponderance of evidence is now very much that cold fusion is real; the evidence of heat and helium has passed beyond a reasonable doubt.
So verification and correlation: we're good with those "tions."
How about the rest? To make progress, I claim that one of two things needs to be done, preferably both. We need an unmistakable and irrefutable scientific proof that nuclear effects take place in condensed matter, by means, at rates, and with products different from similar reactions occurring in free space. That is, condensed matter nuclear science is a real thing. The fact that it occurs in a lattice is significant, as Julian Schwinger, the prince of condensed matter nuclear physics, said very, very early on, in 1989.
Theory is going to help us here, and Peter Hagelstein presented interesting new concepts. Yes, I've been listening to Peter so long that I have his voice in my head at night; I can see his equations. Theory is coming along. As I've often said, the problem with theory isn't the theorists, it's the experimentalists: we haven't given the theorists sufficient ammunition to fuel their imagination.
Point 2: demonstration must be made of a practical use of the energy created. When I wrote those words, I didn't realize that I'd already said them in 1996.
So how do we do this demonstration and utilization? What is the problem today? Scientific proof without practical reality hasn't worked to convince the world. I've made the argument that we have the scientific proof: we have heat and helium, correlated. We have tritium, and the tritium evidence came to us already pre-replicated. The work of Claytor, Storms, Srinivasan, and Bockris, four highly talented individuals working in highly competent research groups, told us that these experiments sometimes produce tritium. A single atom of tritium produced in any of our experiments is proof that CMNS is real. That didn't work.
Our approach to replication has been poor. At ICCF-14 (you will remember when that was, but it was a while ago) I made the claim, well, the charge: if the claim is made that replication is crucial to the development of our field, to determine the parameters for advancement, to prove reality to critics, or to uncover systematic error, then it is astonishing that attempts to replicate the Fleischmann-Pons Effect have been so few, and methodologically so limited. This lack of attention to detail is precisely the reason that replicability remains on the table. Our efforts at replication have been shocking. We're all so fertile in our experimental imagination that, given the chance to replicate, we can't avoid the impulse to improve. There have been very few legitimate, honest, direct engineering replications of any of the effects that we've talked about. One that was done (again, Jean-Paul was involved, with Lonchampt) was a direct engineering replication of the Fleischmann-Pons high-temperature heat-producing experiments. The conclusion of the authors of the paper that came out of that replication was that Martin was right. What he said is right; it's true, it's there, it's real.
I've already spoken about our publication record. Basically, it's not easy for somebody who wants to enter the field to go to any one place and find a set of papers that they should read in order to prepare themselves to enter the field experimentally. It needs to be better codified, and I think in time it will be. To my knowledge, there is no written replicable procedure or protocol. Nobody has written down on a piece of paper a procedure that, if followed, will always work.
So I want to demonstrate. I want to do something simple: a steam engine connected to a generator, the generator feeding the dumbed-down steam engine, providing its ignition, initiation, and control; make the thing a closed loop, and have some power left over to do something. The talisman that we create for that purpose has got to work on two levels. It must be sufficiently simple and obvious that no hidden error can possibly exist to negate the result, and we're going to have to communicate it to people who are going to be bright, possibly technical or somewhat technical, but not specialists. The individual that we're trying to communicate with will hire himself a physics professor to do due diligence as a consultant, and I've been a consultant. You cannot make mistakes as a consultant. You can't make an error; if you're known to be a bumbler, you don't get any more consulting jobs. So the physics professor is going to find some fault, some reason it's not true, or some reason to hedge his statements.
We had an experience of that at EPRI. The work at SRI: we were confident of it, happy with it, but realized we weren't breaking through, so we brought in two senior people, Dick Garwin and Nate Lewis, to review the work at SRI, to kick the experiments, poke them, find out what the problem was, and write a report thereafter. We did this twice, actually; the other group I called the four B's, and Allen Bard was involved. Basically, these guys come in and they write a report. We let them see everything: experiments, data, direct access to whatever they wanted. The reports always say the same: these are good people doing good experiments; we couldn't find any error, but it's a complicated experiment and there might be one. That's actually worse than having no report.
And I say that we're going to need to make the energy... we should (this is just a strategy here)... the energy produced must be sufficiently net positive that useful work can be made of it. What chances do we have of doing that? What would we need to do? What are the characteristics of a prototype? What would be the power level? What temperature? What gain: output power over input power, output energy over input energy? At this stage we're not concerned primarily with cost; I don't care how much it costs. Safety... I shouldn't say that... but we can put it behind whatever barrier you want. I know we're not going to let this loose on the civilian population, so we can maintain safety in the laboratory. I don't care if it's practical, and I don't care if it's reliable. I'd like it to do what I say it's going to do more than half the time; 50% would be enough. It doesn't have to be beautiful or elegant (that's always good), but this is not something that we need to concern ourselves with right now.
There are some candidate systems with which I'm familiar. I'm sorry for the small words. There are three candidate systems that I have direct personal experience with (the second one, not so much), but the electrochemical Pd-D, LiOD experiment is the one where I have 90 percent of my familiarity. And this is, you know, me looking for my keys under the street light. This is my street light, the electrochemical Pd-LiOD experiment. Jean-Paul said yesterday that electrochemistry is too hard, and I agree: we can't rely on electrochemistry to take us to an object that we can unleash on a civilian population; it's too hard. It is too hard; he's right. But the good news about the electrochemical experiment is that it has already demonstrated all of the characteristics that we need for my prototype. It's already been done. It was done by Energetics Technologies, ETI, in their famous experiment ETI-64, which I've spoken about very many times. I think Dave put the slide up yesterday: 40 kilojoules of input, 1.14 megajoules of output, a gain of 27.5 (and that's an important number), and it boiled water. So we've got 100 degrees C out of it. It's got some bad things associated with it: it was only demonstrated once or twice, it hasn't been replicated for the last 10 years, and there's this reliance on dirty, dangerous electrochemistry...
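As a quick sanity check on those ETI-64 figures, here is a minimal sketch. One assumption in it is mine, not from the talk: the quoted gain of 27.5 is read as net gain, the excess energy over the input, since the raw output-to-input ratio of the quoted numbers comes out at 28.5.

```python
# Sanity check of the ETI-64 figures quoted above (40 kJ in, 1.14 MJ out).
# Assumption (mine): the quoted "gain of 27.5" is net gain, (E_out - E_in) / E_in.
e_in = 40e3      # input energy, joules (40 kJ)
e_out = 1.14e6   # output energy, joules (1.14 MJ)

ratio = e_out / e_in              # output over input
net_gain = (e_out - e_in) / e_in  # excess energy over input

print(ratio, net_gain)  # 28.5 27.5
```

Either reading makes the same point: the output energy was more than an order of magnitude above the input, which is what matters for a closed-loop demonstrator.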
There are other systems. I'm not going to go through them (time is ticking down), but you could do this in the metal-hydrogen gas system. My guess is that if this thing ever becomes widespread and large-scale it will be a metal-gas system, and there are very good reasons for that: cost being one of them, materials degradation being another. But there's a little bit of a pall over the nickel-hydrogen stuff right now. I'm not convinced either way; I don't know, because I haven't seen the effect in my own laboratory with my own eyes, and therefore I don't immediately believe, falling into the category of people that I criticized before. But I'm just going to focus on this first one.
How are we gonna do that, if the charge is a demonstration prototype? For the electrochemical system: how much heat do we need? Well, we're like all simians: we touch and see things we believe to be real. You need to be able to see it or feel it. Numbers are not good enough, especially where heat is concerned, because heat is ephemeral unless you can do work with it. I say that one watt is too small to convince, and 100 watts is too hard, at least for the electrochemistry. The argument for that was persuasively made by Ed Storms in his first book. He reviewed 242 successful heat-producing experiments, and you can see the histogram of his results. His original plot is the bar graph, and I've superimposed some exponential curves on it just to show how the trends go. Basically, half of the successful experiments were less than 1.25 watts. These are too small. As you go up, there's some power in the tail. Between 1.25 and 2.5 watts (which I think is still too small) there are thirty-five experiments. But greater than 10 watts (and I'm gonna say 10 watts is enough) there are 40 experiments, 15 of them electrochemical: 12% of the electrochemical experiments fall into this region greater than 10 watts.
So it has been done; we might have a shot at producing a 10-watt demonstrator. But there's the savage beast of Sadi Carnot that comes to beat us down. If we had 10 watts of heat, how much could we demonstrate? How much power could we put into the system to maintain that operation, and how much could we get out for show? The blue solid line here is the Carnot efficiency with a thermal rejection temperature of 20 degrees, so by the time you get to 500 degrees your efficiency is 60-odd percent. But remember, the Carnot limit is a limit that can't be reached; it can't even be closely approached, and it's harder to approach as you get lower in temperature, so we're going to have to beat that limit by a lot. I've plotted some points on here. The dotted curves are various calculations based upon the gain: how much power we can use for show is a function of how much power we need to keep the beast running. If we had ETI-64 (you can find it; the point up there on the hundred-degree line is the ETI-64 result), if we had that result, pushed it through a thermal heat-to-electrical conversion, and fed back the electricity needed to run the experiment, we would have five watts left over for our demonstrator.
Dennis Cravens could use it to charge up his electrified Model A, or Dave or Steve could use it to power their cell phones. At five watts, demonstrating a flashing light or ringing a bell would be convincing. It's a toy, but it would be convincing. Down at the bottom is the other ETI experiment, a glow discharge experiment. I've put it at a hundred degrees, but it's a plasma, so that thing could actually be sitting up at 500 degrees, and that efficiency would then be more than 50%; so you get four watts out of that. Either of the ETI experiments would serve the purposes of my demonstrator. My best results at SRI are the green points, which sort of fall on the line,
and I split them out here. If you plot gain against how much power you're going to get at a given thermal efficiency, the ETI system, the star performer, is up here at five hundred percent. The SRI cells, the best of them, the ones I was most proud of, fall below the line. All of them needed more power to operate than a thermal-to-electric conversion would supply, even at the Carnot limit, which you can't achieve. So, to be kind to SRI and me: our experiments were not intended to be demonstrators, but they could not have performed as such.
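The bookkeeping behind this slide can be sketched in a few lines. This is my reconstruction, not the actual slide calculation: the helper names `carnot_efficiency` and `net_demo_power` are hypothetical, the 20 °C rejection temperature is taken from the talk, and the definition of gain as output heat power over input electrical power is my assumption.

```python
# Sketch of the demonstrator power budget discussed above (assumptions noted).

def carnot_efficiency(t_hot_c, t_cold_c=20.0):
    """Ideal Carnot efficiency between reservoirs given in degrees Celsius."""
    t_hot = t_hot_c + 273.15    # convert to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

def net_demo_power(p_heat_w, gain, t_hot_c, t_cold_c=20.0):
    """Electrical power left over 'for show' at the (unreachable) Carnot limit.

    Assumes gain = output heat power / input electrical power, so sustaining
    p_heat_w of heat requires feeding back p_heat_w / gain of electricity.
    """
    p_electric = carnot_efficiency(t_hot_c, t_cold_c) * p_heat_w
    p_feedback = p_heat_w / gain
    return p_electric - p_feedback

# At 500 C the ideal efficiency is indeed "60-odd percent":
print(round(100 * carnot_efficiency(500)))          # 62

# A hypothetical 10 W demonstrator at 100 C with an ETI-64-like gain:
print(round(net_demo_power(10.0, 28.5, 100.0), 2))  # 1.79
```

With these illustrative numbers a 10 W cell at 100 °C nets under 2 W (the quoted 5 W for ETI-64 reflects that experiment's own power level); the point of the sketch is only the structure of the calculation: gain fights the feedback term, while Carnot caps the conversion term.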
39:52 conclusions.
40:00 reasonable pathways exist to explore or 40:04 exploit the Fleischmann Pons Heat 40:07 Effect.
the pathways have got to involve 40:10 these five “tions:: verification, 40:13 correlation, replication, demonstration, 40:17 utilization.
for wide acceptance, 40:22 demonstration must be made of a 40:25 practical use of this energy. this is my 40:28 assertion. just me saying this, this might 40:31 not be true, there may be other pathways, 40:32 but since I said it in 1996 and forgot 40:36 it, and so I’m saying it again now, it might 40:37 be true. I thought of it 40:39 twice.
we need to identify a 40:43 demonstration prototype.
electrochemical 40:47 methods with which I am most familiar 40:50 could work for this demonstrator. they 40:55 have worked, rarely but if we could 40:58 uncover the circumstances and conditions 41:00 under which these experiments did work, 41:02 and do them better, we could do it by 41:06 electrochemistry. I said before, and I 41:08 agree with Jean-Paul I doubt that 41:12 an electrochemical system can be made 41:16 widely available to the public it’s too 41:19 challenging, it’s it’s too hair-trigger 41:22 on small concentrations of impurities, 41:26 and electrochemistry is a tough game. 41:29 there are very few people that have 41:31 succeeded who weren’t well-trained 41:34 electrochemists, a few who are very 41:35 lucky. Dennis 41:38 Cravens is not an electrochemist but 41:40 at ICCF-4 in Maui in 1993, at the 41:47 end of the session, Martin Fleischmann 41:49 got up and complimented Dennis Cravens 41:53 for his understanding of the system. What 41:56 Dennis had figured out what you needed 41:58 to do, how you needed to do it, even why 42:00 you need to do it. he’s 42:02 not an electrochemist, but he’s a great 42:04 tinkerer, he asks great questions, 42:06 annoyingly sometimes.
and you know you 42:13 got to be lucky too. but you’ve got to try, 42:15 to be lucky. operating temperature 42:19 is important. high gain is 42:24 crucial, and gain is more easily affected 42:29 in the denominator than the numerator 42:31 we’re going to need a certain amount of 42:33 power out, but we want to do everything 42:36 we can to make the power in as small as 42:38 possible, so we can get high gain. its 42:40 gain that’s going to allow us to beat 42:42 Carnot. so our goal is to create the heat 42:46 effect at low input power.
and that’s 42:51 what I have thank you
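McKubre's closing argument, that gain is what lets a demonstrator beat Carnot, can be put into a few lines. The point is that a self-running demonstrator must convert some output heat back into input power, so a converter with efficiency eta requires gain of at least 1/eta. A minimal sketch, assuming the converter runs at the (unreachable) Carnot limit; the 373 K / 293 K operating temperatures are my illustration, not values from the talk:

```python
# Minimum gain needed for a self-sustaining demonstrator:
# a converter with efficiency eta turns P_out back into electricity,
# and self-sustainment requires eta * P_out >= P_in, i.e. gain >= 1/eta.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on heat-to-work conversion efficiency."""
    return 1.0 - t_cold_k / t_hot_k

def min_gain_to_close_loop(t_hot_k: float, t_cold_k: float) -> float:
    """Gain (P_out / P_in) required even with an ideal Carnot converter."""
    return 1.0 / carnot_efficiency(t_hot_k, t_cold_k)

# Cell near boiling (~373 K), room at ~293 K (illustrative temperatures):
g = min_gain_to_close_loop(373.0, 293.0)
print(f"minimum gain at Carnot limit: {g:.1f}")   # about 4.7
```

At 500% gain the ETI-style systems in the slides clear this bar; real converters run well below Carnot, which is why the talk stresses shrinking the input power rather than raising the output.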

List of Slides with links:

Slide1 Technical Perspective
Slide2 Preface, Background
Slide3 Technical contributions, Funding
Slide4 for CMNS to be “anomalous” no more, how do we make it happen?
Slide5 In the face of entrenched cognitive dissonance  . . .
Slide6 Verification, Correlation, Replication, Demonstration, Utilization
Slide7 ICCF-20 noted self-censorship, poor communication
Slide8 Grand Challenge, largely unmet
Slide9 Verification, quote Jed Rothwell
Slide10 Correlation, quote Abd ul-Rahman Lomax
Slide11 To make progress . . .
Slide12 What is the problem today?
Slide13 Demonstration strategy
Slide14 Key Characteristics of Prototype
Slide15 Some Candidate Systems
Slide16 How much heat is needed to convince a non-expert?
Slide17 We need to beat Carnot . . . by a lot. . .
Slide18 Gain is the key. The key to gain is reduced PIn.
Slide19 Conclusions
Slide20 Grateful acknowledgment to my esteemed colleagues

McKubre and Staker (2018)

Subpage of SAV

This page shows a draft PowerPoint presentation delivered at IWAHLM, Greccio, Italy, on or about October 6, 2018, by Michael McKubre, co-authored with Michael Staker, who presented a paper on SAVs and excess heat at ICCF-21 (abstract, mp3 of talk; proceedings forthcoming in JCMNS) (Loyola professor page, links to resume).

A preprint of Staker’s ICCF-21 presentation: Coupled Calorimetry and Resistivity Measurements, in Conjunction with an Emended and More Complete Phase Diagram of the Palladium – Isotopic Hydrogen System

The last McKubre-Staker version before presentation. If one wants a searchable and copiable version, that would be it. I have posted images of the slides here.

Slide 1

This probably means “Nuclear Active Environment (NAE) is formed in Super Abundant Vacancies (SAV), which may be created with Severe Plastic Deformation (SPD), and then Deuterium (D) added.”

Semantically, I suggest, assuming the evidence presented here is not misleading, the NAE may be SAV even when there is no D. That is, as an analogy, a gas burner is a burner even if there is no gas burning. But that teaser title has the advantage of being succinct.

The photos show, at ICCF-15 (2009), David Nagel, Martin Fleischmann, and Michael McKubre, with Ed Storms in the background, and, at ICCF-2 (1991), Martin and a much younger Michael Staker, a remarkable photo for that far back. Staker has no prior publications re LENR that have attained much notice. He gave a lecture on cold fusion in 2014, but the paper for that lecture does not really address the question posed; it merely repeats some experimental results and his conclusions re SAVs, which are now catching on.

As I link above, he presented at ICCF-21 this year. I was impressed. I think I was not the only one.

Slide 2
Slide 3

I want to hang from each of those directions a little sign reading "OPPORTUNITY." Sometimes we think the path to success is to avoid errors. Yet the "BREAKTHROUGH" sign is somehow missing from most signposts, except signs put up by people selling us something. How could it be there, actually? If we knew what would lead us to the breakthrough, we wouldn't need signs and it would not be a "breakthrough."

Rather, signs are indications and by following indications, more of reality is revealed. If we pay attention, there is no failure, failure only exists when we stop travelling, declaring we have tried “everything.” I’m amazed when people say that. Over how many lifetimes?

These questions are the questions McKubre has been raising, supporting the development of research focus.

Slide 4

The whole book (506 pages) is Britz Fukai2005. (Anyone seriously interested in researching LENR and the history of the field, contact me for research library access. Anonymous comments may be left on this page, or any CFC page with comments enabled (sometimes I forget to do that), but a real email should be used, and I can then contact you. Email addresses will not be published.)

Slide 5

It is a bit misleading to call the positions of the deuterium atoms “vacancies.” They are not vacant and will only be vacant if the deuterium is removed. The language has caused some confusion.

Slide 6

Nazarov et al (2014).
Isaeva et al (2011) and Copy.
Related paper: Houari et al (arXiv, 2014)

Slide 7
Slide 8
Slide 9

Tripodi et al (2000). Britz P.Trip2000. There is a related paper, Tripodi et al (2009) author copy on

Slide 10

Document not in proceedings of IWAHLM-8. Not mentioned in bibliography.
Abstract. Copy of slides on ResearchGate. 

Slide 11
Slide 12
Slide 13

Arakai et al (2004)

Slide 14
Slide 15

Strain uses time to create effects. The prevention is rate, not time. The metastability of the Beta phase could be better explored.

If the Fukai phases are preferred, I would think that under favorable codeposition conditions, they would be the structures formed. I'd think this would take a balance of Pd concentration in the electrolyte, and electrolytic current. Some codep is not actually codep; it deposits the palladium first, then loads it by raising the voltage above the voltage necessary to evolve deuterium. Is this correct? This plating/loading might still work to a degree if the palladium remains relatively mobile.

Slide 16

Of all these, true co-dep seems the most promising to me. But whatever works, works. I think co-dep at higher initial currents may have an adhesion problem.

Slide 17
Slide 18
Slide 19
Slide 20

Information on the Toulouse meeting used to be on the iscmns site. As with many such pages, it has disappeared; it now displays an access-forbidden message. From the internet archive, the paper was on the program. There would have been an abstract here, but that page was never captured. This paper never made it into the Proceedings. I found related papers by the authors about severe plastic deformation with metal hydrides by searching Google Scholar for "fruchart skryabina".

Slide 21
Slide 22
Slide 23

Yes, Slide 23 duplicates Slide 1

Slide 24
Slide 25

Color me skeptical that the nuclear active configuration is linear. However, it is reasonable that a linear configuration might be more possible and more stable in SAV sites, as pointed out. Among other implications, SAV theory suggests reviewing codeposition. In particular, “codeposition” that started by plating palladium at a voltage too low to generate deuterium was not really codep. The original codep was a fast protocol, the claim was immediate heat. That makes sense if Fukai phases are being formed. Longer experiments may gunk it up.

This is going to be fun.

Slide 26

So many in the field have passed and are passing. As well, some substantial part of the work is disappearing, not being curated, as if it doesn’t matter.

Perhaps our ordinary state is inadequate to create the transformation we need, and we must be subjected to severe plastic deformation in order to open up enough to allow the magic to happen.

What occurs to me out of this is to explore codeposition more carefully. It's a cheap technique, within fairly easy reach. It is possible that systematic control of codep conditions may reveal windows of opportunity that have been overlooked. There is much work to do, and the problem is not shortage of funding, it is shortage of will, which may boil down to lack of community, i.e., collaboration, coordination, cooperation. Research that is done collaboratively, or at least following the same protocols, can lead to significant correlations.


Subpage of Fleischmann

Britz Flei1990. Copy of paper on


It is shown that accurate values of the rates of enthalpy generation in the electrolysis of light
and heavy water can be obtained from measurements in simple, single compartment Dewar type
calorimeter cells. This precise evaluation of the rate of enthalpy generation relies on the nonlinear
regression fitting of the “black-box” model of the calorimeter to an extensive set of
temperature time measurements. The method of data analysis gives a systematic underestimate
of the enthalpy output and, in consequence, a slightly negative excess rate of enthalpy generation
for an extensive set of blank experiments using both light and heavy water. By contrast, the
electrolysis of heavy water at palladium electrodes shows a positive excess rate of enthalpy
generation; this rate increases markedly with current density, reaching values of approximately
100 W cm-3 at approximately 1 A cm-2. It is also shown that prolonged polarization of palladium
cathodes in heavy water leads to bursts in the rate of enthalpy generation; the thermal output of
the cells exceeds the enthalpy input (or the total energy input) to the cells by factors in excess of
40 during these bursts. The total specific energy output during the bursts as well as the total
specific energy output of fully charged electrodes subjected to prolonged polarization (5-50 MJ
cm-3) is 10^2 - 10^3 times larger than the enthalpy of reaction of chemical processes.

This paper was intended to be the full monty, the earlier paper Britz Flei1989a being a preliminary note. By this time they knew what a firestorm of critique had been raised. It would be crucial that this paper be bulletproof, as to what it confidently claims, and that any speculations or weaker inferences be stated as such, if at all.

Fleischmann and Pons were suffering from a disability: they had seen the aftermath of a meltdown, probably in late 1984. They had no possible chemical explanation for the extremity of that meltdown, so they were convinced that nuclear-level heat was possible, and they treated that as a fact. But almost nobody else witnessed that meltdown; they appear to have actively concealed it. They published little about it, beyond stating the size of the cathode (1 cm3), nor has there been any report that they kept the materials, what was left of the cathode being the most crucial, along with fragments from the incident. They did not report whether the power supply, when they discovered the meltdown, was on or off, and, in particular, what current it was set to deliver, assuming constant current. It has only been stated (Beaudette, Excess Heat, 2nd edition, 2002, p. 35) that they had raised the current to 1.5 A, and that Pons' son had been sent to turn it off for the night.

1.5 A, for a 1 cm cube, would be about 250 mA cm-2 (1.5 A over the six faces, 6 cm2). In fact, because palladium expands when loaded, by a variable amount depending on exact material conditions, it would be a somewhat lower density than that. Later, their experiments, with substantially smaller cathodes (Morrison calls them "specks," which was misleading polemic), used a current density as high as "1024 mA cm-2."

(The implied precision of that figure was overstated; it was purely nominal, obviously based on a series of experiments that set current so that the calculated density would be in powers of two. What was actually controlled was current (or, under some conditions, voltage), not current density.)
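The current-density arithmetic above is easy to reproduce. A sketch; the 10% loading expansion is my illustrative assumption (the actual expansion depends on material and loading conditions), not a measured value:

```python
# Nominal current density for a cube cathode: total current divided by
# the geometric surface area (6 faces for a cube).

def cube_current_density(current_a: float, side_cm: float) -> float:
    """Nominal current density in mA/cm^2 over all six faces."""
    area_cm2 = 6.0 * side_cm ** 2
    return current_a / area_cm2 * 1000.0

j = cube_current_density(1.5, 1.0)
print(f"nominal: {j:.0f} mA/cm^2")   # 250

# Loaded palladium expands; assume (illustratively) 10% volume expansion.
# Linear dimensions scale by 1.1**(1/3), so area scales by 1.1**(2/3),
# and the true current density is somewhat lower than nominal:
expansion = 1.10
j_loaded = j / expansion ** (2.0 / 3.0)
print(f"after expansion: {j_loaded:.0f} mA/cm^2")
```

The same geometric correction applies, in the other direction, to the "1024 mA cm-2" nominal figure: actual density was somewhat lower than calculated from the unloaded dimensions.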

The precision and accuracy of the Fleischmann-Pons calorimetry is still debated. Toward studying this, I have extracted the experimental results found in the subject paper. There is a plot of results on page 26 of the preprint (page 319 as published):

Fig. 12. Log-log plot (excess enthalpy vs. current density) of the data in Tables 3 and A6.1.

And then I used to convert, in a flash, Tables 3 and A6.1 (preprint pages 19 and 52) to Excel spreadsheets, which can be opened by many spreadsheet programs. On my iPhone, they immediately opened as spreadsheets. There are some errors to be cleaned up, but the data looks good.

Table 3 and the text of the page: 19_Fleischmancalorimetr.xlsx
Table A6.1 and the text of the page: 52_Fleischmancalorimetr.xlsx

Enjoy! (To be continued . . . I will clean up the spreadsheets and create some plots.)
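Once the spreadsheets are cleaned up, reproducing the Fig. 12 log-log plot amounts to fitting excess enthalpy against current density as a power law. A sketch of that analysis step, with synthetic data standing in for the tables; the commented-out read_excel line and any column layout are placeholders, not the actual spreadsheet structure:

```python
# Fit a power law P_excess = a * j**b by linear regression in log-log
# space, which is the relationship Fig. 12 of the paper plots.
import numpy as np

def fit_power_law(j, p):
    """Return (a, b) for p = a * j**b, fitted in log-log space."""
    b, log_a = np.polyfit(np.log(j), np.log(p), 1)
    return np.exp(log_a), b

# With the real data one would load the extracted tables, e.g.:
#   import pandas as pd
#   df = pd.read_excel("52_Fleischmancalorimetr.xlsx")  # columns TBD
# Synthetic stand-in: excess power growing as the square of current density
j = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])   # mA/cm^2
p = 1e-4 * j ** 2                                    # illustrative excess power

a, b = fit_power_law(j, p)
print(f"exponent b = {b:.2f}")   # 2.00 for this synthetic data
```

A slope near 2 on the log-log plot would correspond to excess power scaling with the square of current density; what the real tables show is exactly what the cleaned-up spreadsheets should let anyone check.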



Subpage of Kowalski/cf, recovered from archive

411) Messages from the CMNS list (December 2012) 

Ludwik Kowalski; 12/17/2012

Department of Mathematical Sciences
Montclair State University, Montclair, NJ, USA

Some of you might be interested in the following messages from the private discussion list for CMNS researchers. They were posted in the first week of December 2012.


1) Posted by X1:


… X2 Ludwik Kowalski suggests that some of our distinguished CMNS scientists are in a way accomplices of Rossi’s scam. …  [I am certainly not one of them; my critical comments on Rossi’s claims can be seen at:]


2) Posted by X3:

I have not yet received a response from X2.  Regarding my wager, I am confident that commercial hot fusion energy will not happen in my lifetime despite hearing this promise of abundant energy for as long as I can remember.


3) X6 :

I also do not expect to live long enough to see commercial applications. But should I expect to see the first reproducible-on-demand demonstration of an undeniably nuclear effect resulting from a chemical process, such as electrolysis? This would be a giant step toward practical applications.


4) Posted by X5 (Abd ul-Rahman Lomax)

It exists. Unfortunately, it's a fairly expensive experiment. It's X6's experiment. Use the state of the art to run a substantial series of F&P type cells to see excess heat. Run the cells in such a way as to allow the secure collection of helium and measure it. Compare excess heat and the amount of helium produced. The helium will be proportional to the heat, and if you end up capturing all the helium, which may take some special techniques, the ratio will be as expected for deuterium conversion to helium. The individual cells will vary in heat, but the ratio will be constant.


That is a reproducible experiment, it’s been reproduced many times. There are approaches which have shown excess heat in most cells, such as the Energetics Technologies replications at SRI and ENEA.


It’s much less expensive to do this without the helium collection, but then all you have is heat, which is not an undeniably nuclear effect.


The problem, Ludwik, is that the F&P Heat Effect produces essentially no “nuclear products” other than helium, which is not unmistakably “nuclear” by itself, unless you take the levels above ambient, and still the skeptics will carp, because they did. However, heat correlated with helium at the fusion ratio is strong enough evidence for anyone who is reasonable.


It’s possible that this could be done with tritium, but I don’t see that the *reliable production* of tritium has been studied. The rumor is that tritium is not correlated with heat, but I’ve never seen published values that would show this, and it’s a suspicious claim.


5) Posted by X6:

Yes indeed. Production of 4He from 2H, even without generation of excess heat, is an undeniable nuclear event, like other reported transmutations. But the correlation with excess heat, at the rate of about 24 MeV per atom of He (even if it were 24 +/- 10 MeV), would be very significant.


How much would it cost to reconstruct a setup, and to perform ten experiments? Who would be able to perform such experiments, if money becomes available?
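The 24 MeV per helium atom figure discussed above, combined with Faraday's law for the electrolysis gas stream, is what puts the expected helium concentration in the ppb range. A sketch of the arithmetic; the 100 mW excess power and 500 mA cell current are illustrative values of my choosing, not from any specific experiment:

```python
# Expected He-4 concentration in the electrolysis off-gas:
# helium production rate (from excess power at ~23.85 MeV per atom)
# divided by the D2 + O2 evolution rate (Faraday's law).

F = 96485.0                 # C/mol, Faraday constant
N_A = 6.022e23              # 1/mol, Avogadro constant
EV = 1.602e-19              # J per eV
Q_PER_HE4 = 23.85e6 * EV    # J released per He-4 atom (D + D -> He-4)

def helium_ppb(excess_power_w: float, current_a: float) -> float:
    """He-4 concentration (ppb by mole) in the D2/O2 gas stream."""
    he_mol_s = excess_power_w / Q_PER_HE4 / N_A
    # D2 evolves at I/2F and O2 at I/4F, so total gas is 3I/4F mol/s
    gas_mol_s = 0.75 * current_a / F
    return he_mol_s / gas_mol_s * 1e9

ppb = helium_ppb(0.1, 0.5)   # 100 mW excess at 500 mA, illustrative
print(f"{ppb:.0f} ppb")
```

For these inputs the result is on the order of 10 ppb, far below atmospheric helium (about 5,200 ppb) and far below a 1,000 ppb detection limit, which is the crux of Miles's argument about the sensitivity needed for this measurement.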


6) Posted by X7


Evidently X2 has not done his research!  None of the persons criticized in his blog are ISCMNS leaders!  And his conclusions regarding the ISCMNS position do not seem to be based on any relevant facts.


I went on record, during the ISCMNS Annual General Meeting at ICCF16 (February 2011), warning the community of the brewing storm. Allow me to quote some key points from my presentation:


“Recently a demonstration was made of a prototype energy ‘Catalyzer’.

If it works as described, it may be a blessing to humanity and vindicate 21 years of patient work by this community. If it fails spectacularly, it will create bad publicity for everyone working in the field.


Some advice to inventors

Get your invention independently validated.

Demonstrations which hide technical details create unease.

Non-disclosure agreements can protect secrets.


Advice to Users

If you acquire any technology, whether secret or not, do not accept any clauses which require you to keep quiet if it doesn’t work.

We need whistle blowers.


Advice to Evaluators

It’s probably not appropriate to make a public statement in support of a demo miracle device, if you have not examined it yourself.

If you do make a statement, at least make sure that you can correct any eventual errors.

Take care if you get on film, as film will be edited.”


This is of course a personal perspective, but it was discussed by the ISCMNS members present at the meeting.


If X2 has any evidence of fraud, I suggest he contacts the appropriate authorities.


7) Posted by X5:


First of all, it’s been done. As I recall, Miles performed about six experiments, taking a total of 33 samples for analysis. … This is the kind of work that can be done in many ways. The exact protocol is not important, but I do caution against going outside the basic PdH approach. Other approaches *might* involve different mechanisms.


If one can obtain or make an active cathode — ENEA seems to be able to supply functional cathode material, and seems to have a grip on what sets up the necessary initial conditions — measuring heat is not the most difficult part of this; one should, of course, use good calorimetry, for the accuracy of the ratio will not exceed the accuracy of the calorimetry.


The difficulty, though, is in capturing and measuring all the helium. McKubre followed an approach, in some of his work, that involved rigorously excluding helium from the cell materials and cells. Helium can diffuse through some materials. Seals must be helium tight, and tested to be so. And if the cell needs to be disassembled for any reason — connections fail, etc., — then the whole process must be repeated.


Storms, in “Status of cold fusion (2010)”, working from the results of various studies, comes up with 25 +/- 5 MeV/He-4. That’s rather obviously a bit seat-of-the-pants. I’d say, however, that the results show better than 24 +/- 10 MeV (and I’m not saying that Storms’ result is incorrect).


At this point, the work is solid enough that the default hypothesis as to the ash from the FPHE is that it is helium, with the fuel being deuterium. Transmutations and other products are found at levels far too low to explain the heat, by many orders of magnitude. This does *not* establish mechanism, but it obviously puts some severe constraints on mechanism. If the mechanism involves neutron formation, why the products would so tightly focus on helium would have to be explained — or other products would need to be identified, which has not happened. One possible mystery product, of course, could be deuterium, since it would not be detectable in heavy water experiments, nor, for that matter, in light water experiments, so plentiful is deuterium in light water.


Miles first reported helium somewhere around 1991, and his first extensive correlation report was published in time to be covered in the second revision of Huizenga’s book, “Cold fusion, scientific fiasco of the century.” Huizenga was highly impressed, in fact, saying that, if confirmed, a major mystery of cold fusion would have been solved, i.e., the ash. He held on to his skepticism by saying that, of course, it was unlikely to be confirmed, because no gamma rays were reported.


Huizenga was showing, clearly, how the skeptics thought about cold fusion and why they thought “it” was impossible. “It” was d-d fusion, through “overcoming the Coulomb barrier,” in the classic way or something like it. And “it,” when it produces helium — i.e., rarely — always produces a gamma ray. I consider it likely that they were correct, what they thought of as cold fusion is indeed impossible. They were gloriously and spectacularly incorrect, though, in making the assumption that if there was cold fusion, it would be a new way of making hot fusion.


(And all the theories that involve ideas whereby somehow deuterons in condensed matter attain sufficient energy to directly penetrate the barrier are missing the point. That is not happening. Piezoelectric fusion — used in certain commercial neutron generators — isn’t cold fusion, it’s hot fusion, and that’s why it serves to generate neutrons. But the apparatus is at room temperature ….)


Because a notable author objected to the idea of this “replicable experiment,” I’ll answer his post separately, as to why what he expects has not appeared. But what I described is indeed replicable, and reliably so, I’ll assert — there is going to be a need for some detailed discussion about this — and *it has been replicated*, quite enough that under normal conditions, the result would be a generally accepted fact.


Sitting here twenty years after a cascade, though, conditions still are not normal.


8) Also posted by X5, shortly after the above message:


X6 asked “How much would it cost to reconstruct a setup, and to perform ten experiments? Who would be able to perform such experiments, if money becomes available?”


What it would cost is something that could be estimated by those who have done the work in the first place. Notably, as to those who are active, and off the top of my head, this would be Miles (first and foremost), McKubre, who did the most accurate work to date, and Violante, who may have done the work at the least expense, plus, of course, any of their co-workers and those reported in Storms, 2010.


I doubt that it would cost more than $10,000 per cell, though, as a rough guess, particularly if a worker already had good calorimetry in place or easily adaptable. If a lot of cells are run, the cost per cell may go down. Most of this cost, indeed, is labor.


As to who, my plan is to write a survey of cold fusion criticism, with a goal of identifying significant and important unresolved issues. The replication of heat/helium is not urgent insofar as its absence is not significantly impeding progress, but because there are lingering doubts about it, it may be politically important. If heat/helium is established, if 24 MeV is confirmed, independently, and with greater accuracy, it confirms cold fusion, very amply, as a side-effect, and it narrows the possibilities for theories as to mechanism.


Matters are still at the point where Larsen can suggest that 24 MeV is only approximate and he can attempt to shoehorn his neutron transmutation ideas into it. Note that in spite of what Krivit has implied, Larsen has *confirmed* at least some of McKubre’s work, as to his personal opinion.

This work must be divorced from theory. The goal of any confirmation should be, not to confirm or reject any theory as to mechanism, but simply to measure the ratio of helium to heat. The experiments might as well look for other things that can be done without compromising the heat/helium goal.


An important approach may be to define a protocol to be followed, and the broader the consensus on the protocol, the more likely that multiple workers will attempt it. Because few have access to mass spectrometers that are helium-qualified (He-4 must be resolvable from D2), the protocol will need to include a sampling protocol, which will require cooperation between experimenters and labs ready to do the measurements. If a single and simple protocol for submitting samples is followed, actual helium measurement should be relatively cheap per sample.


If every researcher does their own fabrication, that’s expensive. If a common protocol is agreed upon, with identical cell design, there is *no harm in cooperation in fabrication.* What would be important would be that the cell materials would all be accessible for thorough testing. I.e., someone could analyze them to make sure that someone didn’t sneak helium into the palladium, in particular. Ideally, there would be an independent supplier of materials and cells, with traceability. All that a researcher, then, in a report, need state, is that they used XYZ company’s model NNN cell assembly.


XYZ company, then, is highly motivated to facilitate consensus among its potential customers as to desirable cell design. The Galileo project would have seen much wider participation if there had been such a common fabrication supplier. Indeed, I began working as a supplier of kit materials because, I saw, it should be possible to supply a Galileo-type cell, ready to hook up to a power supply and run, for about $100 per cell *and make a (modest) profit doing it.*


(But that design only looks for radiation evidence, from small palladium-plated cathodes in heavy water, and is utterly inadequate, as such, for heat/helium work.)


9) Posted by X6:

Thank you for interesting posts, X5. You are probably assuming that a high resolution mass spectrometer (able to distinguish the D-2 peak from the He-4 peak) would be available at no cost. Such instruments are not disposable.


10) Another post by X5

Other reported transmutations would be nuclear, but they occur at levels far, far below those of helium in F&P type experiments. Helium itself is problematic because helium is present in ambient air at levels that are generally higher than those expected from the heat. However, that has been addressed in several ways:


  1. If enough heat is accumulated, and helium is accumulated, the helium levels can be expected to — and do — rise above ambient, without slowing, indicating a source of helium other than leakage from ambient.


  2. Controls do not show helium.


  3. If an experiment shows reasonably robust heat, and the cell environment is small, helium as an elevation above ambient can be observed. That this is what Violante did escaped Steven Krivit, who criticized Violante without understanding what he’d done.


The big problem with heat/helium work is that helium has very low mobility in palladium, yet it appears that the reaction does implant helium at some (small) depth in the palladium, so as much as roughly half of the helium can be trapped in the palladium. McKubre attempted to flush the helium by repeated deuterium loading/unloading, which appears to have worked, but this is an unconfirmed technique, and it would be useful if more definitive methods could be used. For example, earlier work looking for nuclear products in Arata/Zhang DS cathodes (hollow palladium with palladium black in the interior) not only looked in the interior gas phase, but also sectioned the cathodes and heated the pieces; helium becomes mobile at high temperatures. I’ve also thought that dissolving the cathodes electrolytically might work and might be simpler, if a researcher doesn’t have direct access to helium measurement and must send off samples to a lab.


(With those Arata/Zhang cathodes, helium was not found above ambient, and the signs are that the cathode interior volume was breached, the helium leaking out. What was found, though, was He-3, at very significant levels, apparently as a decay product from tritium. The He-3 was found trapped in the palladium, at variable distance from the interior, indicating that it was the product of tritium that had decayed to He-3, becoming immobile, as the tritium diffused through the palladium. But this is unconfirmed work; like much cold fusion work, it’s crying out to be replicated.)
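If roughly half of the helium stays implanted in the cathode, as the post above suggests, then a gas-only measurement undercounts helium and so overstates the energy per detected atom. The mass balance is one line; the 50% trapped fraction is the "roughly half" from the text, used purely for illustration:

```python
# Apparent energy per *detected* He-4 atom when a fraction of the helium
# is trapped in the palladium and only the gas phase is measured.

TRUE_MEV_PER_HE4 = 23.85   # Q-value for D + D -> He-4

def apparent_mev(trapped_fraction: float) -> float:
    """Energy per gas-phase He-4 atom if trapped helium is not recovered."""
    return TRUE_MEV_PER_HE4 / (1.0 - trapped_fraction)

print(apparent_mev(0.5))   # 47.7 MeV per detected atom at 50% trapping
```

This is why the flushing step matters: recovering the trapped half brings the measured ratio back down toward 24 MeV, and why an anomalously high measured ratio can itself be evidence of unrecovered helium rather than a different reaction.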


11) Also by X5:


There are those on this list with substantial experience with this, perhaps they will help us understand the issue.


However, Miles did not have such a spectrometer. It is not necessary, obviously, for the researcher running the cells to have a mass spectrometer.


SRI has the necessary device, so does Dr. Storms, in his home lab. They are quite expensive, but not impossibly expensive, and, in any case, it is probably a better idea to create a sampling protocol such that a lab or labs can provide analytical services, efficiently.


If one were to run 10 cells, that would be only 10 samples to analyze, plus a few controls. It’s kind of crazy to buy a mass spec to make ten measurements, eh?


It does appear, from what I’ve heard, that modern mass spectrometers are both cheaper and more accurate than the services that were available to Miles.


Yes, for deeper investigational work, in-line, continuous measurement of cell gas could make the investment in a dedicated mass spec worthwhile. But, note: serious exploration of the parameter space leads to a concept of running many cells simultaneously. That can be done through a sampling protocol.


Maybe an advanced cold fusion lab would indeed have a mass spectrometer that could be used for in-line, real-time analysis, and then used for analysis of samples that are stored up for later study.


It looks like a helium mass spectrometer might be rentable for on the order of $2K – $3K per month. These are used as leak detectors. Used mass spectrometers seem to be going for $10K – $40K.


A Varian 979 Helium Mass Spectrometer Leak Detector is on offer on eBay, for quite some time, at $15,000.


I think it likely that someone with access to an adequate helium mass spectrometer would be willing to provide services at a reasonable cost. It’s not impossible that such services could be donated. The cost of equipment does not seem to be so high that, if analysis services are not available, a provider could be set up for that purpose. The real cost of heat/helium measurements, as to the labor of preparing the equipment, running the experiments, and collection of samples, is quite likely much higher than the cost of helium analysis.



I cited some figures for helium leak detectors. I don’t know how capable these are of separating out the D2 peak. I do know that low-mass mass spectrometers are readily available that can easily resolve the peaks. As I mentioned, Storms has one. D2 can also be eliminated from the gas stream, but that introduces a possible source of error.


It’s pretty much a non-issue, really, because it is not necessary for the researchers to own a mass spectrometer. The key will be a sampling and testing protocol, especially one that allows storage of samples for extended periods if necessary. That could be difficult enough! But it is doable. And blinding the tests so that the helium testers don’t know anything about the sample origins can cover a host of contingencies, assuming that control samples are included, some as ambient air, perhaps, some as coming from dead cells, etc.
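The blinding step described above is simple to mechanize: assign opaque codes to live samples and controls before shipment, and keep the decode key private until the lab reports. A minimal sketch; the sample and control names are hypothetical:

```python
# Blind-labeling: shuffle live samples and controls together under
# opaque codes; the analyst receives only the codes.
import random

def blind_labels(samples, controls, seed=None):
    """Return (labeled, key): coded list for the lab, private decode map."""
    rng = random.Random(seed)
    originals = list(samples) + list(controls)
    rng.shuffle(originals)
    key = {f"S{i:03d}": name for i, name in enumerate(originals, start=1)}
    labeled = sorted(key)          # what the analyst receives
    return labeled, key

labeled, key = blind_labels(
    ["cell-1 gas", "cell-2 gas"],        # hypothetical live samples
    ["ambient air", "dead-cell gas"],    # controls, as suggested above
    seed=42,
)
print(labeled)   # ['S001', 'S002', 'S003', 'S004']
```

The key stays with the experimenters; the lab reports helium per code, and only then is the mapping revealed, so the analysts cannot know which samples came from active cells, dead cells, or ambient air.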


12) Posted by X7:


Here is a typical university in-house rental fee for a mass spectrometer:


Students who have a demonstrated need for the unique capabilities of this instrument can be trained to run their own samples.  The training is billed at a rate of $100 per hour, with the usual training session taking 4 hours.  Up to four students can attend the same training session to divide the cost.


Non-routine samples submitted to us to be run on the Q-TOF are billed at $100 per hour.

Student use of the instrument is billed at $50 per hour.
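The quoted rates lend themselves to a quick worked example. This is a minimal sketch of the arithmetic only, using the figures stated in the post above ($100/hour training, 4-hour sessions, up to four students splitting the cost, $50/hour student instrument time):

```python
# Worked example of the quoted in-house rates (figures from the post above).
TRAINING_RATE = 100      # $/hour for training
TRAINING_HOURS = 4       # typical session length, hours
STUDENT_RATE = 50        # $/hour for student instrument time

def per_student_training_cost(n_students):
    """Training is billed per session; the post says up to four students may split it."""
    if not 1 <= n_students <= 4:
        raise ValueError("up to four students per session")
    return TRAINING_RATE * TRAINING_HOURS / n_students

print(per_student_training_cost(4))   # $100 each when four students share
print(per_student_training_cost(1))   # $400 for a lone student
print(STUDENT_RATE * 3)               # $150 for three hours of instrument time
```

So a fully shared training session costs each student about the same as 2 hours of instrument time.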


13) Posted by X8:


X5, a leak detector is useless for separating He from D2. These instruments focus on mass 4 but they are not designed to separate D2 from He. After all, no D2 is expected to be present in the apparatus being tested for leaks by applying He.


The only error is just how much He is present.  Several methods can be used to reduce this error by calibration.


14) Posted by X9:


X8, Can your spectrometer distinguish D2 from He-4?  Most cannot do this.


15) Posted by X8:


The spectrometer is made by MKS and has a range of mass 1 to 6. He and D2 are cleanly separated.
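The separation X8 describes is at bottom a resolving-power question. As a back-of-the-envelope check (my addition, not from the thread; the masses are standard reference values), He-4 weighs about 4.002602 u while the D2 molecule weighs about 4.028204 u, so an instrument needs a resolving power m/Δm of roughly 160 to split the two peaks at mass 4 — far beyond a leak detector tuned to integer masses, but easy for a low-mass quadrupole:

```python
# Back-of-envelope: resolving power needed to separate He-4 from D2 at mass 4.
# Masses in unified atomic mass units (u); standard reference values.
M_HE4 = 4.002602        # helium-4 atom
M_D   = 2.014102        # deuterium atom
M_D2  = 2 * M_D         # D2 molecule, ~4.028204 u (neglecting the tiny molecular binding term)

delta_m = M_D2 - M_HE4              # ~0.0256 u
resolving_power = M_HE4 / delta_m   # m / delta-m

print(f"mass difference:  {delta_m:.6f} u")
print(f"required m/dm:    {resolving_power:.0f}")
```

This is consistent with X8's point: a helium leak detector integrates everything near mass 4, while an instrument with unit-or-better resolution over mass 1–6 separates the peaks cleanly.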


16) Posted by X10:


Folks, a brand new MKS MicroVision II for measuring deuterium versus helium costs about $12,000 according to the company rep. It operates at a pressure of 1E-5 torr(?). It has a ten week lead time to order.


17) Posted by X6:


The costs reported in this thread are clearly negligible, in comparison with how much the DOE has been spending yearly to support hot fusion research. Failure to perform replication of 4He experiments, during the second DOE investigation, was certainly not due to prohibitively high costs. [That investigation was described in my article at]:


In a philosophically-oriented article (to be published in 2013?) I wrote that “the DOE experts were not asked to perform correlation experiments; they were asked to read the report submitted by five CF scientists (21), and to vote on whether or not the evidence for the claim was conclusive. Such a way of dealing with a controversy was not consistent with the scientific method of validation or refutation of physical science claims.”




Subpage of Kowalski/cf recovered from archive

417) Last Updating ? (7/20/2014)

Ludwik Kowalski (see Wikipedia)

Department of Mathematical Sciences

Montclair State University, Montclair, NJ, 07043

This might be my last item here. I am still reading what CMNS researchers have to say, but penetrating the content conceptually becomes more and more difficult, due to my old-age limitations. In item 408 I asked “is the device constructed by Andrea Rossi reality or fiction?” Unfortunately, no convincing evidence of reality has been reported on the CMNS website. Neither am I aware of new experimental results. But interpretational debates among highly qualified researchers, from several countries, are going on, as illustrated below.

1) The most significant event was the recent publication of a new book devoted to Cold Fusion. Here is how this event was announced by the author, Ed Storms, on July 3, 2014: “My new book will be available shortly from Infinite Energy. To provide a place where discussion can take place, a new website “

has been created and is operational thanks to Ruby Carat. Please go to BLOG to make comments. The comments will be moderated in order to keep the level of debate high. This is not the place to vent anger, frustration, or to make snide remarks. I hope this discussion can help expand everyone’s understanding of LENR, including mine.” The printed book is already available; the ebook version is expected to be available in August.

2) On July 18 X1 (who is from Romania) wrote: “I have just now published:

History of LENR will decide if I was too optimistic or, on the contrary…
However I have decided to tell you sincerely everything I think, taking all the risks.
We all have to develop active VUCA awareness.

yours faithfully,

3) On July 19, X3 (who is from Ukraine) wrote: “Dear Colleagues. In our new article “Correlated States and Transparency of a Barrier for Low-Energy Particles at Monotonic Deformation of a Potential Well with Dissipation and a Stochastic Force”

(Journal of Experimental and Theoretical Physics, 2014, Vol. 118, No. 4, pp. 534-549.)

the features of the formation of correlated coherent states of a particle at monotonic deformation (EXPANSION or COMPRESSION) of potential well in finite limits have been considered in the presence of dissipation and a stochastic force.

It has been shown that, in both deformation regimes, a correlated coherent state is rapidly formed with a large correlation coefficient r~1, which corresponds, at a low energy of the particle, to a very significant (by a factor of 10^50…10^100 or larger) increase in the transparency of the potential barrier at its interaction with atoms (nuclei) forming the “walls” of the potential well or other atoms located in the same well. The efficiency of the formation of correlated coherent states increases with an increase in the deformation interval and with a decrease in the deformation time.
The presence of the stochastic force acting on the particle can significantly reduce the maximum value of the correlation coefficient and result in the fast relaxation of correlated coherent states to r~0. The effect of dissipation in real systems is weaker than the action of the stochastic force. It has been shown that the formation of correlated coherent states at the fast expansion of the well can underlie the mechanism of nuclear reactions at a low energy, e.g., in MICROCRACKS developing in the bulk of metal hydrides loaded with hydrogen or deuterium, as well as in a low-pressure plasma in a VARIABLE MAGNETIC FIELD in which the motion of ions is similar to a harmonic oscillator with a variable frequency.

PS. This article is in Attachment.
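For scale, the 10^50…10^100 transparency enhancements quoted above can be set against a textbook estimate of bare d-d Coulomb barrier penetration. This sketch is mine, not from the article: it uses the standard low-energy Gamow approximation P(E) ~ exp(-sqrt(E_G/E)), with Gamow energy E_G = 2·μc²·(π·α·Z1·Z2)², and ignores electron screening:

```python
import math

# Textbook Gamow-factor estimate of bare d-d barrier transparency:
# P(E) ~ exp(-sqrt(E_G / E)), E_G = 2 * mu*c^2 * (pi * alpha * Z1 * Z2)^2.
ALPHA = 1 / 137.036          # fine-structure constant
MD_C2 = 1875.6e6             # deuteron rest energy, eV
MU_C2 = MD_C2 / 2            # reduced mass of the d-d pair, eV
E_G = 2 * MU_C2 * (math.pi * ALPHA) ** 2   # ~0.99 MeV for Z1 = Z2 = 1

def log10_transparency(energy_ev):
    """Base-10 logarithm of the bare Gamow penetration factor at energy E (in eV)."""
    return -math.sqrt(E_G / energy_ev) / math.log(10)

for e in (1e3, 100.0, 1.0):   # 1 keV, 100 eV, 1 eV
    print(f"E = {e:8.1f} eV  ->  log10 P ~ {log10_transparency(e):7.0f}")
```

At chemical (eV-scale) energies the bare transparency is of order 10^-430, so an enhancement of 10^50 to 10^100 narrows, but does not by itself close, the gap; the claimed correlated-state mechanism is therefore the crux of the argument.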

4) On July 20 X4 (who is from Malaysia) wrote: “As an experimental physicist, I find models to be very useful for building concepts. As a theoretician, I do as well. However, I don’t have my lab set up; so, I just have to think about them.

The multi-atom, linear, hydrogen molecule does not naturally exist. However, if it were ‘induced’ to form, it might have some interesting properties. One of these could be Rocha’s metallic hydrogen (see the PS below). In 1999, Sinha proposed such a molecule in lattice defects as a potential source of CF. More recently Storms proposed the linear-H model as the ‘only’ possibility for CF. Is there a simple experiment that can convey some of the concepts involved in this structure?

Metallic H requires extremely high pressures to form (maybe! I do not know that it has actually been proven to exist.) Electrolytic loading can provide extremely high pressures for H into a lattice. Is it sufficient? If so, under what circumstances? Under high loading, protons can be inserted into sites that are not ‘natural’ for them, or proton pairs can even be crammed into a single site. Nevertheless, they would not form a linear molecule (at least not of the type we are seeking).

I suggest that the balloon analogy might be useful. The electron ‘cloud’ about a proton has an isotropic distribution. However, in the H ground state, the electron has zero angular momentum (L = 0). If it had angular momentum, it would have a ‘fixed’ vector associated with it (perhaps nutating and/or precessing). QM states that it is a ‘probability cloud’. Either way, this distribution, when overlapping with a similar one, does not provide sufficient screening to allow the protons to get close together. Sinha’s Lochon model (paired electrons) and Takahashi’s Tetrahedral model provided possible ways around this problem without requiring a linear structure. (However, Sinha’s model also worked preferentially in such a structure.) The linear lattice is the preferred structure and could exist in special lattices. It might be able to form in a crevice (this is not assured). How does a balloon help explain this picture of the linear molecule in a lattice and its consequences?

Consider the balloon:

Actually, we’ll consider two sets of balloons. But first we need to define the nature of the balloon, and the distinction between force F and pressure (P = F/A).
1. When you blow up a balloon, it is necessary to exceed a given pressure before it will expand easily.
2. After that critical pressure is exceeded, the balloon will expand at a lower pressure (see figure).

I am skipping the illustration (Dependence of pressure on r/r0)
1. It takes a given force to stretch the balloon
2. the stretch is proportional to the force
3. as the balloon expands, the area A increases; so that, the force available to stretch the balloon (F = P A) for a given pressure increases.
4. this reduced-pressure regime is maintained (extended to the right in the figure) until the elastic limit is approached (not shown in figure).
In most balloons, e.g. 1/2 inch across by 5 inches long (uninflated):
1. the end never expands (until the balloon is blown up very full, perhaps to beyond a 10 inch diameter)
2. it is possible to push a needle thru the end without bursting the balloon or allowing air to leak out. (It is more difficult to get the needle out again, but it can be done.)
In a second set of balloons (e.g. 3/8 by 5 inches uninflated – I may be wrong about the diameters):
1. the diameter never expands much, the balloon stretches out longer until it is blown up very full, perhaps to beyond a 20 inch length; or,
2. unless the balloon is pinched off at some point and the air pressure is raised sufficiently to cause the early section to ‘balloon’
What is the difference?
1. The pressure peak in the figure is different for the sidewalls of the balloons and the ends.
2. the forces needed to stretch the balloons differ in the various directions.
3. for a given pressure, decreasing the diameter of the balloon decreases the force available to expand the balloon diameter
4. increasing the pressure, until the force on the end is sufficient to elongate the balloon rather than to expand its diameter, may result in different local forces from the different geometries
5. this is similar to the effect seen in the coupling of two balloons (see “two balloon” ref above).
How does this all relate to the linear-H molecule? Consider the inflated balloon to be like the Coulomb repulsion field of a proton. It is possible to push your finger into the center of the balloon. (Is this like tunneling?) However, it is much more difficult to press two balloons together to the same depth. Again, the difference is in force vs pressure. The finger has small area, the balloons are large; so for the same force in pushing a finger vs a balloon, the pressure is quite different.
An electron in orbit about a proton acts to reduce the ‘inflation’ of the balloon. It allows two H atoms to come closer together, but only so far. (We can’t simulate the effects of spin coupling.) If a normal balloon were greased and inserted into a tube (e.g., the tube of a vacuum cleaner), then it could only elongate on inflation. If a pressure sensor were placed in the balloon, and pressures were compared for a ‘free’ and confined balloon, the results would not be dramatically different (but, they would depend on the balloon and tube geometries). If another pressure sensor were placed between the balloon and tube, the pressure difference needed to confine the balloon would not be that large. It would be limited to that needed to expand the balloon toward the end of the tube. Thus we have the condition of the multi-H molecule.
For H2, the external forces needed to reduce the diameter of the molecules are not large, but the effect of reducing the electron’s 3-D degrees of freedom to 1 is dramatic (see fig. attached). An order of magnitude decrease in equilibrium spacing between two H atoms will bring the atoms close to a self-sustaining 1-D configuration (meaning that the electrons are no longer isotropically distributed about the proton(s); they more closely align themselves along the potential minimum of the proton axis). There may be a stable or metastable 1-D configuration for H2, if it can be formed. Many balloons, such as the long balloons used to make toy animals, have such a bistable mode.
The 3-D H2 molecule has little attraction for a lone H atom or another H2 molecule. However, a 1-D H2 molecule would likely have more attraction if the atom or molecule were at the end of the line. The added component could then become an addition to the linear molecule and even join the 1-D state with shrunken electron orbital(s) and closer molecular bonds. It is often observed that blowing up longer balloons will fill up one section while leaving the remainder in an unexpanded state. Thus, the growth of multi-H linear molecules, under the proper circumstances, could become an expected event. CF would be a likely consequence and the bistable mode in balloons could represent the configuration changes that lead to cold fusion.
I had an excellent demonstration (in my apartment in Malaysia) of how resonance can overcome very strong barriers. Unfortunately, I did not ‘notice’ it until I was about to leave there for good and did not have time to record it. I had been annoyed by the effect on many occasions, but did not recognize it as the example it was of overcoming the Coulomb barrier.
PS The paired electrons in the ground state of an atom (or molecule) are a boson. If two H2 molecules, each with such a paired boson are combined, then would the bosons not want to share the common H4 molecular orbital? The multi-H linear molecule in a proper lattice or defect would provide such an example and thus would be metallic H at room temperatures and internal lattice pressures. Furthermore, it might even be a high-temperature superconductor. However, it might also lead to CF and ‘spoil’ the whole concept. What a shame!

5) Responding to X4, X5 (who is from the US) wrote (also July 20):
“I like your analogy. I agree, the process needs to be made simple enough for it to be understood by anyone. Being of chemical persuasion, I would like to offer a different description and analogy. The Hydroton is a chemical structure. Therefore, it has to follow the rules that apply to all chemical structures. All chemical structures are held together by bonds that involve electrons. These bonds have certain well defined energies and configurations. In the case of H, two basic electron configurations exist that are designated s and p. The s level is the most stable and normally forms the bond between H atoms to make the molecule H2. To allow a larger structure to form, the electrons must occupy an energy level that allows electrons to be shared between all H atoms in the structure. In other words, a metallic-type* bond must form. This electron level requires energy to form, hence is not stable under normal conditions. In 1935, Wigner and Huntington proposed using high pressure to force the electron into the required energy level, thereby creating what they called metallic hydrogen (MH). Because the electron would be then able to move freely between nuclei, the structure was proposed to be superconducting. In 1991, Horowitz proposed that this structure would fuse, thereby explaining the extra heat produced in Jupiter. I then took the logic one step further and proposed that LENR was initiated by formation of MH, which I call the Hydroton, and which initiated the fusion reaction in certain cracks.

The question is, “What is present in the crack that can force the electron into the required metallic state?” I suggest the high concentration of electrons associated with the Pd or Ni atoms in the wall of the gap force the electron associated with H to move to a new energy state in order to avoid the high negative potential in the gap.

A boat can be used as an analogy. The level of the negative sea has been raised by the electrons in the wall, thereby raising the boat, which is the electron associated with hydrogen. The boat is forced to move up the energy scale and into a configuration that is normally not available. This configuration allows the boat to now move from port to port rather than being trapped in a single port by energy barriers, i.e. rocks. This configuration allows the hydrogen nuclei to resonate, thereby acquiring enough energy to periodically and partially overcome the Coulomb barrier. The same process would occur in metallic hydrogen regardless of how it is formed. Therefore, I suggest in the book that the failure to make MH results because it decomposes by fusion immediately upon formation. Looking for the resulting radiation would be one way to test this prediction.

*The three known bond types are designated as ionic, covalent and metallic. The bond in H2 is covalent.”

6) On July 19, X6 (who is from Japan) wrote (responding to X4 and to another researcher): “Every particle in nature stays only in the 3-dimensional space, and the HUP (Heisenberg Uncertainty Principle) rules its spatial distribution. Therefore, any PURE linear molecule for p-e-p, p-e-p-e-p, e-p-e-p-e-p-e, etc. systems in 1-dimensional alignment cannot exist. However, a LINEAR-LIKE molecule as an elongated di-cone or elliptic rotator can exist if the freedom of electron motion in the other two dimensions were extremely constrained by surrounding Coulombic (or Electro-Magnetic) interactions of a many-particle charge-field. (I do not know how it is possible in nano-cracks.)

In such an extremely ‘vertically constrained’ linear-like molecule as the p-e-p one, supposing it to be treated adiabatically separated from the surrounding many charged particles which made the constraining field (namely supposing Born-Oppenheimer wave function separation and the Variational Principle for a minimum energy system: the principle of electron Density Functional Theory), the constrained condition for the vertical two-dimensional space other than the one-dimensional line of the linear-like molecule can be realized by requiring high kinetic energy rotation motion of the QM center of the electron moving around the center-of-mass point of the p-e-p system. The required electron kinetic rotation energy will be more than 1 MeV, really in relativistic motion. When the electron kinetic rotation energy would become infinite, it approaches an ideal linear p-e-p molecule (Hydroton?) with a very diminished p-p inter-nuclear distance (to make weak-boson interaction between proton and electron efficient, 2.5 am, i.e. 2.5E-18 m, is the scale considered). I do not know whether anybody has made a Time-Dependent Density Functional calculation using coupled Dirac equations for such cases.

I hope Andrew and Daniel will get to some rational solutions. In any case, how large the nuclear reaction rates are must be answered to make a theory rational.”

7) On July 22, X7 (who is from the US) wrote:

Also, if anybody does not yet have the book, but would like to read a thorough treatment of the theory, the JCMNS included a lengthy article from Ed on the theory last year:

8) Responding to a comment of X5, X8 (myself) wrote (July 23):

To most physicists the term logical approach can mean two things:

a) formal logic, which they associate with mathematics,

b) informal logic, which they associate with intuition.

Both play an equally important role in science, as we all know.

9) Responding to X8, X5 wrote (also July 23)

Which of these would you say I use, Ludwik? My model is built on finding a logical structure that explains all observations without violating any law. Yes, intuition is used, but that is not the only feature.

All theory is based on assumptions. These assumptions are used to guide the math, frequently without being acknowledged. I acknowledge all my assumptions and apply them using cause and effect. How does this differ from using mathematical equations? The only difference is that I use words instead of equations. Of course, I make no effort to calculate values. But what good are such calculated values without agreement that the basic model on which the values are based is correct? In other words, the values do not prove the model. Instead the model determines the values. The theoreticians insist that the cart be placed in front of the horse.

The problem is that I do not use the assumptions required by QM. Therefore, my arguments are not acceptable to modern physics. I suggest this conflict between my approach and that used by the various theoreticians has revealed a flaw in the way modern physics explains reality. Mathematical equations based on QM are their god. No explanation that does not use these tools can be accepted. Do you agree?

10) Another post from X5, addressing X8 (July 24):

Ludwik, I suggest philosophers of science such as you might want to address the issue of how physics evaluates reality compared to the other sciences. What criteria should be used to test a theory? The conventional requirements state that a theory must be tested. If so, what role do calculated values have when the values cannot be compared to any measurement? What role does logical consistency with a large data set have in evaluating a theory? Does such consistency not represent a test based on known behavior? Must all tests be made after the theory is proposed rather than before? Something worth discussing?

11) Another post from X8, (July 24):

Yes, the topic is worth discussing. But I am not a philosopher. Let me say this:

Scientific theories are finally accepted or rejected on the basis of laboratory work and observations of our material world. But intuition, inspiration and emotion also play an important role in scientific research, especially at earlier stages of scientific theoretical investigations. Mathematical theories, on the other hand, are rejected only when logical (mathematical) errors are found in derivations.

12) Another post from X5, (July 24):

With what you say being true, how should the theories describing LENR be evaluated? What criteria should be applied to decide which are flawed and which are worth exploring? All the theories at the present time conflict with each other and with observed behavior. Each is justified by a different mathematical analysis. They all conflict with one or more basic natural laws. How can a person who wants to understand LENR decide which theories to use to design future studies and to interpret what is observed? That is the problem I’m trying to address. This is a serious issue. Ed

13) Another post from X8, (July 24):

Ed asked: “how should the theories describing LENR be evaluated?”

1) LENR are physical phenomena; scientific theories describing these phenomena should be evaluated in the same way as other scientific theories. Predictions of all such theories should be tested in laboratories. A theory whose predictions are verified is usually accepted. Confidence in a theory increases when additional predictions are verified. That is what most of us learned in school, long ago.

2) A theory, according to Karl Popper, is not scientific unless it is falsifiable. In other words, a theory is not scientific unless it makes predictions, which can be tested experimentally.

3) In talking about science I often say that falsifiability is a necessary requirement for a scientific theory but not for a scientific hypothesis. That is why a theory is more difficult to formulate than a hypothesis. Yes, I know that nonscientists often identify theories as unreliable guesses.

14) Another post from X5, (July 24)

After quoting my point 1 (see 13 above):

1) Yes Ludwik, that is what I learned as well. However, testing a theory takes time and money. If the test is complex, the interpretation can be ambiguous, requiring many different tests. If only one theory is involved, the tests can be focused on that one idea. But suppose we have a dozen proposed theories? How do we start to decide which deserves the expense and time?

After quoting my point 2 (see 13 above):

2) The test has to be such that the theory is actually tested. Frequently the behavior can be explained several different ways. This is the present situation with LENR where the observed behavior is claimed to support a particular theory, yet the behavior can be explained equally well several different ways. When this happens, which “theory” is tested? What does the test mean?

After quoting my point 3 (see 13 above):

What does “falsify” mean with respect to a theory describing behavior? If an experiment fails to give the predicted result, is this a falsified result or just a failure to do the experiment properly? For example, most efforts to produce LENR fail. Does this failure mean that LENR is not real, as claimed by the skeptics?

I think this idea for the need to “falsify” actually applies to a mathematical theory, not one that describes physical behavior. Confusion has resulted from the mixing of these different concepts.

15) Another post from X8, (July 25)

1) Yes, some projects might not be possible without big money. But the scientific methodology of validation for expensive projects should be the same as for those which are less expensive. And yes, the problem of initial irreproducibility should be addressed, for each part of a project.

2) Practical considerations, such as costs of experiments, clarity of publications, reputation of authors, etc., will probably determine how to deal with competing theories.

3) All scientific theories describe physical behavior. The “falsifiability” requirement–which I would have named the “confirmation” requirement–was introduced to deal with scientific theories, not with mathematical theories. Mathematicians do not perform experiments to validate theorems.

00 project

Subpage of Kowalski/cf recovered from archive

About my “learn cold fusion” project

Ludwik Kowalski, <>
Montclair State University, Upper Montclair, N.J. 07043


In the fall of 2002, to my surprise, I discovered that the field of cold fusion is still active. This happened at the International Conference on Emerging Nuclear Systems (ICENES2002 in Albuquerque, New Mexico). Several papers presented at this conference were devoted to cold fusion topics. Intrigued by the discovery I started reading about recent cold fusion findings and sharing what I learned with other physics teachers. I have been doing this over the Internet using Montclair State University web site

What follows is a set of items posted, more or less regularly, on that web site since October of 2002. The items reflect my own process of learning, mostly from articles published by cold fusion researchers. I am still not convinced that excess heat, discovered by Fleischmann and Pons, is real or that nuclear transmutations can occur at ordinary temperatures. But I do think that the time is right for a second evaluation of the entire field. I do not believe that extraordinary findings of hundreds of researchers are products of their imagination or fraud. Our scientific establishment should treat cold fusion in the same way in which any other area is treated. Those who study cold fusion do not appear to be pseudo-scientists or con artists. The items on my list are arranged in the order in which they were posted on my web site.


What follows is an email message I received recently:

Dear Mr. Kowalski,
Help! My name is XXX XXXXX and I am a sophomore at XXXXX High School.  In my chemistry class, I am doing a project on Cold Fusion.  I was looking on the Internet for websites on Cold Fusion, and I came across your links to your Cold Fusion items.  I was wondering if you could give me some advice or information?  I would like to know what Cold Fusion is, [and] how Cold Fusion was started. . . . .

I am no longer comfortable saying that “cold fusion is voodoo-science.” I am a physics teacher; how should I answer questions about cold fusion?

Can a nuclear process be triggered by a chemical process? The answer, based on what we know about nuclear phenomena, is negative. On the other hand many experiments seem to indicate the opposite. These experiments were performed many years after the first evaluation of “cold fusion” was made by our Department of Energy. As a teacher I would very much appreciate a second evaluation of the field by a panel of competent investigators. What can one do to make this happen?



Subpage of Kowalski/cf, retrieved from archive

418) First 2015 contributions


Ludwik Kowalski (see Wikipedia)

Department of Mathematical Sciences

Montclair State University, Montclair, NJ, 07043

The CMNS discussion group, to which I belong, remains active. Numbered examples of recent contributions are shown below.

1) L.K. (myself) asked: “What is more important, in a published report,

(a) the description of the protocol, which the author wants to be recognized as a reproducible way to generate excess heat, or

(b) the description of the method by which such heat was measured by the author?

I think that (a) is much more important than (b), especially in the context of our present situation.

If I were still experimentally active, and if I had new excess heat results, I would focus on the protocol, and on the main result–how much excess heat, at what mean input power, and for how long. The rest would be less important. I would not worry about absence of details in the description of my calorimeter.
… In fact, new experimental data are more likely to be recognized as reproducible when different methods of measuring excess heat are used, for a given protocol.

Naturally, a description of my calorimeter would be included if it were unusual, or if the goal were to teach calorimetry.

Explaining an experimental result, before it is recognized as reproducible, might become a big waste of time. I would not try to do this, except in an unusual situation, for example, if I actively participated in the collection of experimental data.

1) X1 responded: “I agree completely. As to (b) what is important is the data and actual analysis. (a) without (b) is useful as a proposed experimental approach, but won’t necessarily move mountains. (b) without (a) is not reproducible. …”

2) X2 responded: “Ludwik, why would any one want to explore a protocol claimed to make nuclear energy unless it was actually shown to do this? In the present discussion, the protocol claimed to make Ni active seems very simple. Setting up a device to test the protocol is neither simple nor inexpensive. Nevertheless, I agree showing how to make active material is more important than proving it is active once someone cares to test the protocol.”

3) X3, referring to organized suppression of CF, in 1989, wrote: “The suppression of cold fusion is not a “story” or a “narrative.” It is a fact. It was the most savage and effective suppression of academic freedom in the last 200 years. The people who carried out this suppression did not hide their identities or their motives. On the contrary, they bragged about their roles. They still do. Robert Park vowed to ‘root out and fire’ any scientist who supports cold fusion. He said that to me, in person, and to others. He meant it, and he and others damn well did it. …”

4) X2 , responding to this description, wrote: “Well said, X3. The rejection was without mercy and is continuing. No change in response by anyone would have had any effect on the rejection. The rejection was fueled by academic and commercial interests that apply even today. Nothing will change until the effect is made so commercially viable that rejection is no longer an option. The rejection is not stupid, unreasonable, or based on ignorance. It is based on pure self interest. Consequently, nothing we say can have any effect. Nevertheless, a rational effort to explain and advance understanding would accelerate the required commercial application.”

5) L.K. wrote: “I agree with these two observations. Why doesn’t the US government try to end the CF feud, by promoting objective research? The cost of such research would be relatively negligible. But, according to X1, supporting one or two promising research projects would not be sufficient. In a subsequent post he wrote: “The real issue is the money lost when CF takes the place of conventional energy. The money involved in the various aspects of finding, refining, and moving energy is so great that introduction of LENR will cause significant disruptions. The smart people who run the financial world know this. I predict every effort will be made to slow introduction of this energy into the commercial mix. That is why significant money is not going into the field.

A conspiracy is not required when most scientists react to the same self interest, which is your point. This self interest exists as long as money is not available. Money will not be available because the people who control money would be hurt if LENR works. That is my point. The situation is truly diabolical.”

6) L.K. wrote: “The issue, in other words, is not only morality and science; it is economy. But something is not clear to me. Weren’t attempts to develop other nonconventional sources of energy, such as solar, blocked by the same immoral politicians? How can this be explained? Didn’t they know that mastering solar energy might also ‘cause significant disruptions’?”

Selfishness and competition exist in all fields of human activities. But the CF episode seems to be highly unusual, in terms of duration and high caliber of participants. A random fluctuation, I suppose.

7) Addressing X2, X4 wrote: “Actually, you do not need a conspiracy; you need several groups of people having the same interest:

A- Most scientists do not want a revolution in science; they want to continue in their careers. They have worked hard to reach their positions, and entering a new field, especially one like electrochemistry and calorimetry, is difficult. When you are a senior scientist with all your knowledge, you do not want to start all over again like a graduate student.

B- The energy and finance industries are not interested in a new competitor. There is already plenty of energy in the world, as we can see now with the price of oil. Imagine that the major news agencies announce that with 1 g of nickel, and some additives, you can produce kilowatts of heat! Within a few days, billions or maybe trillions of dollars would evaporate on the stock market. The economy is very fragile and sensitive to any news. Nobody wants that. I am sure that on the day the rebirth of CF is announced, the opposition will be fierce. The greens will argue that cheap energy will deplete the earth; the nuclear industry will claim that there might be dangerous radiation, since it is nuclear…

C- The military did not want CF. Martin Fleischmann said that he wanted the field to be classified, but it was probably already classified.”


8) Referring to my post, X3 wrote: “Solar energy was not blocked because until recently it was too expensive to compete, so the fossil fuel industry did not fear it. Recently, power companies and others have begun serious efforts to block it.

Wind energy, on the other hand, has been attacked by the fossil fuel industry for years. It now produces 5% of U.S. electricity, meaning it has taken away roughly 10% of the market for coal. The coal industry is fighting it tooth and nail. For example, a Member of Congress from West Virginia, a coal producing state, tried to pass a law banning the use of wind energy in the U.S., ostensibly because wind turbines kill birds. This is preposterous; coal, nuclear and other steam generators kill millions of birds from steam and smoke, whereas wind turbines kill a few thousand.”



Subpage of Kowalski/cf, recovered from archive.

419 A New Kind of Nuclear Reactor? 

Ludwik Kowalski, Ph.D. (see Wikipedia)
Montclair State University, Montclair, N.J. USA

Consider a short sealed porcelain tube, containing about one gram of white powdered LiAlH4 fuel mixed with ten grams of powdered nickel. Professor Alexander G. Parkhomov, who designed and tested it, calls this small device a nuclear reactor, in a published report. The purpose of this short article is to briefly summarize Parkhomov’s discovery, in as simple a way as possible, and to make some general comments. Such a setup, even if scaled up, would not be useful in an industrial electric power generating plant, due to the well-known conversion-efficiency limit. The expected readers are scientists and educated laymen.

Section 1 Introduction

Consider a sealed porcelain tube 20 cm long, containing about one gram of white powdered fuel mixed with ten grams of powdered nickel. Professor Alexander G. Parkhomov, who designed and tested it, calls this small device a nuclear reactor, in a published report (1). The purpose of this short article is to briefly summarize Parkhomov’s discovery, in as simple a way as possible, and to make some general comments. The expected readers are scientists and educated laymen. Hopefully, this article will prepare them to understand Parkhomov’s report, and similar technical publications on the same topic.

The author, a retired nuclear physicist educated in the USSR, Poland, France and the USA, has dedicated this article to his father who died in a Gulag camp, and to his famous mentor Frederic Joliot-Curie. Who is Alexander Parkhomov? He is a Russian scientist and engineer, the author of over one hundred publications. The photo shown below was taken in 1990. Electronic equipment on the table is probably not very different from what he used to measure thermal energy released in the reactor.


Parkhomov in his lab

Section 2 Describing the Reactor 

The title of Parkhomov’s recent report is “A Study of an Analog of Rossi’s High Temperature Generator.” Is the word “reactor,” in the title of this section, appropriate? Yes, it is. A totally unexplained reaction, releasing an extraordinary amount of heat, must be responsible for what is described in Section 3. Is this reaction nuclear? Parkhomov certainly thinks so; otherwise he would not use instruments designed to detect nuclear radiation. His powdered fuel was 90% natural Ni; the rest was the compound LiAlH4.

The controversial field of science and technology (2,3), in which Rossi (4) and Parkhomov are active, is Cold Fusion (CF), also known under different names, such as CMNS, LENR, etc. The reference to Andrea Rossi in the title of the report is puzzling. Yes, Rossi also thought that the thermal energy released in his device was nuclear, rather than chemical. But that is where the similarities end; the two reactors differ in many ways. For example, Rossi’s fuel was hydrogen gas, delivered from an outside bottle.

The illustration below is a simplified diagram of Parkhomov’s setup. The diagram does not show that the porcelain tube (red in the diagram) was closely wrapped by a heating wire. The electric energy delivered to the heater, in each experiment, was measured using several instruments; one of them was a standard kWh meter, similar to those used by electric companies. Heating of the fuel was necessary to keep the fuel temperature very high; the required temperature had to be between 1000 C and 1400 C.

Simplified diagram of Parkhomov’s setup

The reactor container (a covered box) was immersed in an aquarium-like vessel, filled with boiling and steaming water. To keep the water level constant during the experiment, a small amount of hot water (probably 90 grams) was added through a funnel, every three minutes or so. The mass of the escaped steam, turned into liquid water, was measured outside of the setup. Knowing the mass of the steam that escaped during an experiment one can calculate the amount of thermal energy escaping from the aquarium. Parkhomov’s method of measuring excess heat was not very different from that used by the leader of Russian Cold Fusion researchers, Yuri Nikolaevich Bazhutov (5).

Section 3 A Surprising Energy Result 

Here is a description of results from one of three experiments performed by Parkhomov in December 2014. The porcelain tube with the powdered fuel was electrically heated at a rate of 500 W. Then a state of thermal equilibrium was reached. The water in the aquarium remained in that state for nearly one hour. The constant fuel temperature, measured with a thermocouple (also not shown in the diagram), was 1290 C. A time interval of 40 minutes was selected for analysis of the experimental results. The amount of water evaporated during that interval was 1.2 kg. The amount of electric energy the heater delivered to the water in the aquarium, during that time, was 1195 kJ. Most of that energy was used to evaporate water. But 372 kJ of heat escaped from the water via conduction. That number was determined on the basis of results from preliminary control experiments.
Let XH be the amount of heat the aquarium water received from the reactor, that is, from the porcelain tube containing the fuel.
Thus the net “input” energy was

INPUT = 1195 – 372 + XH = 823 + XH

It represents thermal energy received by water, during the experiment.
Knowing the water’s “heat of evaporation” (2260 kJ/kg), one can calculate the thermal energy lost by water to sustain evaporation. It was:

OUTPUT = 2260*1.2 = 2712 kJ.

This is the thermal energy lost by water, during the experiment. According to the law of conservation of energy, the INPUT and the OUTPUT must be equal. This leads to:

XH = 2712 – 823 = 1889 kJ.

This is a surprising result. Why surprising? Because it is much larger than what is released when one gram of a familiar fuel is used. Burning one gram of powdered coal, for example, releases about 30 kJ of thermal energy, not 1889 kJ. What is the significance of this? The superficial answer is that “Parkhomov’s fuel is highly unusual, and potentially useful.”
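The arithmetic above can be redone in a few lines. This is only a restatement of the numbers quoted in this section; the variable names are mine, not Parkhomov’s.

```python
# Energy balance for the 40-minute interval described above.
# All energies in kJ; the numbers are those quoted in Section 3.
HEAT_OF_EVAPORATION = 2260  # kJ per kg of water

evaporated_mass = 1.2   # kg of water turned into steam
electric_energy = 1195  # kJ delivered by the electric heater
conduction_loss = 372   # kJ that escaped the water by conduction

# OUTPUT: thermal energy spent on evaporating the water.
output = HEAT_OF_EVAPORATION * evaporated_mass  # 2712 kJ

# Conservation of energy: (electric_energy - conduction_loss) + XH = output,
# so the excess heat XH is:
xh = output - (electric_energy - conduction_loss)  # 1889 kJ

print(f"OUTPUT = {output:.0f} kJ, XH = {xh:.0f} kJ")
```

For comparison, burning one gram of coal (about 30 kJ/g) releases roughly sixty times less energy than this XH.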

Section 4 Cold Fusion Controversy

Parkhomov’s box is not the first device that was introduced as a multiplier in which electric energy is turned into heat, and where the outputted thermal energy exceeds the electric energy supplied. A conceptually similar device, based on electrolysis, was introduced in 1989 by Fleischmann and Pons (F&P). Their small electrolytic cell also generated more thermal energy than the electric energy supplied to it. Trying to establish priority, under pressure from the University of Utah administration, the scientists announced their results at a sensational press conference (March 23, 1989). They wanted to study the CF phenomenon for another year or so but were forced to prematurely announce the discovery (private information).

The unfortunate term “cold fusion” was imposed on them. Why unfortunate? Because it created the unjustified impression that cold fusion is similar to the well-known hot fusion, except that it takes place at much lower temperatures. This conflicted with what had already been known: the probability of nuclear fusion of two heavy hydrogen ions is negligible, except at stellar temperatures (6,7).

Suppose the discovery had not been named cold fusion; suppose it had been named “anomalous electrolysis.” Such a report would not have led to a sensational press conference; it would have been made in the form of an ordinary peer-reviewed publication. Only electrochemists would have been aware of the claim; they would have tried to either confirm or refute it. The issue of “how to explain the heat” would have been addressed later, if the reported phenomenon were recognized as reproducible on demand. But that is not what happened. Instead of focusing on the experimental data (in the area in which F&P were recognized authorities), most critics focused on the disagreements with the suggested theory. Interpretational mistakes were quickly recognized, and this contributed to the skepticism toward the experimental data.

Section 5 Engineering Considerations

The prototype of an industrial nuclear reactor was built in 1942 by Enrico Fermi. It had to be improved and developed in order to “teach us” how to design much larger useful devices. The same would be expected to happen to Parkhomov’s tiny device.
a) One task would be to develop reactors able to operate reliably for at least 40 months, instead of only 40 minutes. This would call for developing new heat-resistant materials. Another task would be to replace the presently used (LiAlH4 + Ni) powder by a fuel in which energy multiplication would take place at temperatures significantly lower than today’s minimum, which is close to 1000 C.
b) The third task would be to scale up the setup, for example, by placing one hundred tubes, instead of only one, into a larger aquarium-like container. This would indeed increase the amount of released thermal energy by two orders of magnitude. Scaling up, however, would not increase the multiplication factor. The only conceivable way to increase the MF would be to find a more effective fuel.
c) A typical nuclear power plant is a setup in which a nuclear energy multiplier (a uranium-based reactor) feeds thermal energy into a traditional heat-into-electricity converter. Such multipliers are the workhorses of modern industry. Note that the MF of an industrial nuclear reactor must be larger than three; otherwise it would not be economically justifiable. This is a well-known fact, related to the limited efficiency of heat engines.
d) Uranium and thorium seem to be the only suitable fuels for any kind of energy multiplier. Why is that so? Because fission is the only known process in which more than 100 MeV of nuclear energy is released per event. This number is about four times higher than what is released when two deuterons fuse, producing helium. Will more efficient fuels be found? If not, then the chances of replacing coal, oil, and gas by Parkhomov-like fuels are minimal, except in heating applications.
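The requirement that MF exceed about three can be illustrated with a short sketch. The 33% conversion efficiency used here is a typical figure for a steam cycle, assumed for illustration only; the function name is mine.

```python
# A power plant modeled as (energy multiplier) -> (heat engine):
# 1 unit of electricity drives the multiplier, which outputs MF units
# of heat; the heat engine converts that heat back into electricity.
def net_electric_gain(mf, efficiency=0.33):
    """Electricity out per unit of electricity in (illustrative)."""
    return mf * efficiency

# Break-even requires mf * efficiency >= 1, i.e. MF >= ~3 at 33%.
for mf in (2.4, 3.3, 10.0):
    gain = net_electric_gain(mf)
    verdict = "net producer" if gain > 1 else "net consumer"
    print(f"MF = {mf}: gain = {gain:.2f} ({verdict})")
```

In other words, a multiplier with MF below roughly 1/efficiency consumes more electricity than the plant can generate from its heat, which is why such a device could still be useful for heating but not for electric power.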

Section 6 Scientific Considerations

Science is at the base of all modern engineering applications. But the main preoccupation of most scientists is to understand laws of nature, not to build practically useful gadgets. Confirmation of claims made by Parkhomov is likely to trigger an avalanche of scientific investigations, both theoretical and experimental, even if the energy multiplication factor remains low.

a) Suppose that Parkhomov’s energy multiplier, described in this article, is already recognized as reproducible on demand, at relatively low cost. Suppose that the “what’s next?” question is asked again, after two or three years of organized investigations. Scientists would want to identify the “mystery process” taking place in the white powder, inside the porcelain tube. Is it chemical, magnetic, pyrometallurgic, biological, nuclear, or something else? Answering such questions, they would say, is our primary obligation, both to ourselves and to society.

b) Parkhomov certainly believes that a nuclear process is responsible for XH, in his multiplier. Otherwise he would not use instruments designed to monitor neutrons and gamma rays. But, unlike Fleischmann and Pons, he does not speculate on what nuclear reaction it might be. He is certainly aware of tragic consequences of premature speculations of that kind.

Section 7 Social Considerations

The social aspect of Cold Fusion was also debated on an Internet forum for CMNS researchers. Referring to the ongoing CF controversy, X1 wrote: “The long-lasting CF episode is a social situation in which the self-correcting process of scientific development did not work in the expected way. To what extent was this due to extreme difficulties in making progress in the new area, rather than to negative effects of competition, greed, jealousy, and other ‘human nature’ factors? A future historian of science may well ask: how is it that the controversy ignited in 1989 remained unresolved for so many decades? Who was mainly responsible for this scientific tragedy of the century: scientists, or political leaders of the scientific establishment and government agencies, such as NSF and DOE? Discrimination against CF was not based on highly reproducible experimental data; it was based on the fact that no acceptable theory was found to explain the unexpected experimental facts reported by CF researchers.

Parkhomov’s experimental results will most likely be examined in many laboratories. Are they reproducible? A clear yes-or-no answer to this question is urgently needed, for the benefit of all. What would be the most effective way to speed up the process of getting the answer, after a very detailed description of the reactor (and measurements performed) is released by Parkhomov? The first step, ideally, would be to encourage qualified scientists to examine that description, and to ask questions. The next step would be to agree on the protocol (step-by-step instructions) for potential replicators. Agencies whose responsibility is to use tax money wisely, such as DOE in the USA, and CERN in Europe, should organize and support replications. Replicators would make their results available to all who are interested, via existing channels of communication, such as journals, conferences, etc. A well-organized approach would probably yield the answer in five years, or sooner.


(1) A.G. Parkhomov, “A Study of an Analog of Rossi’s High Temperature Generator”
(2) L. Kowalski, “Social and Philosophical Aspects of a Scientific Controversy;” IVe Congrès de la Société de Philosophie des Sciences (SPS); 1-3 June 2012, Montreal (Canada). Available online at:
(3) Ludwik Kowalski,
(4) Ludwik Kowalski, ” Andrea Rossi’s Unbelievable Claims.” a blog entry:
(5) Peter Gluck interviews Bazhutov:

(6) John R. Huizenga, “Cold Fusion: The Scientific Fiasco of the Century,” Oxford University Press, 1993, 2nd ed. (available at

(7) Edmund Storms, “The Explanation of Low Energy Nuclear Reaction,” Infinite Energy Press, 2014.
(also available at




Subpage of Kowalski/cf, recovered from archive.

420 Notes About Parkhomov’s Nuclear Reactor

Ludwik Kowalski, Ph.D. (see Wikipedia)

Professor Emeritus


I am going to be 84 this year. Why am I still adding items to this website? Because I like to share what I know and think about the still-ongoing CMNS controversy. This item, #420, like the previous item, is devoted to Parkhomov’s mystery reactor. It is an informal set of sections (notes for myself).

Section 1 (3/27/2015)

My article about Parkhomov’s reactor (see the link above) has been submitted to a Russian Conference, ESA. Actually this is a journal, not a conference. The article was accepted at once.  Three weeks later, responding to my email, they wrote:

“your article was already published. Officially date of publication is February 28th 2015. You can see all articles from the ESA conference in our website:

 Here is reference to your article: “


or (text only)

The article which I sent them (to be translated into Russian and then published) was actually composed before item #419 (see the link above). That is why the English and Russian texts are not exactly identical.


Section 2 (3/27/2015, posted at the Internet CMNS list for researchers)

*) Reading Parkhomov’s new (3/27/2015) report (15 pages in Russian) at:

*) He calls the new setup “a new variant of Rossi’s thermogenerator.” The calorimeter is no longer based on the amount of evaporated water; this is not practical when the time of operation is much longer than in previous variants. (Why is the type of calorimeter not described on page 2? Because the COP and excess power are determined without using a calorimeter, as described on page 12.)

*) Page 3 is the new schematic diagram. The reactor is red in the diagram.

a) The ceramic tube has a length of 29 cm.
b) In the center of the tube is an approximately 12 cm long stainless steel container (red in the diagram) filled with powder (640 mg of Ni and 60 mg of LiAlH4).
c) The electric heater (a 12 cm long solenoid) is outside the tube. The conductivity of the ceramic (the tube material) is low. Because of this the tube temperature near the edges is about 50 C, when the temperature near the center is 1200 C. The solenoid wire (Kanthal A1) can be heated up to 1400 C.
d) The thermocouple is in the body of the ceramic tube, where the temperature is the highest.
e) The tube is hermetically plugged, to minimize the amount of air inside. The pressure inside the tube is measured with a manometer (zero to 25 atm).

*) Page 4 shows how the electric heating energy was measured and regulated.

*) Page 5 is a photo of the setup. Pages 6 and 7 show other photos (during testing).

*) Plotting temperature and power (during initial preparations)

*) Page 8 A temperature and pressure plot

*) Page 9 (Approaching the desired temperature; temperature and pressure plot).

a) What does one learn by measuring pressure in the new version of Parkhomov’s reactor? Pressure of what? What is the significance of the pressure peak on page 9?

*) Page 10 Electric power during the 4 days of the experiment, up to the moment at which the heating wire burned out.

a) Why was the electric power changing? Because the operator adjusted it to keep the temperature constant. Yes or no? How should one interpret the narrow (and not so narrow) peaks? Sudden changes in the resistance of the solenoid wire? Why is this significant?

*) Page 11 Electric power versus time after a new heater was installed. Same questions as for page 10.

a) Why do so many different powers produce the same reactor temperature, 1200 C?

*) Page 12 Comparing watts-versus-temperature curves (with fuel and without fuel). The rough COP = 1100/330 = 3.3 (at a constant temperature of 1200 C). Excess heating power 800 W.

a) To sustain any chosen temperature (see x axis) one should impose a certain electric heating power (see y axis). This is unambiguous when the fuel is in the reactor (upper line). This is also unambiguous for reactors without fuel, provided T < 1200 C.
b) Yes, (1100 – 300) = 800 W. But also (1100 – 640) = 460 W.
c) The first gives COP = 1100/330 = 3.3; the second gives COP = 1100/460 = 2.4. Which one is correct?

*) Page 13 A more accurate COP = 800/330 = 2.4
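The COP ambiguity in these notes comes down to which pair of powers one divides. A minimal restatement of the three quoted ratios (the variable names are mine; all powers in watts, read at a constant 1200 C):

```python
# The three COP estimates quoted for pages 12-13 of the report.
p_with_fuel = 1100  # heating-power curve value with fuel in the reactor
p_no_fuel = 330     # electric power needed for 1200 C without fuel
p_alt = 460         # the alternative difference quoted above (1100 - 640)
excess = 800        # excess heating power quoted on page 12

cop_rough = p_with_fuel / p_no_fuel  # the page-12 "rough" estimate
cop_alt = p_with_fuel / p_alt        # the alternative estimate
cop_page13 = excess / p_no_fuel      # the page-13 "more accurate" value

print(f"{cop_rough:.1f}, {cop_alt:.1f}, {cop_page13:.1f}")  # 3.3, 2.4, 2.4
```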

*) Page 14 Other photos

*) Page 15 Conclusions

a) The operation of the new setup was stable during a time interval exceeding three days.
b) The thermal energy released by the setup, during that time, was twice as large as the electric energy supplied.
c) The excess heat was 50 kWh or 18 megajoules. This is equivalent to the heat released when 350 grams of oil or gasoline is burned.
d) Chemical and isotopic analysis (of the original and spent fuel) is in progress.


Section 3

Describing the last day (4/16/2015) of the ongoing C.F. conference in Italy (ICCF19), one participant wrote:

“A highlight at ICCF-19 was the presence of Dr Parkhomov. At the end of the presentations on Thursday we were invited to attend at Dr Parkhomov’s poster.  This was apparently his preference over the alternative of being on the podium. At the poster his teenage daughter stood by his side.

 Some 200 – 300 people circled in a great crowd, straining to hear his answers to questions being asked.  At first Olga translated and then the granddaughter. It was a very special moment. Dr Parkhomov is small and unassuming, but his contribution is enormous. Those moments were the highlight of ICCF-19.”

Replying to the above, I wrote: “On Page 15 of his Russian report (see my post of 3/27/2015) Parkhomov informed readers that ‘chemical and isotopic analysis (of the original and spent fuel) is in progress.’ What is the current status of this part of his project?”

 On 4/18/2015 Peter Gluck shared with us (the CMNS discussion list) the link:

to an English-written article of A.G. Parkhomov and E.O. Belousova. The title is “Researches of the Heat Generators Similar to High-Temperature Rossi Reactor.” Why is the date of the publication not specified? On page 11 (under conclusions) the authors report that “Preliminary conclusions from the analysis of fuel element and isotope composition indicate minor change of isotope structure and emergence of new elements in the used fuel.” Will this preliminary conclusion be confirmed? This remains to be seen.


 Section 4

Dear Peter, My CMNS post on 4/18/2015

Thank you for the < > link. 

 1) It brings an English-written article of A.G. Parkhomov and E.O. Belousova. Who is Belousova? The title is “Researches of the Heat Generators Similar to High-Temperature Rossi Reactor.”  

 2) Was this their ICCF19 poster presentation? The affiliation is specified, but not the date. 

 3) On page 11 the authors report that “Preliminary conclusions from the analysis of fuel element and isotope composition indicate minor change of isotope structure and emergence of new elements in the used fuel.” 

 4) This preliminary conclusion is exciting. Being an optimist I am assuming that the “minor change” stands for the “statistically significant change. ” 

Ludwik Kowalski (see Wikipedia)

4/19/2015 ==> Dear Ludwik,


To answer your questions:

1a) E.O. Belousova is the young lady who helped Parkhomov with translations at Padua, a relative of his (granddaughter or niece). If you do a Google search for “E.O. Belousova” “Lomonosov” you will discover more LENR publications in which she is a co-author with Parkhomov and/or Bazhutov, so she is a professional physicist. (ICCF17 too)

1b) Her name is Ekaterina, and Rossi, who spoke with her, made the word play Ekaterina = E-cat-erina, a good omen.

2) It seems that was exactly their poster presentation; not many new facts since the last one, no time for new data.

3)-4) Be realistic: the analysis at Lugano was made after 32 days of operation, at Parkhomov’s after 3-4 days. Fewer changes. We have to wait for the official data and will see if they are decisive.





 Section 5

Parkhomov describes ICCF19, in Russian (at

< >

A.G. Parkhomov’s talk at ICCF19

The ICCF-19 conference was very successful: 470 delegates, 98 talks. These are record numbers. Characteristic were the optimistic mood and the anticipation of great achievements. The conference took place in Padua’s most prestigious venue, the Palazzo della Ragione, a grand hall with an 800-year history, decorated with frescoes by Giotto and Miretto.

I visited the university in Bologna at the invitation of Giuseppe Levi, one of the experts who observed the operation of Rossi’s reactor in Lugano. He showed his experimental setups and arranged a Skype connection with Uppsala University (Sweden) and the other Lugano experts, Pettersson and Bo. They showed their devices, which they plan to start up in mid-May. Then Rossi joined our Skype conference. For the first time I was able to talk with this remarkable man. He plans to visit Russia.

A.G. Parkhomov


 Section 6 (To be posted at our CMNS list)

The term “Cold Fusion” (CF) can now be used to describe a process in which a nuclear reaction (of any kind) is triggered by a chemical process, at a temperature lower than several thousand degrees. CF must, however, be very different from the so-called “hot fusion,” in which two heavy hydrogen nuclei fuse to form helium, at stellar temperatures. Why are we certain of this? Because nuclear fusion at low temperatures, according to most physical scientists, is impossible, due to mutual electric repulsion of positive charges.

Yet the reality of CF was announced, in 1989, by two chemists, Martin Fleischmann and Stanley Pons. Why do I think that their announcement should be called an invention, not a discovery? Because what they actually announced was an unaccounted-for amount of thermal energy. This by itself is not evidence for a nuclear reaction. The idea that the measured heat was due to the fusion of two heavy hydrogen nuclei, as in a star, was pure speculation at that time.

The CF feud is often characterized as the “Fiasco of the Century.” A more appropriate name would be “Tragedy of the Century.” Why tragedy? Because unlimited clean-nuclear-energy resources are desperately needed, while highly qualified scientists offering help are often not supported by those whose obligation is to use tax money wisely. This is an international phenomenon; CF pioneers from several countries (France, Italy, Israel, India, Japan, and Russia) have encountered similar treatment. How can it be explained that more than a quarter of a century has not been enough to resolve the CF controversy, one way or another?

Future CF reactors, if any, like today’s reactors, would have to be periodically stopped and refueled, in order to remove and reprocess spent fuel. Will the fresh fuel be more widely abundant and less expensive than now-available nuclear fuels? Will the spent fuel be practically nonradioactive and safe to handle, as expected by some investigators? It is too early to answer such questions.


 Section 7




Subpage of Kowalski/cf, retrieved from

420 Oriani’s Death and Quick Comments on NAE 
Ludwik Kowalski, Ph.D. (see Wikipedia)
Professor Emeritus
Montclair State University, Montclair, N.J. USA

1) Yesterday (9/3/2015) I learned about the death of Richard Oriani at age 94. The obituary in StarTribune, his local newspaper, can be seen at:

My contribution to this formal goodbye is also there.

2) In a private message received today, a colleague quoted Max Planck: “science does progress funeral by funeral.” Another CMNS researcher commented: “In this case we are seeing regress, not progress. As I said years ago, this is a generational role reversal. Young scientists are conservative while the old, and now dying, ones champion new ideas! The world is upside down. …”

Is the CMNS field progressing or regressing? I do not know how to answer this question. One thing is sure, this area of science, often called “Cold Fusion,” is still active.

3) A good example of activity is the “Interview with Dr. Edmund Storms,” conducted by Peter Gluck. It was posted on the CMNS forum for active scientists (see the blue italic text below, and my comments next to it). Dr. Storms is a nuclear chemist with over thirty years of service at Los Alamos National Lab, now working privately at Kiva Labs. His 2014 book, “The Explanation of Low Energy Nuclear Reaction,” describing the field, is commercially available:

Also see his YouTube presentation at:

Peter Gluck, PhD in chemical engineering, is a retired technologist who has worked many tens of thousands of hours with matter (chemical industries), energy (new sources of energy) and information (web search). He communicates with the world via the blog EGO OUT.


Based on a discussion stimulated, in part, by the coming CERN Seminar on D/H-loaded palladium, Ed Storms has summarized his answers in this way. It is about the essence of the problems of the field.

“LENR [Low Energy Nuclear Reactions] has two aspects, each of which has to be considered separately. The first question is where in the material does the [new kind of] nuclear reaction take place. In other words, where is the NAE [Nuclear Active Environment] located? This means where in space is the NAE located, such as near the surface, and what is unique about the NAE. The LENR reaction CAN NOT take place in the normal lattice structure, where it would be subjected to the well-known laws [such as the law of mutual electric repulsion of positive nuclei] that apply to such structures.

I propose the only place able to support such a nuclear reaction, while not being subjected to the known chemical requirements, is cracks consisting of two surfaces with a critical gap between them. Before the nature of the nuclear process can be discussed, a NAE must be identified and its existence must be agreed to. Failure to do this has resulted in nothing but useless argument, with no progress in understanding or causing the phenomenon [of nuclear fusion].

Once the characteristics of the NAE are identified, a mechanism can be proposed to operate in this NAE with characteristics compatible with this environment.  Attempts to propose a mechanism without identifying the NAE are doomed to failure.  Without knowing the NAE, we are unable to test the characteristics of the nuclear mechanism to see if it can take place in an ordinary material and we are unable to know how to create a potentially active material.

This requirement is so basic, further discussion is pointless unless agreement is achieved.

This is not a normal physics problem where any idea can be made plausible simply by making a few assumptions. The nature of the chemical environment prevents many assumptions. We are proposing to cause a nuclear reaction in ordinary material where none has been seen in spite of enormous effort and none is expected based on well understood theory. A significant change in the material must first take place. This change must be consistent with the known laws of chemistry. Only the creation of cracks meets this requirement.

Once the NAE is identified, the characteristics of the nuclear reaction must be consistent with what is known. Simply proposing behavior based on general physics concepts is useless.  For example, the role of perturbed angular correlations, which you suggest, must be considered in the context of the entire proposed reaction. The question means nothing in isolation.  Like many proposed mechanisms, the idea cannot be tested because it has no clear relationship to the known behavior of LENR or to the variables known to affect the phenomenon.

This is not a guessing game. We now have a large collection of behavior that all models must explain. Why not start by considering models that are consistent with this information?”

4) NAE, in other words, if I understand Storms correctly, is a hypothetical environment in which mutual repulsion of protons is much weaker (and we do not know why) than in the vacuum separating atoms. He is right that cold fusion would take place spontaneously (essentially by definition) in such an environment. But is he also right in saying that “attempts to propose a mechanism [of cold fusion] without identifying the NAE are doomed to failure”? Probably not. Theoretical scientists have no other option but to use models that have already been validated.
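Point 4 turns on the strength of the Coulomb repulsion that any proposed mechanism must somehow evade. As a rough illustration — my own back-of-the-envelope sketch, not anything from Kowalski or Storms — the electrostatic potential energy between two deuterons can be computed directly, comparing the barrier at nuclear-force range with the energy at an ordinary interatomic spacing:

```python
# Back-of-the-envelope Coulomb barrier estimate for two deuterons
# (charge +e each). Illustrates why fusion is not expected in an
# ordinary lattice: the repulsive barrier at nuclear distances is
# roughly 10^5 times larger than chemical energy scales.

E_CHARGE = 1.602176634e-19    # elementary charge, C (exact, SI 2019)
K_COULOMB = 8.9875517923e9    # Coulomb constant, N m^2 / C^2

def coulomb_energy_ev(r_m: float) -> float:
    """Electrostatic potential energy of two unit charges
    separated by r_m meters, returned in electron-volts."""
    return K_COULOMB * E_CHARGE**2 / r_m / E_CHARGE  # joules -> eV

# At ~2 fm, roughly where the attractive nuclear force takes over:
barrier = coulomb_energy_ev(2e-15)      # ~0.72 MeV

# At a typical interatomic spacing (~0.25 nm) in a metal lattice:
lattice = coulomb_energy_ev(2.5e-10)    # ~5.8 eV, a chemical energy scale

print(f"barrier at 2 fm:   {barrier / 1e6:.2f} MeV")
print(f"at 0.25 nm:        {lattice:.1f} eV")
```

The ratio of roughly 10^5 between these two numbers is the quantitative content of Storms' claim that the "well known laws" forbid the reaction in the normal lattice: any proposed NAE has to account for bridging that gap.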

5) Let me mention something else questionable. On one hand Ed states that we know nothing about NAE; on the other hand he claims that NAE can be created “in cracks only.” How does he justify this?

6) Is it correct to say that NAE is related to nuclear cold fusion like AIR, our well-known “Flying Active Environment,” is related to airplanes on Mars? We know a lot about AIR but we know nothing about NAE.

7) The last paragraph of the interview is profound; it has to do with the essence of scientific methodology. Yes, speculations resulting from arbitrary assumptions belong to mathematics (and to theology), not to physical science, where theories are “made plausible” by reproducible experimental data, as we know.


Subpage of Kowalski

This is from Google’s cache. It is a snapshot of the page as it appeared on 10 Aug 2018 18:06:54 GMT.

Original links may be replaced with local links where I have a page; the link has been bolded when I have recovered the page.

This website contains other cold fusion items.
Click to see the list of links

Links to “cold fusion” items. Ludwik Kowalski
My motivation? Click to see a short introduction. 

Click here to go to the bottom of this long list

0) I am no longer saying “it is voodoo science.” click 
1) Introducing Cold Fusion to students. click 
2) A typical “cold-fusion” setup. click 
3) Three kinds of Cold fusion. click 
4) Short biographies of three Cold Fusion Scientists. click 
5) Aberration of the scientific methodology. click 
6) On dangers of “second hand” publishing. click 
7) On Pathological Science (N-rays story). click 
8) On Burden of Proof in Science. click 
9) Scientific Method in Cold Fusion. click 
10) A Russian connection. click 
11) Bottom Line. click 
12) What do physics teachers think about CF? click 
13) More about the Russian Connection. click 
14) What is pseudo-scientific in this? click 
15) Or what is pseudo-scientific in this? click 
16) Here is an example of real pseudo-science. click 
17) An Italian connection. click 
18) Nobel Prize for “cold fusion?” click 
19) A French connection. click 
20) Excommunication of heretics? click 
21) If it were up to me I would do it. click 
22) Another good article summarized. click 
23) A Japanese connection. click 
24) Three short introductory tutorials. click 
25) A technical tutorial. click 
26) Comments on the 1989 ERAB report. click 
27) Conspiracy? For what purpose? click 
28) Summary of a very impressive paper. click or
29) Another French connection. click 
30) New APS ethics guidelines and the CF issue. click 
31) Excess heat for a student lab? Yes, why not. click 
32) Pathological science or important observations to share? click 
33) How would Richard Feynman react to CF? click 
34) My own proposal. click 
35) On methodology and on difficulties. click 
36) Ethical issues as seen by an active CF researcher. click 
37) On coulomb barrier lowering. click 
38) Producing radioactive tritium. click 
39) Changing isotopic composition. click 
40) My cold fusion lecture plan. click 
41) Comments from a friend. click 
42) More comments; to publish or not to publish? click 
43) One year after the announcement: click 
44) Before going to Salt Lake City: click 
45) After returning from Salt Lake City: click 
46) Charlatans versus scientists: click 
47) Catalytic fusion: click 
48) Charge Clusters ? click 
49) Not accepted by The Physics Teacher: click 
50) From the last APS meeting: click 
51) US Navy supported cold fusion: click 
52) Alchemy in cold fusion: click 
53) Another way; role of surface structure: click 
54) Criticizing cold fusion: click 
55) The smoking gun?: click 
56) Technological Con Artistry?: click 
57) And what about hydrinos?: click 
58) From a debate on another list: click 
59) A piece to publish in a newsletter: click 
60) Nuclear Alchemy, 1996: click 
61) What are the causes of this conflict?: click 
62) Cold Fusion was compared with creationism: click 
63) Jed’s interesting general observations: click 
64) Stalin’s pseudo-science: click 
65) Pseudo-science in Russia today: click 
66) Cybernetics as pseudo-science: click 
67) Observations made at Texas A&M University: click 
68) Two meanings of “impossible:” click 
69) Conspiracy to deceive? I do not think so: click 
70) Please help us click 
71) A Nobel Laureate about voodoo science click 
72) Anecdotal Evidence? click 
73) A confirmation of a reproducible excess heat experiment click 
74) E. Mallove describes reproducible excess heat experiments click 
75) Do not mix science with fiction click 
76) Secrecy in cold fusion research click 
77) Another evidence of nuclear reactions in “cold fusion” click 
78) An older fight for acceptance; the story of Arrhenius click 
79) Early beta decay studies compared with cold fusion click 
80) Secular theology? click 
81) Where are theories of cold fusion? click 
82) Speculations of a retired physicist (This unit is being revised by the author) click 
83) Disassociate cold fusion from antigravity, hydrinos, etc. click 
84) A cold fusion opinion statement of a physics teacher click 
85) From a book of a cold fusion researcher in Japan. click 
86) Pseudoscience in Russia. click 
87) Fighting a straw man. click 
88) Rejections of cold fusion papers by editors click 
89) Hydrinos again click 
90) My talk at the 10th International Cold Fusion Conference click 
91) My poster at that conference click 
92) Agenda for the preconference cold fusion workshop click 
93) Back to stories from Kruglyakov’s book click 
94) Browsing the Internet click 
95) Catalysts in cold fusion? click 
96) No gamma rays were found in our experiment click 
97) My published letter to the editor of The Physics Teacher click 
98) Students demonstrating excess heat from cold fusion click 
99) Speeding up radioactive decay? click 
100) Documenting a rejection by Physics Today click 
101) How excess heat was measured. click 
102) A paper by the retired physicist from unit #82. (This unit is being revised by the author) click 
103) Students trying to demonstrate excess heat. Is it nuclear? click 
104) New alchemy? Yes, indeed. click 
105) More about new alchemy experiments. click 
106) Why is Norman Ramsey silent today? click 
107) Biological alchemy ? click 
108) Another experiment for your students ? click 
109) A video cassette “Fire from Water” for your students click 
110) They need a real leader click 
111) Photos of Fleischmann and Jones, August 2003 click 
112) The dilemma of a physics teacher. click 
113) Unexplained neutrons and protons; recent papers of Steven Jones. click 
114) Voices from teachers and students (?) click 
115) They need your support click 
116) A negative evaluation of cold fusion claims click 
117) Exposing false claims click 
118) New error analysis versus old? click 
119) Errors in unison click 
120) A Chinese connection click 
121) Just Withering from Scientific Neglect click 
122) Laser-like X-rays in “cold fusion?” click 
123) An important Japanese connection (Iwamura) click 
124) How can one doubt that charged particles are real (WAITING FOR PERMISSION TO SHARE) click 
125) An article I want to publish click 
126) Reactions or contamination, that is the question click 
127) “Water remembers?” This is pseudoscientific click 
128) Screening in condensed matter or something else? click 
129) Quixotic Fiasco? click 
130) Sonofusion becomes acceptable click 
131) Cold Fusion History described by Steven Jones click 
132) Cold Fusion History described by Martin Fleischmann click 
133) Cold Fusion name was dropped click 
134) Second evaluation by the DOE decided. How certain is this? click 
135) Seek not the golden egg, but the goose click 
136) What is cold fusion? click 
137) An inventor or a con artist? click 
138) Recent Internet messages. click 
139) If I were in charge. click 
140) Kasagi’s papers. click 
141) A paper from Dubna, Russia. click 
142) In memory of Eugene Mallove. click 
143) Questions about science and society. click 
144) Catalytic nuclear reactions. click 
145) Role of the non-equilibrium. click 
146) Scientific or not scientific? click 
147) Extract from an old good summary (E. Storms, 2000). click 
148) On difficulties communicating. click 
149) A message from a young person. click 
150) Answers to some of my questions formulated in unit #148. click 
151) Richard’s simulated debate about excess heat errors. click 
152) My review article on current cold fusion claims. click 
153) TOO LONG (History of rejections of my review article.) click 
154) SHORTER (History of rejections of my review article.) click 
155) Storms’ tutorial on difficult cases in calorimetry. click 
156) Unexpected charged particles were observed again. click 
157) Detecting cold fusion charged particles with CR-39: Comments and questions. click 
158) An extract from an interesting MIT article. click 
159) Categorization of cold fusion topics. click 
160) Radon background or not? (WAITING FOR PERMISSION TO SHARE) click 
161) Josephson’s lecture and other comments on cold fusion (mostly from teachers). click 
162) An example of a cold fusion claim that makes no sense to me. click 
163) Absence of 100% reproducibility: What does it mean? click 
164) A case of mutual deception? click 
165) A short comment on names and definitions. click 
166) Non-scientists in cold fusion? click 
167) An unnecessary “open letter?” I think so. click 
168) Nucleosynthesis in a lab? A Ukrainian connection. click 
169) An interesting effect was discovered in Texas click 
170) A Swedish connection that became something else. click 
171) A lively and informative discussion? I hope so. click 
172) Cold fusion being presented to students. click 
173) Wikipedia: Philosophical points of view. click 
174) What was the origin of excess power? An experiment worth replicating. click 
175) According to Mizuno et al. excess power can not possibly be chemical. click 
176) Swift nuclear particles from an electrolyte? Check it in a lab. click 
177) List of eleven international cold fusion conferences. click 
178) Sharing recent messages and comments click 
179) A student project. Work in progress. NOT YET POSTED click 
180) Please help to preserve cold fusion history. click 
181) A new cold fusion book. click 
182) Seeing a huge number of cold fusion tracks with my own eyes. click 
183) Pictures and numbers. (continuation from the unit #182). click 
184) Contamination or very long “life after death?” (continuation from the unit #183). click 
185) CR-39 detectors of charged nuclear particles. click 
186) Too good to be true? Turning radioactive isotopes into stable isotopes. click 
187) Magnetic monopoles in cold fusion, and other claims. click 
188) A chemically triggered nuclear process? What else can it be? click 
189) About my four attempts to observe a nuclear “cold fusion” effect. click 
190) A better generic name for “cold fusion?” click 
191) Trying to describe my understanding of Fisher’s polyneutrons. click 
192) Trying to replicate Oriani’s observations in my own cell. An electronic logbook. click 
193) Links to another website. click 
194) Comments about theories. click 
195) A pdf file to share. Click to see my introduction. Then download, if you want. click 
196) Open letter to the DOE scientists who investigated recent CANA claims. click 
197) My second Oriani effects experiment (the first is described in the unit #192). click 
198) Work in progress
199) Nonsense, fraud or very advanced science? click 
200) Teachers discussing scientifc methods click 
201) Cooperating with a high school student performing excess heat experiments. click 
202) Fraudulent claims of a German anthropologist. click 
203) On ending the controversy. click 
204) An Israeli connection. click 
205) A troubling episode. What can be done to prevent such things? click 
206) A new Russian report on nuclear alchemy. click 
207) Controversial cases in science (from New Scientist). click 
208) Haiko’s conversation with Martin Fleischmann click 
209) An Australian connection. click 
210) Making progress toward 100% reproducibility? click 
211) Charles Beaudette writes about the DOE report. click 
212) Answering four questions. click 
213) About the company Energetics Technologies in Israel. click 
214) The power of delusion or healthy optimism? click 
215) Solar Electricity click 
216) Too good to be true click 
217) Ukrainian connection again click 
218) To do or not to do it? click 
219) A workshop at Stevens Institute of Technology. click 
220) Upcoming CF workshops and conferences. click 
221) Work in progress (Mitch) click 
222) The majority of nature’s treasures are still hidden. click 
223) A spectacular excess heat report from Russia. click 
224) A cold fusion colloquium at MIT. click 
225) A student essay (WORK IN PROGRESS) click 
226) Another attempt to commercialize? click 
227) A new version of Fisher’s polyneutron theory. click 
228) Cars running on water? An old US patent. click 
229) A Russian patent of Gnedenko et al. click 
230) Translations of two Russian papers. click 
231) Gold from carrots. click 
232) Free energy and its impact. click 
233) More on free energy. click 
234) Comments on Ellis’ article about laws of complexity. click 
235) One year later. click 
236) Promises promises. click 
237) An MIT professor writes a report on an iESiUSA device shown to him. click 
238) What is cold fusion? click 
239) Identity theft? Cold fusion claims should be justified scientifically. click 
240) Generation of helium in cold fusion. click 
241) Questions concerning the protocol described in unit #240 click 
242) Now I must deal with two slightly different protocols. click 
243) Will sixty letters to the editor be published by Physics Today? click 
244) Coulomb barrier depends on the range of nuclear forces. click 
245) Avoiding a global disaster. click 
246) Manipulating half-lives of radioactive nuclei ? click 
247) Can magnetic forces (resulting from rotation) help deuterons to overcome coulomb barriers? click 
248) A proposed set of better names for known nuclear anomalies. click 
249) Trying to understand a theory explaining Condensed Matter Nuclear Science (CMNS) data. click 
250) Stanislaw Szpak et al. — another case of nuclear alchemy. click 
251) Fracto-fusion, crack-fusion, Casimir-fusion, van der Waals fusion, hammer-fusion. click 
252) An invitation to perform a simple excess heat experiment. click 
253) History of Mizuno-type experiments (such as that described in unit #252). click 
254) Comments on a theoretical paper of Widom and Larsen. click 
255) Progress report and comments. click 
256) A possible source of error in some excess heat reports click 
257) A difficult to accept statistical protocol of Bass and McKubre click 
258) Can systematic errors result from sampling of irregular waveforms? click 
259) The excess heat can be apparent in our next week experiment. click 
260) Is that kind of excess heat real or apparent? click 
261) How much excess heat ? click 
262) Common hydrogen (H2O) versus heavy hydrogen (D2O). click 
263) Fraudulent schemes are probably as old as civilization. click 
264) Measuring electric energy. click 
265) Another Italian connection. click 
266) Scared, reassured and scared again. click 
267) Excess heat not confirmed in our Texas experiment. click 
268) With an apology to Dr. Dean Sinclair click 
269) Analytical methods used in CMNS (condensed matter nuclear science) research. click 
270) Colorado experiments also fail to confirm excess heat. click 
271) Another Colorado experiment. click 
272) No excess heat from Mizuno-type experiments. click 
273) Microbial Transmutations at ICCF12 click 
274) Scientific Fraud ? An article in Washington Post and comments it generated. click 
275) Kasagi and excess fusion cross sections at low energies. click 
276) Low counts statistics (not finished?) click 
277) An outburst of messages. click 
278) New tabletop fusion devices: is it hot fusion or not? click 
279) Fraudulent financial manipulations? click 
280) No courtesy of replying from Yale Scientific. click 
281) All reliable results should be reported. Hiding negative results is not scientific. click 
282) Velikovsky’s speculations. click 
283) Trying to be a moderator at the ISCMNS meeting. click 
284) Hydrinos versus CMNS click 
285) Our private correspondence before the Colorado-2 experiment. click 
286) An exciting Colorado2 experiment and comments over the Internet. click 
287) Social aspects of our controversy that started 17 years ago. Work in progress click 
288) Voices from a restricted list for CMNS researchers. click 
289) Another Russian connection? click 
290) Unexpected comments from some subscribers of the restricted CMNS discussion list. click 
291) Yes, these experiments are dangerous, but . . . click 
292) Why is this kind of discrimination legal? click 
293) Pathological science? click 
294) A historical overview of cold fusion. click 
295) Chiropractic also had to fight for recognition. click 
296) About the origin of Mizuno-type excess heat. click 
297) Too much sociology? click 
298) Nuclear alchemy in CMNS. click 
299) Randy Mills and his new chemistry. click 
300) Preliminary Colorado2 results. click 
301) Colorado2 results are now much less certain. click 
302) Alarming numbers and comments. click 
303) Well known reactions or something else? click 
304) Researchers discussing excess energy. click 
305) Science versus protoscience. click 
306) How to restrict a Google search to one server? click 
307) Archive of private correspondence about Mizuno-type experiments click 
308) Steven Jones plus an expected new book about CMNS click 
309) Researchers speculate about NAE (nuclear active environment) click 
310) Alchemy versus CMNS; waiting for the proverbial “proof in the pudding.” click 
311) Reifenschweiler Effect (introducing an expected essay) click 
312) My old speculation about another kind of beta decay click 
313) Are oil companies responsible for conspiring against CMNS? click 
314) Will this be the first simple and truly reproducible-on-demand demo? click 
315) A new phenomenon or a wrong interpretation of experimental data? click 
316) A new paradigm at the next stage! Why not? click 
317) About CR39 and other things click 
318) Theories, metatheories and philosophy click 
319) Our Phase 1 of The Galileo Project experiment click 
320) Our first steps in Phase 2 of The Galileo Project click 
321) My rejected publication + references click 
322) Rutherford-Bohr model being questioned. click 
323) This publication was not rejected; it was withdrawn. click 
324) Additional validation of our claim (made in unit #319). click 
325) More about SPAWAR results. click 
326) Online logbook of an experiment (continuation of unit #320) click 
327) Online logbook of the next PACA experiment (continuation of unit #326) click 
328) Strategy and scientific methodology: Recent comments and observations. click 
329) Continuation of after item 327; the online logbook. Experiment #5. click 
330) Trusting authorities in science click 
331) An illustration of propagation of errors via calibration. click 
332) Sonofusion is also struggling for recognition. click 
333) Oriani’s paper that was rejected by Phys Rev C without sending it to referees. click 
334) For an item devoted to an ongoing Canada project (to be shown to me). still waiting 
335) A draft of my Catania 2007 workshop paper. click 
336) Catania 2007 paper as submitted, after the workshop. click 
337) Catania 2007 paper on nuclear radiation inside a glow discharge cell. click 
338) Voices from an interesting discussion about theories. click 
339) Three body orbiting: macroscopic and submicroscopic. click 
340) Speeding up radioactive decay: why is it not used to destroy radioactive waste? click 
341) Bazhutov’s search for erzions and enions click 
342) Two speculative messages from theoretically-oriented people click 
343) Work in progress click 
344) A new book about cold fusion (plus 4 recent messages from the CMNS list). click 
345) My own comments on the new book about cold fusion. click 
346) Calibration of CR-39 and other useful data. click 
347) About a new cold fusion paper published in a European mainstream physics journal. click 
348) Replying to a student interested in cold fusion click 
349) Modeling CR-39 tracks click 
350) What is it, unexplained alpha particles or something else? click 
351) High voltage electrolysis experiments (updates) click 
352) Ludwik’s paper for the next Cold Fusion conference (Washington DC, August, 2008) click 
353) After the Cold Fusion conference (notes and reflections) click 
354) Excess-heat cell of John Dash. click 
355) Neutrons ? click 
356) 20th anniversary is approaching click 
357) Discussing SPAWAR interpretation in a mainstream refereed journal. click 
358) Summary of Ludwik’s CMNS projects click 
359) SPAWAR high energy neutrons (plus other things) click 
360) CBS broadcast a unit about cold fusion click 
361) SPAWAR triple tracks click 
362) Curie Project click 
363) About Alchemy and CMNS click 
364) Do polyneutrons explain CMNS? click 
365) My shot-in-a-dark experiment click 
366) Cold Nuclear Fusion: Does it exist? A recent review by a Russian scientist click 
367) Discussing theories click 
368) The Curie Project (a difficult start) click 
369) History of my CR-39 cooperation with Oriani click 
370) Spawar new results and new interpretation click 
371) Physics Teachers discuss our energy options (not a cold fusion item) click 
372) Technical information about CR-39, mylar, etc. click 
373) The Curie Project (update) click 
374) New Scientist thread (NOT READY discussing cold fusion) click 
375) Results from The Curie Project (NOT READY. To be shown after results are published) click 
376) Arata-type experiments click 
377) Scientific method click 
378) Destruction of radioactivity by cavitation or a false alarm? click 
379) My paper (comments about SPAWAR results) was rejected by a mainstream journal. click 
380) Destruction of radioactivity by cavitation or a false alarm? click 
381) Free proceeding from the 4th cold fusion conference (ICCF4) click 
382) Four most important cognitive terms to discuss scientific validations. click 
383) Other sets of CR-39 results click 
384) Loose ends: The debate is going on. click 
385) More speculations. click 
386) Integrity or hypocrisy (on the Physics Today web site)? click 
387) Voices from the private discussion list for researchers.? click 
388) A patent for a spectacular energy amplifier click 
389) This article might be a joke. click 
390) Another set of spectacular claims. But the two papers are poorly written. click 
391) Topic to be assigned click 
392) A potentially damaging episode click 
393) Rejections of CF manuscripts click 
394) Draft of the Montreal article click 
395) A new SPAWAR paper (emission of high-energy neutrons). click 
396) What is new in March 2012 ? click 
397) Ludwik’s first Progress in Physics article (about Rossi) to download. click 
398) Ludwik’s second Progress in Physics article (Social aspects of CF) to download. click 
399) Spectacular claims of Andrea Rossi click 
400) Why no follow-up investigations? click 
401) Curie Project and SPAWAR project (July 2011). click 
402) Bacterial transmutations (to download). click 
403) Ludwik’s 10 Years With Cold Fusion: A Memoir. click 
404) Cold fusion is not the same as hot fusion. click 
405) AmoTerra Process Destroying Radioactive Waste Again (see Unit 186). click 
406) History of the biological alchemy controversy. click 
407) Our Curie Project did not confirm this CF claim. click 
408) Rossi’s claims conflict with traditional nuclear physics. click 
409) Social aspects of the cold fusion controversy. click 
410) Cold Fusion Energy Levels. click 
411) Interesting Fall 2012 messages (production of He4). click 
412) NAE again; Storms’ summary click 
413) Philosophical and Social Aspects click 
414) Sample of interesting posts click 
415) Another Cold Fusion conference is approaching click 
416) Discussing reproducibility click 
417) Recent posts click 
418) Voices from the CMNS list (January 2015) click 
419) Parkhomov’s Nuclear Reactor (March 2015) click 
420) Loose notes on his Nuclear Reactor (May 2015) click 

421) Peter Gluck interviews Ed Storms (September 2015) click 

Return to the top of this list of items.


============================================ Comments will be appreciated



Ludwik Kowalski (Wikipedia, archived) maintained a set of pages commenting on cold fusion issues, hosted by his university. That site is down at the moment, so I’ve decided to mirror what I can find of it on the Internet Archive.

I did attempt to contact Dr. Kowalski, but he did not respond as far as I know. However, the site returned, and I am mirroring it at

This is a subpage here, Kowalski/cf




On levels of reality and bears in the neighborhood

In my training, they talk about three realities: personal reality, social reality, and the ultimate test of reality. Very simple:

In personal reality, I draw conclusions from my own experience. I saw a bear in our back yard, so I say, “there are bears — at least one — in our neighborhood.” That’s personal reality. (And yes, I did see one, years ago.)

In social reality, people agree. Others may have seen bears. Someone still might say, “they could all be mistaken,” but this becomes less and less likely, the more people who agree. (There is a general consensus in our neighborhood, in fact, that bears sometimes show up.)

In the ultimate test, the bear tears your head off.

Now, for the kicker. There is a bear in my back yard right now! Proof: Meet Percy, named by my children.

I didn’t say what kind of bear! Percy is life-size and, from the road, could look for a moment like the animal. (The paint is fading a bit; Percy was slightly more realistic years ago, when I moved in. I used to live down the street, and that’s where I saw the actual animal.)

Continue reading “On levels of reality and bears in the neighborhood”

Hagelstein on theory and science

On Theory and Science Generally in Connection with the Fleischmann-Pons Experiment

Peter Hagelstein

This is an editorial from Infinite Energy, March/April 2013, p. 5, copied here for purposes of study and commentary. This article was cited to me as if it were in contradiction to certain ideas I have expressed. Reading it carefully, I find it is, for the most part, a confirmation of those ideas, and so I was motivated to study it here. Some of what Peter wrote in 2013 is being disregarded, not only by pseudoskeptics but also by people within the community. He presents some cautions, which are commonly ignored.

I was encouraged to contribute to an editorial generally on the topic of theory in science, in connection with publication of a paper focused on some recent ideas that Ed Storms has put forth regarding a model for how excess heat works in the Fleischmann-Pons experiment. Such a project would compete for my time with other commitments, including teaching, research and family-related commitments; so I was reluctant to take it on. On the other hand I found myself tempted, since over the years I have been musing about theory, and also about science, as a result of having been involved in research on the Fleischmann-Pons experiment. As you can see from what follows, I ended up succumbing to temptation.

I have listened to Peter talk many times in person. He has a manner that is quite distinctive, and it’s a pleasure to remember the sound of his voice. He is dispassionate and thoughtful, and often quietly humorous.

Science as an imperfect human endeavor 

In order to figure out the role of theory in science, probably we should start by figuring out what science is. Had you asked me years ago what science is, I would have replied with confidence. I would have rambled on at length about discovering how nature works, the scientific method, accumulation and systematization of scientific knowledge, about the benefits of science to mankind, and about those who do science. But alas, I wasn’t asked years ago.

[Cue laugh track.]

In this day and age, we might turn to Wikipedia as a resource to figure out what science is.

[Cue more laughter.] But he’s right: many might turn to Wikipedia, and even though I know very well how Wikipedia works and fails to work, I also use it every day. Wikipedia is unstable, constantly changing. Rather arbitrarily, I picked the March 1, 2013 version by PhaseChanger for a permanent link. Science, as we will see, does depend on consensus, and in theory Wikipedia does too; but in practice, Wikipedia editors are anonymous, their real qualifications are generally unknown, and there is no responsible and reliable governance. So Wikipedia is even more vulnerable to information cascades and hidden factional dominance than the “scientific community,” which is itself poorly defined.

We see on the Wikipedia page pictures of an imposing collection of famous scientists, discussion of the history of science, the scientific method, philosophical issues, science and society, impact on public policy and the like. One comes away with the impression of science as something sensible with a long and respected lineage, as a rational enterprise involving many very smart people, lots of work and systematic accumulation and organization of knowledge—in essence an honorable endeavor that we might look up to and be proud of. This is very much the spirit in which I viewed science a quarter century ago.

Me too. I still am proud of science, but there is a dark side to nearly everything human.

I wanted to be part of this great and noble enterprise. It was good; it advanced humanity by providing understanding. I respected science and scientists greatly.

Mixed up on Wikipedia, and to some extent here in Peter’s article, is “understanding” as the goal, with “knowledge,” the root meaning. “Understanding” is transient, and the sense that we understand something is probably a particular brain chemistry that responds to particular kinds of neural patterns and reactions. The real and practical value of science is in prediction, not some mere personal satisfaction; that satisfaction is rooted in a sense of control and safety. The pursuit of that brain chemistry, which is probably addictive, may motivate many scientists (and people in general). Threaten a person’s sense that they understand reality, and strong reactions will be common.

We can see the tension in the Wikipedia article. The lede defines science:

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1] In an older and closely related meaning (found, for example, in Aristotle), “science” refers to the body of reliable knowledge itself, of the type that can be logically and rationally explained (see History and philosophy below).[2]

There are obviously two major kinds of knowledge: One is memory, a record of witnessing. The other is explanation. The difference is routinely understood at law: a witness will be asked to report what they witnessed, not how they interpreted it (except possibly as an explanatory detail); in general, interpretation is the province of “expert witnesses,” who must be qualified before the court. Adversarial systems (as in the U.S.) create much confusion by not having the court choose experts to consult. Rather, each side hires its own experts, and some make a career out of testifying with some particular slant. Those differences of opinion are assessed by juries, subject to arguments from the plaintiff and defendant. It’s a place where the system can break down, though any system can break down. It’s better than some and worse than others.

Science, historically and practically (as we apply science in our lives), begins, not with explanations, but with observation and memory and, later in life, written records of observations. However, the human mind, it is well-known, tends to lose observational detail and instead will most strongly remember conclusions and impressions, especially those with some emotional impact.

So the foundation of science is the enormous body of experimental and other records. This is, however, often “systematized” through the explanations that developed, and the scientific method harnesses these to make the organization of knowledge more efficient through testing predictions and, over time, deprecating explanations that are less predictive, in favor of those more precise and comprehensive in prediction. This easily becomes confused with truth. As I will be repeating, however, the map is not the reality.

Today I still have great respect for science and for many scientists, probably much more respect than in days past. But my view is different today. Now I would describe science as very much a human endeavor; and as a human activity, science is imperfect. This is not intended as a criticism; instead I view it as a reflection that we as humans are imperfect. Which in a sense makes it much more amazing that we have managed to make as much progress as we have. The advances in our understanding of nature resulting from science generally might be seen as a much greater accomplishment in light of how imperfect humans sometimes are, especially in connection with science.

Yes. Peter has matured. He is no longer so outraged by the obvious.

The scientific method as an ideal

Often in talking with muggles (non-scientists in this context) about science, it seems first and foremost the discussion turns to the notion of the “scientific method,” which muggles have been exposed to and imagine is actually what scientists make use of when doing science. Ah, the wonderful idealization which is this scientific method! Once again, we turn to Wikipedia as our modern source for clarification of all things mysterious: the scientific method in summary involves the formulation of a question, a hypothesis, a prediction, a test and subsequent analysis. Without doubt, this method is effective for figuring out what is right and also what is wrong as to how nature works, and can be even more so when applied repeatedly on a given problem by many people over a long time.

The version of the Wikipedia article as edited by Crazynas: 22:30, 14 February 2013.

However, the scientific method, as it was conveyed to me (by Feynman at Cal Tech, 1961-63), requires something that runs in radical contradiction to how most people are socially conditioned, how they have been trained or have chosen to live, and how they actually live in practice. It requires a strenuous attempt to prove one’s own ideas wrong, whereas normal socialization expects us to try to prove we are right. While most scientists understand this, actual practice can be wildly off; hence confirmation bias is common.

In years past I was an ardent supporter of this scientific method. Even more, I would probably have argued that pretty much any other approach would be guaranteed to produce unreliable results.

Well, less reliable.

At present I think of the scientific method as presented here more as an ideal, a method that one would like to use, and should definitely use if and when possible. Sadly, there are circumstances where it isn’t practical to make use of the scientific method. For example, carrying out a test might require resources (such as funding, people, laboratories and so forth), and if the resources are not available then the test part of the method simply isn’t going to get done.

I disagree. It is always practical to use the method, provided that one understands that results may not be immediate. For example, one may design tests that may only later (maybe even much later) be performed. When an idea (hypothesis) has not been tested and shown to generate reliable predictions, the idea is properly not yet “scientific,” but rather proposed, awaiting confirmation. As well, it is, in some cases, possible to test an idea against a body of existing experimental evidence. This is less satisfactory than performing tests specifically designed with controls, but nevertheless can create progress, preliminary results to guide later work.

In the case Peter will be looking at, there was a rush to judgment, a political impulse to find quick answers, and the ideas that arose (experimental error, artifacts, etc.) were never well-tested. Rather, impressions were created and communicated widely, based on limited and inconclusive evidence, becoming the general “consensus” that Peter will talk about.

In practice, simple application of the scientific method isn’t enough. Consider the situation when several scientists contemplate the same question: They all have an excellent understanding of the various hypotheses put forth; there are no questions about the predictions; and they all do tests and subsequent analyses. This, for example, was the situation in the area of the Fleischmann-Pons experiment back in 1989. So, what happens when different scientists that do the tests get different answers?

Again, it’s necessary to distinguish between observation and interpretation. The answers only seemed different when viewed from within a very limited perspective. In fact, as we now can see it, there was a high consistency between the various experiments, including the so-called negative replications. Essentially, given condition X, Y was seen, at least occasionally. With condition X missing, Y was never seen. That is enough to conclude, first pass, a causal relationship between X and Y. X, of course, would be high deuterium loading, of at least about 90%. Y would be excess heat. There were also other necessary conditions for excess heat. But in 1989, few knew this and it was widely assumed that it was enough to put “two electrodes in a jam-jar” to show that the FP Heat Effect did not exist. And there was more, of course.

More succinctly, the tests did not get “different answers.” Reality is a single Answer. When reality is observed from more than one perspective or in different situations, it may look different. That does not make any of the observations wrong, merely incomplete, not the whole affair. What we actually observe is an aspect of reality, it is the reality of our experience, hence the training of scientists properly focuses on careful observation and careful reporting of what is actually observed.

You might think that the right thing to do might be to go back to do more tests. Unfortunately, the scientific method doesn’t tell you how many tests you need to do, or what to do when people get different answers. The scientific method doesn’t provide for a guarantee that resources will be made available to carry out more tests, or that anyone will still be listening if more tests happen to get done.

Right. However, there is a hidden assumption here, that one must find the “correct answers” by some deadline. Historically, pressure arose from the political conditions around the 1989 announcement, so corners were cut. It was clear that the tests that were done were inadequate and the 1989 DoE review included acknowledgement of that. There was never a definitive review showing that the FP measurements of heat were artifact. Of course, eventually, positive confirmations started to show up. By that time, though, a massive information cascade had developed, and most scientists were no longer paying any attention. I call it a Perfect Storm.

Consensus as a possible extension of the scientific method

I was astonished by the resolution to this that I saw take place. The important question on the table from my perspective was whether there exists an excess heat effect in the Fleischmann-Pons experiment. The leading hypotheses included: (1) yes, the effect was real; (2) no, the initial results were an artifact.

Peter is not mentioning a crucial aspect of this, the pressure developed by the “nuclear” claim. Had Pons and Fleischmann merely announced a heat anomaly, leaving the “nuclear” speculations or conclusions to others, preferably physicists, history might have been very different. A heat anomaly? So perhaps some chemistry isn’t understood! Let’s not run around like headless chickens, let’s first see if this anomaly can be confirmed! If not, we can forget about it, until it is.

Instead, because of the nuclear claim and some unfortunate aspects of how this was announced and published, there was a massive uproar, much premature attention, and, then, partly because Pons and Fleischmann had made some errors in reporting nuclear products, premature rejection, tossing out the baby with the bathwater.

Yes, scientifically, and after the initial smoke cleared, the reality of the heat was the basic scientific question. As Peter will make clear, and he is quite correct, “excess heat” does not mean that physics textbooks must be revised, it is not in contradiction to known physics, it merely shows that something isn’t understood. Exactly what remains unclear, until it is clarified. So, yes, the heat might be real, or there might be some error in interpretation of the experiments (which is another way of saying “artifact.”)

Predictions were made, which largely centered around the possibility that either excess heat would be seen, or that excess heat would not be seen. A very large number of tests were done. A few people saw excess heat, and most didn’t.

Now, this is fascinating, in fact. There is a consistency here, underneath apparent contradiction. Those who saw excess heat commonly failed to see it in most experiments. Obvious conclusion: generating the excess heat effect was not well-understood. There was another approach available, one usable under such chaotic conditions: correlations of conditions and effects. By the time a clear correlated nuclear product was known, research had slowed. To truly beat the problem, probably, collaboration was required, so that multiple experiments could be subject to common correlation study. That mostly did not happen.

With a correlation study, the “negative” results are part of the useful data. Actually essential. Instead, oversimplified conclusions were drawn from incomplete data. 

A very large number of analyses were done, many of which focused on the experimental approach and calorimetry of Fleischmann and Pons. Some focused on nuclear measurements (the idea here was that if the energy was produced by nuclear reactions, then commensurate energetic particles should be present);

Peter is describing history, that “commensurate energetic particles should be present” was part of the inexplicit assumption that if there was a heat effect, it must be nuclear, and if it were nuclear, it must be d-d fusion, and if it were d-d fusion, and given the reported heat, there must be massive energetic particles. Fatal levels, actually. The search for neutrons, in particular, was mostly doomed from the start, useless. Whatever the FP Heat Effect is, it either produces no neutrons or very, very few. (At least not fast neutrons, as with hot fusion. WL Theory is a hoax, in my view, but it takes some sophistication to see that, so slow neutrons remain as possibly being involved, first-pass.)

What is remarkable is how obvious this was from the beginning, but many papers were written that ignored the obvious.

and some focused on the integrity and competence of Fleischmann and Pons. How was this resolved? For me the astonishment came when arguments were made that if members of the scientific community were to vote, that the overwhelming majority of the scientific community would conclude that there was no effect based on the tests.

That is not an argument, it is an observation based on extrapolation from experience. As Peter well knows, it is not based on a review of the tests. The only reviews actually done, especially the later ones, concluded that the effect is real. Even the DoE review in 2004, Peter was there, reported that half of the 18 panelists considered the evidence for excess heat “conclusive.” Now, if you don’t consider it “conclusive”, what do you think? Anywhere from impossible to possible! That was a “vote” from a very brief review, and I think only half the panel actually attended the physical meeting, and it was only one day. More definitive, and hopefully more considered, in science, is peer-reviewed review in mainstream journals. Those have been uniformly positive for a long time.

So what the conditions holding at the time Peter is writing about show is that “scientists” get their news from the newspaper — and from gossip — and put their pants on one leg at a time.

The “argument” would be that decisions on funding and access to academic resources should be based on such a vote. Normally, in science, one does not ask about general consensus among “scientists,” but among those actually working in a field, it is the “consensus of the informed” which is sought. Someone with a general science degree might have the tools to be able to understand papers, but that doesn’t mean that they actually read and study and understand them. I just critiqued a book review by a respected seismologist, actually a professor at a major university, who clearly knew practically nothing about LENR, but considered himself to be a decent spokesperson for the mainstream. There are many like him. A little knowledge is a dangerous thing.

I have no doubt whatsoever that a vote at that time (or now) would have gone poorly for Fleischmann and Pons.

There was a vote in 2004, of a kind. The results were not “poor,” and show substantial progress over the 1989 review. However, yes, if one were to snag random scientists and pop the question, it might go “poorly.” But I’m not sure. I talk with a lot of scientists, in contexts not biased toward LENR, and there is more understanding out there than we might think. I really don’t know, and nobody has done the survey, nor is it particularly valuable. What matters everywhere is not the consensus of all people or all scientists, but all accepted as knowledgeable on the subject. One of the massive errors of 1989 and often repeated is that expertise on, say, nuclear physics, conveys expertise on LENR. But most of the work and the techniques are chemistry. Heat is most commonly a chemical phenomenon.

To actually review LENR fairly requires a multidisciplinary approach. Polling random scientists, garbage in, garbage out. Running reviews, with extensive discussion between those with experimental knowledge and others, hammering out real consensus instead of just knee-jerk opinion, that is what would be desirable. It’s happened here and there, simply not enough yet to make the kind of difference Peter and I would like to see.

The idea of a vote among scientists seems to be very democratic; in some countries leaders are selected and issues are resolved through the application of democracy. What to me was astonishing at the time was that this argument was used in connection with the question of the existence of an excess heat effect in the Fleischmann-Pons experiment.

And a legislature declared that pi was 22/7. Not a bad approximation, to be sure. What were they actually declaring? (So I looked this up. No, they did not declare that. “Common knowledge” is often quite distorted. And then, because Wikipedia is unreliable, I checked the Straight Dope, which is truly reliable, and if you doubt that, be prepared to be treated severely. I can tolerate dissent, but not heresy. Remarkably, Cecil Adams managed to write about cold fusion without making an idiot out of himself: “As the recent cold fusion fiasco makes clear, scientists are as prone to self-delusion as anybody else.” True, too true. Present company excepted, of course!)

Our society does not use ordinary “democratic process” to make decisions on fact. Rather, this mostly happens with juries, in courts of law. Yes, there is a vote, but to gain a result on a serious matter (criminal, say), unanimity is required, after a hopefully thorough review of evidence and arguments. 

In the years following I tried this approach out with students in the classroom. I would pose a technical question concerning some issue under discussion, and elicit an answer from the student. At issue would be the question as to whether the answer was right, or wrong. I proposed that we make use of a more modern version of the scientific method, which was to include voting in order to check the correctness of the result. If the students voted that the result was correct, then I would argue that we had made use of this augmentation of the scientific method in order to determine whether the result was correct or not. Of course, we would go on only when the result was actually correct.

Correct according to whom? Rather obviously, the professor. Appeal to authority. I would hope that the professor refrained from intervening unless it was absolutely necessary; rather, that he would recognize that the minority is, not uncommonly, right, but may not have expressed itself well enough, or that the truth is more complex than one view or another, “right and wrong.” Consensus organizations exist where finding full consensus is considered desirable, actually mission-critical. When a decision has massive consequences, perhaps paralyzing progress in science for a long time, perhaps “no agreement, but majority X,” with a defined process, is better than concluding that X is the truth and other ideas are wrong. In real organizations, with full discussion, consensus is much more accessible than most think. The key is “full discussion,” which often actually takes facilitation, from people who know how to guide participants toward agreements.

I love that Peter actually tried this. He’s living like a scientist, testing ideas.

In such a discussion, if a consensus appeared that the professor believed was wrong, then it’s a powerful teaching opportunity. How does the professor know it’s wrong? Is there experimental evidence of which the students were not aware, or failed to consider? Are there defective arguments being used, and if so, how did it happen that the students agreed on them? Social pressures? Laziness? Or something missing in their education? Simply declaring the consensus “wrong” would avoid the deeper education possible.

There is consensus process that works, that is far more likely to come up with deep conclusions than any individual, and there is so-called consensus that is a social majority bullying a minority. A crucial difference is respect and tolerance for differing points of view, instead of pushing particular points of view as “true,” and others as “false.”

The students understood that such a vote had nothing to do with verifying whether a result was correct or not. To figure out whether a result is correct, we can derive results, we can verify results mathematically, we can turn to unambiguous experimental results and we can do tests; but in general the correctness of a technical result in the hard sciences should probably not be determined from the result of this kind of vote.

Voting will occur in groups created to recommend courses of action. Courts will avoid attempts to decide “truth,” absent action proposed. One of the defects in the 2004 U.S. DoE review, as far as I know, was the lack of a specific, practical (within political reach) and actionable proposal. What has eventually come to me has been the creation of a “LENR desk” at the DoE, a specific person or small office with the task of maintaining knowledge of the state of research, with the job of making recommendations on research, i.e., identifying the kinds of fundamental questions to ask, tests to perform, to address what the 2004 panel unanimously agreed to recommend. That was apparently a genuine consensus, and obviously could lead to resolving all the other issues, but we didn’t focus on that, the CMNS community instead, chip on shoulder, focused on what was wrong with that review (and mistakes were made, for sure.)

Scientific method and the scientific community

I have argued that using the scientific method can be an effective way to clarify a technical issue. However, it could be argued that the scientific method should come with a warning, something to the effect that actually using it might be detrimental to your career and to your personal life. There are, of course, many examples that could be used for illustration. A colleague of mine recently related the story of Ignaz Semmelweis to me. Semmelweis (according to Wikipedia) earned a doctorate in medicine in 1844, and subsequently became interested in the question of why the mortality rate was so high at the obstetrical clinics at the Vienna General Hospital. He proposed a hypothesis that led to a testable prediction (that washing hands would improve the mortality rate), carried out the test and analyzed the result. In fact, the mortality rate did drop, and dropped by a large factor.

In this case Semmelweis made use of the scientific method to learn something important that saved lives. Probably you have figured out by now that his result was not immediately recognized or accepted by the medical and scientific communities, and the unfortunate consequences of his discovery to his career and personal life serve to underscore that science is very much an imperfect human enterprise. His career did not advance as it probably should have, or as he might have wished, following this important discovery. His personal life was negatively impacted.

This story is often told. I was a midwife, and trained midwives, and knew about Semmelweis long ago. A sentence from the Wikipedia article:

It has been contended that Semmelweis could have had an even greater impact if he had managed to communicate his findings more effectively and avoid antagonising the medical establishment, even given the opposition from entrenched viewpoints.[56]

Semmelweis became obsessed about his finding and the apparent rejection. In fact, there was substantial acceptance, but also widespread misunderstanding and denial. Semmelweis was telling doctors that they were killing their patients, and he was irate that they didn’t believe him.

How to accomplish that kind of information transfer remains tricky. It can still be the case that, at least for individuals, “standard of practice” can be deadly.

Semmelweis literally lost his mind, and died after being committed to a mental hospital, having been injured by a guard.

The scientific community is a social entity, and scientists within the scientific community have to interact from day to day with other members of the scientific community, as well as with those not in science. How a scientist navigates these treacherous waters can have an impact. For example, Fleischmann once described what happened to him following putting forth the claim of excess power in the Fleischmann-Pons experiment; he described the experience as one of being “extruded” out of the scientific community. From my own discussions with him, I suspect that he suffered from depression in his later years that resulted in part from the non-acceptance of his research.

Right. That, however, presents Fleischmann as a victim, along with all the other researchers “extruded.” However, he wasn’t rejected because he claimed excess heat. That simply isn’t what happened. The real story is substantially more complex. Bottom line, the depth of the rejection was related to the “nuclear claim,” made with only circumstantial evidence that depended entirely on his own expertise, together with an error in nuclear measurements, a first publication that called attention to the standard d+d reactions as if they were relevant, when they obviously were not, and then a series of decisions made, reactive to attack, that made it all worse. The secrecy, the failure to disclose difficulties promptly, the decision to withhold helium measurement results, the decision to avoid helium measurements for the future, the failure to honor the agreement in the Morrey collaboration, all amplified the impression of incompetence. He was not actually incompetent, certainly not as to electrochemistry! He was, however, human, dealing with a political situation outside his competence. However, his later debate with Morrison was based on an article that purported simplicity, but that was far from simple to understand. Fleischmann needed guidance, and didn’t have it, apparently. Or if he had sound guidance, he wasn’t listening to it. 

If he was depressed later, I would ascribe that to a failure to recognize and acknowledge what he had done and not done to create the situation. Doing so would have given him power. Instead, mostly, he remained silent. (People will tell themselves “I did the best I could,” which is typically BS: how could we possibly know that nothing better was possible? We may tell ourselves that it was all someone else’s fault, but that, then, assigns power to “someone else,” not to us. Power is created by “The buck stops here!”) But we now have his correspondence with Miles, and I have not studied it yet. What I know is that when we own and take full responsibility for whatever happened in our lives, we can then move on to much more than we might think possible.

Those who have worked on anomalies connected with the Fleischmann-Pons experience have a wide variety of experiences. For example, one friend became very interested in the experiments and decided to put time into this area of research. Almost immediately it became difficult to bring in research funding on any topic. From these experiences my friend consciously made the decision to back away from the field, after which it again became possible to get funding. Some others in the field have found it difficult to obtain resources to pursue research on the Fleischmann-Pons effect, and also difficult to publish.

Indeed. There are very many personal accounts. Too many are anonymous rumors, like this, which makes them less credible. I don’t doubt the general idea. Yes, I think many did make the decision to back away. I once had a conversation with a user on Wikipedia, who wanted his anonymity preserved, though he was taking a skeptical position on LENR. Why? Because, he claimed, if it were known that he was even willing to talk about LENR, it would damage his career as a scientist. That would have been in 2009 or so.

I would argue that instead of being an aberration of science (as many of my friends have told me), this is a part of science. The social aspects of science are important, and strongly impact what science is done and the careers and lives of scientists. I think that the excess heat effect in the Fleischmann-Pons experiment is important; however, we need to be aware of the associated social aspects. In a recent short course class on the topic I included slides with a warning, in an attempt to make sure that no one young and naive would remain unaware of the danger associated with cultivating an interest in the field. Working in this field can result in your career being destroyed.

Unfortunately, perhaps, the students may think you are joking. I would prefer to find and communicate ways to work in the field without such damage. There are hints of such possibilities in Peter’s essay. Definitely, anyone considering getting involved should know the risks, but also how, possibly, to handle them. Some activities in life are dangerous, but still worth doing.

It follows that the scientific method probably needs to be placed in context. Although the “question” to be addressed in the scientific method seems to be general, it is not. There is a filter implicit in connection with the scientific community, in that the question to be addressed through the use of the scientific method must be one either approved by, or likely to be approved by, the scientific community.

Peter is here beginning what he later calls the “outrageous parody.” If we take this as descriptive, there is a reality behind what he is writing. If a question is outside the boundaries being described, it’s at the edge of a cliff, or over it. Walking in such a place, with a naive sense of safety, is very dangerous. People die doing such, commonly. People aware of the danger still sometimes die, but not nearly so commonly.

The parody begins with his usage of “must.” There is no must, but there are natural consequences to working “outside the box.” Pons and Fleischmann knew that their work would be controversial, but somehow failed to treat it as the hot potato it was, if they mentioned “nuclear.” It’s ironic. Had they not mentioned they could have patented a method for producing heat, without the N word. If someone else had asked about “nuclear,” they could have said, “We don’t see adequate evidence to make such a claim. We don’t know what is causing the heat.”

And they could have continued with this profession of “inadequate evidence” until they had such evidence and it was bulletproof. It might only have taken a few years, maybe even less (i.e., to establish “nuclear”). Establishing a specific mechanism might still not have been accomplished, but without the rejection cascade, we would probably know much more, and, I suspect, we’d have a lab rat, at least.

Otherwise, the associated endeavor will not be considered to be part of science, and whatever results come from the application of the scientific method are not going to be included in the canon of science.

Yes, again if descriptive, not prescriptive. This should be obvious: what is not understood and well-confirmed does not belong in the “canon.”

If one decides to focus on a question in this context that is outside of the body of questions of interest to the scientific community, then one must understand that this will lead to an exclusion from the scientific community.

Again, yes, but with a condition. In my training, they told us, “If they are not shooting at you, you are not doing anything worth wasting bullets on.”

The condition is that it may be possible to work in such a way as to not arouse this response. With LENR, the rejection cascade was established in full force long ago, and is persistent. However, there may be ways to phrase “the question of interest” to keep it well within what the scientific community as a whole will accept. Others may find support and funding such that they can disregard that problem. Certainly McKubre was successful, I see no sign that he suffered an impact to his career, indeed LENR became the major focus of that career.

But why do people go into science? If it’s to make money, some do better getting an MBA, or going into industry. There would naturally be few that would choose LENR out of the many career possibilities, but eventually, in any field, one can come up against entrenched and factional belief. Scientists are not trained to face these issues powerfully, and many are socially unskilled.

Also, if one attempts to apply the scientific method to a problem or area that is not approved, then the scientific community will not be supportive of the endeavor, and it will be problematic to find resources to carry out the scientific method.

Resources are controlled by whom? Has it ever been the case that scientists could expect support for whatever wild-hair idea they want to pursue? However, in fact, resources can be found for any reasonably interesting research. They may have strings attached. TANSTAAFL. One can set aside LENR, work in academia and go for tenure, and then do pretty much whatever, but … if more than very basic funding is needed, it may take special work to find it.

One of the suggestions for this community is to create structures to assess proposed projects, generating facilitated consensus, and to recommend funding for projects considered likely to produce value, and then to facilitate connecting sources of funding with such projects.

Funding does exist. In not very long after Peter wrote this essay, he did receive some support from Industrial Heat. Modest projects of value and interest can be funded. Major projects, that’s more difficult, but it’s happening.

A possible improvement of the scientific method

This leads us back to the question of what is science, and to further contemplation of the scientific method. From my experience over the past quarter century, I have come to view the question of what science is as perhaps the wrong question. The more important issue concerns the scientific community; you see, science is what the scientific community says science is.

It all depends on what “is” is. It also depends on the exact definition of the “scientific community,” and, further, on how the “scientific community” actually “says” something.

Lost as well, is the distinction between general opinion, expert opinion, majority opinion, and consensus. If there is a genuine and widespread consensus, it is, first, very unlikely (as a general rule) to be seriously useless. I would write “wrong,” but as will be seen, I’m siding with Peter in denying that right and wrong are measurable phenomena. However, utility can be measured, at least comparatively. Secondly, rejecting the consensus is highly dangerous, not just for career, but for sanity as well. You’d better have good cause! And be prepared for a difficult road ahead! Those who do this rarely do well, by any definition.

This is not intended as a truism; quite the contrary.

There are two ways of defining words. One is by the intention of the speaker, the other is by the effect on the audience. The speaker has authority over the first, but who has authority over the second? Words have effects regardless of what we want. But, in fact, as I have tested again and again, every day, we may declare possibilities, using words, and something happens. Often, miracles happen. But I don’t actually control the effect of a given word, normally, rather I use already-established effects (in my own experience and in what I observe with others). If I have some personal definition, but the word has a different effect on a listener, the word will create that effect, not what I “say it means,” or imagine is my intention.

So, from this point of view, and as to something that might be measurable, science is not what the scientific community says it is, but is the effect that the word has. The “saying” of the scientific community may or may not make a difference.

In these days the scientific community has become very powerful. It has an important voice in our society. It has a powerful impact on the lives and careers of individual scientists. It helps to decide what science gets done; it also helps to decide what science doesn’t get done. And importantly, in connection with this discussion, it decides what lies within the boundaries of science, and also it decides what is not science (if you have doubts about this, an experiment can help clarify the issue: pick any topic that is controversial in the sense under discussion; stand up to argue in the media that not only is the topic part of science, but that the controversial position constitutes good science, then wait a bit and then start taking measurements).

Measurements of what? Lost in this parody is that words are intended to communicate, and in communication the target matters. So “science” means one thing to one audience, and something else to another. I argue within the media just as Peter suggests, sometimes. I measure my readership and my upvotes. Results vary with the nature of the audience. With specific readers, the variance may be dramatic.

“Boundaries of science” here refers to a fuzzy abstraction. Yet the effect on an individual of crossing those boundaries can be strong, very real. It’s like any social condition. 

What science includes, and perhaps more importantly does not include, has become extremely important; the only opinion that counts is that of the scientific community. This is a reflection of the increasing power of the scientific community.

Yet if the general community — or those with power and influence within it — decides that scientists are bourgeois counter-revolutionaries, they are screwed, except for those who conform to the vanguard of the proletariat. Off to the communal farm for re-education!

In light of this, perhaps this might be a good time to think about updating the scientific method; a more modern version might look something like the following:

So, yes, this is a parody, but I’m going to look at it as if it is descriptive of reality, under some conditions. It’s only an “outrageous parody” if proposed as prescriptive, normative.

1) The question: The process might start with a question like “why is the sky blue” (according to our source Wikipedia for this discussion), that involves some issue concerning the physical world. As remarked upon by Wikipedia, in many cases there already exists information relevant to the question (for example, you can look up in texts on classical electromagnetism to find the reason that the sky is blue). In the case of the Fleischmann-Pons effect, the scientific community has already studied the effect in sufficient detail with the result that it lies outside of science; so as with other areas determined to be outside of science, the scientific method cannot be used. We recognize in this that certain questions cannot be addressed using the scientific method.

If one wants to look at the blue sky question “scientifically,” it would begin further back, for before “why” comes observation. Is the sky “blue”? What does that mean, exactly? Who measures the color of the sky? Is it blue from everywhere and in every part? What is the “sky,” indeed, where is it? Yes, we have a direction for it, “up,” but how far up? With data on all this, on the sky and its color, then we can look at causes, at “why” or “how.”

And the question, the way that Peter phrases it, is reductionist. How about this answer to “why is the sky blue”: “Because God likes blue, you dummy!” That’s a very different meaning for “why” than what is really “how,” i.e., how is light transformed in color by various processes? The “God” answer describes an intention. That answer is not “wrong,” but incomplete.

There is another answer to the question: “Because we say so!” This has far more truth to it than may meet the eye. “Blue” is a name for a series of reactions and responses that we, in English, lump together as if they were unitary, single. Other languages and cultures may associate things differently.

To be sure, however, when I look at the sky, my reaction is normally “blue,” unless it’s a sunset or sunrise sky, when sometimes that part of the sky has a different color. I also see something else in the sky, less commonly perceived.

2) The hypothesis: Largely we should follow the discussion in Wikipedia regarding the hypothesis, treating it as a conjecture. For example, from our textbooks we find that the sky is blue because large angle scattering from molecules is more efficient for shorter wavelength light. However, we understand that since certain conjectures lie outside of science, those would need to be discarded before continuing (otherwise any result that we obtain may not lie within science). For example, the hypothesis that excess heat is a real effect in the Fleischmann-Pons experiment is one that lies outside of science, whereas the hypothesis that excess heat is due to errors in calorimetry lies within science and is allowed.

Now, if we understand “science” as the “canon,” the body of accepted fact and explanations, then the first hypothesis is, indeed, outside the canon: it is not an accepted fact, if the canon is taken most broadly, to indicate what is almost universally accepted. On the other hand, this hypothesis is supported by nearly all reviews in peer-reviewed mainstream journals since about 2005, so is it actually “outside of science”? It came one vote short of being a majority opinion in the 2004 DoE review, the closest event we have to a vote. The 18-expert panel was equally divided between “conclusive” and “not conclusive” on the heat question. (And if a more sophisticated question had been asked, it might have shown that a majority of the panel showed an allowance leaning toward reality, because “not conclusive” is not equivalent to “wrong.”) The alleged majority, which Peter assumes is a “consensus,” would be agreement on “wrong,” but that was apparently not the case in 2004.

But the “inside-science” hypothesis is the more powerful one to test, and this is what is so ironic here. If we think that the excess heat is real, then our effort should be, as I learned the scientific method, to attempt to prove the null hypothesis, that it’s artifact. So how do we test that? Then, by comparison, how would we test the first hypothesis? I have seen so many papers in this field where a researcher set out to prove that the heat effect is real. That’s a setup for confirmation bias. No, the deeper scientific approach is a strong attempt to show that the heat effect is artifact. And, in fact, often it is! That is, not all reports of excess heat are showing actual excess heat.

But some do, apparently. How would we know the difference? There is a simple answer: correlation between conditions and effects, across many experiments with controls well-chosen to prove artifact, and failing to find artifact. All of these would be investigating a question, that by the terms here, is clearly within science, and, not only that, is useful research. Understanding possible artifacts is obviously useful and within science!

After all, if we can show that the heat effect is only artifactual, we can then stop the waste of countless hours of blind-alley investigations and millions of dollars in funding that could otherwise be devoted to Good Stuff, like enormous machines to demonstrate thermonuclear fusion, that provide jobs for many deserving particle physicists and other Good Scientists.

For that matter, we could avoid Peter Hagelstein wasting his time with this nonsense, when he could be doing something far more useful, like designing weapons of mass destruction.

3) Prediction: We would like to understand the consequence that follows from the hypothesis, once again following Wikipedia here. Regarding scattering of blue light by molecules, we might predict that the scattered light will be polarized, which we can test. However, it is important to make sure that what we predict lies within science. For example, a prediction that excess heat can be observed as a consequence of the existence of a new physical effect in the Fleischmann-Pons experiment would likely be outside of science, and cannot be put forth. A prediction that a calorimetric artifact can occur in connection with the experiment (as advocated by Lewis, Huizenga, Shanahan and also by the Wikipedia page on cold fusion) definitely lies within the boundaries of science.

I notice that to be testable, a specific explanation must be created, i.e., scattering of light by molecules. That, then (with what is known or believed about molecules and light scattering), allows a prediction, polarization, which can be tested. The FP hypothesis here is odd. A “new physical effect” is not a specific testable hypothesis. That an artifact can occur is obvious, and is not the issue. Rather, the general idea is that the excess heat reported is artifact, and then so many have proposed specific artifacts, such as Shanahan. These are testable. That a specific artifact is shown not to be occurring does not take an experimental result outside of accepted science; that would require showing this for all possible artifacts, which is impossible. Rather, something else happens when investigations are careful. Again, testing a specific proposed artifact is clearly, as stated, within science, and useful as explained above.

4) Test: One would think the most important part of the scientific method is to test the hypothesis and see how the world works. As such, this is the most problematic. Generally a test requires resources to carry out, so whether a test can be done or not depends on funding, lab facilities, people, time and on other issues. The scientific community aids here by helping to make sure that resources (which are always scarce) are not wasted testing things that do not need to be tested (such as excess heat in the Fleischmann-Pons experiment).  Another important issue concerns who is doing the test; for example, in experiments on the Fleischmann-Pons experiment, tests have been discounted because the experimentalist involved was biased in thinking that a positive result could have been obtained.

To the extent that the rejection of the FP heat is a genuine consensus, of course funding will be scarce, but some research requires little or no funding. For example, literature studies.

“Need to be tested” is an opinion, and is individual or collective. It’s almost never a universal, and so, imagine that one has become aware of the heat/helium correlation and the status of research on this, and sees that, while the correlation appears solidly established, with multiple confirmed verifications, the ratio itself has only been measured twice with even rough precision, after possibly capturing all the helium. Now, demonstrating that the heat/helium ratio is artifact would have massive benefits, because heat/helium is the evidence that is most convincing to newcomers (like me).

So the idea occurs of using what is already known, repeating work that has already been done, but with increased precision and using the simple technique discovered to, apparently, capture all the helium. Yes, it’s expensive work. However, in fact, this was funded with a donation from a major donor, well-known, to the tune of $6 million, in 2014, to be matched by another $6 million in Texas state funds. All to prove that the heat/helium correlation is bogus, and like normal pathological science, disappears with increased precision! Right?

Had it been realized, this could have been done many years ago. Think of the millions of dollars that would have been saved! Why did it take a quarter century after the heat/helium correlation was discovered to set up a test of this with precision and the necessary controls? 

Blaming that on the skeptics is delusion. This was us.

5) Analysis: Once again we defer to the discussion in Wikipedia concerning connecting the results of the experiment with the hypothesis and predictions. However, we probably need to generalize the notion of analysis in recognition of the accumulated experience within the scientific community. For example, if the test yields a result that is outside of science, then one would want to re-do the test enough times until a different result is obtained. If the test result stubbornly remains outside of acceptable science, then the best option is to regard the test as inconclusive (since a result that lies outside of science cannot be a conclusion resulting from the application of the method).

In reality, few results are totally conclusive. There is always some possible artifact left untested. Science (real science, and not merely the social-test science being proposed here) is served when all those experimental results are reported, and if it’s necessary to categorize them, fine. But if they are reported, later analysis, particularly when combined with other reports, can look more deeply. The version of science being described is obviously a fixed thing, not open to any change or modification; it’s dead, not living. Real science — and even the social-test science — does change; it merely can take much longer than some of us would like, because of social forces.

Once again, the advice here, if one wants to stay within accepted science, is to frame the work as an attempt to confirm mainstream opinion through specific tests, perhaps with increased precision (which is often done to extend the accuracy of known constants). If someone tries to prove artifact in an FP-type experiment, one of the signs of artifact would be that major variables and results would not correlate (such as heat and helium). Other variable pairs can be examined the same way. The results may be null (no heat found) and perhaps no helium found above background as well. Now, suppose one does this experiment twenty times, and most of these times there is no heat and no helium. But, say, five times, there is heat, and the amount of heat correlates with helium. The more heat, the more helium. This is, again, simply an experimental finding. One may make mistakes in measuring heat and in measuring helium. If anodic reversal is used to release trapped helium, what is the ratio found between heat and helium? And how does this compare to other similar experiments?
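The analysis just described can be made concrete with a small sketch. The numbers below are entirely hypothetical (invented for illustration, not real experimental data); the only assumed physical input is the commonly cited Q-value of roughly 23.8 MeV per helium-4 atom for conversion of deuterium to helium. The point is the method: across the active runs, compute the correlation between excess heat and helium yield, and the energy-per-atom ratio for each run, then compare that ratio to the theoretical value.

```python
# Illustrative sketch with hypothetical numbers: do excess heat and helium-4
# correlate across repeated runs, and what is the energy released per helium
# atom? Assumes ~23.8 MeV per helium-4 atom for deuterium -> helium conversion.
import math

MEV_PER_JOULE = 6.241509e12  # conversion factor: MeV per joule
Q_MEV = 23.8                 # assumed MeV released per helium-4 atom produced

# Twenty hypothetical runs: (excess energy in joules, helium-4 atoms above
# background). Fifteen null runs, five "active" runs with heat and helium.
runs = [(0.0, 0.0)] * 15 + [
    (1.2e4, 3.0e15),
    (2.5e4, 6.8e15),
    (4.0e4, 1.0e16),
    (6.1e4, 1.7e16),
    (9.0e4, 2.3e16),
]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Consider only the active runs (the nulls carry no ratio information).
heat, helium = zip(*(r for r in runs if r[0] > 0))
r = pearson(heat, helium)

# Energy per helium atom, in MeV, for each active run; compare to Q_MEV.
ratios = [q * MEV_PER_JOULE / n_he for q, n_he in zip(heat, helium)]
print(f"correlation r = {r:.3f}")
print("MeV per helium atom, per run:", [f"{x:.1f}" for x in ratios])
```

With numbers like these, the correlation is strong and the per-run ratios cluster near the assumed Q-value; with a genuine artifact, one would instead expect the ratio to scatter wildly or the correlation to vanish. That distinction, not any single run, is what carries the evidential weight.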

When reviewing experimental findings, with decently-done work, the motivation of the workers is not terribly relevant. If they set out to show, and state this, that their goal was to show that heat/helium correlation was artifact, and they considered all reasonably possible artifacts, and failed to confirm any of them, in spite of diligent efforts, what effect would this have when reported?

And what happens, over time, when results like these accumulate? Does the “official consensus of bogosity” still stand?

In fact, as I’ve stated, that has not been a genuine scientific consensus for a long time, clearly it was dead by 2004, persisting only in pockets that each imagine they represent the mainstream. There is a persistence of delusion.

If ultimately the analysis step shows that the test result lies outside of science, then one must terminate the scientific method, in recognition that it is a logical impossibility that a result which lies outside of science can be the result of the application of the scientific method. It is helpful in this case to forget the question; it would be best (but not yet required) that documentation or evidence that the test was done be eliminated.

Ah, but a result outside of “science,” i.e., normal expectations, is simply an anomaly, it proves nothing. Anomalies show that something about the experiment is not understood, and that therefore there is something to be learned. The parody is here advising people how to avoid social disapproval, and if that is the main force driving them, then real science is not their interest at all. Rather, they are technologists, like robotic parrots. Useful for some purposes, not for others. If you knew this about them, would you hire them?

The analysis step created a problem for Pons and Fleischmann because they mixed up their own ideas and conclusions with their experimental facts, and announced conclusions that challenged the scientific status quo — and seriously — without having the very strong evidence needed to manage that. Once that context was established, later work was tarred with the same brush, too often. So the damage extended far beyond their own reputations.

6) Communication with others, peer review: When the process is sufficiently complete that a conclusion has been reached, it is important for the research to be reviewed by others, and possibly published so that others can make use of the results; yet again we must defer to Wikipedia on this discussion. However, we need to be mindful of certain issues in connection with this. If the results lie outside of science then there is really no point in sending it out for review; the scientific community is very helpful by restricting publication of such results, and one’s career can be in jeopardy if one’s colleagues become aware that the test was done. As it sometimes happens that the scientific community changes its view on what is outside of science, one strategy is to wait and publish later on (one can still get priority). If years pass and there are no changes, it would seem a reasonable strategy to find a much younger trusted colleague to arrange for posthumous publication.

Or wait until one has tenure. Basically, this is the real world: political considerations matter, and, in fact, it can be argued that they should matter. Instead of railing against the unfairness of it all, access to power requires learning how to use the system as it exists, not as we wish it were. Sometimes we may work for transformation of existing structures (or creation of structure that has not yet existed), but this takes time, typically, and it also takes community and communication, cooperation, and coordination, around which much of the CMNS community lacks skill. Nevertheless, anyone and everyone can assist, once what is missing is distinguished.

Or we can continue to blame the skeptics for doing what comes naturally for them, while doing what comes naturally for us, i.e., blaming and complaining and doing nothing to transform the situation, not even investigating the possibilities, not looking for people to support, and not supporting those others.

7) Re-evaluation: In the event that this augmented version of the scientific method has been used, it may be that in spite of efforts to the contrary, results are published which end up outside of science (with the possibility of exclusion from scientific community to follow).

Remember, it is not “results” which are outside of science, ever! It is interpretations of them. So avoid unnecessary interpretation! Report verifiable facts! If they appear to imply some conclusion that is outside science, address this with high caution. Disclaim those conclusions, proclaim that while some conclusion might seem possible, that this is outside what is accepted and cannot be asserted without more evidence, and speculate on as many artifacts as one can imagine, even if total bullshit, and then seek funding to test them, to defend Science from being sullied by immature and premature conclusions.

Just report all the damn data and then let the community interpret it. Never get into a position of needing to defend your own interpretations, that will take you out of science, and not just the social-test science, but the real thing. Let someone else do that. Trust the future, it is really amazing what the future can do. It’s actually unlimited!

If this occurs, the simplest approach is simply a retraction of results (if the results lie outside of science, then they must be wrong, which means there must be an error—more than enough grounds for retraction).

The parody is now suggesting actually lying to avoid blame. Anyone who does that deserves to be totally ostracized from the scientific community! I will be making a “modest proposal” regarding this and other offenses. (Converting offenders into something useful.)

Retracting results should not be necessary if they have been carefully reported and if conclusions have been avoided, and if appropriate protective magic incantations have been uttered. (Such as, “We do not understand this result, but are publishing it for review and to seek explanations consistent with scientific consensus, blah blah.”) If one believes that one does understand the result, nevertheless, one is never obligated to incriminate oneself, and since, if one is sophisticated, one knows that some failure of understanding is always possible, it is honest to note that. Depending on context, one may be able to be more assertive without harm. 

If the result supports someone who has been selected for career destruction, then a timely retraction may be well received by the scientific community. A researcher may wish to avoid standing up for a result that is outside of science (unless one is seeking near-term career change).

The actual damage I have seen is mostly from researchers standing for and reporting conclusions, not mere experimental facts. To really examine this would require a much deeper study. What should be known is that working on LENR in any way can sometimes have negative consequences for career. I would not recommend anyone go into the field unless they are aware of this, fully prepared to face it, and as well, willing to learn what it takes to minimize damage (to themselves and others). LENR is, face it, a very difficult field, not a slam dunk for anyone.

There are, of course, many examples in times past when a researcher was able to persuade other scientists of the validity of a contested result; one might naively be inspired from these examples to take up a cause because it is the right thing to do.

Bad Idea, actually. Naive. Again, under this is the idea that results are subject to “contest.” That’s actually rare. What really happens, long-term, is that harmonization is discovered, explanations that tie all the results together into a combination of explanations that support all of them. Certainly this happened with the original negative replications of the FPHE. The problem with those was not the results, but how the results were interpreted and used. I support much wider education on the distinction between fact and interpretation, because only among demagogues and fanatics does fact come into serious question. Normal people can actually agree on fact, with relative ease, with skilled facilitation. It’s interpretations which cause more difficulty. And then there is more process to deepen consensus.

But that was before modern delineation, before the existence of correct fundamental physical law and before the modern identification of areas lying outside of science.

“Correct.” Who has been using that term a lot lately? This is a parody, and the mindset being parodied is deeply regressive and outside of traditional science, and basically ignorant of the understanding of the great scientists of the last century, who didn’t think like this at all. But Peter knows that.

The reality here is that a “scientific establishment” has developed that, being more successful in many ways, also has more power, and institutions always act to preserve themselves and consolidate their power. But such power is, nevertheless, limited and vulnerable, and it may be subverted, if necessary. The scientific establishment is still dependent on the full society and its political institutions for support.

There are no examples of any researcher fighting for an area outside of science and winning in modern times. The conclusion that might be drawn is of course clear: modern boundaries are also correct; areas that are outside of science remain outside of science because the claims associated with them are simply wrong.

That was the position of the seismologist I mentioned. So a real scientist, credentialed, actually believed in “wrong” without having investigated, depending merely on rumor and general impressions. But what is “wrong”? Claims! Carefully reported, fact is never wrong. I may report that I measured a voltage as 1.03 V. That is what I saw on the meter. In reality, the meter’s calibration might be off. I might have had the scale set differently than I thought (I have a nice large analog meter, which allows errors like this). However, it is a fact that I reported what I did. Hence truly careful reporting attributes all the various assumptions that must be made, by assigning them to a person.

Claims are interpretations of evidence, not evidence itself. I claim, for example, that the preponderance of the evidence shows that the FP Heat Effect is the result of the conversion of deuterium to helium. I call that the “Conjecture.” It’s fully testable and well-enough described to be tested. It’s already been tested, and confirmed well enough that if this were an effective treatment for any disease, it would be ubiquitous, approved by authorities, but it can be tested — and is being tested — with increased precision.

That’s a claim. One can disagree with a claim. However, disagreeing with evidence is generally crazy. Evidence is evidence, consider this rule of evidence at law: Testimony is presumed true unless controverted. It is a fact that so-and-so testified to such-and-such, if the record shows that. It is a fact that certain experimental results were reported. We may then discuss and debate interpretations. We might claim that the lab was infected with some disease that caused everyone to report random data, but how likely is this? Rather, the evidence is what it is, and legitimate arguments are over interpretations. Have I mentioned that enough?

Such a modern generalization of the scientific method could be helpful in avoiding difficulties. For example, Semmelweis might have enjoyed a long and successful career by following this version of the scientific method, while getting credit for his discovery (perhaps posthumously). Had Fleischmann and Pons followed this version, they might conceivably have continued as well-respected members of the scientific community.

Semmelweis was doomed, not because of his discovery, but from how he then handled it, and his own demons. Fleischmann, toward the end of his life, acknowledged that it was probably a mistake to use the word “fusion” or “nuclear.” That was weak. Probably? (Actually, I should look up the actual comment, to get it right.) This was largely too late. That could have been recognized immediately, it could have been anticipated. Why wasn’t it? I don’t know. Fairly rapidly, the scientific world polarized around cold fusion, as if there were two competing political parties in a zero-sum game. There were some who attempted to foster communication, the example that comes to my mind is the late Nate Hoffman. Dieter Britz as well. There are others who don’t assume what might be called “hot” positions.

The take-home message is actually not subservience that would have saved these scientists, but respect and reliance on the full community. Not always easy, sometimes it can look really bad! But necessary.

Where delineation is not needed

It might be worth thinking a bit about boundaries in science, and perhaps it would be useful first to examine where boundaries are not needed. In 1989 a variety of arguments were put forth in connection with excess heat in the Fleischmann-Pons experiment, and one of the most powerful was that such an effect is not consistent with condensed matter physics, and also not consistent with nuclear physics. In essence, it is impossible based on existing theory in these fields.

Peter is here repeating a common trope. Is he still in the parody? There is nothing about “excess heat” that creates a conflict with either condensed matter physics or nuclear physics. There is no impossibility proof. Rather, what was considered impossible was d-d fusion at significant levels under those conditions. That position can be well-supported, though it’s still possible that some exception might exist. Just very unlikely. Most reasonable theories at this point rely on collective effects, not simple d-d fusion.

There is no question as to whether this is true or not (it is true);

If that statement is true, I’ve never seen evidence for it, never a clear explanation of how anomalous heat, i.e., heat not understood, is “impossible.” To know that we would need to be omniscient. Rather, it is specific nuclear explanations that may more legitimately be considered impossible.

but the implication that seems to follow is that excess heat in the Fleischmann-Pons experiment in a sense constitutes an attack on two important, established and mature areas of physics.

When it was framed as nuclear, and even more, when it was implied that it was d-d fusion, it was exactly such an attack. Pons and Fleischmann knew that there would be controversy, but how well did they understand that, and why did they go ahead and poke the establishment in the eye with that news conference? It was not legally necessary. They have blamed university legal, but I’m suspicious of that. Priority could have been established for patent purposes in a different way. 

A further implication is that the scientific community needed to rally to defend two large areas firmly within the boundaries of science.

Some certainly saw it that way, saw “cold fusion” as an attack of pseudoscience and wishful thinking on real science. The name certainly didn’t help, because it placed the topic firmly within nuclear physics, when, in fact, it was originally an experimental result in electrochemistry.

One might think that this should have led to establishment of the boundary as to what is, and what isn’t, science in the vicinity of the part of science relevant to the Fleischmann-Pons experiment. I would like to argue that no such delineation is necessary for the defense of either science as a whole, or any particular area of science. Through the scientific method (and certainly not the outrageous parody proposed above) we have a powerful tool to tell what is true and what is not when it comes to questions of science.

The tool as I understand it is guidance for the individual, not necessarily a community. However, if a collection of individuals is dedicated to using it, they may collectively develop substantial power, because the tool has implications in every area of life, wherever we need to develop power (which includes the ability to predict the effects of actions). Peter may be misrepresenting the effectiveness of the method: it does not determine truth. It develops and tests models which predict behavior, so the models are more or less useful, not true or false. The model is not reality; the map is not the territory. When we forget this and believe that a model is “truth,” we are trapped, because opposing the truth is morally reprehensible. Rather, it is always possible for a model to be improved, for a map to become more detailed and more clear; the only model that fully explains reality is reality itself. Nothing else has the necessary detail.

Chaos theory and quantum mechanics, together, demolished the idea that with accurate enough models we could predict the future, precisely.

Science is robust, especially modern science; and both condensed matter and nuclear physics have no need for anyone to rally to defend anything.

Yes. However, there are people with careers and organizations dependent on funding based on particular beliefs and approaches. Whether or not they “need” to be defended, they will defend themselves. That’s human!

If one views the Fleischmann-Pons experiment as an attack on any part of physics, then so be it.

One may do that, and it’s a personal choice, but it is essentially dumb, because nothing about the experiment attacks any part of physics, and how can an experiment attack a science? Only interpreters and interpretations can do that! What Pons and Fleischmann did was look where nobody had looked, at PdD above 90% loading. If looking at reality were an attack on existing science, “existing science” would deserve to die. But it isn’t such an attack, and this was a social phenomenon, a mass delusion, if you will.

A robust science should welcome such a challenge. If excess heat in the Fleischmann-Pons experiment shows up in the lab as a real effect, challenging both areas, then we should embrace the associated challenge. If either area is weak in some way, or has some error or flaw somehow that it cannot accommodate what nature does, then we should be eager to understand what nature is doing and to fix whatever is wrong.

It is, quite simply, unnecessary to go there. Until we have a far better understanding of the mechanism involved in the FP Heat Effect, it is no challenge at all to existing theory, other than a weak one, i.e., it is possible that something has not been understood. That is always possible and would have been possible without the FP experiment. That doesn’t mean that a lot of effort would be justified to investigate it.

However, some theories proposed to explain LENR do challenge existing physics, some more than others. Some don’t challenge it at all, other than possibly pointing to incomplete understanding in some areas. The one statement I remember from those physics lectures with Feynman in 1961-63 is that we didn’t have the math to calculate the solid state. Hence there has been reliance on approximations, and approximations can easily break down under some conditions. At this point, we don’t know enough about what is happening in the FP experiment (and other LENR experiments) to be able to clearly show any conflict with existing physics, and those who claim that major revisions are needed are blowing smoke; they don’t actually have a basis for that claim, and it continues to cause harm.

The situation becomes a little more fraught with the Conjecture, but, again, without a mechanism (and the Conjecture is mechanism-independent), there is no challenge. Huizenga wrote that the Miles result (heat/helium correlation within an order of magnitude of the deuterium conversion ratio) was astonishing, but thought it likely that this would not be confirmed (because no gammas). But gammas are only necessary for d+d -> 4He, not necessarily for all pathways. So this simply betrayed how widespread and easily accepted was the idea that the FP Heat Effect, if real, must be d-d fusion. After all, what else could it be? This demonstrates the massive problem with the thinking that was common in 1989 (and which still is, for many).

The current view within the scientific community is that these fields have things right, and if that is not reflected in measurements in the lab, then the problem is with those doing the experiments.

Probably! And “probably useful” is where funding is practical. Obtaining funding for research into improbable ideas is far more difficult, eh? (In reality, “improbable” is subjective, and the beauty of the world as it is, is that the full human community is diverse, and there is no single way of thinking, merely some that are more common than others. It is not necessary for everyone to be convinced that something is useful, but only one person, or a few, those with resources.) 

Such a view prevailed in 1989, but now nearly a quarter century later, the situation in cold fusion labs is much clearer. There is excess heat, which can be a very big effect; it is reproducible in some labs;

That’s true, properly understood. In fact, reliability remains a problem in all labs. That is why correlation is so important: for correlation it is not necessary to have a reliable effect; a reliable relationship is adequate. “It is reproducible” is a claim that, to be made safely under the more conservative rules proposed when swimming upstream, would require actual confirmation of a specific protocol; this cannot be properly asserted by a single lab. And then, when we try to document this, we run into the problem that few actually replicate; they keep trying to “improve.” And so results are different, and often the improvements have no effect or even demolish the results.
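The methodological point can be made concrete with a toy calculation (all numbers below are invented for illustration, not experimental data): even when most runs are “dead,” the runs taken together can still show a strong heat/helium correlation.

```python
# Toy illustration with invented numbers (NOT experimental data):
# an unreliable effect -- most runs produce no excess heat -- can
# still yield a strong heat/helium correlation across all runs.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical excess power per run (W): five of ten runs are dead.
heat = [0.0, 0.0, 0.05, 0.0, 0.12, 0.0, 0.30, 0.08, 0.0, 0.20]
# Hypothetical helium signal per run (arbitrary units), roughly
# proportional to heat, with measurement scatter.
helium = [0.01, 0.02, 0.06, 0.01, 0.13, 0.00, 0.27, 0.09, 0.02, 0.22]

r = pearson_r(heat, helium)
print(f"Pearson r = {r:.2f}")
```

The point of the sketch: the reliability of any single run is irrelevant to the correlation. What matters is that helium tracks heat wherever heat appears, which is exactly the kind of evidence that survives an unreliable effect.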

there are not [sic] commensurate energetic products; there are many replications; and there are other anomalies as well. Condensed matter physics and nuclear physics together are not sufficiently robust to account for these anomalies. No defense of these fields is required, since if some aspect of the associated theories is incomplete or can be broken, we would very much like to break it, so that we can focus on developing new theory that is more closely matched to experiment.

There is a commensurate product that may be energetic, but, as to significant levels, below the Hagelstein limit. By the way, Peter, thanks for that paper! 

Theory and fundamental physical laws

From the discussion above, things are complicated when it comes to science; it should come as no surprise that things are similarly complicated when it comes to theory.

Creating theory with inadequate experimental data is even more complicated. It could be argued that it might be better to wait, but people like the exercise and are welcome to spend as much time as they like on puzzles. As to funding for theory, at this point, I would not recommend much! If the theoretical community can collaborate, maybe. Can they? What is needed is vigorous critique, because some theories propose preposterousnesses, but the practice in the field became, as Kim told me when I asked him about Takahashi theory, “I don’t comment on the work of others.” Yet Takahashi’s theory looks to me like a more detailed statement of what Kim proposes in more general terms. And if that’s wrong, I’d like to know! This reserve is not normal in mature science, because scientists are all working together, at least in theory, building on each other’s work. And for funding, normally, there must be vetting and critique.

In fact, were I funding theory, I’d contract with theorists to generate critique of the theories of others and then create process for reviewing that. The point would be to stimulate wider consideration of all the ideas, and, as well, to find if there are areas of agreement. If not, where are the specific disagreements and can they be tested?

Perhaps the place to begin in this discussion is with the fundamental physical laws, since in this case things are clearest. For the condensed matter part of the problem, a great deal can be understood by working with nonrelativistic electrons and nuclei as quantum mechanical particles, and Coulomb interactions. The associated fundamental laws were known in the late 1920s, and people routinely take advantage of them even now (after more than 80 years). Since so many experiments have followed, and so many calculations have been done, if something were wrong with this basic picture it would very probably have been noticed by now; consequently, I do not expect anomalies associated with Fleischmann-Pons experiments to change these fundamental nonrelativistic laws (in my view the anomalies are due to a funny kind of relativistic effect).

Nor do I expect that, for similar reasons. I don’t think it’s “relativistic,” but rather is more likely a collective effect (such as Takahashi’s TSC fusion or similar ideas). But this I know about Peter: it could be the theory du jour. He wrote the above in 2013. At the Short Course at ICCF-21, Peter described a theory, he had just developed the week before. To noobs. Is that a good idea? What do you think, Peter? How did the theory du jour come across at the DoE review in 2004?

Peter is thinking furiously, has been for years. He doesn’t stay stuck on a single approach. Maybe he will find something, maybe he already has. And maybe not. Without solid data, it’s damn hard to tell.

There are, of course, magnetic interactions, relativistic effects, couplings generally with the radiation field and higher-order effects; these do not fit into the fundamental simplistic picture from the late 1920s. We can account for them using quantum electrodynamics (QED), which came into existence between the late 1920s and about 1950. From the simplest possible perspective, the physical content of the theory associated with the construction includes a description of electrons and positrons (and their relativistic dynamics in free space), photons (and their relativistic dynamics in free space) and the simplest possible coupling between them. This basic construction is a reductionist’s dream, and everything more complicated (atoms, molecules, solids, lasers, transistors and so forth) can be thought of as a consequence of the fundamental construction of this theory. In the 60 years or more of experience with QED, there has accumulated pretty much only repeated successes and triumphs of the theory following many thousands of experiments and calculations, with no sign that there is anything wrong with it. Once again, I would not expect a consideration of the Fleischmann-Pons experiment to result in a revision of this QED construction; for example, if there were to be a revision, would we want to change the specification of the electron or photon, the interaction between them, relativity, or quantum mechanical principles? (The answer here should be none of the above.)

Again, he is here preaching to the choir. Can I get a witness?

We could make similar arguments in the case of nuclear physics. For the fundamental nonrelativistic laws, the description of nuclei as made up of neutrons and protons as quantum particles with potential interactions goes back to around 1930, but in this case there have been improvements over the years in the specification of the interaction potentials. Basic quantitative agreement between theory and experiment could be obtained for many problems with the potentials of the late 1950s; and subsequent improvements in the specification of the potentials have improved quantitative agreement between theory and experiment in this picture (but no fundamental change in how the theory works).

But neutrons and protons are compound particles, and new fundamental laws which describe component quarks and gluons, and the interaction between them, are captured in quantum chromodynamics (QCD); the associated field theory involves a reductionist construction similar to QED. This fundamental theory came into existence by the mid-1960s, and subsequent experience with it has produced a great many successes. I would not expect any change to result to QCD, or to the analogous (but somewhat less fundamental) field theory developed for neutrons and protons—quantum hadrodynamics, or QHD—as a result of research on the Fleischmann-Pons experiment.

Because nuclei can undergo beta decay, to be complete we should probably reference the discussion to the standard model, which includes QED, QCD and electro-weak interaction physics.

Yes. In my view it is, at this point, crazy to challenge standard physics without a necessity, and until there is much better data, there is no necessity.

In a sense then, the fundamental theory that is going to provide the foundation for the Fleischmann-Pons experiment is already known (and has been known for 40-60 years, depending on whether we think about QED, QCD or the standard model). Since these fundamental models do not include gravitational particles or forces, we know that they are incomplete, and physicists are currently putting in a great deal of effort on string theory and generalizations to unify the basic forces and particles. Why nature obeys quantum mechanics, and whether quantum mechanics can be derived from some more fundamental theory, are issues that some physicists are thinking about at present. So, unless the excess heat effect is mediated somehow by gravitational effects, unless it operates somehow outside of quantum mechanics, unless it somehow lies outside of relativity, or involves exotic physics such as dark matter, then we expect it to follow from the fundamental laws embodied by the standard model.

Agreed, as to what I expect.

I would not expect the resolution of anomalies in Fleischmann-Pons experiments to result in the overturn of quantum mechanics (there are some who have proposed exactly that); nor require a revision of QED (also argued for); nor any change in QCD or the standard model (as contemplated by some authors); nor involve gravitational effects (again, as has been proposed). Even though the excess heat effect by itself challenges the fields of condensed matter and nuclear physics, I expect no loss or negation of the accumulated science in either area; instead I think we will come to understand that there is some fine print associated with one of the theorems that we rely on which we hadn’t appreciated. I think both fields will be added to as a result of the research on anomalies, becoming even more robust in the process, and coming closer than they have been in the past.

Agreed, but I don’t see how the “excess heat effect by itself challenges the fields,” other than by presenting a mystery that is as yet unexplained. That is a kind of challenge, but not a claim that basic models are “wrong.” By itself, it does not contradict what is well-known, other than unsubstantiated assumptions and speculations. Yes, I look forward to the synthesis.

Theory, experiment and fundamental physical law

My view as a theorist generally is that experiment has to come first. If theory is in conflict with experiment (and if the experiment is correct), then a new theory is needed.

Yes, but caution is required, because “theory in conflict with experiment” is an interpretation, and defects can arise not only in the experiment, but also in the interpretations of the theory and the experiment and the comparison. A better statement, for me, is that new interpretations are required. If the theory is otherwise well-established, revision of the theory is not a sane place to start. Normally.

Among those seeking theoretical explanations for the Fleischmann-Pons experiment there tends to be agreement on this point. However, there is less agreement concerning the implications. There have been proposals for theories which involve a revision of quantum mechanics, or that adopt a starting place which goes against the standard model. The associated argument is that since experiment comes first, theory has to accommodate the experimental results; and so we can forget about quantum mechanics, field theory and the fundamental laws (an argument I don’t agree with). From my perspective, we live at a time where the relevant fundamental physical laws are known; and so when we are revising theory in connection with the Fleischmann-Pons experiment, we do so only within a limited range that starts from fundamental physical law, and seek some feature of the subsequent development where something got missed.

This is the political reality: If we advance explanations of cold fusion that contradict existing physics, we create resistance, not only to the new theories, but to the underlying experimental basis for even thinking a theory is necessary. So the baby gets tossed with the bathwater. It causes damage. It increases pressure for the Garwin theory (“They must be doing something wrong.”)

If so, then what about those in the field that advocate for the overturn of fundamental physical law based on experimental results from the Fleischmann-Pons experiment? Certainly those who broadcast such views impact the credibility of the field in a very negative way, and it is the case that the credibility of the field is pretty low in the eyes of the scientific community and the public these days.

Yes. This is what I’ve been saying, to some substantial resistance. We are better off with no theory, with only what is clearly established by experimental results, a collection of phenomena, and, where possible, clear correlations, with only the simplest of “explanations” that cover what is known, not what is speculated or weakly inferred.

One can find many examples of critics in the early years (and also in recent times) who draw attention to suggestions from our community that large parts of existing physics must be overturned as a response to excess heat in the Fleischmann-Pons experiment. These clever critics have understood clearly how damaging such statements can be to the field, and have exploited the situation. An obvious solution might be to exclude those making the offending statements from this community, as has been recommended to me by senior people who understand just how much damage can be done by association with people who say things that are perceived as not credible. I am not able to explain in return that people who have experienced exclusion from the scientific community tend for some reason not to want to exclude others from their own community.

That’s understandable, to be sure. However, we need to clearly discriminate and distinguish between what is individual opinion and what is community consensus. We need to disavow as our consensus what is only individual opinion, particularly where that can cause harm as described, and it can. We need to establish mechanisms for speaking as a community, for developing genuine consensus, and for deciding what we will and will not allow and support. It can be done.

Some in the field argue that until the new effects are understood completely, all theory has to be on the table for possible revision. If one holds back some theory as protected or sacrosanct, then one will never find out what is wrong if the problems happen to be in a protected area. I used to agree with this, and doggedly kept all possibilities open when contemplating different theories and models. However, somewhere over the years it became clear that the associated theoretical parameter space was fully as large as the experimental parameter space; that a model for the anomalies is very much stronger when derived from more fundamental accepted theories; and that there are a great many potential opportunities for new models that build on top of the solid foundation provided by the fundamental theories. We know now that there are examples of models consistent with the fundamental laws that can be very relevant to experiment. It is not that I have more respect or more appreciation now for the fundamental laws than before; instead, it is that I simply view them differently. Rather than being restrictive telling me what can’t be done (as some of my colleagues think), I view the fundamental laws as exceptionally helpful and knowledgeable friends pointing the way toward fruitful areas likely to be most productive.

That’s well-stated, and a stand that may take you far, Peter. Until we have far better understanding and clear experimental evidence to back it, all theories might in some sense be “on the table,” but there may be a pile of them that won’t get much attention, and others that will naturally receive more. The street-light effect is actually a guide to more efficient search: do look first where the light is good. And especially test and look first at ideas that create clearly testable predictions, rather than vaguer ideas and “explanations.” Tests create valuable data even if the theory is itself useless. “Useless” is not a final judgment, because what is not useful today might be modified and become useful tomorrow. 

In recent years I have found myself engaged in discussions concerning particular theoretical models, some of which would go very much against the fundamental laws. There would be spirited arguments in which it became clear that others held dear the right to challenge anything (including quantum mechanics, QED, the standard model and more) in the pursuit of the holy grail which is the theoretical resolution of experiments showing anomalies. The picture that comes to mind is that of a prospector determined to head out into an area known to be totally devoid of gold for generations, where modern high resolution maps are available for free to anyone who wants to look to see where the gold isn’t. The displeasure and frustration that results has more than once ended up producing assertions that I was personally responsible for the lack of progress in solving the theoretical problem.

Hey, Peter, good news! You are personally responsible, so there is hope!

Personally, I like the idea of mystery, mysteries are fun, and that’s the Lomax theory: The mechanism of cold fusion is a mystery! I look forward to the day when I become wrong, but I don’t know if I’ll see that in my lifetime. I kind of doubt it, but it doesn’t really matter. We were able to use fire, long, long before we had “explanations.” 

Theory and experiment

We might think of the scientific method as involving two fundamental parts of science: experiment and theory. Theory comes into play ideally as providing input for the hypothesis and prediction part of the method, while experiment comes into play providing the test against nature to see whether the ideas are correct.

Forgotten, too often, is pre-theory exploration and observation. Science developed out of a large body of observation. The method is designed to test models, but before accurate models are developed, there is normally much observation that creates familiarity and sets up intuition. Theory does not spring up with no foundation in observation, and is best developed by one familiar with the experimental evidence, only part of which comes from controlled studies, the kind that develop correlations between variables.

My experimentalist colleagues have emphasized the importance of theory to me in connection with Fleischmann-Pons studies; they have said (a great many times) that experimental parameter space is essentially infinitely large (and each experiment takes time, effort, money and sweat), so that theory is absolutely essential to provide some guidance to make the experimenting more efficient.

No wonder there has been a slow pace! It’s a vicious circle: theorists need data to develop and vet theories, and experimentalists believe they need theories to generate data. Yes, the parameter space can be thought of as enormous, but sane exploration does not attempt to document all of it at once; rather, experimentation can begin with confirmation of what has already been observed and exploration of the edges, with the development of OOPs and other observations of the effects of controlled variables. It can simply measure what has been observed before with increased precision. It can repeat experiments many times to develop data on reliability.

If so, then has there been any input from the theorists? After all, the picture of the experimentalists toiling late into the night forever exploring an infinitely large parameter space is one that is particularly depressing (you see, some of my friends are experimentalists…).

As it turns out, there has been guidance from the theorists—lots of guidance. I can cite as one example input from Douglas Morrison (a theorist from CERN and a critic), who suggested that tests should be done where elaborate calorimetric measurements should be carried out at the same time as elaborate neutron, gamma, charged particle and tritium measurements. Morrison held firmly to a picture in which nuclear energy is produced with commensurate energetic products; since there are no commensurate energetic particles produced in connection with the excess power, Morrison was able to reject all positive results systematically.

Ah, Peter, you are simply coat-racking a complaint about Morrison onto this. Morrison had an obvious case of head-wedged syndrome. By the time Morrison would have been demanding this, it was known that helium was the main product, so the sane demand would have been accurate calorimetry combined with accurate helium measurement, at least, with both, as accurate as possible. Morrison’s idea was good, looking for correlations, but he was demanding products that simply are not produced. There was no law of physics behind his picture of “energetic products,” merely ordinary and common behavior, not necessarily universal, and it depended on assuming that the reaction was d+d fusion. Again, this was all a result of claiming “nuclear” based only on heat evidence. Bad Idea.

“Commensurate” depended on a theory of a fuel/product relationship, otherwise there is no way of knowing what ratio to expect. Rejecting helium as a product based on no gammas depended on assumptions of d+d -> 4He, which, it can be strongly argued, must produce a gamma. Yes, maybe a way can be found around that. But we can start with something much simpler. I write about “conversion of deuterium to helium,” advisedly, not “interaction of deuterons to form helium,” because the former is broader. The latter may theoretically include collective effects, but in practice, the image it creates is standard fusion. (Notice, “deuterons” refers to the ionized nuclei, generally, whereas “deuterium” is the element, including the molecular form. I state Takahashi theory as involving two deuterium molecules, instead of four deuterons, to emphasize that the electrons are included in the collapse, and it’s a lot easier to consider two molecules coming together like that, than four independent deuterons. Language matters!)

The headache I had with this approach is that the initial experimental claim was for an excess heat effect that occurs without commensurate energetic nuclear radiation. Morrison’s starting place was that nuclear energy generation must occur with commensurate energetic nuclear radiation, and would have been perfectly happy to accept the calorimetric energy as real with a corresponding observation of commensurate energetic nuclear radiation.

So the real challenge for Morrison was the heat/helium correlation. There was a debate between Morrison and Fleischmann and Pons, in the pages of Physics Letters A, and I have begun to cover it on this page. F&P could have blown the Morrison arguments out of the water with helium evidence, but, as far as we know, they never collected that evidence in those boil-off experiments, with allegedly high heat production. Why didn’t they? In the answer to that is much explanation for the continuance of the rejection cascade. In their article, they maintained the idea of a nuclear explanation, without providing any evidence for it other than their own calorimetry. They did design a simple test (boil-off-time), but complicated it with unnecessarily complex explanations. I did not understand that “simplicity” until I had read the article several times. Nor did Morrison, obviously.

However, somewhere in all of this it seems that Fleischmann and Pons’ excess heat effect (in which the initial claim was for a large energy effect without commensurate energetic nuclear products) was implicitly discarded at the beginning of the discussion.

Yes, obviously. What I wonder is why someone who believes that a claim is impossible would spend so much effort arguing about it. But I think we know why.

Morrison also held in high regard the high-energy physics community (he had somewhat less respect for electrochemist experimentalists who reported positive results); so he argued that the experiment needed to be done by competent physicists, such as the group at the pre-eminent Japanese KEK high energy physics lab. Year after year the KEK group reported negative results, and year after year Morrison would single out this group publicly in support of his contention that when competent experimentalists did the experiment, no excess heat was observed. This was true until the KEK group reported a positive result, which was rejected by Morrison (energetic products were not measured in amounts commensurate with the energy produced); coincidentally, the KEK effort was subsequently terminated (this presumably was unrelated to the results obtained in their experiments).

That’s hilarious. Did KEK measure helium? Helium is a nuclear product. Conversion of deuterium to helium has a known Q and if the heat matches that Q, in a situation where the fuel is likely deuterium, it is direct evidence that nuclear energy is being converted to heat without energetic radiation, unless the radiation is fully absorbed within the device, entirely converted to heat. 
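The arithmetic behind a “known Q” is simple. Taking the standard mass-difference value Q ≈ 23.85 MeV for the net conversion of two deuterons to one ⁴He atom (the mechanism is left unspecified), a given excess power implies a definite helium production rate; a minimal sketch:

```python
# Helium production rate implied by Q = 23.85 MeV per 4He atom
# (net deuterium-to-helium conversion, mechanism left unspecified).

EV_TO_J = 1.602176634e-19    # joules per electron-volt (exact, by SI definition)
Q_J = 23.85e6 * EV_TO_J      # energy released per 4He atom, ~3.8e-12 J

atoms_per_joule = 1.0 / Q_J  # 4He atoms produced per joule of excess heat
print(f"{atoms_per_joule:.2e} 4He atoms per joule (per second per watt)")
# on the order of 2.6e11 atoms per second at 1 W of excess power
```

If the measured helium in the gas stream matches this rate at the measured excess power (allowing for the fraction retained in the metal), the heat is quantitatively accounted for by deuterium conversion, with no energetic radiation required outside the cell.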

Isagawa (1992), Isagawa (1995), Isagawa (1998). Yes, from the 1998 report: “Helium was observed, but no decisive conclusion could be drawn due to incompleteness of the then used detecting system.” It looks like they made extensive efforts to measure helium, but never nailed it. As they did find significant excess heat, that could have been very useful.

There have been an enormous number of theoretical proposals. Each theorist in the field has largely followed his own approach (with notable exceptions where some theorists have followed Preparata’s ideas, and others have followed Takahashi’s), and the majority of experimentalists have put forth conjectures as well. There are more than 1000 papers that are either theoretical, or combined experimental and theoretical with a nontrivial theoretical component. Individual theorists have put forth multiple proposals (in my own case, the number is close to 300 approaches, models, sub-models and variants at this point, not all of which have been published or described in public). At ICCF conferences, more theoretical papers are generally submitted than experimental papers. In essence, there is enough theoretical input (some helpful, and some less so) to keep the experimentalists busy until well into the next millennium.

This was 2013, after he’d been at it for 24 years, so it’s not really the “theory du jour,” as I often quip, but more like the “theory du mois.”

You might argue there is an easy solution to this problem: simply sort the wheat from the chaff! Just take the strong theoretical proposals and focus on them, and put aside the ones that are weak. If you were to address this challenge to the theorists, the result can be predicted; pretty much all theorists would point to their own proposals as by far the strongest in the field, and recommend that all others be shelved.

Obviously, then, we don’t ask them about their own theories, but about those of others. And if two theorists cannot be found to support a particular theory for further investigation, then nobody is ready. Shelve them all, until some level of consensus emerges. Forget theory except for the very simplest organizing principles.

If you address the same challenge to the experimentalists, you would likely find that some of the experimentalists would point to their own conjectures as most promising, and dismiss most of the others; other experimentalists would object to taking any of the theories off the table. If we were to consider a vote on this, probably there is more support for the Widom and Larsen proposal at present than any of the others, due in part to the spirited advocacy of Krivit at New Energy Times; in Italy Preparata’s approach looms large, even at this time; and the ideas of Takahashi and of Kim have wide support within the community. I note that objections are known for these models, and for most others as well.

Yes. Fortunately, theory has only a minor impact on the necessary experimental work. Most theories are not well enough developed to be of much use in designing experiments, and at present the research priority is strongly toward developing and characterizing reliability and reproducibility. However, if an idea from theory is easy to test, that might see more rapid response.

I have just watched a Hagelstein video from last year; it’s excellent, and it begins with a hilarious summary of the history of cold fusion. Peter is hot on the trail and has been developing what might be called “minor hits” in creating theoretical predictions, in particular of phonon frequencies. I knew about his prediction of effective THz beat frequencies in the dual-laser stimulation work of Dennis Letts, but I was not aware of how Peter was using this as a general guide, nor of other results he has seen, venturing into experiment himself.

Widom and Larsen attracted a lot of attention for the reasons given, and for the promulgated myth that their theory doesn’t involve new physics, but no results have benefited from it. Basically, no new physics (if one ignores quantitative issues), but no useful understanding, either.

To make progress

Given this situation, how might progress be made? In connection with the very large number of theoretical ideas put forth to date, some obvious things come to mind. There is an enormous body of existing experimental results that could be used already to check models against experiment.

Yes. But who is going to do this? 

We know that excess heat production in the Fleischmann-Pons experiment in one mode is sensitive to loading, to current density, to temperature, and probably to magnetic field, and that 4He has been identified in the gas phase as a product correlated with energy.

Again, yes. As an example of work to do, magnetic field effects have been shown, apparently, with permanent magnets, but not studying the effect as the field is varied. Given the wide variability in the experiments, the simple work reported so far is not satisfactory.

It would be possible in principle to work with any particular model in order to check consistency with these basic observations. In the case of excess heat in the NiH experiments, there is less to test against, but one can find many things to test against in the papers of the Piantelli group, and in the studies of Miley and coworkers. Perhaps the biggest issue for a particular model is the absence of commensurate energetic products, and in my view the majority of the 1000 or so theoretical papers out there have problems of consistency with experiment in this area.

As a general rule, there is a great deal of work to be done to confirm and strengthen (or discredit!) existing findings. There are many results of interest in the almost thirty year history of the field that could benefit from replication, and replication work is the most likely to produce results of value at this time, if they are repeated with controlled variation to expand the useful data available.

As an example screaming for confirmation, Storms found that excess heat was maintained even after electrolysis was turned off, as loading declined, if he simply maintained cell temperature with a heater, showing, on the face of it, that temperature was a critical variable, even more than loading, once the reaction conditions are established. (Storms’ theory ascribes the formation of nuclear active environment to the effect of repeated loading on palladium, hence the appearance that loading is a major necessity.) This is of high interest and great practical import, but, to my knowledge, has not been confirmed.

There are issues which require experimental clarification. For example, the issue of the Q-value in connection with the correlation of 4He with excess energy for PdD experiments remains a major headache for theorists (and for the field in general), and needs to be clarified.

Measurement of the Q with increased precision is an obvious and major priority, with high value both as a confirmation of heat and of a nuclear product, and because it sets constraints on the major reaction taking place. Existing evidence indicates that, in PdD experiments, almost all that is happening is the conversion of deuterium to helium and heat; everything else reported (tritium, etc.) is a detail. But a more precise ratio will nail this down, or suggest the existence of other reactions.
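
The arithmetic behind that constraint is simple and worth keeping in view. A minimal sketch (the function name is mine, purely illustrative) of the helium production rate implied if all excess heat comes from the net reaction 2 D → 4He, with Q of about 23.85 MeV:

```python
# Back-of-envelope check, assuming the net reaction 2 D -> 4He with
# Q of about 23.85 MeV (the deuterium/helium-4 mass difference).
# Constants are standard; the scenario is illustrative only.

EV_TO_J = 1.602176634e-19   # joules per electron-volt (exact SI value)
Q_MEV = 23.85               # approximate Q for 2 D -> 4He, in MeV

def helium_rate(excess_watts):
    """He-4 atoms per second needed to account for the given excess power."""
    joules_per_atom = Q_MEV * 1e6 * EV_TO_J
    return excess_watts / joules_per_atom

print(f"{helium_rate(1.0):.2e}")  # roughly 2.6e11 atoms/s per watt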

As well, a search should be maintained as practical for other correlations. Often, because a product was not “commensurate” with heat (from some theory of reaction), and even though the product was detected, the levels found and correlations with heat were not reported. A product may be correlated without being “commensurate,” and it might also be correlated with other conditions, such as the level of protium in PdD experiments.

The analogous issue of 3He production in connection with NiH and PdH is at present essentially unexplored, and requires experimental input as a way for theory to be better grounded in reality. I personally think that the collimated X-rays in the Karabut experiment are very important and need to be understood in connection with energy exchange, and an understanding of it would impact how we view excess heat experiments (but I note that other theorists would not agree).

What matters really is what is found by experiment. What is actually found, what is correlated, what are the effects of variables?

As a purely practical matter, rather than requiring a complete and global solution to all issues (an approach advocated, for example, by Storms), I would think that focusing on a single theoretical issue or statement that is accessible to experiment will be most advantageous in moving things forward on the theoretical front.

I strongly agree. If we can explain one aspect of the effect, we may be able, then, to explain others. It is not necessary to explain everything. Explanations start with correlations that then imply causal connections. Correlation is not causation, not intrinsically, but causation generally produces correlation. We may be dealing with more than one effect, indeed, that could explain some of the difficulties in the field.

Now there are a very large number of theoretical proposals, a very large number of experiments (and as yet relatively little connection between experiment and theory for the most part); but aside from the existence of an excess heat effect, there is very little that our community agrees on. What is needed is the proverbial theoretical flag in the ground. We would like to associate a theoretical interpretation with an experimental result in a way that is unambiguous, and which is agreed upon by the community.

I am suggesting starting with the Conjecture, not with mechanism. The Conjecture is not an attempt to foreclose on all other possibilities. But the evidence at this point is preponderant that helium is the only major product in the FP experiment. It is the general nature of the community, born as it was of defiant necessity, that we are not likely to agree on everything, so the priority I suggest is finding what we do agree upon, not as to conclusions, but as to approach. I have found that, as an example, sincere skeptics agree as to the value of measuring the heat/helium ratio in PdD experiments with increased precision. So that is an agreement that is possible, without requiring a conclusion (i.e., that the ratio is some particular value, or even that it will be constant). The actual data will then guide and suggest further exploration.

(and a side effect of the technique suggested for releasing all the helium, anodic reversal, which dissolves the palladium surface, is that it could also provide a depth profile, which then provides possible information on NAE location and birth energy of the helium).

Historically there has been little effort focused in this way. Sadly, there are precious few resources now, and we have been losing people who have been in the field for a long time (and who have experience); the prospects for significant new experimentation are not good. There seems to be little in the way of transfer of what has been learned from the old guard to the new generation, and only recently has there seemed to be the beginnings of a new generation in the field at all.

Concluding thoughts

There are not [sic] simple solutions to the issues discussed above. It is the case that the scientific method provides us with a reliable tool to clarify what is right from what is wrong in our understanding of how nature works. But it is also the case that scientists would generally prefer not to be excluded from the scientific community, and this sets up a fundamental conflict between the use of the scientific method and issues connected with social aspects involving the scientific community. In a controversial area (such as excess heat in the Fleischmann-Pons experiment), it almost seems that you can do research, or you can remain a part of the scientific community; pick one.

There is evidence that this Hobson’s choice is real. However, as I’ve been pointing out for years, the field was complicated by premature claims, creating a strong bias in response. It really shouldn’t matter, for abstract science, what mistakes were made almost thirty years ago. But it does matter, because of persistence of vision. So anyone who chooses to work in the field, I suggest, should be fully aware of how what they publish will appear. Special caution is required. One of the devices I’m suggesting is relatively simple: back off from conclusions and leave conclusions to the community. Do not attach to them. Let conclusions come from elsewhere, and support them only with great caution. This allows the use of the scientific method, because tests of theories can still be performed, being framed to appear within science.

As argued above, the scientific method provides a powerful tool to figure out how nature works, but the scientific method provides no guarantee that resources will be available to apply it to any particular question; or that the results obtained using the scientific method will be recognized or accepted by other scientists; or that a scientist’s career will not be destroyed subsequently as a result of making use of the scientific method and coming up with a result that lies outside of the boundaries of science. Our drawing attention to the issue here should be viewed akin to reporting a measurement; we have data that can be used to see that this is so, but in this case I will defer to others on the question of what to do about it.

Peter here mixes “results” with conclusions about them. Evidence for harm to careers from results is thinner than evidence for harm from conclusions that appeared premature or wrong.

“What to do about it,” is generic to problem-solving: first become aware of the problem. More powerfully, avoid allowing conclusions to affect the gathering of information, other than carefully and provisionally.

To the degree that fundamental theories provide a correct description of nature (within their domains), we are able to understand what is possible and what is not.

Only within narrow domains. “What is possible” cannot apply to the unknown; it is always possible that something is unknown. We can certainly be surprised by some result, where we may think some domain has been thoroughly explored. But the domain of highly loaded PdD was terra incognita: PdD had only been explored up to about 70% loading, and it appears to have been believed that that was a limit, at least at atmospheric pressure. McKubre realized immediately that Pons and Fleischmann must have created loading above that value, as I understand the story, but this was not documented in the original paper (and when did this become known?). Hence replication efforts were largely doomed: what later became known as a basic requirement for the effect to occur was often not even measured, and when measured, was low compared to what was needed.

In the event that the theories are taken to be correct absolutely, experimentation would no longer be needed in areas where the outcome can be computed (enough experiments have already been done); physics in the associated domain could evolve to a purely mathematical science, and experimental physics could join the engineering sciences. Excess heat in the Fleischmann-Pons experiment is viewed by many as being inconsistent with fundamental physical law, which implies that inasmuch as relevant fundamental physical law is held to be correct, there is no need to look at any of the positive experimental results (since they must be wrong); nor is there any need for further experimentation to clarify the situation.

He is continuing the parody. “Viewed as inconsistent” arose as a reaction to premature claims. The original FP paper led readers to look, first, at d-d fusion and to reactions that clearly were not happening at high levels, if at all. The title of the paper encouraged this, as well: “Electrochemically induced nuclear fusion of deuterium.” Interpreted within that framework, the anomalous heat appeared impossible. To move beyond this, it was necessary to disentangle the results from the nuclear claim. That evidence was eventually found supporting “deuterium fusion” (which is not equivalent to “d-d fusion”) does not negate this. It was not enough that they were “right.” That a guess is lucky does not make a premature claim acceptable. (Pons and Fleischmann were operating on a speculation that was probably false: the effect is not due to the high density of deuterium in PdD, but high loading probably created other conditions in the lattice that then catalyzed a new form of reaction. Problems with the speculation were also apparent to skeptical physicists, and they capitalized on it.)

From my perspective experimentation remains a critical part of the scientific method,

This should be obvious. We do not know that a theory is testable unless we test it, and, for the long term, that it remains testable. Experimentation to test accepted theory is routine in science education. If it cannot be tested it is “pseudoscientific.” Why it cannot be tested is irrelevant. So the criteria for science that the parody sets up destroy “science” as being science. The question becomes how to confront and handle the social issue. What I expect from training is that this starts with distinguishing what actually happened, setting aside the understandable reactions that it was all “unfair,” which commonly confuse us. (“Unfair” is not a “truth.” It’s a reaction.) The guidance I have suggests that if we take responsibility for the situation, we gain power; when we blame it on others, we are claiming that we are powerless, and it should be no surprise that we then have little or no power.

and we also have great respect for the fundamental physical laws; the headache in connection with the Fleischmann-Pons experiment is not that it goes against fundamental physical law, but instead that there has been a lack of understanding in how to go from the fundamental physical laws to a model that accounts for experiment.

Yes. And this is to be expected if the anomaly is unexpected and requires a complex condition that is difficult to understand, and especially that, even if imagined, it is difficult to calculate adequately. This all becomes doubly difficult if the effect is, again, difficult to reliably demonstrate. Physicists are not accustomed to that in something appearing as simple as “cold fusion in a jam jar.” I can imagine high distaste for attempting to deal with the mess created on the surface of an electrolytic cathode. There might be more sympathy for gas-loading. Physicists, of course, want the even simpler conditions of a plasma, where two-body analysis is more likely to be accurate. Sorry. Nature has something else in mind.

Experimentation provides a route (even in the presence of such strong fundamental theory) to understand what nature does.

Right. Actually, the role of simple report gets lost in the blizzard of “knowledge.” We become so accustomed to being able to explain most anything that we then become unable to recognize an anomaly when it punches us in the nose. The FPHE was probably seen before; Mizuno has a credible report. But he did not realize the significance. Even when he was, later, investigating the FPHE, he had a massive heat-after-death event, and it was like he was in a fog. It’s a remarkable story. It can be very difficult to see anomalies, and they may be much more common than we realize.

An anomaly does *not* negate known physics, because all that “anomaly” means is that we don’t understand something. While it is theoretically possible — and should always remain possible — that accepted laws are inaccurate (a clearer term than “wrong”) it is just as likely, or even more likely, that we simply don’t understand what we are looking at, and that an explanation may be possible within existing physics. And Peter has made a strong point that this is where we should first look. Not at wild ideas that break what is already understood quite well. I will repeat this, it is a variation on “extraordinary claims require extraordinary evidence,” which gets a lot of abuse.

If an anomaly is found, before investing in new physics to explain it, the first order of business is to establish that the anomaly is not just an appearance from a misunderstood experiment, i.e., that it is not artifact. Only if this is established — and confirmed — is, then, major effort justified in attempting to explain it, with existing physics. As part of the experimentation involved, it is possible that clear evidence will arise that does, indeed, require new physics, but before that will become a conversation accepted as legitimate, the anomaly must be (1) clearly verified and confirmed, no longer within reasonable question, and (2) shown to be unexplainable with existing physics, where existing physics, applied to the conditions discovered to be operating in the effect, is inaccurate in prediction, and the failure to explain is persistent, possibly for a long time! Only then will new territory open up, supported by at least a major fraction of the mainstream.

In my view there should be no issue with experimentation that questions the correctness of both fundamental, and less fundamental, physical law, since our science is robust and will only become more robust when subject to continued tests.

The words I would use are “that tests the continued accuracy of known laws.” It is totally normal and expected that work continues to find ever-more precise measurements of basic constants. The world is vast, and it is possible that basic physics is tested by experiment somewhere in the world, and sane pedagogy will not reject such experimentation merely because the results appear wrong. Rather, if a student gets the “wrong answers,” there is an educational opportunity. Normally — after all, we are talking about well-established basic physics — something was not understood about the experiment. And if we create the idea that there are “correct results,” we would encourage students to fudge and cherry-pick results to get those “correct answers.” No, we want them to design clear tests and make accurate measurements, and to separate the process of measuring and recording from expectation.

The worst sin in science is fudging results to create a match to expectation. So it should be discouraged to, in the experimental process, review results for “correctness.” There is an analytical stage where this would be done, i.e., results would be compared with predictions from established theory. When results don’t match theory, and are outside of normal experimental error, then, obviously, one would carefully review the whole process. Pons and Fleischmann knew that “existing theory” used the Born-Oppenheimer approximation, which, as applied, predicted an unmeasurable fusion rate for deuterium in palladium. But precisely because they knew it was an approximation, they decided to look. The Approximation was not a law; it was a calculation heuristic, and they thought, with everyone else, that it was probably good enough that they would be unable to measure the deviation. But they decided to look.

Collectively, if we allow it, that looking can and will look at almost everything. “Looking” is fundamental to science, even more fundamental than testing theories. What do we see? I look at the sky and see “sprites.” Small white objects darting about. Obviously, energy beings! (That’s been believed by some. Actually, they are living things!)

But what are they? What is known is fascinating, to me, and unexpected. Most people don’t see them, but, in fact, I’m pretty sure that most people could see them if they look; because they are unexpected, they are not noticed. We learned not to see them as children, because they distract from what we need to see in the sky, that large raptor or a rock flying at us.

So some kid notices them and tells his teacher, who tells him, “It’s your imagination, there is nothing there!” And so one more kid gets crushed by social expectations.

But what happens if an experimental result is reported that seems to go against relevant fundamental physical law?

(1) Believe the result is the result. I.e., that measurements were made and accurately reported.

(2) Question the interpretation, because it is very likely flawed. That is far more likely than “relevant fundamental physical law” being flawed.

Obviously, as well, errors can be made in measurement, and what we call “measurement” is often a kind of interpretation. Example: “measurement” of excess heat is commonly an interpretation of the actual measurements, which are commonly of temperature and input power. I am always suspicious of LENR claims where “anomalous heat” is plotted as a primary claim, rather than explicitly as an interpretation of the primary data, which, ideally, should be presented first. Consider this: an experiment, within a constant-temperature environment, is heated with a supplemental heater, to maintain a constant elevated temperature, and the power necessary for that is calibrated for the exact conditions, insofar as possible. This is used with an electrolysis experiment, looking for anomalous heat. There is also “input power” (to the electrolysis). So the report plots, against time, the difference between the steady-state supplemental heating power and the actual power to maintain temperature, less the other input power. This would be a relatively direct display of excess power, and that this power is also inferred (as a product of current and voltage) would be a minor quibble. But when excess power is a more complex calculation, presenting it as if it were measured is problematic.
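
As a sketch of the bookkeeping described above (hypothetical names and numbers, not any particular published calorimeter): the heater power needed to hold the cell at the calibrated temperature drops when the cell produces its own heat, so excess power falls out as a difference.

```python
# Compensation-calorimetry bookkeeping for the hypothetical setup above.
# Assumes the electrolysis input power all appears as heat in the cell
# (open-cell gas enthalpy and similar corrections are ignored here).

def excess_power(heater_baseline_w, heater_actual_w, electrolysis_w):
    """Inferred excess power, in watts.

    heater_baseline_w: calibrated heater power with no electrolysis running
    heater_actual_w:   heater power needed during the experiment
    electrolysis_w:    electrical input to the cell (current times voltage)
    """
    return (heater_baseline_w - heater_actual_w) - electrolysis_w

# Calibration needed 10.0 W; during electrolysis with 2.0 W of input the
# heater only needs 7.5 W, so the cell itself must be supplying 0.5 W.
print(excess_power(10.0, 7.5, 2.0))  # -> 0.5
```

The virtue of this layout is that the primary measurements (two heater powers and one input power) appear explicitly, and the "excess" is visibly a simple difference rather than a buried calculation.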

Since the fundamental physical laws have emerged as a consequence of previous experimentation, such a new experimental result might be viewed as going against the earlier accumulated body of experiment. But the argument is much stronger in the case of fundamental theory, because in this case one has the additional component of being able to say why the outlying experimental result is incorrect. In this case reasons are needed if we are to disregard the experimental result. I note that due to the great respect we have for experimental results generally in connection with the scientific method, the notion that we should disregard particular experimental results should not be considered lightly.

Right. However, logically, unidentified experimental error always has a certain level of possibility. This is routinely handled, and one of the major methods is confirmation. Cold fusion presented a special problem: first, a large number of confirmation attempts that failed, and then reasonable suspicion of the file-drawer effect having an impact. This is why the reporting of full experimental series, as distinct from just the “best results” is so important. This is why encouraging full reporting, including of “negative results” could be helpful. From a pure scientific point of view, results are not “positive” or “negative,” but are far more complex data sets. 

Reasons that you might be persuaded to disregard an experimental result include: a lack of confirmation in other experiments; a lack of support in theory; an experiment carried out improperly; or perhaps the experimentalists involved are not credible. In the case of the Fleischmann-Pons experiment, many experiments were performed early on (based on an incomplete understanding of the experimental requirements) that did not obtain the same result; a great deal of effort was made to argue (incorrectly, as we are beginning to understand) that the experimental result is inconsistent with theory (and hence lies outside of science); it was argued that the calorimetry was not done properly; and a great deal of effort has been put into destroying the credibility of Fleischmann and Pons (as well as the credibility of other experimentalists who claimed to see what Fleischmann and Pons saw).

The argument that results were inconsistent with established theory was defective from the beginning. There were clear sociological pathologies, and pseudoskeptical argument became common. This was recognizable even if an observer believed that cold fusion was not real. That is, to be sure, an observer who is able to assess arguments even if the observer agrees with the conclusions from the argument. Too many will support an argument because they agree with the conclusion. Just because a conclusion is sound does not make all the arguments advanced for it correct, but this is, again, common and very unscientific thinking. Ultimately the established rejection cascade came to be supported in continued existence by the repetition of alleged facts that either never were fact, or that became obsolete. “Nobody could replicate” is often repeated, even though it is blatantly false. This was complicated, though, by the vast proliferation of protocols such that exact replication was relatively rare.

There was little or no discipline in the field. Perhaps we might notice that there is little profit or glory in replication. That kind of work, if I understand correctly, is often done by graduate students. Because the results were chaotic and unreliable, there was a constant effort to “improve” them, instead of studying the precise reliability of a particular protocol, with single-variable controls in repeated experiments.

Whether it is right, or whether it is wrong, to destroy the career of a scientist who has applied the scientific method and obtained a result thought by others to be incorrect, is not a question of science.

Correct. It’s a moral and social issue. If we want real science, science that is living, that can deepen and grow, we need to protect intellectual freedom, and avoid “punishing” simple error, or what appears to be error. Scientists must be free to make mistakes. There is one kind of error that warrants heavy sanctions, and that is falsifying data. The Parkhomov fabrication of data in one of his reports might seem harmless, because that data was probably just relatively flat, but he was, I find obvious, concealing the fact that he was recording data with a floating notebook computer whose battery went low. However, given that it would have been easier and harmless, we might think, to just show the data he had with a note explaining the gap, I think he wanted to conceal the fact. Why? I have a suggestion: it would reveal that he needed to run this way because of heavy noise caused by the proximity of chopped power to his heater coil, immediately adjacent to the thermocouple. And that heavy noise could be causing problems! Concealing relevant fact is almost as offensive as falsifying data.

There are no scientific instruments capable of measuring whether what people do is right or wrong; we cannot construct a test within the scientific method capable of telling us whether what we do is right or wrong; hence we can agree that this question very much lies outside of science.

I will certainly agree, and it’s a point I often make, but it is also often derided.

It is a fact that the careers of Fleischmann and Pons were destroyed (in part because their results appeared not to be in agreement with theory), and the sense I get from discussions with colleagues not in the field is that this was appropriate (or at the very least expected).

However, this was complicated, not as simple as “results not in agreement with theory.” I’d say that anyone who reads the fuller accounts of what happened in 1989-1990 is likely to notice far more than that problem. For example, a common bête noire among cold fusion supporters is Robert Park. Park describes how he came to be so strongly skeptical: F&P promised to reveal helium test results, and then they were never released.

The Morrey collaboration was a large-scale, many-laboratory effort to study helium in FP cathodes. Pons, we have testimony, violated a clear agreement, refusing to turn over the coding of the blinded cathodes when Morrey gave him the helium results. There were legal threats from Pons if Morrey et al. published. Before that, the experimental cathode provided for testing was punk, with low excess heat, whereas the test had been designed, with the controls, to use a cathode with far higher generated energy. (Three cathodes were ion-implanted to simulate palladium loaded with helium from the reaction, at a level expected from the energy allegedly released.) The “as-received” cathode was heavily contaminated with implanted helium, and may have been mixed up by Johnson-Matthey. And all this was never squarely faced by Pons and Fleischmann; even though it was known by the mid-1990s that helium was the major product, and F&P were generating substantial heat (they claim) in France, there is no record of helium measurements from them.

It’s a mess. Yes, we know that they were right; they found a previously “unknown nuclear reaction.” But how they conducted themselves was clearly outside of scientific norms. (As with others, in the other direction or on the other side, by the way. There are many lessons for the future in this “scientific fiasco of the century,” once we fully examine it.)

I am generally not familiar with voices being raised outside of our community suggesting that there might have been anything wrong with this.

Few outside of “our community” — the community of interest in LENR — are aware of it, just as few are aware of the evidence for the reality of the Anomalous Heat Effect and its nuclear nature. Fewer still have any concept of what might be done about this, so when others do become aware, little or nothing happens. Nevertheless, it is becoming more possible to write about this. I have written about LENR on Quora, and it’s reasonably popular. In fact, I ran into one of the early negative replicators, and I blogged about it. He appeared completely unaware that there was a problem with his conclusions, that there had been any developments. The actual paper was fine, a standard negative replication. 

Were we to pursue the use of this kind of delineation in science, we very quickly enter into rather dark territory: for example, how many careers should be destroyed in order to achieve whatever goal is proposed as justification? Who decides on behalf of the scientific community which researchers should have their careers destroyed? Should we recognize the successes achieved in the destruction of careers by giving out awards and monetary compensation? Should we arrange for associated outplacement and mental health services for the newly delineated? And what happens if a mistake is made? Should the scientific community issue an apology (and what happens if the researcher is no longer with us when it is recognized that a mistake was made)? We are sure that careers get destroyed as part of delineation in science, but on the question of what to do about this observation we defer to others.

There is no collective, deliberative process behind the “destruction of careers.” This is an information cascade; there is no specific responsible party. Most believe that they are simply accepting and believing what everyone else believes, excepting, of course, those die-hard fanatics. There is a potential ally here who thoroughly understands information cascades: Gary Taubes. I have established good communication with him, and am waiting for confirmation from the excess helium work in Texas before rattling his cage again. Cold fusion is not the only alleged Bad Science to be afflicted, and Taubes has actually exposed much more, including Bad Science that became an alleged consensus, on the role of fat in human nutrition and its relationship to cardiovascular disease and obesity.

There are analogies. Racism is an information cascade, for the most part. Many racist policies existed without any formal deliberative process to create them. Waking Up White is an excellent book; I highly recommend it. So what could be done about racism? It’s the same question, actually. The general answer is what has become a mantra for Mike McKubre and myself: communicate, cooperate, collaborate. And, by the way, correlate. As Peter may have noticed, remarkable findings without correlations are not useless, but they are ineffective in transforming reaction to the unexpected. Correlation provides meat for the theory hamburger. Correlation can be quantified; it can be analyzed statistically.
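To make “quantified” concrete, here is a minimal sketch of a Pearson correlation, the statistic usually meant when heat and helium are said to be correlated. The numbers are invented for illustration, not measured data:

```python
import math

# Invented example data (NOT measurements): excess energy per run (kJ)
# and helium-4 found in the gas stream (units of 1e12 atoms).
heat = [10.0, 25.0, 40.0, 55.0, 80.0]
helium = [2.7, 6.4, 10.1, 14.5, 20.8]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(heat, helium)
print(f"Pearson r = {r:.4f}")  # near 1.0 for these invented numbers
```

With real data, the slope of helium against heat is the physically meaningful number, to be compared against the energy per helium-4 atom expected for deuterium fusion; the point here is only that a correlation yields a number that can be tested, not merely an impression.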

Arguments were put forth by critics in 1989 that excess heat in the Fleischmann-Pons effect was impossible based on theory, in connection with the delineation process. At the time these arguments were widely accepted—an acceptance that persists generally even today.

Information cascades are instinctive processes that developed in human society for survival reasons, like all such common phenomena. They operate through affiliation and other emotional responses, and are amygdala-mediated. The lizard brain. It is designed for quick response, not for depth. When we see a flash of orange and white in the jungle, we may have a fraction of a second to act, we have no time to sit back and analyze what it might be.

Once the information cascade is in place, people — scientists are people, have you noticed? — are aware of the consequences of deviating from the “consensus.” They won’t do it unless faced with not only strong evidence, but also necessity. Depending on the specific personality, they might not even allow themselves to think outside the box. After all, Joe, their friend who became a believer in cold fusion, that obvious nonsense, used to be sane, so there is obviously something about cold fusion that is dangerous, like a dangerous drug. And, of course, Tom Darden joked about this. “Cold fusion addiction.” It’s a thing.

There is, associated with cold fusion, a conspiracy theory. I see people succumb to it. It is very tempting to accept an organizing principle, for that impulse is even behind interest in science. To be sure, “just because you are paranoid does not mean that they are not out to get you.”

What people may learn to do is to recognize an “amygdala hijack.”  This very common phenomenon shuts down the normal operation of the cerebral cortex. The first reaction most have, to learning about this, is to think that a “hijack” is wrong. We shouldn’t do that! We should always think clearly, right?

I linked to a video that explains why it is absolutely necessary to respect this primitive brain operation. It’s designed to save our lives! However, it is an emergency response. Respecting it does not require being dominated by it, other than momentarily. We can make a fast assessment: “Do I have time to think about this? Yes, I’m afraid of ‘cold fusion addiction.’ But if I think about cold fusion, will I actually become unable to think clearly?” And most normal people will become curious, seeing no demons, anywhere close, about to take over their mind. Some won’t. Some will remain dominated by fear, a fear so deeply rooted that it is not even recognized as fear.

How can we communicate with such people? Well, how do porcupines make love?

Very carefully.

We will avoid sudden movements. We will focus on what is comfortable and familiar. We will avoid anything likely to arouse more fear. And if this is a physicist, want to make him or her afraid? Tell them that everything they know is wrong, that textbooks must be revised, because you have proof (absolute proof, I tell you!) that the anomalous heat called “cold fusion” is real and that therefore basic physics is complete bullshit.

That original idea of contradiction, a leap from something not understood (an “anomaly”), to “everything we know is wrong,” was utterly unnecessary, and it was caused by premature conclusions, on all sides. Yet once those fears are aroused. . . . 

It is possible to talk someone down. It takes skill, and if you think the issue is scientific fact, you will probably not be able to manage it. The issue is a frightened human being, possibly reacting to fear by becoming highly controlling.

Someone telling us that there is no danger, that it is just our imagination, will not be trusted; that, too, is instinctive. Even if it is just our imagination.

Most parents, though, know how to do this with a frightened child. Some, unfortunately, lack the skill, possibly because their parents lacked it. It can be learned.

From my perspective the arguments put forth by critics that the excess heat effect is inconsistent with the laws of physics fall short in at least one important aspect: what is concluded is now in disagreement with a very large number of experiments. And if somehow that were not sufficient, the associated technical arguments which have been given are badly broken.

Yes, but you may be leaping ahead, before first leading the audience to recognize the original error. You are correct, but not addressing the fear directly and the cause of it. Those “technical arguments” are what they think, they have nodded their heads in agreement for many years. You are telling them that they are wrong. And if you want to set up communication failure, tell people at the outset that they are wrong. And, we often don’t realize this, but even thinking that can so color our communication that people react to what is behind what we say, not just to what we say.

But wait, what if I think they are wrong? The advice here is to recognize that idea as amygdala-mediated, an emotional response to our own imagination of how the other is thinking. As one of my friends would put it, we may need to eat our own dog food before feeding it to others.

So my stand is that the skeptics were not “wrong.” Rather, the thinking was incomplete, and that’s actually totally obvious. It also isn’t a moral defect, because our thinking is, necessarily and forever, incomplete.

In dealing with amygdala hijack in one of my children, I saw strong evidence that the amygdala is programmable with language, and any healthy mother knows how to do it. The child has fallen and has a busted lip, it’s bleeding profusely, and the child is frightened and in pain. The mother realizes she is afraid that there will be scars. Does she tell the child she is afraid? Does she blame the child because he was careless? No, she’s a mother! She tells the child, “Yes, it hurts. We are on the way to the doctor and they will fix it, and you are going to be fine, here, let me give you a kiss!”

But wait, she doesn’t actually know that the child will be fine! Is she lying? No, she is creating reality by declaring it. “Fine” is like “right” and “wrong,” it is not factual, it’s a reaction, so her statement is a prediction, not a fact. And it happens to be a prediction that can create what is predicted.

I use this constantly, in my own life. Declare possibilities as if they are real and already exist! We don’t do this for two common reasons: we don’t want to be wrong, which is Bad, right? And we are afraid of being disappointed. I just heard this one yesterday: a woman justified to her friend her constant recitation of how nothing was going to work and bad things would happen, saying that she “is thinking the worst.” Why does she do that? So that she won’t be disappointed!

What she is creating in her life, constant fear and stress, is far worse than mere disappointment, which is transient at worst, unless we really were crazy in belief in some fantasy. Underneath most life advice is the ancient recognition of attachment as causing suffering.

So the stockbroker in 1929, even though it’s a beautiful day and he could have a fantastic lunch and we never do know what is going to happen tomorrow, jumps out the window because he thought he was rich, but wasn’t, because the market collapsed.

The sunset that day was just as beautiful as ever. Life still had endless possibilities, and, yes, one can be poor and happy, but this person would only be poor if they remained stuck in old ways that, at least for a while, weren’t working any more. People can even go to prison and be happy. (I was a prison chaplain, and human beings are amazingly flexible, once we accept present reality, what is actually happening.)

In my view the new effects are a consequence of working in a regime that we hadn’t noticed before, where some fine print associated with the rotation from the relativistic problem to the nonrelativistic problem causes it not to be as helpful as what we have grown used to.

Well, that’s Peter’s explanation, five years ago. There are other ways to say more or less the same thing. “Collective effects” is one. Notice that Widom and Larsen get away with this, as long as their specifics aren’t too seriously questioned. The goal I generally have is to deconstruct the “impossible” argument, not by claiming experimental proof, because there is, for someone not very familiar with the evidence, a long series of possible experimental errors and artifacts that can be plausibly asserted, and “they must be making some mistake” is actually plausible; it happens. Researchers do make mistakes. And, in fact, Pons and Fleischmann made mistakes. I just listened to a really excellent talk by Peter, which convinced me that there might be something to his theoretical approach, in which he pointed out an error in Fleischmann’s electrochemistry. Horrors! Unthinkable! Saint Fleischmann? Impossible!

This is part of how we recover from that “scientific fiasco of the century”: letting go of attachment, developing tolerance of ideas different from our own, distinguishing between reality (what actually happened) and interpretation and reaction, and opening up communication with people with whom we might have disagreements, and listening well! 

If so, we can keep what we know about condensed matter physics and nuclear physics unchanged in their applicable regimes, and make use of rather obvious generalizations in the new regime. Experimental results in the case of the Fleischmann-Pons experiment will likely be seen (retrospectively) as in agreement with (improved) theory.

Right. That is the future and it will happen (and it is already happening in places and in part). Meanwhile, we aren’t there yet, as to the full mainstream, the possibility has not been actualized, but we can, based entirely on the historical record, show that there is no necessary contradiction with known physics, there is merely something not yet explained. The rejection was of an immature and vague explanation: “fusion! nuclear!” with these words triggering a host of immediate reactions, all quite predictable, by the way.

I just read from Miles that Fleischmann later claimed that he and Pons were “against” holding that press conference. Sorry! This was self-justifying rationalization, chatter. They may well have argued against it, but, in the end, the record does not show anyone holding guns to their heads to force them to say what they said. They clearly knew, well before this, that this would be highly controversial, but were driven by their own demons to barge ahead instead of creating something different and more effective. (We all have these demons, but we usually don’t recognize them; we think that their voices are just us thinking. And they are, but I learned years ago, dealing with my own demons, that they lie to us. Once we back up from attachment to believing that what we think is right, it’s actually easy to recognize. This is behind most addiction, and people who are dealing with addiction, up close and personally, come to know these things.)

Even though there may not be simple answers to some of the issues considered in this editorial, some very simple statements can be made. Excess heat in the Fleischmann-Pons experiment is a real effect.

I do say that, and frequently, but I don’t necessarily start there. Rather, where I will start depends on the audience.  Before I will slap them in the face with that particular trout, I will explore the evidence, what is actually found, how it has been confirmed, and how researchers are proceeding to strengthen this, and how very smart money is betting on this, with cash and reputable scientists involved. For some audiences, I prefer to let the reader decide on “real,” and to engage them with the question. How do we know what is “real”?

Do we use theory or experimental testing? It is actually an ancient question, where the answer was, often, “It’s up to the authorities.” Such as the Church. Or, “up to me, because I’m an expert.” Or “up to my friends, because they are experts and they wouldn’t lie.”

What I’ve found, in many discussions, is that genuine skeptics actually support that effort. What happens when precision is increased in the measurement of the heat/helium ratio in the FP experiment? Classic to “pathological science,” the effect disappears when measured with increased precision.

That was used against cold fusion by applying it to the chaotic excess heat experiments, where it was really inappropriate, because, if I’m correct, precision of calorimetry did not correlate with “positive” or “negative” reports. Correlation generates numbers that can then be compared.

But that’s difficult to study retrospectively, because papers are so different in approach, and this was the problem with uncorrelated heat. Nevertheless, that’s an idea for a research paper, looking at precision vs excess heat calculated. I haven’t seen one.

There are big implications for science, and for society. Without resources science in this area will not advance. With the continued destruction of the careers of those who venture to work in the area, progress will be slow, and there will be no continuity of effort.

While it is true that resources are needed for advance, I caution against the idea that we don’t have the resources. We do. We often, though, don’t know how to access them, and when we believe that they don’t exist, we are extremely unlikely to connect with them. The problem of harm to career is generic to any challenge to a broad consensus. I would recommend to anyone thinking of working in the field that they also recognize the need for personal training. It’s available, and far less expensive than a college education. Otherwise they will be babes in the woods. Scientists often go into science because of wanting to escape from the social jungle, imagining it to be a safe place, where truth matters more than popularity. So it’s not surprising to find major naivete on this among scientists.

I’ve been trained. That doesn’t mean that I don’t make mistakes, I do, plenty of them. But I also learn from them. Mistakes are, in fact, the fastest way to learn, and not realizing this, we may bend over backwards to avoid them. The trick is to recognize and let go of attachment to being right. That, in many ways, suppresses our ability to learn rapidly, and it also suppresses intuition, because intuition, by definition, is not rationally circumscribed and thus “safe.”

I’ll end with one of my favorite Feynman stories, I heard this from him, but it’s also in Surely You’re Joking, Mr. Feynman! (pp 144-146). It is about the Oak Ridge Gaseous Diffusion Plant (a later name), a crucial part of the Manhattan Project. This version I have copied from this page.

How do you look at a plant that ain’t built yet? I don’t know. Well, Lieutenant Zumwalt, who was always coming around with me because I had to have an escort everywhere, takes me into this room where there are these two engineers and a loooooong table cover, a stack of large, long blueprints representing the various floors of the proposed plant.

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they start to explain it to me, because they think I am a genius. Now, one of the things they had to avoid in the plant was accumulation. So they had problems like when there’s an evaporator working, which is trying to accumulate the stuff, if the valve gets stuck or something like that and too much stuff accumulates, it’ll explode. So they explained to me that this plant is designed so that if any one valve gets stuck nothing will happen. It needs at least two valves everywhere.

Then they explain how it works. The carbon tetrachloride comes in here, the uranium nitrate from here comes in here, it goes up and down, it goes up through the floor, comes up through the pipes, coming up from the second floor, bluuuuurp – going through the stack of blueprints, down-up-down-up, talking very fast, explaining the very, very complicated chemical plant.

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

I don’t know what to do. (You are not going to believe this story, but I swear it’s absolutely true – it’s such sensational luck.) I thought, what am I going to do? I got an idea. Maybe it’s a valve? So, in order to find out whether it’s a valve or not, I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page number 3, and I say, “What happens if this valve gets stuck?” figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck — ” and he goes up and down on the blueprint, up and down, the other guy up and down, back and forth, back and forth, and they both look at each other and they tchk, tchk, tchk, and they turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning, “ he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

In the version I recall, he mentioned that there were a million valves in the system, and that, when they later checked more thoroughly, the one he had pointed to was the only one not backed up. I take “million” as meaning “a lot,” not necessarily as an accurate number. From the Wikipedia article: “When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world’s largest building, comprising over 1,640,000 square feet (152,000 m2) of floor space and a volume of 97,500,000 cubic feet (2,760,000 m3).”

Why do I tell this story? Life is full of mysteries, but rather than his “lucky guess” being considered purely coincidental, from which we would learn nothing, I would rather give it a name. This was intuition. Feynman was receiving vast quantities of information during that session, and what might have been normal analytical thinking (which filters)  was interrupted by his puzzlement. So that information was going into his mind subconsciously. I’ve seen this happen again and again. We do something with no particular reason that turns out to be practically a miracle. But this does not require any woo, simply the possibility that conscious thought is quite limited compared to what the human brain actually can do, under some conditions. Feynman, as a child, developed habits that fully fostered intuition. He was curious, and an iconoclast. There are many, many other stories. I have always said, for many years, that I learned to think from Feynman. And then I learned how not to think. 


In case anyone hasn’t noticed, I’m a fan of Michael McKubre. He invited me to visit SRI in 2012, and encouraged me to take on a relatively skeptical role within the community.

So I was pleased today that he sent me the slide deck for his ICCF-21 presentation, and, with the good quality audio supplied by Ruby Carat of Cold Fusion Now, his full presentation is now accessible. I have created a review page at iccf-21/abstracts/review/mckubre

There is, here, an embarrassment of riches, in terms of defining a way forward.




Slides: ICCF21 Main McKubre

introductory summary by Ruby Carat:

Michael McKubre followed up making a plea that “condensed matter nuclear science is anomalous no more!” He echoes Tom Darden’s sentiment that CMNS must be integrated into the mainstream of science.

“I needed to see it with my own eyes to believe that it was true”, says McKubre. “At the same time, cold fusion is reproduced somewhere on the planet every day. Verification has already happened. But self-censorship is a problem in the CMNS field. Are we guarding our secrets for fear that someone else might take credit? Yes.”

Michael McKubre with The Fleischmann Pons Heat and Ancillary Effects: What Do We Know, and Why? How Might We Proceed? (copy on ColdFusionNow, 74.16 MB)

Local copy on CFC: (1:02:32)

But energy is a primary problem and you must “collaborate, cooperate, and communicate”, McKubre says to the scientists in the room.

That’s been my message for years. . . . the three C’s.

McKubre thanked Jed Rothwell and Jean-Paul Biberian for all the work on LENR-CANR.org and the Journal of Condensed Matter Nuclear Science, respectively. Beyond that, the communication in the CMNS field is very poor and needs to be remedied.

He also supports a multi-laboratory approach where reproductions are conducted. Verification of this science has already occurred in the 90s, with the confirmation of tritium, and the heat-helium correlation. He believes that all the many variables must be correlated to move forward. Unfortunately, he believes the same thing he said in 1996, according to a Jed Rothwell article, that “acceptance of this field will only come about when a viable technology is achieved.”

To make progress, a procedure for replication must be codified, and a set of papers should be packaged for newcomers to the field. A demonstration cell is a third important effort to pursue.

Electrochemical PdD/LiOD is already proven, despite the problem with “electrochemistry,” but has not been demonstrated for more than 10 years. Energetics Technologies cell 64, a few years back, gave 40 kJ input and 1.14 MJ output, a gain of 27.5. Sadly, the magic-materials issue prevented replication.
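The quoted gain figure is consistent if “gain” is taken as excess energy over input, i.e. (output − input) / input — an assumption on my part, since the talk itself isn’t explicit. A quick arithmetic check:

```python
# Sanity check on the cell-64 numbers, assuming "gain" means
# excess energy divided by input energy (an interpretation, not
# stated in the slide deck).
input_j = 40e3      # 40 kJ in
output_j = 1.14e6   # 1.14 MJ out
gain = (output_j - input_j) / input_j
print(gain)  # -> 27.5, matching the quoted figure
```

If “gain” instead meant simple output over input, the number would be 28.5, so the quoted 27.5 points to the excess-energy convention.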

“1 watt excess power is too small to convince a skeptic, and 100 Watts too hard (at least for electrochemistry)”, said McKubre. The goal is to create the heat effect at the lowest input power possible.

According to McKubre, Verification, Correlation, Replication, Demonstration, Utilization are the five marks of exploring and exploiting the FPHE.

Task for a learner/volunteer: transcribe the talk, key it to the minutes in the audio and to the slide deck.

I’m postponing major review until I have the text. I’ll have a lot to say (as he predicted!).